diff --git a/vendor/gitee.com/chunanyong/zorm/.gitignore b/vendor/gitee.com/chunanyong/zorm/.gitignore
deleted file mode 100644
index 7fce6f1c..00000000
--- a/vendor/gitee.com/chunanyong/zorm/.gitignore
+++ /dev/null
@@ -1,34 +0,0 @@
-# idea ignore
-.idea/
-*.ipr
-*.iml
-*.iws
-
-.vscode/
-
-*.swp
-
-# temp ignore
-*.log
-*.cache
-*.diff
-*.exe
-*.exe~
-*.patch
-*.tmp
-*debug.test
-debug.test
-go.sum
-
-# system ignore
-.DS_Store
-Thumbs.db
-
-# project
-*.cert
-*.key
-.test
-iprepo.txt
-
-
-_output
\ No newline at end of file
diff --git a/vendor/gitee.com/chunanyong/zorm/CHANGELOG.md b/vendor/gitee.com/chunanyong/zorm/CHANGELOG.md
deleted file mode 100644
index 9f255218..00000000
--- a/vendor/gitee.com/chunanyong/zorm/CHANGELOG.md
+++ /dev/null
@@ -1,286 +0,0 @@
-v1.6.6
- - Thanks to @encircles for the PR: use the FuncWrapFieldTagName function to customize tag column names
- - Simplified the query logic and unified reBindSQL; finder parameter values are no longer overwritten, improving performance
- - Fixed an exception when fetching auto-increment primary keys
- - Improved documentation and comments
-
-v1.6.5
- - TDengineInsertsColumnName: whether TDengine batch insert statements carry column names. Defaults to false (no column names); inserted values keep the database column order, shortening the statement
- - Adjusted the return values of the FuncGlobalTransaction function to support seata-go
- - Improved documentation and comments
-
-v1.6.4
- - Thanks to @haifengat for the scenario feedback: improved receiving of NUMBER-typed data
- - The dialectColumnType parameter of the RegisterCustomDriverValueConver function changed to Dialect.fieldType, for example dm.TEXT
- - Added the FuncDecimalValue function to set how decimal values are received; override it to customize the decimal implementation
- - The NewSelectFinder method takes the first string of its strs parameter
- - Thanks to @soldier_of_love for the scenario feedback: error logs now record the executed SQL and parameter values
- - Removed dead code and comments
- - Improved documentation and comments
-
-v1.6.3
- - Thanks to @rebens for the scenario feedback: added the InsertEntityMapSlice function to save EntityMap in batches
- - Thanks to @haifengat for the scenario feedback: ICustomDriverValueConver gained a structFieldType *reflect.Type parameter
- - Thanks to @zhou-a-xing for adjusting the field order of anonymous structs
- - Thanks to @rebens for the report: avoid IEntityMap implementing the IEntityStruct interface by default
- - Thanks to @cucuy for updating the www.zorm.cn website
- - Improved documentation and comments
-
-v1.6.2
- - Catch panics and assign them to err, keeping the program from crashing
- - Added a default order by for sqlserver and oracle pagination
- - Recorded a video tutorial: https://www.bilibili.com/video/BV1L24y1976U/
- - Improved documentation and comments
-
-v1.6.1
- - Replaced the CustomDriverValueMap variable with the RegisterCustomDriverValueConver function; change ```zorm.CustomDriverValueMap["*dm.DmClob"] = CustomDMText{}``` to ```zorm.RegisterCustomDriverValueConver("TEXT", 
CustomDMText{})```; DM (Dameng) users, copy the sample code again. Really, copy it again!! Copy it again!!!
- - Rewrote the sqlRowsValues function to support querying a single field received by a struct type
- - Simplified the auto-increment sequence implementation, using string instead of map[string]string
- - Use OverrideFunc to override zorm's functions; exposed the WrapUpdateStructFinder function
- - Removed the uppercase conversion of kingbase columns and changed the string concatenation approach, improving performance
- - BindContextDisableTransaction is for updating the database without a transaction; strongly discouraged, database updates must run in a transaction!!!
- - Added a check for queries that return no columns; in special cases Query can execute update statements, bypassing the transaction check (not recommended)
- - Updated the website https://zorm.cn
- - Improved documentation and comments
-
-v1.6.0
- - Updated the beautiful logo
- - Added db2 support, relying on the Limit pagination syntax
- - DBType is about to be deprecated and renamed Dialect, easing migration from gorm and xorm
- - The FuncReadWriteStrategy and GetGTXID functions gained an error return value
- - Changed the log format, uniformly prefixed with the -> symbol
- - Paid back an old shortcut: type conversions now return an err; removed the useless date-format conversion, since drivers do not actually return []byte
- - Fixed a bug where Finder.Append and GetSQL were nil
- - Improved documentation and comments
-
-v1.5.9
- - hptx merged @小口天's PR: [hptx proxy-mode zorm example](https://github.com/CECTC/hptx-samples/tree/main/http_proxy_zorm) and [zorm-managed hptx transaction example](https://github.com/CECTC/hptx-samples/tree/main/http_zorm)
- - Added the IsInTransaction(ctx) function to check whether a transaction is active
- - Extension functions uniformly take a ctx parameter, easing scenario-specific customization
- - Removed the PrintSQL parameter; use SlowSQLMillis to control slow-SQL output
- - Improved documentation and comments
-
-v1.5.8
-Changes:
- - Thanks to @zhou-a-xing for writing the TDengine test cases; manually concatenated ' single quotes are not allowed, ? is enforced for a consistent style
- - Thanks to @小口天 for the bug report and the hptx test cases; renamed the global transaction interface methods to avoid recursive calls caused by clashing with gtx method names
- - Global transactions are no longer started automatically; zorm.BindContextEnableGlobalTransaction(ctx) must be called manually
- - Refactored the reBindSQL function to process the SQL uniformly right before execution
- - Replaced the long-grumbled-about if else with switch
- - Improved documentation and comments
-
-v1.5.7
-Changes:
- - Thanks to @小口天 for the hard work; the test cases at https://gitee.com/wuxiangege/zorm-examples are now very complete.
- - insert and update statements are generated in the struct field order obtained via reflection
- - Support for the TDengine database; since the TDengine driver does not support transactions, set DisableTransaction=true
- - Added hptx and dbpack distributed-transaction support, with fine-grained control over whether global transactions are used
- - DisableTransaction disables database transactions globally, for database drivers that do not support them.
- - Improved documentation and comments
-
-v1.5.6
-Changes:
- - Thanks to @无泪 for finding the bug where the Transaction method returned nil; fixed
- - Thanks to community contributions: the https://zorm.cn website is live, ugly logo and all :). 
- - Support existing database connections
- - Adjusted panic logging and primary-key zero-value checks, supporting primary keys extended from basic types
- - Improved documentation and comments
-
-v1.5.5
-Changes:
- - Added the CloseDB function to close the database connection pool
- - Improved documentation and comments
-
-v1.5.4
-Changes:
- - QueryRow raised an exception when a single queried field was null in the database instead of assigning the default value; fixed
- - reflect.Type parameters changed to *reflect.Type pointers, including in the CustomDriverValueConver interface
- - Improved documentation and comments
-
-v1.5.3
-Changes:
- - Thanks to @Howard.TSE for the suggestion: check whether the configuration is nil
- - Thanks to @haming123 for the performance report. zorm 1.2.x implemented the basic features and read twice as fast as gorm and xorm; as features kept growing, performance dropped, and reads are currently only 50% faster
- - Performance optimization: removed unnecessary reflection
- - Improved documentation and comments
-
-v1.5.2
-Changes:
- - Thanks to 奔跑 (@zeqjone) for the regex excluding from outside parentheses, which already covers the vast majority of scenarios
- - Thanks to 奔跑 (@zeqjone) for the PR: kingbase column tags that clash with built-in database keywords are now wrapped in double quotes
- - Upgraded decimal to 1.3.1
- - Improved documentation and comments
-
-v1.5.1
-Changes:
- - Improved documentation and comments
- - Commented out unused code
- - Check error first, then defer rows.Close()
- - Added WeChat community support (run by the single guy with an eight-pack, @zhou-a-xing)
-
-
-v1.5.0
-Changes:
- - Improved documentation and comments
- - Support clickhouse; update and delete statements use SQL92 standard syntax
- - IDs default to timestamp + random number instead of a UUID implementation
- - Optimized the SQL-extraction regular expression
- - Integrated seata-golang: fully managed, zero-intrusion distributed transactions without modifying business code
-
-v1.4.9
-Changes:
- - Improved documentation and comments
- - Cards on the table: this release only touches comments, to keep the version activity up
-
-v1.4.8
-Changes:
- - Improved documentation and comments
- - Extra mappings between database columns and entity fields support _ underscore-to-camelCase conversion
-
-v1.4.7
-Changes:
- - Valentine's Day release: when returning a map, values that cannot be converted are returned unchanged instead of as nil
-
-v1.4.6
-Changes:
- - Improved documentation and comments
- - A thousand lines of code beat his hundred thousand; zorm has zero dependencies (the uuid and decimal utility packages alone are 1700 lines)
- - Zero dependencies save a lot of trouble in classified intranet development environments; if you cannot achieve it, please do not say it is unnecessary......
-
-v1.4.5
-Changes:
- - Enhanced custom type conversion
- - Improved documentation and comments
- - Many thanks to @anxuanzi for improving the code generator
- - Many thanks to @chien_tung for adding this changelog; every release will record one from now on
-
-v1.4.4
-Changes:
- - If a queried field is not found in the column tag, it is mapped to a struct property by name (case-insensitive)
- - The QueryRow method gained a has return value indicating whether the database has a record; everyone already using zorm, please adjust your code when upgrading, very sorry *3!
-
-v1.4.3
-Changes:
- - Official support for the GBase (南大通用) database, completing adaptation of the four major domestic databases
- - Added global and per-transaction isolation-level settings
- - Fixed a logic bug with trigger-based auto-increment primary keys
- - Documentation polish and detail adjustments
-
-v1.4.2
-Changes:
- - Official support for the ShenTong (神州通用) database
- - Improved auto-increment primary-key return values for pgsql and kingbase
- - Colleagues from seven companies suggested keeping query naming consistent with golang's sql methods. After a hard decision, some zorm method names changed; a global string replacement, applied in order, is enough: 
-zorm.Query( replace with zorm.QueryRow(
-zorm.QuerySlice( replace with zorm.Query(
-zorm.QueryMap( replace with zorm.QueryRowMap(
-zorm.QueryMapSlice( replace with zorm.QueryMap(
-
-v1.4.1
-Changes:
- - Support custom extended field-mapping logic
-
-v1.4.0
-Changes:
- - Changed the judgment logic for multiple rows of data
-
-v1.3.9
-Changes:
- - Support custom data types, including json/jsonb
- - Many thanks to @chien_tung for the report: the QuerySlice method supports the *[]*struct type, easing migration from xorm
- - Other code polish.
-
-v1.3.7
-Changes:
- - Many thanks to @zhou-a-xing (the single teenager with an eight-pack) for the English translation; zorm's core code comments are now bilingual.
- - Many thanks to @chien_tung for the report: fixed compatibility of int and int64 auto-increment primary keys.
- - Other code polish.
-
-v1.3.6
-Changes:
- - Improved comment documentation
- - Fixed the wrong parameter type of the Delete method
- - Other code polish.
-
-v1.3.5
-Changes:
- - Improved comment documentation
- - When a database value is null, basic types take their default values; thanks to @fastabler for the PR
- - Fixed a batch-save bug: a slice of length 1 raised an exception on pgsql and oracle
- - Other code polish.
-
-v1.3.4
-Changes:
- - Improved comment documentation
- - Removed the requirement that paginated statements have an order by
- - Support for the Kingbase (人大金仓) database
- - Kingbase driver notes: https://help.kingbase.com.cn/doc-view-8108.html
- - Kingbase 8's core is based on postgresql 9.6; https://github.com/lib/pq can be used for testing, but the official driver is recommended in production
-
-v1.3.3
-Changes:
- - Improved comment documentation
- - Added a batch-save method for struct objects
- - Official support for the DM (达梦) database
- - Published a go mod project based on the official DM driver: https://gitee.com/chunanyong/dm
-
-v1.3.2
-Changes:
- - Added pagination adaptation for DM
- - Polished and adjusted code comments
- - Added stored-procedure and function call examples
-
-v1.3.1
-Changes:
- - Renamed methods to stay similar to gorm and xorm, lowering migration and learning costs
- - Updated the test-case documentation
-
-v1.3.0
-Changes:
- - Removed the zap logging dependency; override FuncLogError, FuncLogPanic and FuncPrintSQL for custom logging
- - The golang version requirement was adjusted to v1.13
- - Moved tests to readygo; the zorm project depends on no database driver packages
-
-v1.2.9
-Changes:
- - IEntityMap supports auto-increment or sequence primary keys
- - Update methods return the number of affected rows
- - Fixed an exception when querying IEntityMap with no matching database records
- - Test cases double as documentation: https://gitee.com/chunanyong/readygo/blob/master/test/testzorm/BaseDao_test.go
-
-v1.2.8
-Changes:
- - Exposed the FuncGenerateStringID function for custom string primary-key IDs
- - Finder.Append adds a space by default, avoiding accidental syntax errors
- - The field-info cache uses map instead of sync.Map, improving performance
- - Third-party benchmark results
-
-v1.2.6
-Changes:
- - DataSourceConfig distinguishes DriverName from DBType, supporting multiple driver packages for one database
- - Database drivers are no longer explicit dependencies; users choose their own driver packages
-
-v1.2.5
-Changes:
- - Paginated statements must have an explicit order by, avoiding pagination-syntax incompatibilities during database migration. 
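The four renames above interact: zorm.QuerySlice( becomes zorm.Query(, which is itself a pattern on the left-hand side. A single-pass replacement avoids re-matching already-renamed calls. A minimal migration sketch (not part of zorm itself) using Go's strings.NewReplacer, which substitutes in one pass with first-match-wins semantics:

```go
package main

import (
	"fmt"
	"strings"
)

// renamer applies the v1.4.2 method renames. Because strings.NewReplacer
// works in a single pass, a call just rewritten to zorm.Query( is never
// re-matched by the zorm.Query( -> zorm.QueryRow( rule, and the trailing
// "(" keeps each pattern from matching the longer method names.
var renamer = strings.NewReplacer(
	"zorm.Query(", "zorm.QueryRow(",
	"zorm.QuerySlice(", "zorm.Query(",
	"zorm.QueryMap(", "zorm.QueryRowMap(",
	"zorm.QueryMapSlice(", "zorm.QueryMap(",
)

func main() {
	src := "has, err := zorm.Query(ctx, finder, &user)\n" +
		"err = zorm.QuerySlice(ctx, finder, &users, page)\n" +
		"m, err := zorm.QueryMap(ctx, finder)\n" +
		"ms, err := zorm.QueryMapSlice(ctx, finder, page)"
	fmt.Println(renamer.Replace(src))
}
```

Running the rewritten source through `gofmt`/`go vet` afterwards is a sensible sanity check.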
- - Fixed a bug where the page object was nil in list queries
-
-v1.2.3
-Changes:
- - Broader database support: MySQL, SQLServer, Oracle, PostgreSQL and SQLite3 are now supported
- - Simplified the read/write splitting implementation; exposed the zorm.FuncReadWriteBaseDao function property for custom read/write strategies
- - Trimmed the zorm.DataSourceConfig properties; added the PrintSQL property
-
-v1.2.2
-Changes:
- - NewPage() now returns a Page object pointer, one fewer & when passing it
- - Removed the GetDBConnection() method; use the BindContextConnection() method to bind multiple databases
- - Hid the DBConnection object; the database object is no longer exposed, avoiding exceptions caused by manual initialization
-
-v1.1.8
-Changes:
- - Fixed UUID support
- - Database connections and transactions hide inside context.Context as the unified parameter, following golang conventions with better performance
- - Wrapped the logger implementation, making it easy to swap log packages
- - Added the zorm.UpdateStructNotZeroValue method, updating only non-zero fields
- - Improved test cases
diff --git a/vendor/gitee.com/chunanyong/zorm/DBDao.go b/vendor/gitee.com/chunanyong/zorm/DBDao.go
deleted file mode 100644
index 2165bcc5..00000000
--- a/vendor/gitee.com/chunanyong/zorm/DBDao.go
+++ /dev/null
@@ -1,1846 +0,0 @@
-/*
- * Licensed to the Apache Software Foundation (ASF) under one or more
- * contributor license agreements.  See the NOTICE file distributed with
- * this work for additional information regarding copyright ownership.
- * The ASF licenses this file to You under the Apache License, Version 2.0
- * (the "License"); you may not use this file except in compliance with
- * the License.  You may obtain a copy of the License at
- *
- *     http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- *
- */
-
-// Package zorm 使用原生的sql语句,没有对sql语法做限制.语句使用Finder作为载体
-// 占位符统一使用?,zorm会根据数据库类型,语句执行前会自动替换占位符,postgresql 把?替换成$1,$2...;mssql替换成@p1,@p2...;oracle替换成:1,:2...
-// zorm使用 ctx context.Context 参数实现事务传播,ctx从web层传递进来即可,例如gin的c.Request.Context()
-// zorm的事务操作需要显式使用zorm.Transaction(ctx, func(ctx context.Context) (interface{}, error) {})开启
-// "package zorm" Use native SQL statements, no restrictions on SQL syntax. 
Statements use Finder as a carrier -// Placeholders uniformly use "?"; "zorm" automatically replaces them before statements are executed, depending on the database type: postgresql is replaced with $1, $2...; mssql with @p1, @p2...; oracle with :1, :2... -// "zorm" uses the "ctx context.Context" parameter to achieve transaction propagation, and ctx can be passed in from the web layer, such as gin's c.Request.Context() -// "zorm" transaction operations must be explicitly opened using zorm.Transaction(ctx, func(ctx context.Context) (interface{}, error) {}) -package zorm - -import ( - "context" - "database/sql" - "database/sql/driver" - "errors" - "fmt" - "reflect" - "strconv" - "strings" - "time" -) - -// FuncReadWriteStrategy 数据库的读写分离的策略,用于外部重写实现自定义的逻辑,也可以使用ctx标识,处理多库的场景,rwType=0 read,rwType=1 write -// 不能归属到DBDao里,BindContextDBConnection已经是指定数据库的连接了,和这个函数会冲突.就作为读写分离的处理方式 -// 即便是放到DBDao里,因为是多库,BindContextDBConnection函数调用少不了,业务包装一个方法,指定一下读写获取一个DBDao效果是一样的,唯一就是需要根据业务指定一下读写,其实更灵活了 -// FuncReadWriteStrategy Database read/write splitting strategy, overridable externally to implement custom logic; rwType=0 read, rwType=1 write. -// "BindContextDBConnection" is already a connection to the specified database and would conflict with this function. 
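The dialect-dependent placeholder rewriting described in the package comment can be sketched as below. This is an illustrative stand-in, not zorm's actual reBindSQL implementation, and it deliberately ignores ? characters inside string literals:

```go
package main

import (
	"fmt"
	"strconv"
	"strings"
)

// rewritePlaceholders numbers each "?" according to the dialect, mirroring
// what zorm does before executing a statement: $1,$2... for postgresql,
// @p1,@p2... for mssql, :1,:2... for oracle. Other dialects keep "?".
func rewritePlaceholders(sqlstr, dialect string) string {
	var b strings.Builder
	n := 0
	for _, r := range sqlstr {
		if r != '?' {
			b.WriteRune(r)
			continue
		}
		n++
		switch dialect {
		case "postgresql":
			b.WriteString("$" + strconv.Itoa(n))
		case "mssql":
			b.WriteString("@p" + strconv.Itoa(n))
		case "oracle":
			b.WriteString(":" + strconv.Itoa(n))
		default: // mysql, sqlite and others use "?" natively
			b.WriteRune('?')
		}
	}
	return b.String()
}

func main() {
	q := "SELECT * FROM t WHERE id=? AND name=?"
	fmt.Println(rewritePlaceholders(q, "postgresql")) // SELECT * FROM t WHERE id=$1 AND name=$2
	fmt.Println(rewritePlaceholders(q, "mssql"))      // SELECT * FROM t WHERE id=@p1 AND name=@p2
	fmt.Println(rewritePlaceholders(q, "oracle"))     // SELECT * FROM t WHERE id=:1 AND name=:2
}
```

Because the rewrite happens once, right before execution, application code can stay dialect-agnostic and always write "?".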
As a single database read and write separation of processing -var FuncReadWriteStrategy = func(ctx context.Context, rwType int) (*DBDao, error) { - if defaultDao == nil { - return nil, errors.New("->FuncReadWriteStrategy-->defaultDao为nil,请检查数据库初始化配置是否正确,主要是DSN,DriverName和Dialect") - } - return defaultDao, nil -} - -// wrapContextStringKey 包装context的key,不直接使用string类型,避免外部直接注入使用 -type wrapContextStringKey string - -// contextDBConnectionValueKey context WithValue的key,不能是基础类型,例如字符串,包装一下 -// The key of context WithValue cannot be a basic type, such as a string, wrap it -const contextDBConnectionValueKey = wrapContextStringKey("contextDBConnectionValueKey") - -// contextTxOptionsKey 事务选项设置TxOptions的key,设置事务的隔离级别 -const contextTxOptionsKey = wrapContextStringKey("contextTxOptionsKey") - -// stringBuilderGrowLen 默认长度 -const stringBuilderGrowLen = 100 - -// DataSourceConfig 数据库连接池的配置 -// DateSourceConfig Database connection pool configuration -type DataSourceConfig struct { - // DSN dataSourceName 连接字符串 - // DSN DataSourceName Database connection string - DSN string - // DriverName 数据库驱动名称:mysql,postgres,oracle(go-ora),sqlserver,sqlite3,go_ibm_db,clickhouse,dm,kingbase,aci,taosSql|taosRestful 和Dialect对应 - // DriverName:mysql,dm,postgres,opi8,sqlserver,sqlite3,go_ibm_db,clickhouse,kingbase,aci,taosSql|taosRestful corresponds to Dialect - DriverName string - // Dialect 数据库方言:mysql,postgresql,oracle,mssql,sqlite,db2,clickhouse,dm,kingbase,shentong,tdengine 和 DriverName 对应 - // Dialect:mysql,postgresql,oracle,mssql,sqlite,db2,clickhouse,dm,kingbase,shentong,tdengine corresponds to DriverName - Dialect string - // Deprecated - // DBType 即将废弃,请使用Dialect属性 - // DBType is about to be deprecated, please use the Dialect property - DBType string - // SlowSQLMillis 慢sql的时间阈值,单位毫秒.小于0是禁用SQL语句输出;等于0是只输出SQL语句,不计算执行时间;大于0是计算SQL执行时间,并且>=SlowSQLMillis值 - SlowSQLMillis int - // MaxOpenConns 数据库最大连接数,默认50 - // MaxOpenConns Maximum number of database connections, Default 50 - MaxOpenConns int 
- // MaxIdleConns 数据库最大空闲连接数,默认50 - // MaxIdleConns The maximum number of free connections to the database default 50 - MaxIdleConns int - // ConnMaxLifetimeSecond 连接存活秒时间. 默认600(10分钟)后连接被销毁重建.避免数据库主动断开连接,造成死连接.MySQL默认wait_timeout 28800秒(8小时) - // ConnMaxLifetimeSecond (Connection survival time in seconds)Destroy and rebuild the connection after the default 600 seconds (10 minutes) - // Prevent the database from actively disconnecting and causing dead connections. MySQL Default wait_timeout 28800 seconds - ConnMaxLifetimeSecond int - - // DefaultTxOptions 事务隔离级别的默认配置,默认为nil - DefaultTxOptions *sql.TxOptions - - // DisableTransaction 禁用事务,默认false,如果设置了DisableTransaction=true,Transaction方法失效,不再要求有事务.为了处理某些数据库不支持事务,比如TDengine - // 禁用事务应该有驱动伪造事务API,不应该由orm实现 - DisableTransaction bool - - // MockSQLDB 用于mock测试的入口,如果MockSQLDB不为nil,则不使用DSN,直接使用MockSQLDB - // db, mock, err := sqlmock.New() - // MockSQLDB *sql.DB - - // FuncGlobalTransaction seata/hptx全局分布式事务的适配函数,返回IGlobalTransaction接口的实现 - // 业务必须调用zorm.BindContextEnableGlobalTransaction(ctx)开启全局分布事务 - // seata-go 的ctx是统一的绑定的是struct,也不是XID字符串. 
hptx是分离的,所以返回了两个ctx,兼容两个库 - FuncGlobalTransaction func(ctx context.Context) (IGlobalTransaction, context.Context, context.Context, error) - - // DisableAutoGlobalTransaction 属性已废弃,请勿使用,相关注释仅作记录备忘 - // DisableAutoGlobalTransaction 禁用自动全局分布式事务,默认false,虽然设置了FuncGlobalTransaction,但是并不想全部业务自动开启全局事务 - // DisableAutoGlobalTransaction = false; ctx,_=zorm.BindContextEnableGlobalTransaction(ctx,false) 默认使用全局事务,ctx绑定为false才不开启 - // DisableAutoGlobalTransaction = true; ctx,_=zorm.BindContextEnableGlobalTransaction(ctx,true) 默认禁用全局事务,ctx绑定为true才开启 - // DisableAutoGlobalTransaction bool - - // SQLDB 使用现有的数据库连接,优先级高于DSN - SQLDB *sql.DB - - // TDengineInsertsColumnName TDengine批量insert语句中是否有列名.默认false没有列名,插入值和数据库列顺序保持一致,减少语句长度 - TDengineInsertsColumnName bool -} - -// DBDao 数据库操作基类,隔离原生操作数据库API入口,所有数据库操作必须通过DBDao进行 -// DBDao Database operation base class, isolate the native operation database API entry,all database operations must be performed through DB Dao -type DBDao struct { - config *DataSourceConfig - dataSource *dataSource -} - -var defaultDao *DBDao = nil - -// NewDBDao 创建dbDao,一个数据库要只执行一次,业务自行控制 -// 第一个执行的数据库为 defaultDao,后续zorm.xxx方法,默认使用的就是defaultDao -// NewDBDao Creates dbDao, a database must be executed only once, and the business is controlled by itself -// The first database to be executed is defaultDao, and the subsequent zorm.xxx method is defaultDao by default -func NewDBDao(config *DataSourceConfig) (*DBDao, error) { - dataSource, err := newDataSource(config) - if err != nil { - err = fmt.Errorf("->NewDBDao创建dataSource失败:%w", err) - FuncLogError(nil, err) - return nil, err - } - dbdao, err := FuncReadWriteStrategy(nil, 1) - if dbdao == nil { - defaultDao = &DBDao{config, dataSource} - return defaultDao, nil - } - if err != nil { - return dbdao, err - } - return &DBDao{config, dataSource}, nil -} - -// newDBConnection 获取一个dbConnection -// 如果参数dbConnection为nil,使用默认的datasource进行获取dbConnection -// 如果是多库,Dao手动调用newDBConnection(),获得dbConnection,WithValue绑定到子context -// 
newDBConnection Get a db Connection -// If the parameter db Connection is nil, use the default datasource to get db Connection. -// If it is multi-database, Dao manually calls new DB Connection() to obtain db Connection, and With Value is bound to the sub-context -func (dbDao *DBDao) newDBConnection() (*dataBaseConnection, error) { - if dbDao == nil || dbDao.dataSource == nil { - return nil, errors.New("->newDBConnection-->请不要自己创建dbDao,请使用NewDBDao方法进行创建") - } - dbConnection := new(dataBaseConnection) - dbConnection.db = dbDao.dataSource.DB - dbConnection.config = dbDao.config - return dbConnection, nil -} - -// BindContextDBConnection 多库的时候,通过dbDao创建DBConnection绑定到子context,返回的context就有了DBConnection. parent 不能为空 -// BindContextDBConnection In the case of multiple databases, create a DB Connection through db Dao and bind it to a sub-context,and the returned context will have a DB Connection. parent is not nil -func (dbDao *DBDao) BindContextDBConnection(parent context.Context) (context.Context, error) { - if parent == nil { - return nil, errors.New("->BindContextDBConnection-->context的parent不能为nil") - } - dbConnection, errDBConnection := dbDao.newDBConnection() - if errDBConnection != nil { - return parent, errDBConnection - } - ctx := context.WithValue(parent, contextDBConnectionValueKey, dbConnection) - return ctx, nil -} - -// BindContextTxOptions 绑定事务的隔离级别,参考sql.IsolationLevel,如果txOptions为nil,使用默认的事务隔离级别.parent不能为空 -// 需要在事务开启前调用,也就是zorm.Transaction方法前,不然事务开启之后再调用就无效了 -func (dbDao *DBDao) BindContextTxOptions(parent context.Context, txOptions *sql.TxOptions) (context.Context, error) { - if parent == nil { - return nil, errors.New("->BindContextTxOptions-->context的parent不能为nil") - } - - ctx := context.WithValue(parent, contextTxOptionsKey, txOptions) - return ctx, nil -} - -// CloseDB 关闭所有数据库连接 -// 请谨慎调用这个方法,会关闭所有数据库连接,用于处理特殊场景,正常使用无需手动关闭数据库连接 -func (dbDao *DBDao) CloseDB() error { - if dbDao == nil || dbDao.dataSource == nil { - return 
errors.New("->CloseDB-->请不要自己创建dbDao,请使用NewDBDao方法进行创建") - } - return dbDao.dataSource.Close() -} - -/* -Transaction 的示例代码 - //匿名函数return的error如果不为nil,事务就会回滚 - zorm.Transaction(ctx context.Context,func(ctx context.Context) (interface{}, error) { - - //业务代码 - - - //return的error如果不为nil,事务就会回滚 - return nil, nil - }) -*/ -// 事务方法,隔离dbConnection相关的API.必须通过这个方法进行事务处理,统一事务方式.如果设置了DisableTransaction=true,Transaction方法失效,不再要求有事务 -// 如果入参ctx中没有dbConnection,使用defaultDao开启事务并最后提交 -// 如果入参ctx有dbConnection且没有事务,调用dbConnection.begin()开启事务并最后提交 -// 如果入参ctx有dbConnection且有事务,只使用不提交,有开启方提交事务 -// 但是如果遇到错误或者异常,虽然不是事务的开启方,也会回滚事务,让事务尽早回滚 -// 在多库的场景,手动获取dbConnection,然后绑定到一个新的context,传入进来 -// 不要去掉匿名函数的context参数,因为如果Transaction的context中没有dbConnection,会新建一个context并放入dbConnection,此时的context指针已经变化,不能直接使用Transaction的context参数 -// bug(springrain)如果有大神修改了匿名函数内的参数名,例如改为ctx2,这样业务代码实际使用的是Transaction的context参数,如果为没有dbConnection,会抛异常,如果有dbConnection,实际就是一个对象.影响有限.也可以把匿名函数抽到外部 -// 如果zorm.DataSourceConfig.DefaultTxOptions配置不满足需求,可以在zorm.Transaction事务方法前设置事务的隔离级别,例如 ctx, _ := dbDao.BindContextTxOptions(ctx, &sql.TxOptions{Isolation: sql.LevelDefault}),如果txOptions为nil,使用zorm.DataSourceConfig.DefaultTxOptions -// return的error如果不为nil,事务就会回滚 -// 如果使用了分布式事务,需要设置分布式事务函数zorm.DataSourceConfig.FuncGlobalTransaction,实现IGlobalTransaction接口 -// 如果是分布式事务开启方,需要在本地事务前开启分布事务,开启之后获取XID,设值到ctx的XID和TX_XID.XID是seata/hptx MySQL驱动需要,TX_XID是gtxContext.NewRootContext需要 -// 分布式事务需要传递XID,接收方context.WithValue(ctx, "XID", XID)绑定到ctx -// 如果分支事务出现异常或者回滚,会立即回滚分布式事务 -// Transaction method, isolate db Connection related API. 
This method must be used for transaction processing and unified transaction mode -// If there is no db Connection in the input ctx, use default Dao to start the transaction and submit it finally -// If the input ctx has db Connection and no transaction, call db Connection.begin() to start the transaction and finally commit -// If the input ctx has a db Connection and a transaction, only use non-commit, and the open party submits the transaction -// If you encounter an error or exception, although it is not the initiator of the transaction, the transaction will be rolled back, -// so that the transaction can be rolled back as soon as possible -// In a multi-database scenario, manually obtain db Connection, then bind it to a new context and pass in -// Do not drop the anonymous function's context parameter, because if the Transaction context does not have a DBConnection, -// then a new context will be created and placed in the DBConnection -// The context pointer has changed and the Transaction context parameters cannot be used directly -// "bug (springrain)" If a great god changes the parameter name in the anonymous function, for example, change it to ctx 2, -// so that the business code actually uses the context parameter of Transaction. If there is no db Connection, -// an exception will be thrown. If there is a db Connection, the actual It is an object -// The impact is limited. 
Anonymous functions can also be extracted outside -// If the return error is not nil, the transaction will be rolled back -func Transaction(ctx context.Context, doTransaction func(ctx context.Context) (interface{}, error)) (interface{}, error) { - return transaction(ctx, doTransaction) -} - -var transaction = func(ctx context.Context, doTransaction func(ctx context.Context) (interface{}, error)) (info interface{}, err error) { - // 是否是dbConnection的开启方,如果是开启方,才可以提交事务 - // Whether it is the opener of db Connection, if it is the opener, the transaction can be submitted - localTxOpen := false - // 是否是分布式事务的开启方.如果ctx中没有xid,认为是开启方 - globalTxOpen := false - // 如果dbConnection不存在,则会用默认的datasource开启事务 - // If db Connection does not exist, the default datasource will be used to start the transaction - var dbConnection *dataBaseConnection - ctx, dbConnection, err = checkDBConnection(ctx, dbConnection, false, 1) - if err != nil { - FuncLogError(ctx, err) - return nil, err - } - - // 适配全局事务的函数 - funcGlobalTx := dbConnection.config.FuncGlobalTransaction - - // 实现IGlobalTransaction接口的事务对象 - var globalTransaction IGlobalTransaction - // 分布式事务的 rootContext,和业务的ctx区别开来,如果业务ctx使用WithValue,就会出现差异 - var globalRootContext context.Context - // 分布式事务的异常 - var errGlobal error - - // 如果没有事务,并且事务没有被禁用,开启事务 - // 开启本地事务前,需要拿到分布式事务对象 - if dbConnection.tx == nil && (!getContextBoolValue(ctx, contextDisableTransactionValueKey, dbConnection.config.DisableTransaction)) { - // if dbConnection.tx == nil { - // 是否使用分布式事务 - enableGlobalTransaction := funcGlobalTx != nil - if enableGlobalTransaction { // 判断ctx里是否有绑定 enableGlobalTransaction - /* - ctxGTXval := ctx.Value(contextEnableGlobalTransactionValueKey) - if ctxGTXval != nil { //如果有值 - enableGlobalTransaction = ctxGTXval.(bool) - } else { //如果ctx没有值,就取值DisableAutoGlobalTransaction - //enableGlobalTransaction = !dbConnection.config.DisableAutoGlobalTransaction - enableGlobalTransaction = false - } - */ - enableGlobalTransaction = 
getContextBoolValue(ctx, contextEnableGlobalTransactionValueKey, false) - } - - // 需要开启分布式事务,初始化分布式事务对象,判断是否是分布式事务入口 - if enableGlobalTransaction { - // 获取分布式事务的XID - ctxXIDval := ctx.Value("XID") - if ctxXIDval != nil { // 如果本地ctx中有XID - globalXID, _ := ctxXIDval.(string) - // 不知道为什么需要两个Key,还需要请教seata/hptx团队 - // seata/hptx mysql驱动需要 XID,gtxContext.NewRootContext 需要 TX_XID - ctx = context.WithValue(ctx, "TX_XID", globalXID) - } else { // 如果本地ctx中没有XID,也就是没有传递过来XID,认为是分布式事务的开启方.ctx中没有XID和TX_XID的值 - globalTxOpen = true - } - // 获取分布式事务实现对象,用于控制事务提交和回滚.分支事务需要ctx中TX_XID有值,将分支事务关联到主事务 - globalTransaction, ctx, globalRootContext, errGlobal = funcGlobalTx(ctx) - if errGlobal != nil { - errGlobal = fmt.Errorf("->Transaction-->global:Transaction FuncGlobalTransaction获取IGlobalTransaction接口实现失败:%w ", errGlobal) - FuncLogError(ctx, errGlobal) - return nil, errGlobal - } - if globalTransaction == nil || globalRootContext == nil { - errGlobal = errors.New("->Transaction-->global:Transaction FuncGlobalTransaction获取IGlobalTransaction接口的实现为nil ") - FuncLogError(ctx, errGlobal) - return nil, errGlobal - } - - } - if globalTxOpen { // 如果是分布事务开启方,启动分布式事务 - errGlobal = globalTransaction.BeginGTX(ctx, globalRootContext) - if errGlobal != nil { - errGlobal = fmt.Errorf("->Transaction-->global:Transaction 分布式事务开启失败:%w ", errGlobal) - FuncLogError(ctx, errGlobal) - return nil, errGlobal - } - - // 分布式事务开启成功,获取XID,设置到ctx的XID和TX_XID - // seata/hptx mysql驱动需要 XID,gtxContext.NewRootContext 需要 TX_XID - globalXID, errGlobal := globalTransaction.GetGTXID(ctx, globalRootContext) - if errGlobal != nil { - FuncLogError(ctx, errGlobal) - return nil, errGlobal - } - if globalXID == "" { - errGlobal = errors.New("->Transaction-->global:globalTransaction.Begin无异常开启后,获取的XID为空") - FuncLogError(ctx, errGlobal) - return nil, errGlobal - } - ctx = context.WithValue(ctx, "XID", globalXID) - ctx = context.WithValue(ctx, "TX_XID", globalXID) - } - - // 开启本地事务/分支事务 - errBeginTx := dbConnection.beginTx(ctx) - if 
errBeginTx != nil { - errBeginTx = fmt.Errorf("->Transaction 事务开启失败:%w ", errBeginTx) - FuncLogError(ctx, errBeginTx) - return nil, errBeginTx - } - // 本方法开启的事务,由本方法提交 - // The transaction opened by this method is submitted by this method - localTxOpen = true - } - - defer func() { - if r := recover(); r != nil { - //err = fmt.Errorf("->事务开启失败:%w ", err) - //记录异常日志 - //if _, ok := r.(runtime.Error); ok { - // panic(r) - //} - var errOk bool - err, errOk = r.(error) - if errOk { - err = fmt.Errorf("->Transaction-->recover异常:%w", err) - FuncLogPanic(ctx, err) - } else { - err = fmt.Errorf("->Transaction-->recover异常:%v", r) - FuncLogPanic(ctx, err) - } - //if !txOpen { //如果不是开启方,也应该回滚事务,虽然可能造成日志不准确,但是回滚要尽早 - // return - //} - //如果禁用了事务 - if getContextBoolValue(ctx, contextDisableTransactionValueKey, dbConnection.config.DisableTransaction) { - return - } - rberr := dbConnection.rollback() - if rberr != nil { - rberr = fmt.Errorf("->Transaction-->recover内事务回滚失败:%w", rberr) - FuncLogError(ctx, rberr) - } - // 任意一个分支事务回滚,分布式事务就整体回滚 - if globalTransaction != nil { - errGlobal = globalTransaction.RollbackGTX(ctx, globalRootContext) - if errGlobal != nil { - errGlobal = fmt.Errorf("->Transaction-->global:recover内globalTransaction事务回滚失败:%w", errGlobal) - FuncLogError(ctx, errGlobal) - } - } - - } - }() - - // 执行业务的事务函数 - info, err = doTransaction(ctx) - - if err != nil { - err = fmt.Errorf("->Transaction-->doTransaction业务执行错误:%w", err) - FuncLogError(ctx, err) - - // 如果禁用了事务 - if getContextBoolValue(ctx, contextDisableTransactionValueKey, dbConnection.config.DisableTransaction) { - return info, err - } - - // 不是开启方回滚事务,有可能造成日志记录不准确,但是回滚最重要了,尽早回滚 - // It is not the start party to roll back the transaction, which may cause inaccurate log records,but rollback is the most important, roll back as soon as possible - errRollback := dbConnection.rollback() - if errRollback != nil { - errRollback = fmt.Errorf("->Transaction-->rollback事务回滚失败:%w", errRollback) - FuncLogError(ctx, 
errRollback) - } - // 任意一个分支事务回滚,分布式事务就整体回滚 - if globalTransaction != nil { - errGlobal = globalTransaction.RollbackGTX(ctx, globalRootContext) - if errGlobal != nil { - errGlobal = fmt.Errorf("->Transaction-->global:Transaction-->rollback globalTransaction事务回滚失败:%w", errGlobal) - FuncLogError(ctx, errGlobal) - } - } - return info, err - } - // 如果是事务开启方,提交事务 - // If it is the transaction opener, commit the transaction - if localTxOpen { - errCommit := dbConnection.commit() - // 本地事务提交成功,如果是全局事务的开启方,提交分布式事务 - if errCommit == nil && globalTxOpen { - errGlobal = globalTransaction.CommitGTX(ctx, globalRootContext) - if errGlobal != nil { - errGlobal = fmt.Errorf("->Transaction-->global:Transaction-->commit globalTransaction 事务提交失败:%w", errGlobal) - FuncLogError(ctx, errGlobal) - } - } - if errCommit != nil { - errCommit = fmt.Errorf("->Transaction-->commit事务提交失败:%w", errCommit) - FuncLogError(ctx, errCommit) - // 任意一个分支事务回滚,分布式事务就整体回滚 - if globalTransaction != nil { - errGlobal = globalTransaction.RollbackGTX(ctx, globalRootContext) - if errGlobal != nil { - errGlobal = fmt.Errorf("->Transaction-->global:Transaction-->commit失败,然后回滚globalTransaction事务也失败:%w", errGlobal) - FuncLogError(ctx, errGlobal) - } - } - - return info, errCommit - } - - } - - return info, err -} - -var errQueryRow = errors.New("->QueryRow查询出多条数据") - -// QueryRow 不要偷懒调用Query返回第一条,问题1.需要构建一个slice,问题2.调用方传递的对象其他值会被抛弃或者覆盖. -// 只查询一个字段,需要使用这个字段的类型进行接收,目前不支持整个struct对象接收 -// 根据Finder和封装为指定的entity类型,entity必须是*struct类型或者基础类型的指针.把查询的数据赋值给entity,所以要求指针类型 -// context必须传入,不能为空 -// 如果数据库是null,基本类型不支持,会返回异常,不做默认值处理,Query因为是列表,会设置为默认值 -// QueryRow Don't be lazy to call Query to return the first one -// Question 1. A selice needs to be constructed, and question 2. 
Other values ​​of the object passed by the caller will be discarded or overwritten -// context must be passed in and cannot be empty -func QueryRow(ctx context.Context, finder *Finder, entity interface{}) (bool, error) { - return queryRow(ctx, finder, entity) -} - -var queryRow = func(ctx context.Context, finder *Finder, entity interface{}) (has bool, err error) { - typeOf, errCheck := checkEntityKind(entity) - if errCheck != nil { - errCheck = fmt.Errorf("->QueryRow-->checkEntityKind类型检查错误:%w", errCheck) - FuncLogError(ctx, errCheck) - return has, errCheck - } - // 从contxt中获取数据库连接,可能为nil - // Get database connection from contxt, may be nil - dbConnection, errFromContxt := getDBConnectionFromContext(ctx) - if errFromContxt != nil { - FuncLogError(ctx, errFromContxt) - return has, errFromContxt - } - // 自己构建的dbConnection - // dbConnection built by yourself - if dbConnection != nil && dbConnection.db == nil { - FuncLogError(ctx, errDBConnection) - return has, errDBConnection - } - - config, errConfig := getConfigFromConnection(ctx, dbConnection, 0) - if errConfig != nil { - FuncLogError(ctx, errConfig) - return has, errConfig - } - dialect := config.Dialect - - // 获取到sql语句 - // Get the sql statement - sqlstr, errSQL := wrapQuerySQL(dialect, finder, nil) - if errSQL != nil { - errSQL = fmt.Errorf("->QueryRow-->wrapQuerySQL获取查询SQL语句错误:%w", errSQL) - FuncLogError(ctx, errSQL) - return has, errSQL - } - - // 检查dbConnection.有可能会创建dbConnection或者开启事务,所以要尽可能的接近执行时检查 - // Check db Connection. 
It is possible to create a db Connection or start a transaction, so check it as close as possible to the execution - var errDbConnection error - ctx, dbConnection, errDbConnection = checkDBConnection(ctx, dbConnection, false, 0) - if errDbConnection != nil { - FuncLogError(ctx, errDbConnection) - return has, errDbConnection - } - - // 根据语句和参数查询 - // Query based on statements and parameters - rows, errQueryContext := dbConnection.queryContext(ctx, &sqlstr, &finder.values) - if errQueryContext != nil { - errQueryContext = fmt.Errorf("->QueryRow-->queryContext查询数据库错误:%w", errQueryContext) - FuncLogError(ctx, errQueryContext) - return has, errQueryContext - } - // 先判断error 再关闭 - defer func() { - // 先判断error 再关闭 - rows.Close() - // 捕获panic,赋值给err,避免程序崩溃 - if r := recover(); r != nil { - has = false - var errOk bool - err, errOk = r.(error) - if errOk { - err = fmt.Errorf("->QueryRow-->recover异常:%w", err) - FuncLogPanic(ctx, err) - } else { - err = fmt.Errorf("->QueryRow-->recover异常:%v", r) - FuncLogPanic(ctx, err) - } - } - }() - - // typeOf := reflect.TypeOf(entity).Elem() - - // 数据库字段类型 - columnTypes, errColumnTypes := rows.ColumnTypes() - if errColumnTypes != nil { - errColumnTypes = fmt.Errorf("->QueryRow-->rows.ColumnTypes数据库类型错误:%w", errColumnTypes) - FuncLogError(ctx, errColumnTypes) - return has, errColumnTypes - } - // 查询的字段长度 - ctLen := len(columnTypes) - // 是否只有一列,而且可以直接赋值 - oneColumnScanner := false - if ctLen < 1 { // 没有返回列 - errColumn0 := errors.New("->QueryRow-->ctLen<1,没有返回列") - FuncLogError(ctx, errColumn0) - return has, errColumn0 - } else if ctLen == 1 { // 如果只查询一个字段 - // 是否是可以直接扫描的类型 - _, oneColumnScanner = entity.(sql.Scanner) - if !oneColumnScanner { - pkgPath := typeOf.PkgPath() - if pkgPath == "" || pkgPath == "time" { // 系统内置变量和time包 - oneColumnScanner = true - } - } - - } - var dbColumnFieldMap map[string]reflect.StructField - var exportFieldMap map[string]reflect.StructField - if !oneColumnScanner { // 如果不是一个直接可以映射的字段,默认为是sturct - // 
获取到类型的字段缓存 - // Get the type field cache - dbColumnFieldMap, exportFieldMap, err = getDBColumnExportFieldMap(&typeOf) - if err != nil { - err = fmt.Errorf("->QueryRow-->getDBColumnFieldMap获取字段缓存错误:%w", err) - return has, err - } - } - - // 反射获取 []driver.Value的值,用于处理nil值和自定义类型 - driverValue := reflect.Indirect(reflect.ValueOf(rows)) - driverValue = driverValue.FieldByName("lastcols") - - // 循环遍历结果集 - // Loop through the result set - for i := 0; rows.Next(); i++ { - has = true - if i > 0 { - FuncLogError(ctx, errQueryRow) - return has, errQueryRow - } - if oneColumnScanner { - err = sqlRowsValues(ctx, dialect, nil, &typeOf, rows, &driverValue, columnTypes, entity, &dbColumnFieldMap, &exportFieldMap) - } else { - pv := reflect.ValueOf(entity) - err = sqlRowsValues(ctx, dialect, &pv, &typeOf, rows, &driverValue, columnTypes, nil, &dbColumnFieldMap, &exportFieldMap) - } - - // pv = pv.Elem() - // scan赋值.是一个指针数组,已经根据struct的属性类型初始化了,sql驱动能感知到参数类型,所以可以直接赋值给struct的指针.这样struct的属性就有值了 - // scan assignment. It is an array of pointers that has been initialized according to the attribute type of the struct,The sql driver can perceive the parameter type,so it can be directly assigned to the pointer of the struct. In this way, the attributes of the struct have values - // scanerr := rows.Scan(values...) 
- if err != nil { - err = fmt.Errorf("->QueryRow-->sqlRowsValues错误:%w", err) - FuncLogError(ctx, err) - return has, err - } - - } - - return has, err -} - -var errQuerySlice = errors.New("->Query数组必须是*[]struct类型或者*[]*struct或者基础类型数组的指针") - -// Query 不要偷懒调用QueryMap,需要处理sql驱动支持的sql.Nullxxx的数据类型,也挺麻烦的 -// 只查询一个字段,需要使用这个字段的类型进行接收,目前不支持整个struct对象接收 -// 根据Finder和封装为指定的entity类型,entity必须是*[]struct类型,已经初始化好的数组,此方法只Append元素,这样调用方就不需要强制类型转换了 -// context必须传入,不能为空.如果想不分页,查询所有数据,page传入nil -// Query: Don't lazily call QueryMap; you need to handle the sql.Nullxxx data types supported by the sql driver, which is also troublesome -// Queries according to the Finder and maps rows into the specified entity type. The entity must be an already initialized *[]struct; this method only appends elements, so the caller does not need a type assertion -// context must be passed in and cannot be empty. To query all data without paging, pass nil for page -var Query = func(ctx context.Context, finder *Finder, rowsSlicePtr interface{}, page *Page) error { - return query(ctx, finder, rowsSlicePtr, page) -} - -var query = func(ctx context.Context, finder *Finder, rowsSlicePtr interface{}, page *Page) (err error) { - if rowsSlicePtr == nil { // 如果为nil - FuncLogError(ctx, errQuerySlice) - return errQuerySlice - } - - pvPtr := reflect.ValueOf(rowsSlicePtr) - if pvPtr.Kind() != reflect.Ptr { // 如果不是指针 - FuncLogError(ctx, errQuerySlice) - return errQuerySlice - } - - sliceValue := reflect.Indirect(pvPtr) - - // 如果不是数组 - // If it is not a slice. 
- if sliceValue.Kind() != reflect.Slice { - FuncLogError(ctx, errQuerySlice) - return errQuerySlice - } - // 获取数组内的元素类型 - // Get the element type in the slice - sliceElementType := sliceValue.Type().Elem() - - // slice数组里是否是指针,实际参数类似 *[]*struct,兼容这种类型 - // Whether the slice elements are pointers; actual parameters like *[]*struct are also supported - sliceElementTypePtr := false - // 如果数组里还是指针类型 - // If the element type is still a pointer - if sliceElementType.Kind() == reflect.Ptr { - sliceElementTypePtr = true - sliceElementType = sliceElementType.Elem() - } - - //如果不是struct - //if !(sliceElementType.Kind() == reflect.Struct || allowBaseTypeMap[sliceElementType.Kind()]) { - // return errors.New("->Query数组必须是*[]struct类型或者*[]*struct或者基础类型数组的指针") - //} - //从context中获取数据库连接,可能为nil - //Get the database connection from the context; it may be nil - dbConnection, errFromContxt := getDBConnectionFromContext(ctx) - if errFromContxt != nil { - FuncLogError(ctx, errFromContxt) - return errFromContxt - } - // 自己构建的dbConnection - // dbConnection built by yourself - if dbConnection != nil && dbConnection.db == nil { - FuncLogError(ctx, errDBConnection) - return errDBConnection - } - config, errConfig := getConfigFromConnection(ctx, dbConnection, 0) - if errConfig != nil { - FuncLogError(ctx, errConfig) - return errConfig - } - dialect := config.Dialect - - sqlstr, errSQL := wrapQuerySQL(dialect, finder, page) - if errSQL != nil { - errSQL = fmt.Errorf("->Query-->wrapQuerySQL获取查询SQL语句错误:%w", errSQL) - FuncLogError(ctx, errSQL) - return errSQL - } - - // 检查dbConnection.有可能会创建dbConnection或者开启事务,所以要尽可能的接近执行时检查 - // Check db Connection. 
It is possible to create a db Connection or start a transaction, so check it as close as possible to the execution - var errDbConnection error - ctx, dbConnection, errDbConnection = checkDBConnection(ctx, dbConnection, false, 0) - if errDbConnection != nil { - FuncLogError(ctx, errDbConnection) - return errDbConnection - } - - // 根据语句和参数查询 - // Query based on statements and parameters - rows, errQueryContext := dbConnection.queryContext(ctx, &sqlstr, &finder.values) - if errQueryContext != nil { - errQueryContext = fmt.Errorf("->Query-->queryContext查询rows错误:%w", errQueryContext) - FuncLogError(ctx, errQueryContext) - return errQueryContext - } - // 先判断error 再关闭 - defer func() { - // 先判断error 再关闭 - rows.Close() - // 捕获panic,赋值给err,避免程序崩溃 - if r := recover(); r != nil { - var errOk bool - err, errOk = r.(error) - if errOk { - err = fmt.Errorf("->Query-->recover异常:%w", err) - FuncLogPanic(ctx, err) - } else { - err = fmt.Errorf("->Query-->recover异常:%v", r) - FuncLogPanic(ctx, err) - } - } - }() - - //_, ok := reflect.New(sliceElementType).Interface().(sql.Scanner) - - // 数据库返回的字段类型 - columnTypes, errColumnTypes := rows.ColumnTypes() - if errColumnTypes != nil { - errColumnTypes = fmt.Errorf("->Query-->rows.ColumnTypes数据库类型错误:%w", errColumnTypes) - FuncLogError(ctx, errColumnTypes) - return errColumnTypes - } - // 查询的字段长度 - ctLen := len(columnTypes) - // 是否只有一列,而且可以直接赋值 - oneColumnScanner := false - if ctLen < 1 { // 没有返回列 - errColumn0 := errors.New("->Query-->ctLen<1,没有返回列") - FuncLogError(ctx, errColumn0) - return errColumn0 - } else if ctLen == 1 { // 如果只查询一个字段 - // 是否是可以直接扫描的类型 - _, oneColumnScanner = reflect.New(sliceElementType).Interface().(sql.Scanner) - if !oneColumnScanner { - pkgPath := sliceElementType.PkgPath() - if pkgPath == "" || pkgPath == "time" { // 系统内置变量和time包 - oneColumnScanner = true - } - } - - } - var dbColumnFieldMap map[string]reflect.StructField - var exportFieldMap map[string]reflect.StructField - if !oneColumnScanner { // 
如果不是一个直接可以映射的字段,默认为是struct - // 获取到类型的字段缓存 - // Get the type field cache - dbColumnFieldMap, exportFieldMap, err = getDBColumnExportFieldMap(&sliceElementType) - if err != nil { - err = fmt.Errorf("->Query-->getDBColumnExportFieldMap获取字段缓存错误:%w", err) - return err - } - } - // 反射获取 []driver.Value的值,用于处理nil值和自定义类型 - // Get the []driver.Value values via reflection, used to handle nil values and custom types - driverValue := reflect.Indirect(reflect.ValueOf(rows)) - driverValue = driverValue.FieldByName("lastcols") - // TODO 在这里确定字段直接接收或者struct反射,sqlRowsValues 就不再额外处理了,直接映射数据,提升性能 - // TODO Decide here whether a field is received directly or via struct reflection, so that sqlRowsValues needs no extra handling and maps the data directly, improving performance - // 循环遍历结果集 - // Loop through the result set - for rows.Next() { - pv := reflect.New(sliceElementType) - if oneColumnScanner { - err = sqlRowsValues(ctx, dialect, nil, &sliceElementType, rows, &driverValue, columnTypes, pv.Interface(), &dbColumnFieldMap, &exportFieldMap) - } else { - err = sqlRowsValues(ctx, dialect, &pv, &sliceElementType, rows, &driverValue, columnTypes, nil, &dbColumnFieldMap, &exportFieldMap) - } - - // err = sqlRowsValues(ctx, dialect, &pv, rows, &driverValue, columnTypes, oneColumnScanner, structType, &dbColumnFieldMap, &exportFieldMap) - pv = pv.Elem() - // scan赋值.是一个指针数组,已经根据struct的属性类型初始化了,sql驱动能感知到参数类型,所以可以直接赋值给struct的指针.这样struct的属性就有值了 - // scan assignment. values is an array of pointers initialized according to the struct's field types; the sql driver recognizes the parameter types, so the values can be assigned directly to the struct's field pointers, giving the struct fields their values - // scanerr := rows.Scan(values...)
- if err != nil { - err = fmt.Errorf("->Query-->sqlRowsValues错误:%w", err) - FuncLogError(ctx, err) - return err - } - - // values[i] = f.Addr().Interface() - // 通过反射给slice添加元素 - // Add elements to slice through reflection - if sliceElementTypePtr { // 如果数组里是指针地址,*[]*struct - sliceValue.Set(reflect.Append(sliceValue, pv.Addr())) - } else { - sliceValue.Set(reflect.Append(sliceValue, pv)) - } - - } - - // 查询总条数 - // Query total number - if finder.SelectTotalCount && page != nil { - count, errCount := selectCount(ctx, finder) - if errCount != nil { - errCount = fmt.Errorf("->Query-->selectCount查询总条数错误:%w", errCount) - FuncLogError(ctx, errCount) - return errCount - } - page.setTotalCount(count) - } - - return nil -} - -var ( - errQueryRowMapFinder = errors.New("->QueryRowMap-->finder参数不能为nil") - errQueryRowMapMany = errors.New("->QueryRowMap查询出多条数据") -) - -// QueryRowMap 根据Finder查询,封装Map -// context必须传入,不能为空 -// QueryRowMap encapsulates Map according to Finder query -// context must be passed in and cannot be empty -func QueryRowMap(ctx context.Context, finder *Finder) (map[string]interface{}, error) { - return queryRowMap(ctx, finder) -} - -var queryRowMap = func(ctx context.Context, finder *Finder) (map[string]interface{}, error) { - if finder == nil { - FuncLogError(ctx, errQueryRowMapFinder) - return nil, errQueryRowMapFinder - } - resultMapList, errList := QueryMap(ctx, finder, nil) - if errList != nil { - errList = fmt.Errorf("->QueryRowMap-->QueryMap查询错误:%w", errList) - FuncLogError(ctx, errList) - return nil, errList - } - if resultMapList == nil { - return nil, nil - } - if len(resultMapList) > 1 { - FuncLogError(ctx, errQueryRowMapMany) - return resultMapList[0], errQueryRowMapMany - } else if len(resultMapList) == 0 { // 数据库不存在值 - return nil, nil - } - return resultMapList[0], nil -} - -var errQueryMapFinder = errors.New("->QueryMap-->finder参数不能为nil") - -// QueryMap 根据Finder查询,封装Map数组 -// 
根据数据库字段的类型,完成从[]byte到Go类型的映射,理论上其他查询方法都可以调用此方法,但是需要处理sql.Nullxxx等驱动支持的类型 -// context必须传入,不能为空 -// QueryMap According to Finder query, encapsulate Map array -// According to the type of database field, the mapping from []byte to Go type is completed. In theory,other query methods can call this method, but need to deal with types supported by drivers such as sql.Nullxxx -// context must be passed in and cannot be empty -func QueryMap(ctx context.Context, finder *Finder, page *Page) ([]map[string]interface{}, error) { - return queryMap(ctx, finder, page) -} - -var queryMap = func(ctx context.Context, finder *Finder, page *Page) (resultMapList []map[string]interface{}, err error) { - if finder == nil { - FuncLogError(ctx, errQueryMapFinder) - return nil, errQueryMapFinder - } - // 从contxt中获取数据库连接,可能为nil - // Get database connection from contxt, may be nil - dbConnection, errFromContxt := getDBConnectionFromContext(ctx) - if errFromContxt != nil { - FuncLogError(ctx, errFromContxt) - return nil, errFromContxt - } - // 自己构建的dbConnection - // dbConnection built by yourself - if dbConnection != nil && dbConnection.db == nil { - FuncLogError(ctx, errDBConnection) - return nil, errDBConnection - } - - config, errConfig := getConfigFromConnection(ctx, dbConnection, 0) - if errConfig != nil { - FuncLogError(ctx, errConfig) - return nil, errConfig - } - dialect := config.Dialect - sqlstr, errSQL := wrapQuerySQL(dialect, finder, page) - if errSQL != nil { - errSQL = fmt.Errorf("->QueryMap -->wrapQuerySQL查询SQL语句错误:%w", errSQL) - FuncLogError(ctx, errSQL) - return nil, errSQL - } - - // 检查dbConnection.有可能会创建dbConnection或者开启事务,所以要尽可能的接近执行时检查 - // Check db Connection. 
It is possible to create a db Connection or start a transaction, so check it as close as possible to the execution - var errDbConnection error - ctx, dbConnection, errDbConnection = checkDBConnection(ctx, dbConnection, false, 0) - if errDbConnection != nil { - return nil, errDbConnection - } - - // 根据语句和参数查询 - // Query based on statements and parameters - rows, errQueryContext := dbConnection.queryContext(ctx, &sqlstr, &finder.values) - if errQueryContext != nil { - errQueryContext = fmt.Errorf("->QueryMap-->queryContext查询rows错误:%w", errQueryContext) - FuncLogError(ctx, errQueryContext) - return nil, errQueryContext - } - // 先判断error 再关闭 - // Check the error first, then close - defer func() { - // 先判断error 再关闭 - // Check the error first, then close - rows.Close() - // 捕获panic,赋值给err,避免程序崩溃 - // Catch the panic and assign it to err to avoid crashing the program - if r := recover(); r != nil { - var errOk bool - err, errOk = r.(error) - if errOk { - err = fmt.Errorf("->QueryMap-->recover异常:%w", err) - FuncLogPanic(ctx, err) - } else { - err = fmt.Errorf("->QueryMap-->recover异常:%v", r) - FuncLogPanic(ctx, err) - } - } - }() - - // 数据库返回的列类型 - // The types returned by columnType.ScanType are all []byte; use columnType.DatabaseTypeName to judge them one by one - columnTypes, errColumnTypes := rows.ColumnTypes() - if errColumnTypes != nil { - errColumnTypes = fmt.Errorf("->QueryMap-->rows.ColumnTypes数据库返回列名错误:%w", errColumnTypes) - FuncLogError(ctx, errColumnTypes) - return nil, errColumnTypes - } - // 反射获取 []driver.Value的值 - // Get the []driver.Value values via reflection - driverValue := reflect.Indirect(reflect.ValueOf(rows)) - driverValue = driverValue.FieldByName("lastcols") - resultMapList = make([]map[string]interface{}, 0) - columnTypeLen := len(columnTypes) - // 循环遍历结果集 - // Loop through the result set - for rows.Next() { - // 接收数据库返回的数据,需要使用指针接收 - // To receive the data returned by the database, you need to receive it with pointers - values := make([]interface{}, columnTypeLen) - // 使用指针类型接收字段值,需要使用interface{}包装一下 - // To receive a field value with a pointer type, you need to wrap it with interface{} - result := make(map[string]interface{}) - - // 
记录需要类型转换的字段信息 - var fieldTempDriverValueMap map[int]*driverValueInfo - if iscdvm { - fieldTempDriverValueMap = make(map[int]*driverValueInfo) - } - - // 给数据赋值初始化变量 - // Initialize variables by assigning values ​​to data - for i, columnType := range columnTypes { - dv := driverValue.Index(i) - if dv.IsValid() && dv.InterfaceData()[0] == 0 { // 该字段的数据库值是null,不再处理,使用默认值 - values[i] = new(interface{}) - continue - } - // 类型转换的接口实现 - var customDriverValueConver ICustomDriverValueConver - // 是否需要类型转换 - var converOK bool = false - // 类型转换的临时值 - var tempDriverValue driver.Value - // 根据接收的类型,获取到类型转换的接口实现,优先匹配指定的数据库类型 - databaseTypeName := strings.ToUpper(columnType.DatabaseTypeName()) - // 判断是否有自定义扩展,避免无意义的反射 - if iscdvm { - customDriverValueConver, converOK = customDriverValueMap[dialect+"."+databaseTypeName] - if !converOK { - customDriverValueConver, converOK = customDriverValueMap[databaseTypeName] - } - } - var errGetDriverValue error - // 如果需要类型转换 - if converOK { - // 获取需要转的临时值 - tempDriverValue, errGetDriverValue = customDriverValueConver.GetDriverValue(ctx, columnType, nil) - if errGetDriverValue != nil { - errGetDriverValue = fmt.Errorf("->QueryMap-->customDriverValueConver.GetDriverValue错误:%w", errGetDriverValue) - FuncLogError(ctx, errGetDriverValue) - return nil, errGetDriverValue - } - // 返回值为nil,不做任何处理,使用原始逻辑 - if tempDriverValue == nil { - values[i] = new(interface{}) - } else { // 如果需要类型转换 - values[i] = tempDriverValue - dvinfo := driverValueInfo{} - dvinfo.customDriverValueConver = customDriverValueConver - dvinfo.columnType = columnType - dvinfo.tempDriverValue = tempDriverValue - fieldTempDriverValueMap[i] = &dvinfo - } - - continue - } - - switch databaseTypeName { - - case "CHAR", "NCHAR", "VARCHAR", "NVARCHAR", "VARCHAR2", "NVARCHAR2", "TINYTEXT", "MEDIUMTEXT", "TEXT", "NTEXT", "LONGTEXT", "LONG", "CHARACTER", "MEMO": - values[i] = new(string) - case "INT", "INT4", "INTEGER", "SERIAL", "SERIAL4", "SERIAL2", "TINYINT", "MEDIUMINT", "SMALLINT", 
"SMALLSERIAL", "INT2", "VARBIT", "AUTONUMBER": - values[i] = new(int) - case "BIGINT", "BIGSERIAL", "INT8", "SERIAL8": - values[i] = new(int64) - case "FLOAT", "REAL", "FLOAT4", "SINGLE": - values[i] = new(float32) - case "DOUBLE", "FLOAT8": - values[i] = new(float64) - case "DATE", "TIME", "DATETIME", "TIMESTAMP", "TIMESTAMPTZ", "TIMETZ", "INTERVAL", "DATETIME2", "SMALLDATETIME", "DATETIMEOFFSET": - values[i] = new(time.Time) - case "NUMBER": - precision, scale, isDecimal := columnType.DecimalSize() - if isDecimal || precision > 18 || precision-scale > 18 { // 如果是Decimal类型 - values[i] = FuncDecimalValue(ctx, dialect) - } else if scale > 0 { // 有小数位,默认使用float64接收 - values[i] = new(float64) - } else if precision-scale > 9 { // 超过9位,使用int64 - values[i] = new(int64) - } else { // 默认使用int接收 - values[i] = new(int) - } - - case "DECIMAL", "NUMERIC", "DEC": - values[i] = FuncDecimalValue(ctx, dialect) - case "BOOLEAN", "BOOL", "BIT": - values[i] = new(bool) - default: - // 不需要类型转换,正常赋值 - values[i] = new(interface{}) - } - } - // scan赋值 - // scan assignment - errScan := rows.Scan(values...) - if errScan != nil { - errScan = fmt.Errorf("->QueryMap-->rows.Scan错误:%w", errScan) - FuncLogError(ctx, errScan) - return nil, errScan - } - - // 循环 需要类型转换的字段,把临时值赋值给实际的接收对象 - for i, driverValueInfo := range fieldTempDriverValueMap { - // driverValueInfo := *driverValueInfoPtr - // 根据列名,字段类型,新值 返回符合接收类型值的指针,返回值是个指针,指针,指针!!!! 
- rightValue, errConverDriverValue := driverValueInfo.customDriverValueConver.ConverDriverValue(ctx, driverValueInfo.columnType, driverValueInfo.tempDriverValue, nil) - if errConverDriverValue != nil { - errConverDriverValue = fmt.Errorf("->QueryMap-->customDriverValueConver.ConverDriverValue错误:%w", errConverDriverValue) - FuncLogError(ctx, errConverDriverValue) - return nil, errConverDriverValue - } - // result[driverValueInfo.columnType.Name()] = reflect.ValueOf(rightValue).Elem().Interface() - values[i] = rightValue - } - - // 获取每一列的值 - // Get the value of each column - for i, columnType := range columnTypes { - - // 取到指针下的值,[]byte格式 - // Get the value under the pointer, []byte format - // v := *(values[i].(*interface{})) - v := reflect.ValueOf(values[i]).Elem().Interface() - // 从[]byte转化成实际的类型值,例如string,int - // Convert from []byte to actual type value, such as string, int - // v = converValueColumnType(v, columnType) - // 赋值到Map - // Assign to Map - result[columnType.Name()] = v - - } - - // 添加Map到数组 - // Add Map to the array - resultMapList = append(resultMapList, result) - - } - - // 查询总条数 - // Query total number - if finder.SelectTotalCount && page != nil { - count, errCount := selectCount(ctx, finder) - if errCount != nil { - errCount = fmt.Errorf("->QueryMap-->selectCount查询总条数错误:%w", errCount) - FuncLogError(ctx, errCount) - return resultMapList, errCount - } - page.setTotalCount(count) - } - - return resultMapList, nil -} - -// UpdateFinder 更新Finder语句 -// ctx不能为nil,参照使用zorm.Transaction方法传入ctx.也不要自己构建DBConnection -// affected影响的行数,如果异常或者驱动不支持,返回-1 -// UpdateFinder Update Finder statement -// ctx cannot be nil, refer to zorm.Transaction method to pass in ctx. 
Don't build DB Connection yourself -// affected is the number of rows affected; returns -1 on exception or if the driver does not support it -func UpdateFinder(ctx context.Context, finder *Finder) (int, error) { - return updateFinder(ctx, finder) -} - -var updateFinder = func(ctx context.Context, finder *Finder) (int, error) { - affected := -1 - if finder == nil { - return affected, errors.New("->UpdateFinder-->finder不能为空") - } - sqlstr, err := finder.GetSQL() - if err != nil { - err = fmt.Errorf("->UpdateFinder-->finder.GetSQL()错误:%w", err) - FuncLogError(ctx, err) - return affected, err - } - - // 包装update执行,赋值给影响的函数指针变量,返回*sql.Result - // Wrap the update execution, assign the affected row count through the pointer variable, and return *sql.Result - _, errexec := wrapExecUpdateValuesAffected(ctx, &affected, &sqlstr, finder.values, nil) - if errexec != nil { - errexec = fmt.Errorf("->UpdateFinder-->wrapExecUpdateValuesAffected执行更新错误:%w", errexec) - FuncLogError(ctx, errexec) - } - - return affected, errexec -} - -// Insert 保存Struct对象,必须是IEntityStruct类型 -// ctx不能为nil,参照使用zorm.Transaction方法传入ctx.也不要自己构建DBConnection -// affected影响的行数,如果异常或者驱动不支持,返回-1 -// Insert saves the Struct object, which must be of type IEntityStruct -// ctx cannot be nil, refer to zorm.Transaction method to pass in ctx. 
Don't build dbConnection yourself -// The number of rows affected by affected, if it is abnormal or the driver does not support it, return -1 -func Insert(ctx context.Context, entity IEntityStruct) (int, error) { - return insert(ctx, entity) -} - -var insert = func(ctx context.Context, entity IEntityStruct) (int, error) { - affected := -1 - if entity == nil { - return affected, errors.New("->Insert-->entity对象不能为空") - } - typeOf, columns, values, columnAndValueErr := columnAndValue(entity) - if columnAndValueErr != nil { - columnAndValueErr = fmt.Errorf("->Insert-->columnAndValue获取实体类的列和值错误:%w", columnAndValueErr) - FuncLogError(ctx, columnAndValueErr) - return affected, columnAndValueErr - } - if len(columns) < 1 { - return affected, errors.New("->Insert-->没有tag信息,请检查struct中 column 的tag") - } - // 从contxt中获取数据库连接,可能为nil - // Get database connection from contxt, may be nil - dbConnection, errFromContxt := getDBConnectionFromContext(ctx) - if errFromContxt != nil { - return affected, errFromContxt - } - // 自己构建的dbConnection - // dbConnection built by yourself - if dbConnection != nil && dbConnection.db == nil { - return affected, errDBConnection - } - - // SQL语句 - // SQL statement - sqlstr, autoIncrement, pktype, err := wrapInsertSQL(ctx, &typeOf, entity, &columns, &values) - if err != nil { - err = fmt.Errorf("->Insert-->wrapInsertSQL获取保存语句错误:%w", err) - FuncLogError(ctx, err) - return affected, err - } - - // oracle 12c+ 支持IDENTITY属性的自增列,因为分页也要求12c+的语法,所以数据库就IDENTITY创建自增吧 - // 处理序列产生的自增主键,例如oracle,postgresql等 - var lastInsertID, zormSQLOutReturningID *int64 - // var zormSQLOutReturningID *int64 - // 如果是postgresql的SERIAL自增,需要使用 RETURNING 返回主键的值 - if autoIncrement > 0 { - config, errConfig := getConfigFromConnection(ctx, dbConnection, 1) - if errConfig != nil { - return affected, errConfig - } - dialect := config.Dialect - lastInsertID, zormSQLOutReturningID = wrapAutoIncrementInsertSQL(entity.GetPKColumnName(), &sqlstr, dialect, &values) - - } - - // 
包装update执行,赋值给影响的函数指针变量,返回*sql.Result - res, errexec := wrapExecUpdateValuesAffected(ctx, &affected, &sqlstr, values, lastInsertID) - if errexec != nil { - errexec = fmt.Errorf("->Insert-->wrapExecUpdateValuesAffected执行保存错误:%w", errexec) - FuncLogError(ctx, errexec) - return affected, errexec - } - - // 如果是自增主键 - // If it is an auto-incrementing primary key - if autoIncrement > 0 { - // 如果是oracle,shentong 的返回自增主键 - if lastInsertID == nil && zormSQLOutReturningID != nil { - lastInsertID = zormSQLOutReturningID - } - - var autoIncrementIDInt64 int64 - var err error - if lastInsertID != nil { - autoIncrementIDInt64 = *lastInsertID - } else { - // 需要数据库支持,获取自增主键 - // Need database support, get auto-incrementing primary key - autoIncrementIDInt64, err = (*res).LastInsertId() - } - - // 数据库不支持自增主键,不再赋值给struct属性 - // The database does not support self-incrementing primary keys, and no longer assigns values ​​to struct attributes - if err != nil { - err = fmt.Errorf("->Insert-->LastInsertId数据库不支持自增主键,不再赋值给struct属性:%w", err) - FuncLogError(ctx, err) - return affected, nil - } - pkName := entity.GetPKColumnName() - if pktype == "int" { - // int64 转 int - // int64 to int - autoIncrementIDInt, _ := typeConvertInt64toInt(autoIncrementIDInt64) - // 设置自增主键的值 - // Set the value of the auto-incrementing primary key - err = setFieldValueByColumnName(entity, pkName, autoIncrementIDInt) - } else if pktype == "int64" { - // 设置自增主键的值 - // Set the value of the auto-incrementing primary key - err = setFieldValueByColumnName(entity, pkName, autoIncrementIDInt64) - } - - if err != nil { - err = fmt.Errorf("->Insert-->setFieldValueByColumnName反射赋值数据库返回的自增主键错误:%w", err) - FuncLogError(ctx, err) - return affected, err - } - } - - return affected, nil -} - -// InsertSlice 批量保存Struct Slice 数组对象,必须是[]IEntityStruct类型,使用IEntityStruct接口,兼容Struct实体类 -// 如果是自增主键,无法对Struct对象里的主键属性赋值 -// ctx不能为nil,参照使用zorm.Transaction方法传入ctx.也不要自己构建DBConnection -// affected影响的行数,如果异常或者驱动不支持,返回-1 -func InsertSlice(ctx 
context.Context, entityStructSlice []IEntityStruct) (int, error) { - return insertSlice(ctx, entityStructSlice) -} - -var insertSlice = func(ctx context.Context, entityStructSlice []IEntityStruct) (int, error) { - affected := -1 - if entityStructSlice == nil || len(entityStructSlice) < 1 { - return affected, errors.New("->InsertSlice-->entityStructSlice对象数组不能为空") - } - // 第一个对象,获取第一个Struct对象,用于获取数据库字段,也获取了值 - entity := entityStructSlice[0] - typeOf, columns, values, columnAndValueErr := columnAndValue(entity) - if columnAndValueErr != nil { - columnAndValueErr = fmt.Errorf("->InsertSlice-->columnAndValue获取实体类的列和值错误:%w", columnAndValueErr) - FuncLogError(ctx, columnAndValueErr) - return affected, columnAndValueErr - } - if len(columns) < 1 { - return affected, errors.New("->InsertSlice-->columns没有tag信息,请检查struct中 column 的tag") - } - // 从contxt中获取数据库连接,可能为nil - dbConnection, errFromContxt := getDBConnectionFromContext(ctx) - if errFromContxt != nil { - return affected, errFromContxt - } - // 自己构建的dbConnection - if dbConnection != nil && dbConnection.db == nil { - return affected, errDBConnection - } - config, errConfig := getConfigFromConnection(ctx, dbConnection, 1) - if errConfig != nil { - return affected, errConfig - } - // SQL语句 - sqlstr, _, err := wrapInsertSliceSQL(ctx, config, &typeOf, entityStructSlice, &columns, &values) - if err != nil { - err = fmt.Errorf("->InsertSlice-->wrapInsertSliceSQL获取保存语句错误:%w", err) - FuncLogError(ctx, err) - return affected, err - } - // 包装update执行,赋值给影响的函数指针变量,返回*sql.Result - _, errexec := wrapExecUpdateValuesAffected(ctx, &affected, &sqlstr, values, nil) - if errexec != nil { - errexec = fmt.Errorf("->InsertSlice-->wrapExecUpdateValuesAffected执行保存错误:%w", errexec) - FuncLogError(ctx, errexec) - } - - return affected, errexec -} - -// Update 更新struct所有属性,必须是IEntityStruct类型 -// ctx不能为nil,参照使用zorm.Transaction方法传入ctx.也不要自己构建DBConnection -func Update(ctx context.Context, entity IEntityStruct) (int, error) { - return update(ctx, 
entity) -} - -var update = func(ctx context.Context, entity IEntityStruct) (int, error) { - finder, err := WrapUpdateStructFinder(ctx, entity, false) - if err != nil { - err = fmt.Errorf("->Update-->WrapUpdateStructFinder包装Finder错误:%w", err) - FuncLogError(ctx, err) - return 0, err - } - return UpdateFinder(ctx, finder) -} - -// UpdateNotZeroValue 更新struct不为默认零值的属性,必须是IEntityStruct类型,主键必须有值 -// ctx不能为nil,参照使用zorm.Transaction方法传入ctx.也不要自己构建DBConnection -func UpdateNotZeroValue(ctx context.Context, entity IEntityStruct) (int, error) { - return updateNotZeroValue(ctx, entity) -} - -var updateNotZeroValue = func(ctx context.Context, entity IEntityStruct) (int, error) { - finder, err := WrapUpdateStructFinder(ctx, entity, true) - if err != nil { - err = fmt.Errorf("->UpdateNotZeroValue-->WrapUpdateStructFinder包装Finder错误:%w", err) - FuncLogError(ctx, err) - return 0, err - } - return UpdateFinder(ctx, finder) -} - -// Delete 根据主键删除一个对象.必须是IEntityStruct类型 -// ctx不能为nil,参照使用zorm.Transaction方法传入ctx.也不要自己构建DBConnection -// affected影响的行数,如果异常或者驱动不支持,返回-1 -func Delete(ctx context.Context, entity IEntityStruct) (int, error) { - return delete(ctx, entity) -} - -var delete = func(ctx context.Context, entity IEntityStruct) (int, error) { - affected := -1 - typeOf, checkerr := checkEntityKind(entity) - if checkerr != nil { - return affected, checkerr - } - - pkName, pkNameErr := entityPKFieldName(entity, &typeOf) - - if pkNameErr != nil { - pkNameErr = fmt.Errorf("->Delete-->entityPKFieldName获取主键名称错误:%w", pkNameErr) - FuncLogError(ctx, pkNameErr) - return affected, pkNameErr - } - - value, e := structFieldValue(entity, pkName) - if e != nil { - e = fmt.Errorf("->Delete-->structFieldValue获取主键值错误:%w", e) - FuncLogError(ctx, e) - return affected, e - } - - // SQL语句 - sqlstr, err := wrapDeleteSQL(entity) - if err != nil { - err = fmt.Errorf("->Delete-->wrapDeleteSQL获取SQL语句错误:%w", err) - FuncLogError(ctx, err) - return affected, err - } - // 包装update执行,赋值给影响的函数指针变量,返回*sql.Result - 
values := make([]interface{}, 1) - values[0] = value - _, errexec := wrapExecUpdateValuesAffected(ctx, &affected, &sqlstr, values, nil) - if errexec != nil { - errexec = fmt.Errorf("->Delete-->wrapExecUpdateValuesAffected执行删除错误:%w", errexec) - FuncLogError(ctx, errexec) - } - - return affected, errexec -} - -// InsertEntityMap 保存*IEntityMap对象.使用Map保存数据,用于不方便使用struct的场景,如果主键是自增或者序列,不要entityMap.Set主键的值 -// ctx不能为nil,参照使用zorm.Transaction方法传入ctx.也不要自己构建DBConnection -// affected影响的行数,如果异常或者驱动不支持,返回-1 -func InsertEntityMap(ctx context.Context, entity IEntityMap) (int, error) { - return insertEntityMap(ctx, entity) -} - -var insertEntityMap = func(ctx context.Context, entity IEntityMap) (int, error) { - affected := -1 - // 检查是否是指针对象 - _, checkerr := checkEntityKind(entity) - if checkerr != nil { - return affected, checkerr - } - - // 从contxt中获取数据库连接,可能为nil - dbConnection, errFromContxt := getDBConnectionFromContext(ctx) - if errFromContxt != nil { - return affected, errFromContxt - } - - // 自己构建的dbConnection - if dbConnection != nil && dbConnection.db == nil { - return affected, errDBConnection - } - - // SQL语句 - sqlstr, values, autoIncrement, err := wrapInsertEntityMapSQL(entity) - if err != nil { - err = fmt.Errorf("->InsertEntityMap-->wrapInsertEntityMapSQL获取SQL语句错误:%w", err) - FuncLogError(ctx, err) - return affected, err - } - - // 处理序列产生的自增主键,例如oracle,postgresql等 - var lastInsertID, zormSQLOutReturningID *int64 - // 如果是postgresql的SERIAL自增,需要使用 RETURNING 返回主键的值 - if autoIncrement && entity.GetPKColumnName() != "" { - config, errConfig := getConfigFromConnection(ctx, dbConnection, 1) - if errConfig != nil { - return affected, errConfig - } - dialect := config.Dialect - lastInsertID, zormSQLOutReturningID = wrapAutoIncrementInsertSQL(entity.GetPKColumnName(), &sqlstr, dialect, &values) - } - - // 包装update执行,赋值给影响的函数指针变量,返回*sql.Result - res, errexec := wrapExecUpdateValuesAffected(ctx, &affected, &sqlstr, values, lastInsertID) - if errexec != nil { - errexec = 
fmt.Errorf("->InsertEntityMap-->wrapExecUpdateValuesAffected执行保存错误:%w", errexec) - FuncLogError(ctx, errexec) - return affected, errexec - } - - // 如果是自增主键 - if autoIncrement { - // 如果是oracle,shentong 的返回自增主键 - if lastInsertID == nil && zormSQLOutReturningID != nil { - lastInsertID = zormSQLOutReturningID - } - - var autoIncrementIDInt64 int64 - var e error - if lastInsertID != nil { - autoIncrementIDInt64 = *lastInsertID - } else { - // 需要数据库支持,获取自增主键 - // Need database support, get auto-incrementing primary key - autoIncrementIDInt64, e = (*res).LastInsertId() - } - if e != nil { // 数据库不支持自增主键,不再赋值给struct属性 - e = fmt.Errorf("->InsertEntityMap数据库不支持自增主键,不再赋值给IEntityMap:%w", e) - FuncLogError(ctx, e) - return affected, nil - } - // int64 转 int - strInt64 := strconv.FormatInt(autoIncrementIDInt64, 10) - autoIncrementIDInt, _ := strconv.Atoi(strInt64) - // 设置自增主键的值 - entity.Set(entity.GetPKColumnName(), autoIncrementIDInt) - } - - return affected, nil -} - -// InsertEntityMapSlice 保存[]IEntityMap对象.使用Map保存数据,用于不方便使用struct的场景,如果主键是自增或者序列,不要entityMap.Set主键的值 -// ctx不能为nil,参照使用zorm.Transaction方法传入ctx.也不要自己构建DBConnection -// affected影响的行数,如果异常或者驱动不支持,返回-1 -func InsertEntityMapSlice(ctx context.Context, entityMapSlice []IEntityMap) (int, error) { - return insertEntityMapSlice(ctx, entityMapSlice) -} - -var insertEntityMapSlice = func(ctx context.Context, entityMapSlice []IEntityMap) (int, error) { - affected := -1 - // 从contxt中获取数据库连接,可能为nil - dbConnection, errFromContxt := getDBConnectionFromContext(ctx) - if errFromContxt != nil { - return affected, errFromContxt - } - // 自己构建的dbConnection - if dbConnection != nil && dbConnection.db == nil { - return affected, errDBConnection - } - config, errConfig := getConfigFromConnection(ctx, dbConnection, 1) - if errConfig != nil { - return affected, errConfig - } - // SQL语句 - sqlstr, values, err := wrapInsertEntityMapSliceSQL(ctx, config, entityMapSlice) - if err != nil { - err = 
fmt.Errorf("->InsertEntityMapSlice-->wrapInsertEntityMapSliceSQL获取SQL语句错误:%w", err) - FuncLogError(ctx, err) - return affected, err - } - - // 包装update执行,赋值给影响的函数指针变量,返回*sql.Result - _, errexec := wrapExecUpdateValuesAffected(ctx, &affected, &sqlstr, values, nil) - if errexec != nil { - errexec = fmt.Errorf("->InsertEntityMapSlice-->wrapExecUpdateValuesAffected执行保存错误:%w", errexec) - FuncLogError(ctx, errexec) - return affected, errexec - } - return affected, errexec -} - -// UpdateEntityMap 更新IEntityMap对象.用于不方便使用struct的场景,主键必须有值 -// ctx不能为nil,参照使用zorm.Transaction方法传入ctx.也不要自己构建DBConnection -// affected影响的行数,如果异常或者驱动不支持,返回-1 -// UpdateEntityMap Update IEntityMap object. Used in scenarios where struct is not convenient, the primary key must have a value -// ctx cannot be nil, refer to zorm.Transaction method to pass in ctx. Don't build DB Connection yourself -// The number of rows affected by "affected", if it is abnormal or the driver does not support it, return -1 -func UpdateEntityMap(ctx context.Context, entity IEntityMap) (int, error) { - return updateEntityMap(ctx, entity) -} - -var updateEntityMap = func(ctx context.Context, entity IEntityMap) (int, error) { - affected := -1 - // 检查是否是指针对象 - // Check if it is a pointer - _, checkerr := checkEntityKind(entity) - if checkerr != nil { - return affected, checkerr - } - // 从contxt中获取数据库连接,可能为nil - // Get database connection from contxt, it may be nil - dbConnection, errFromContxt := getDBConnectionFromContext(ctx) - if errFromContxt != nil { - return affected, errFromContxt - } - // 自己构建的dbConnection - // dbConnection built by yourself - if dbConnection != nil && dbConnection.db == nil { - return affected, errDBConnection - } - - // SQL语句 - // SQL statement - sqlstr, values, err := wrapUpdateEntityMapSQL(entity) - if err != nil { - err = fmt.Errorf("->UpdateEntityMap-->wrapUpdateEntityMapSQL获取SQL语句错误:%w", err) - FuncLogError(ctx, err) - return affected, err - } - // 包装update执行,赋值给影响的函数指针变量,返回*sql.Result - _, 
errexec := wrapExecUpdateValuesAffected(ctx, &affected, &sqlstr, values, nil) - if errexec != nil { - errexec = fmt.Errorf("->UpdateEntityMap-->wrapExecUpdateValuesAffected执行更新错误:%w", errexec) - FuncLogError(ctx, errexec) - } - - return affected, errexec -} - -// IsInTransaction 检查ctx是否包含事务 -func IsInTransaction(ctx context.Context) (bool, error) { - dbConnection, err := getDBConnectionFromContext(ctx) - if err != nil { - return false, err - } - if dbConnection != nil && dbConnection.tx != nil { - return true, err - } - return false, err -} - -// WrapUpdateStructFinder 返回更新IEntityStruct的Finder对象 -// ctx不能为nil,参照使用zorm.Transaction方法传入ctx.也不要自己构建DBConnection -// Finder为更新执行的Finder,更新语句统一使用Finder执行 -// updateStructFunc Update object -// ctx cannot be nil, refer to zorm.Transaction method to pass in ctx. Don't build DB Connection yourself -// Finder is the Finder that executes the update, and the update statement is executed uniformly using the Finder -func WrapUpdateStructFinder(ctx context.Context, entity IEntityStruct, onlyUpdateNotZero bool) (*Finder, error) { - // affected := -1 - if entity == nil { - return nil, errors.New("->WrapUpdateStructFinder-->entity对象不能为空") - } - - typeOf, columns, values, columnAndValueErr := columnAndValue(entity) - if columnAndValueErr != nil { - return nil, columnAndValueErr - } - - // SQL语句 - // SQL statement - sqlstr, err := wrapUpdateSQL(&typeOf, entity, &columns, &values, onlyUpdateNotZero) - if err != nil { - return nil, err - } - // finder对象 - finder := NewFinder() - finder.sqlstr = sqlstr - finder.sqlBuilder.WriteString(sqlstr) - finder.values = values - return finder, nil -} - -// selectCount 根据finder查询总条数 -// context必须传入,不能为空 -// selectCount Query the total number of items according to finder -// context must be passed in and cannot be empty -func selectCount(ctx context.Context, finder *Finder) (int, error) { - if finder == nil { - return -1, errors.New("->selectCount-->finder参数为nil") - } - // 自定义的查询总条数Finder,主要是为了在group 
by等复杂情况下,为了性能,手动编写总条数语句 - // Customized query total number Finder,mainly for the sake of performance in complex situations such as group by, manually write the total number of statements - if finder.CountFinder != nil { - count := -1 - _, err := QueryRow(ctx, finder.CountFinder, &count) - if err != nil { - return -1, err - } - return count, nil - } - - countsql, counterr := finder.GetSQL() - if counterr != nil { - return -1, counterr - } - - // 查询order by 的位置 - // Query the position of order by - locOrderBy := findOrderByIndex(&countsql) - // 如果存在order by - // If there is order by - if len(locOrderBy) > 0 { - countsql = countsql[:locOrderBy[0]] - } - s := strings.ToLower(countsql) - gbi := -1 - locGroupBy := findGroupByIndex(&countsql) - if len(locGroupBy) > 0 { - gbi = locGroupBy[0] - } - var sqlBuilder strings.Builder - sqlBuilder.Grow(stringBuilderGrowLen) - // 特殊关键字,包装SQL - // Special keywords, wrap SQL - if strings.Contains(s, " distinct ") || strings.Contains(s, " union ") || gbi > -1 { - // countsql = "SELECT COUNT(*) frame_row_count FROM (" + countsql + ") temp_frame_noob_table_name WHERE 1=1 " - sqlBuilder.WriteString("SELECT COUNT(*) frame_row_count FROM (") - sqlBuilder.WriteString(countsql) - sqlBuilder.WriteString(") temp_frame_noob_table_name WHERE 1=1 ") - } else { - locFrom := findSelectFromIndex(&countsql) - // 没有找到FROM关键字,认为是异常语句 - // The FROM keyword was not found, which is considered an abnormal statement - if len(locFrom) == 0 { - return -1, errors.New("->selectCount-->findFromIndex没有FROM关键字,语句错误") - } - // countsql = "SELECT COUNT(*) " + countsql[locFrom[0]:] - sqlBuilder.WriteString("SELECT COUNT(*) ") - sqlBuilder.WriteString(countsql[locFrom[0]:]) - } - countsql = sqlBuilder.String() - countFinder := NewFinder() - countFinder.Append(countsql) - countFinder.values = finder.values - - count := -1 - _, cerr := QueryRow(ctx, countFinder, &count) - if cerr != nil { - return -1, cerr - } - return count, nil -} - -// getDBConnectionFromContext 
从Conext中获取数据库连接 -// getDBConnectionFromContext Get database connection from Conext -func getDBConnectionFromContext(ctx context.Context) (*dataBaseConnection, error) { - if ctx == nil { - return nil, errors.New("->getDBConnectionFromContext-->context不能为空") - } - // 获取数据库连接 - // Get database connection - value := ctx.Value(contextDBConnectionValueKey) - if value == nil { - return nil, nil - } - dbConnection, isdb := value.(*dataBaseConnection) - if !isdb { // 不是数据库连接 - return nil, errors.New("->getDBConnectionFromContext-->context传递了错误的*DBConnection类型值") - } - return dbConnection, nil -} - -// 变量名建议errFoo这样的驼峰 -// The variable name suggests a hump like "errFoo" -var errDBConnection = errors.New("更新操作需要使用zorm.Transaction开启事务.读取操作如果ctx没有dbConnection,使用FuncReadWriteStrategy(ctx,rwType).newDBConnection(),如果dbConnection有事务,就使用事务查询") - -// checkDBConnection 检查dbConnection.有可能会创建dbConnection或者开启事务,所以要尽可能的接近执行时检查 -// context必须传入,不能为空.rwType=0 read,rwType=1 write -// checkDBConnection It is possible to create a db Connection or open a transaction, so check it as close as possible to execution -// The context must be passed in and cannot be empty. 
rwType=0 read, rwType=1 write -func checkDBConnection(ctx context.Context, dbConnection *dataBaseConnection, hastx bool, rwType int) (context.Context, *dataBaseConnection, error) { - var errFromContext error - if dbConnection == nil { - dbConnection, errFromContext = getDBConnectionFromContext(ctx) - if errFromContext != nil { - return ctx, nil, errFromContext - } - } - - // dbConnection为空 - // dbConnection is nil - if dbConnection == nil { - dbdao, err := FuncReadWriteStrategy(ctx, rwType) - if err != nil { - return ctx, nil, err - } - // 是否禁用了事务 - disabletx := getContextBoolValue(ctx, contextDisableTransactionValueKey, dbdao.config.DisableTransaction) - // 如果要求有事务,事务需要手动zorm.Transaction显示开启.如果自动开启,就会为了偷懒,每个操作都自动开启,事务就失去意义了 - if hastx && (!disabletx) { - // if hastx { - return ctx, nil, errDBConnection - } - - // 如果要求没有事务,实例化一个默认的dbConnection - // If no transaction is required, instantiate a default db Connection - var errGetDBConnection error - - dbConnection, errGetDBConnection = dbdao.newDBConnection() - if errGetDBConnection != nil { - return ctx, nil, errGetDBConnection - } - // 把dbConnection放入context - // Put db Connection into context - ctx = context.WithValue(ctx, contextDBConnectionValueKey, dbConnection) - - } else { // 如果dbConnection存在 - // If db Connection exists - if dbConnection.db == nil { // 禁止外部构建 - return ctx, dbConnection, errDBConnection - } - if dbConnection.tx == nil && hastx && (!getContextBoolValue(ctx, contextDisableTransactionValueKey, dbConnection.config.DisableTransaction)) { - // if dbConnection.tx == nil && hastx { //如果要求有事务,事务需要手动zorm.Transaction显示开启.如果自动开启,就会为了偷懒,每个操作都自动开启,事务就失去意义了 - return ctx, dbConnection, errDBConnection - } - } - return ctx, dbConnection, nil -} - -// wrapExecUpdateValuesAffected 包装update执行,赋值给影响的函数指针变量,返回*sql.Result -func wrapExecUpdateValuesAffected(ctx context.Context, affected *int, sqlstrptr *string, values []interface{}, lastInsertID *int64) (*sql.Result, error) { - // 
必须要有dbConnection和事务.有可能会创建dbConnection放入ctx或者开启事务,所以要尽可能的接近执行时检查 - // There must be a db Connection and transaction.It is possible to create a db Connection into ctx or open a transaction, so check as close as possible to the execution - var dbConnectionerr error - var dbConnection *dataBaseConnection - ctx, dbConnection, dbConnectionerr = checkDBConnection(ctx, dbConnection, true, 1) - if dbConnectionerr != nil { - return nil, dbConnectionerr - } - - var res *sql.Result - var errexec error - if lastInsertID != nil { - sqlrow, errrow := dbConnection.queryRowContext(ctx, sqlstrptr, &values) - if errrow != nil { - return res, errrow - } - errexec = sqlrow.Scan(lastInsertID) - if errexec == nil { // 如果插入成功,返回 - *affected = 1 - return res, errexec - } - } else { - res, errexec = dbConnection.execContext(ctx, sqlstrptr, &values) - } - - if errexec != nil { - return res, errexec - } - // 影响的行数 - // Number of rows affected - - rowsAffected, errAffected := (*res).RowsAffected() - if errAffected == nil { - *affected, errAffected = typeConvertInt64toInt(rowsAffected) - } else { // 如果不支持返回条数,设置位nil,影响的条数设置成-1 - *affected = -1 - errAffected = nil - } - return res, errAffected -} - -// contextSQLHintValueKey 把sql hint放到context里使用的key -const contextSQLHintValueKey = wrapContextStringKey("contextSQLHintValueKey") - -// BindContextSQLHint context中绑定sql的hint,使用这个Context的方法都会传播hint传播的语句 -// hint 是完整的sql片段, 例如: hint:="/*+ XID('gs/aggregationSvc/2612341069705662465') */" -func BindContextSQLHint(parent context.Context, hint string) (context.Context, error) { - if parent == nil { - return nil, errors.New("->BindContextSQLHint-->context的parent不能为nil") - } - if hint == "" { - return nil, errors.New("->BindContextSQLHint-->hint不能为空") - } - - ctx := context.WithValue(parent, contextSQLHintValueKey, hint) - return ctx, nil -} - -// contextEnableGlobalTransactionValueKey 是否使用分布式事务放到context里使用的key -const contextEnableGlobalTransactionValueKey = 
wrapContextStringKey("contextEnableGlobalTransactionValueKey") - -// BindContextEnableGlobalTransaction context启用分布式事务,不再自动设置,必须手动启用分布式事务,必须放到本地事务开启之前调用 -func BindContextEnableGlobalTransaction(parent context.Context) (context.Context, error) { - if parent == nil { - return nil, errors.New("->BindContextEnableGlobalTransaction-->context的parent不能为nil") - } - ctx := context.WithValue(parent, contextEnableGlobalTransactionValueKey, true) - return ctx, nil -} - -// contextDisableTransactionValueKey 是否禁用事务放到context里使用的key -const contextDisableTransactionValueKey = wrapContextStringKey("contextDisableTransactionValueKey") - -// BindContextDisableTransaction context禁用事务,必须放到事务开启之前调用.用在不使用事务更新数据库的场景,强烈建议不要使用这个方法,更新数据库必须有事务!!! -func BindContextDisableTransaction(parent context.Context) (context.Context, error) { - if parent == nil { - return nil, errors.New("->BindContextDisableTransaction-->context的parent不能为nil") - } - ctx := context.WithValue(parent, contextDisableTransactionValueKey, true) - return ctx, nil -} - -// getContextBoolValue 从ctx中获取key的bool值,ctx如果没有值使用defaultValue -func getContextBoolValue(ctx context.Context, key wrapContextStringKey, defaultValue bool) bool { - boolValue := false - ctxBoolValue := ctx.Value(key) - if ctxBoolValue != nil { // 如果有值 - boolValue = ctxBoolValue.(bool) - } else { // ctx如果没有值使用defaultValue - boolValue = defaultValue - } - return boolValue -} diff --git a/vendor/gitee.com/chunanyong/zorm/Finder.go b/vendor/gitee.com/chunanyong/zorm/Finder.go deleted file mode 100644 index 1aae23c0..00000000 --- a/vendor/gitee.com/chunanyong/zorm/Finder.go +++ /dev/null @@ -1,184 +0,0 @@ -/* - * Licensed to the Apache Software Foundation (ASF) under one or more - * contributor license agreements. See the NOTICE file distributed with - * this work for additional information regarding copyright ownership. 
- * The ASF licenses this file to You under the Apache License, Version 2.0 - * (the "License"); you may not use this file except in compliance with - * the License. You may obtain a copy of the License at - * - * http://www.apache.org/licenses/LICENSE-2.0 - * - * Unless required by applicable law or agreed to in writing, software - * distributed under the License is distributed on an "AS IS" BASIS, - * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. - * See the License for the specific language governing permissions and - * limitations under the License. - * - */ - -package zorm - -import ( - "errors" - "strings" -) - -// Finder 查询数据库的载体,所有的sql语句都要通过Finder执行. -// Finder To query the database carrier, all SQL statements must be executed through Finder -type Finder struct { - // 拼接SQL - // Splicing SQL. - sqlBuilder strings.Builder - // SQL的参数值 - // SQL parameter values. - values []interface{} - // 注入检查,默认true 不允许SQL注入的 ' 单引号 - // Injection check, default true does not allow SQL injection single quote - InjectionCheck bool - // CountFinder 自定义的查询总条数'Finder',使用指针默认为nil.主要是为了在'group by'等复杂情况下,为了性能,手动编写总条数语句 - // CountFinder The total number of custom queries is'Finder', and the pointer is nil by default. It is mainly used to manually write the total number of statements for performance in complex situations such as'group by' - CountFinder *Finder - // 是否自动查询总条数,默认true.同时需要Page不为nil,才查询总条数 - // Whether to automatically query the total number of entries, the default is true. 
At the same time, the Page is not nil to query the total number of entries - SelectTotalCount bool - // SQL语句 - // SQL statement - sqlstr string -} - -// NewFinder 初始化一个Finder,生成一个空的Finder -// NewFinder Initialize a Finder and generate an empty Finder -func NewFinder() *Finder { - finder := Finder{} - finder.sqlBuilder.Grow(stringBuilderGrowLen) - finder.SelectTotalCount = true - finder.InjectionCheck = true - // slice扩容会生成新的slice,最后要值复制接收.问:为什么cap是3?答:经验 - finder.values = make([]interface{}, 0, 3) - return &finder -} - -// NewSelectFinder 根据表名初始化查询的Finder,strs 只取第一个字符串,用数组类型是为了可以不传入,默认为 * | Finder that initializes the query based on the table name -// NewSelectFinder("tableName") SELECT * FROM tableName -// NewSelectFinder("tableName", "id,name") SELECT id,name FROM tableName -func NewSelectFinder(tableName string, strs ...string) *Finder { - strsLen := len(strs) - if strsLen > 1 { // 不支持多个参数 - return nil - } - finder := NewFinder() - finder.sqlBuilder.WriteString("SELECT ") - if strsLen == 1 { // 只取值第一个字符串 - finder.sqlBuilder.WriteString(strs[0]) - } else { - finder.sqlBuilder.WriteByte('*') - } - finder.sqlBuilder.WriteString(" FROM ") - finder.sqlBuilder.WriteString(tableName) - return finder -} - -// NewUpdateFinder 根据表名初始化更新的Finder, UPDATE tableName SET -// NewUpdateFinder Initialize the updated Finder according to the table name, UPDATE tableName SET -func NewUpdateFinder(tableName string) *Finder { - finder := NewFinder() - finder.sqlBuilder.WriteString("UPDATE ") - finder.sqlBuilder.WriteString(tableName) - finder.sqlBuilder.WriteString(" SET ") - return finder -} - -// NewDeleteFinder 根据表名初始化删除的'Finder', DELETE FROM tableName -// NewDeleteFinder Finder for initial deletion based on table name. 
DELETE FROM tableName -func NewDeleteFinder(tableName string) *Finder { - finder := NewFinder() - finder.sqlBuilder.WriteString("DELETE FROM ") - finder.sqlBuilder.WriteString(tableName) - // 所有的 WHERE 都不加,规则统一,好记 - // No WHERE is added, the rules are unified, easy to remember - // finder.sqlBuilder.WriteString(" WHERE ") - return finder -} - -// Append 添加SQL和参数的值,第一个参数是语句,后面的参数[可选]是参数的值,顺序要正确 -// 例如: finder.Append(" and id=? and name=? ",23123,"abc") -// 只拼接SQL,例如: finder.Append(" and name=123 ") -// Append:Add SQL and parameter values, the first parameter is the statement, and the following parameter (optional) is the value of the parameter, in the correct order -// E.g: finder.Append(" and id=? and name=? ",23123,"abc") -// Only splice SQL, E.g: finder.Append(" and name=123 ") -func (finder *Finder) Append(s string, values ...interface{}) *Finder { - // 不要自己构建finder,使用NewFinder()方法 - // Don't build finder by yourself, use NewFinder() method - if finder == nil || finder.values == nil { - return nil - } - - if s != "" { - if finder.sqlstr != "" { - finder.sqlstr = "" - } - // 默认加一个空格,避免手误两个字符串连接再一起 - // A space is added by default to avoid hand mistakes when connecting two strings together - finder.sqlBuilder.WriteByte(' ') - - finder.sqlBuilder.WriteString(s) - - } - if values == nil || len(values) < 1 { - return finder - } - - finder.values = append(finder.values, values...) - return finder -} - -// AppendFinder 添加另一个Finder finder.AppendFinder(f) -// AppendFinder Add another Finder . 
finder.AppendFinder(f) -func (finder *Finder) AppendFinder(f *Finder) (*Finder, error) { - if finder == nil { - return finder, errors.New("->finder-->AppendFinder()finder对象为nil") - } - if f == nil { - return finder, errors.New("->finder-->AppendFinder()参数是nil") - } - - // 不要自己构建finder,使用NewFinder()方法 - // Don't build finder by yourself, use NewFinder() method - if finder.values == nil { - return finder, errors.New("->finder-->AppendFinder()不要自己构建finder,使用NewFinder()方法") - } - - // 添加f的SQL - // SQL to add f - sqlstr, err := f.GetSQL() - if err != nil { - return finder, err - } - finder.sqlstr = "" - finder.sqlBuilder.WriteString(sqlstr) - // 添加f的值 - // Add the value of f - finder.values = append(finder.values, f.values...) - return finder, nil -} - -// GetSQL 返回Finder封装的SQL语句 -// GetSQL Return the SQL statement encapsulated by the Finder -func (finder *Finder) GetSQL() (string, error) { - // 不要自己构建finder,使用NewFinder方法 - // Don't build finder by yourself, use NewFinder method - if finder == nil || finder.values == nil { - return "", errors.New("->finder-->GetSQL()不要自己构建finder,使用NewFinder()方法") - } - if len(finder.sqlstr) > 0 { - return finder.sqlstr, nil - } - sqlstr := finder.sqlBuilder.String() - // 包含单引号,属于非法字符串 - // Contains single quotes, which are illegal strings - if finder.InjectionCheck && (strings.Contains(sqlstr, "'")) { - return "", errors.New(`->finder-->GetSQL()SQL语句请不要直接拼接字符串参数,容易注入!!!请使用问号占位符,例如 finder.Append("and id=?","stringId"),如果必须拼接字符串,请设置 finder.InjectionCheck = false `) - } - finder.sqlstr = sqlstr - return sqlstr, nil -} diff --git a/vendor/gitee.com/chunanyong/zorm/ICustomDriverValueConver.go b/vendor/gitee.com/chunanyong/zorm/ICustomDriverValueConver.go deleted file mode 100644 index c0373884..00000000 --- a/vendor/gitee.com/chunanyong/zorm/ICustomDriverValueConver.go +++ /dev/null @@ -1,141 +0,0 @@ -/* - * Licensed to the Apache Software Foundation (ASF) under one or more - * contributor license agreements. 
See the NOTICE file distributed with - * this work for additional information regarding copyright ownership. - * The ASF licenses this file to You under the Apache License, Version 2.0 - * (the "License"); you may not use this file except in compliance with - * the License. You may obtain a copy of the License at - * - * http://www.apache.org/licenses/LICENSE-2.0 - * - * Unless required by applicable law or agreed to in writing, software - * distributed under the License is distributed on an "AS IS" BASIS, - * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. - * See the License for the specific language governing permissions and - * limitations under the License. - * - */ - -package zorm - -import ( - "context" - "database/sql" - "database/sql/driver" - "errors" - "reflect" - "strings" -) - -// customDriverValueMap 用于配置数据库字段类型的处理关系,key是 Dialect.字段类型,例如 dm.TEXT -var customDriverValueMap = make(map[string]ICustomDriverValueConver) - -// iscdvm 是否有自定义的DriverValueMap -var iscdvm bool - -// ICustomDriverValueConver 自定义类型转化接口,用于解决 类似达梦 text --> dm.DmClob --> string类型接收的问题 -type ICustomDriverValueConver interface { - // GetDriverValue 根据数据库列类型,返回driver.Value的实例,struct属性类型 - // map接收或者字段不存在,无法获取到structFieldType,会传入nil - GetDriverValue(ctx context.Context, columnType *sql.ColumnType, structFieldType *reflect.Type) (driver.Value, error) - - // ConverDriverValue 数据库列类型,GetDriverValue返回的driver.Value的临时接收值,struct属性类型 - // map接收或者字段不存在,无法获取到structFieldType,会传入nil - // 返回符合接收类型值的指针,指针,指针!!!! 
- ConverDriverValue(ctx context.Context, columnType *sql.ColumnType, tempDriverValue driver.Value, structFieldType *reflect.Type) (interface{}, error) -} - -// RegisterCustomDriverValueConver 注册自定义的字段处理逻辑,用于驱动无法直接转换的场景,例如达梦的 TEXT 无法直接转化成 string -// dialectColumnType 值是 Dialect.字段类型,例如: dm.TEXT -// 一般是放到init方法里进行注册 -func RegisterCustomDriverValueConver(dialectColumnType string, customDriverValueConver ICustomDriverValueConver) error { - if len(dialectColumnType) < 1 { - return errors.New("->RegisterCustomDriverValueConver-->dialectColumnType为空") - } - dialectColumnTypes := strings.Split(dialectColumnType, ".") - if len(dialectColumnTypes) < 2 { - customDriverValueMap[strings.ToUpper(dialectColumnType)] = customDriverValueConver - err := errors.New("->RegisterCustomDriverValueConver-->dialectColumnType 值是 Dialect.字段类型,例如: dm.TEXT ,本次正常运行,请尽快修改") - FuncLogError(nil, err) - } else { - customDriverValueMap[strings.ToLower(dialectColumnTypes[0])+"."+strings.ToUpper(dialectColumnTypes[1])] = customDriverValueConver - } - iscdvm = true - return nil -} - -type driverValueInfo struct { - customDriverValueConver ICustomDriverValueConver - columnType *sql.ColumnType - tempDriverValue interface{} - structFieldType *reflect.Type -} - -/** - -import ( - // 00.引入数据库驱动 - "gitee.com/chunanyong/dm" - "io" -) - -// CustomDMText 实现ICustomDriverValueConver接口,扩展自定义类型,例如 达梦数据库TEXT类型,映射出来的是dm.DmClob类型,无法使用string类型直接接收 -type CustomDMText struct{} - -// GetDriverValue 根据数据库列类型,返回driver.Value的实例,struct属性类型 -// map接收或者字段不存在,无法获取到structFieldType,会传入nil -func (dmtext CustomDMText) GetDriverValue(ctx context.Context, columnType *sql.ColumnType, structFieldType *reflect.Type) (driver.Value, error) { - // 如果需要使用structFieldType,需要先判断是否为nil - // if structFieldType != nil { - // } - - return &dm.DmClob{}, nil -} - -// ConverDriverValue 数据库列类型,GetDriverValue返回的driver.Value的临时接收值,struct属性类型 -// map接收或者字段不存在,无法获取到structFieldType,会传入nil -// 返回符合接收类型值的指针,指针,指针!!!! 
-func (dmtext CustomDMText) ConverDriverValue(ctx context.Context, columnType *sql.ColumnType, tempDriverValue driver.Value, structFieldType *reflect.Type) (interface{}, error) { - // 如果需要使用structFieldType,需要先判断是否为nil - // if structFieldType != nil { - // } - - // 类型转换 - dmClob, isok := tempDriverValue.(*dm.DmClob) - if !isok { - return tempDriverValue, errors.New("->ConverDriverValue-->转换至*dm.DmClob类型失败") - } - if dmClob == nil || !dmClob.Valid { - return new(string), nil - } - // 获取长度 - dmlen, errLength := dmClob.GetLength() - if errLength != nil { - return dmClob, errLength - } - - // int64转成int类型 - strInt64 := strconv.FormatInt(dmlen, 10) - dmlenInt, errAtoi := strconv.Atoi(strInt64) - if errAtoi != nil { - return dmClob, errAtoi - } - - // 读取字符串 - str, errReadString := dmClob.ReadString(1, dmlenInt) - - // 处理空字符串或NULL造成的EOF错误 - if errReadString == io.EOF { - return new(string), nil - } - - return &str, errReadString -} -// RegisterCustomDriverValueConver 注册自定义的字段处理逻辑,用于驱动无法直接转换的场景,例如达梦的 TEXT 无法直接转化成 string -// 一般是放到init方法里进行注册 -func init() { - // dialectColumnType 值是 Dialect.字段类型 ,例如 dm.TEXT - zorm.RegisterCustomDriverValueConver("dm.TEXT", CustomDMText{}) -} - -**/ diff --git a/vendor/gitee.com/chunanyong/zorm/IEntity.go b/vendor/gitee.com/chunanyong/zorm/IEntity.go deleted file mode 100644 index 7e9699a4..00000000 --- a/vendor/gitee.com/chunanyong/zorm/IEntity.go +++ /dev/null @@ -1,160 +0,0 @@ -/* - * Licensed to the Apache Software Foundation (ASF) under one or more - * contributor license agreements. See the NOTICE file distributed with - * this work for additional information regarding copyright ownership. - * The ASF licenses this file to You under the Apache License, Version 2.0 - * (the "License"); you may not use this file except in compliance with - * the License. 
You may obtain a copy of the License at - * - * http://www.apache.org/licenses/LICENSE-2.0 - * - * Unless required by applicable law or agreed to in writing, software - * distributed under the License is distributed on an "AS IS" BASIS, - * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. - * See the License for the specific language governing permissions and - * limitations under the License. - * - */ - -package zorm - -// IEntityStruct "struct"实体类的接口,所有的struct实体类都要实现这个接口 -// IEntityStruct The interface of the "struct" entity class, all struct entity classes must implement this interface -type IEntityStruct interface { - // 获取表名称 - // Get the table name. - GetTableName() string - - // 获取数据库表的主键字段名称.因为要兼容Map,只能是数据库的字段名称 - // Get the primary key field name of the database table. Because it is compatible with Map, it can only be the field name of the database - GetPKColumnName() string - - // GetPkSequence 主键序列 - // GetPkSequence Primary key sequence - GetPkSequence() string -} - -// IEntityMap 使用Map保存数据,用于不方便使用struct的场景,如果主键是自增或者序列,不要"entityMap.Set"主键的值 -// IEntityMap Use Map to save data for scenarios where it is not convenient to use struct -// If the primary key is auto-increment or sequence, do not "entity Map.Set" the value of the primary key -type IEntityMap interface { - // 获取表名称 - // Get the table name - GetTableName() string - - // 获取数据库表的主键字段名称.因为要兼容Map,只能是数据库的字段名称. - // Get the primary key field name of the database table. Because it is compatible with Map, it can only be the field name of the database. - GetPKColumnName() string - - // GetEntityMapPkSequence 主键序列,不能使用GetPkSequence方法名,避免默认实现了IEntityStruct接口 - // GetEntityMapPkSequence primary key sequence, you cannot use the GetPkSequence method name, to avoid the default implementation of IEntityStruct interface - GetEntityMapPkSequence() string - - // GetDBFieldMap 针对Map类型,记录数据库字段 - // GetDBFieldMap For Map type, record database fields. 
- GetDBFieldMap() map[string]interface{} - - // GetDBFieldMapKey 按照Set的先后顺序记录key值,也就是数据库字段,用于SQL排序 - // GetDBFieldMapKey records the key value, that is, the database field, in the order of the Set, which is used for SQL sorting - GetDBFieldMapKey() []string - // 设置数据库字段的值 - // Set the value of a database field. - Set(key string, value interface{}) map[string]interface{} -} - -// EntityStruct "IBaseEntity" 的基础实现,所有的实体类都匿名注入.这样就类似实现继承了,如果接口增加方法,调整这个默认实现即可 -// EntityStruct The basic implementation of "IBaseEntity", all entity classes are injected anonymously -// This is similar to implementation inheritance. If the interface adds methods, adjust the default implementation -type EntityStruct struct{} - -// 默认数据库的主键列名 -// Primary key column name of the default database -const defaultPkName = "id" - -//GetTableName 获取表名称,必须有具体的Struct实现,类似java的抽象方法,避免手误忘记写表名.如果有扩展需求,建议使用接口进行扩展,不要默认实现GetTableName -/* -func (entity *EntityStruct) GetTableName() string { - return "" -} -*/ - -// GetPKColumnName 获取数据库表的主键字段名称.因为要兼容Map,只能是数据库的字段名称 -// GetPKColumnName Get the primary key field name of the database table -// Because it is compatible with Map, it can only be the field name of the database -func (entity *EntityStruct) GetPKColumnName() string { - return defaultPkName -} - -// var defaultPkSequence = make(map[string]string, 0) - -// GetPkSequence 主键序列 -// GetPkSequence Primary key sequence -func (entity *EntityStruct) GetPkSequence() string { - return "" -} - -//-------------------------------------------------------------------------// - -// EntityMap IEntityMap的基础实现,可以直接使用或者匿名注入 -type EntityMap struct { - // 表名 - tableName string - // 主键列名 - PkColumnName string - // 主键序列,如果有值,优先级最高 - PkSequence string - // 数据库字段,不暴露外部 - dbFieldMap map[string]interface{} - // 列名,记录顺序 - dbFieldMapKey []string -} - -// NewEntityMap 初始化Map,必须传入表名称 -func NewEntityMap(tbName string) *EntityMap { - entityMap := EntityMap{} - entityMap.dbFieldMap = map[string]interface{}{} - entityMap.tableName = tbName 
- entityMap.PkColumnName = defaultPkName - entityMap.dbFieldMapKey = make([]string, 0) - return &entityMap -} - -// GetTableName 获取表名称 -func (entity *EntityMap) GetTableName() string { - return entity.tableName -} - -// GetPKColumnName 获取数据库表的主键字段名称.因为要兼容Map,只能是数据库的字段名称 -func (entity *EntityMap) GetPKColumnName() string { - return entity.PkColumnName -} - -// GetEntityMapPkSequence 主键序列,不能使用GetPkSequence方法名,避免默认实现了IEntityStruct接口 -// GetEntityMapPkSequence primary key sequence, you cannot use the GetPkSequence method name, to avoid the default implementation of IEntityStruct interface -func (entity *EntityMap) GetEntityMapPkSequence() string { - return entity.PkSequence -} - -// GetDBFieldMap 针对Map类型,记录数据库字段 -// GetDBFieldMap For Map type, record database fields -func (entity *EntityMap) GetDBFieldMap() map[string]interface{} { - return entity.dbFieldMap -} - -// GetDBFieldMapKey 按照Set的先后顺序记录key值,也就是数据库字段,用于SQL排序 -// GetDBFieldMapKey records the key value, that is, the database field, in the order of the Set, which is used for SQL sorting -func (entity *EntityMap) GetDBFieldMapKey() []string { - return entity.dbFieldMapKey -} - -// Set 设置数据库字段 -// Set Set database fields -func (entity *EntityMap) Set(key string, value interface{}) map[string]interface{} { - _, ok := entity.dbFieldMap[key] - if !ok { // 如果不存在 - entity.dbFieldMapKey = append(entity.dbFieldMapKey, key) - } - entity.dbFieldMap[key] = value - - return entity.dbFieldMap -} diff --git a/vendor/gitee.com/chunanyong/zorm/IGlobalTransaction.go b/vendor/gitee.com/chunanyong/zorm/IGlobalTransaction.go deleted file mode 100644 index 090e75cd..00000000 --- a/vendor/gitee.com/chunanyong/zorm/IGlobalTransaction.go +++ /dev/null @@ -1,41 +0,0 @@ -/* - * Licensed to the Apache Software Foundation (ASF) under one or more - * contributor license agreements. See the NOTICE file distributed with - * this work for additional information regarding copyright ownership. 
- * The ASF licenses this file to You under the Apache License, Version 2.0 - * (the "License"); you may not use this file except in compliance with - * the License. You may obtain a copy of the License at - * - * http://www.apache.org/licenses/LICENSE-2.0 - * - * Unless required by applicable law or agreed to in writing, software - * distributed under the License is distributed on an "AS IS" BASIS, - * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. - * See the License for the specific language governing permissions and - * limitations under the License. - * - */ - -package zorm - -import "context" - -// IGlobalTransaction 托管全局分布式事务接口 -type IGlobalTransaction interface { - // BeginGTX 开启全局分布式事务 - BeginGTX(ctx context.Context, globalRootContext context.Context) error - - // CommitGTX 提交全局分布式事务.不能命名为 Commit,不然就和gtx的Commit一致了,就递归调用自己了....... - CommitGTX(ctx context.Context, globalRootContext context.Context) error - - // RollbackGTX 回滚全局分布式事务 - RollbackGTX(ctx context.Context, globalRootContext context.Context) error - - // GetGTXID 获取全局分布式事务的XID - GetGTXID(ctx context.Context, globalRootContext context.Context) (string, error) - - // 重新包装为 seata/hptx 的context.RootContext - // context.RootContext 如果后续使用了 context.WithValue,类型就是context.valueCtx 就会造成无法再类型断言为 context.RootContext - // 所以DBDao里使用了 globalRootContext变量,区分业务的ctx和分布式事务的RootContext - // NewRootContext(ctx context.Context) context.Context -} diff --git a/vendor/gitee.com/chunanyong/zorm/LICENSE b/vendor/gitee.com/chunanyong/zorm/LICENSE deleted file mode 100644 index 261eeb9e..00000000 --- a/vendor/gitee.com/chunanyong/zorm/LICENSE +++ /dev/null @@ -1,201 +0,0 @@ - Apache License - Version 2.0, January 2004 - http://www.apache.org/licenses/ - - TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION - - 1. Definitions. - - "License" shall mean the terms and conditions for use, reproduction, - and distribution as defined by Sections 1 through 9 of this document. 
- - "Licensor" shall mean the copyright owner or entity authorized by - the copyright owner that is granting the License. - - "Legal Entity" shall mean the union of the acting entity and all - other entities that control, are controlled by, or are under common - control with that entity. For the purposes of this definition, - "control" means (i) the power, direct or indirect, to cause the - direction or management of such entity, whether by contract or - otherwise, or (ii) ownership of fifty percent (50%) or more of the - outstanding shares, or (iii) beneficial ownership of such entity. - - "You" (or "Your") shall mean an individual or Legal Entity - exercising permissions granted by this License. - - "Source" form shall mean the preferred form for making modifications, - including but not limited to software source code, documentation - source, and configuration files. - - "Object" form shall mean any form resulting from mechanical - transformation or translation of a Source form, including but - not limited to compiled object code, generated documentation, - and conversions to other media types. - - "Work" shall mean the work of authorship, whether in Source or - Object form, made available under the License, as indicated by a - copyright notice that is included in or attached to the work - (an example is provided in the Appendix below). - - "Derivative Works" shall mean any work, whether in Source or Object - form, that is based on (or derived from) the Work and for which the - editorial revisions, annotations, elaborations, or other modifications - represent, as a whole, an original work of authorship. For the purposes - of this License, Derivative Works shall not include works that remain - separable from, or merely link (or bind by name) to the interfaces of, - the Work and Derivative Works thereof. 
- - "Contribution" shall mean any work of authorship, including - the original version of the Work and any modifications or additions - to that Work or Derivative Works thereof, that is intentionally - submitted to Licensor for inclusion in the Work by the copyright owner - or by an individual or Legal Entity authorized to submit on behalf of - the copyright owner. For the purposes of this definition, "submitted" - means any form of electronic, verbal, or written communication sent - to the Licensor or its representatives, including but not limited to - communication on electronic mailing lists, source code control systems, - and issue tracking systems that are managed by, or on behalf of, the - Licensor for the purpose of discussing and improving the Work, but - excluding communication that is conspicuously marked or otherwise - designated in writing by the copyright owner as "Not a Contribution." - - "Contributor" shall mean Licensor and any individual or Legal Entity - on behalf of whom a Contribution has been received by Licensor and - subsequently incorporated within the Work. - - 2. Grant of Copyright License. Subject to the terms and conditions of - this License, each Contributor hereby grants to You a perpetual, - worldwide, non-exclusive, no-charge, royalty-free, irrevocable - copyright license to reproduce, prepare Derivative Works of, - publicly display, publicly perform, sublicense, and distribute the - Work and such Derivative Works in Source or Object form. - - 3. Grant of Patent License. 
Subject to the terms and conditions of - this License, each Contributor hereby grants to You a perpetual, - worldwide, non-exclusive, no-charge, royalty-free, irrevocable - (except as stated in this section) patent license to make, have made, - use, offer to sell, sell, import, and otherwise transfer the Work, - where such license applies only to those patent claims licensable - by such Contributor that are necessarily infringed by their - Contribution(s) alone or by combination of their Contribution(s) - with the Work to which such Contribution(s) was submitted. If You - institute patent litigation against any entity (including a - cross-claim or counterclaim in a lawsuit) alleging that the Work - or a Contribution incorporated within the Work constitutes direct - or contributory patent infringement, then any patent licenses - granted to You under this License for that Work shall terminate - as of the date such litigation is filed. - - 4. Redistribution. You may reproduce and distribute copies of the - Work or Derivative Works thereof in any medium, with or without - modifications, and in Source or Object form, provided that You - meet the following conditions: - - (a) You must give any other recipients of the Work or - Derivative Works a copy of this License; and - - (b) You must cause any modified files to carry prominent notices - stating that You changed the files; and - - (c) You must retain, in the Source form of any Derivative Works - that You distribute, all copyright, patent, trademark, and - attribution notices from the Source form of the Work, - excluding those notices that do not pertain to any part of - the Derivative Works; and - - (d) If the Work includes a "NOTICE" text file as part of its - distribution, then any Derivative Works that You distribute must - include a readable copy of the attribution notices contained - within such NOTICE file, excluding those notices that do not - pertain to any part of the Derivative Works, in at least one - of 
the following places: within a NOTICE text file distributed - as part of the Derivative Works; within the Source form or - documentation, if provided along with the Derivative Works; or, - within a display generated by the Derivative Works, if and - wherever such third-party notices normally appear. The contents - of the NOTICE file are for informational purposes only and - do not modify the License. You may add Your own attribution - notices within Derivative Works that You distribute, alongside - or as an addendum to the NOTICE text from the Work, provided - that such additional attribution notices cannot be construed - as modifying the License. - - You may add Your own copyright statement to Your modifications and - may provide additional or different license terms and conditions - for use, reproduction, or distribution of Your modifications, or - for any such Derivative Works as a whole, provided Your use, - reproduction, and distribution of the Work otherwise complies with - the conditions stated in this License. - - 5. Submission of Contributions. Unless You explicitly state otherwise, - any Contribution intentionally submitted for inclusion in the Work - by You to the Licensor shall be under the terms and conditions of - this License, without any additional terms or conditions. - Notwithstanding the above, nothing herein shall supersede or modify - the terms of any separate license agreement you may have executed - with Licensor regarding such Contributions. - - 6. Trademarks. This License does not grant permission to use the trade - names, trademarks, service marks, or product names of the Licensor, - except as required for reasonable and customary use in describing the - origin of the Work and reproducing the content of the NOTICE file. - - 7. Disclaimer of Warranty. 
Unless required by applicable law or - agreed to in writing, Licensor provides the Work (and each - Contributor provides its Contributions) on an "AS IS" BASIS, - WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or - implied, including, without limitation, any warranties or conditions - of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A - PARTICULAR PURPOSE. You are solely responsible for determining the - appropriateness of using or redistributing the Work and assume any - risks associated with Your exercise of permissions under this License. - - 8. Limitation of Liability. In no event and under no legal theory, - whether in tort (including negligence), contract, or otherwise, - unless required by applicable law (such as deliberate and grossly - negligent acts) or agreed to in writing, shall any Contributor be - liable to You for damages, including any direct, indirect, special, - incidental, or consequential damages of any character arising as a - result of this License or out of the use or inability to use the - Work (including but not limited to damages for loss of goodwill, - work stoppage, computer failure or malfunction, or any and all - other commercial damages or losses), even if such Contributor - has been advised of the possibility of such damages. - - 9. Accepting Warranty or Additional Liability. While redistributing - the Work or Derivative Works thereof, You may choose to offer, - and charge a fee for, acceptance of support, warranty, indemnity, - or other liability obligations and/or rights consistent with this - License. However, in accepting such obligations, You may act only - on Your own behalf and on Your sole responsibility, not on behalf - of any other Contributor, and only if You agree to indemnify, - defend, and hold each Contributor harmless for any liability - incurred by, or claims asserted against, such Contributor by reason - of your accepting any such warranty or additional liability. 
- - END OF TERMS AND CONDITIONS - - APPENDIX: How to apply the Apache License to your work. - - To apply the Apache License to your work, attach the following - boilerplate notice, with the fields enclosed by brackets "[]" - replaced with your own identifying information. (Don't include - the brackets!) The text should be enclosed in the appropriate - comment syntax for the file format. We also recommend that a - file or class name and description of purpose be included on the - same "printed page" as the copyright notice for easier - identification within third-party archives. - - Copyright [yyyy] [name of copyright owner] - - Licensed under the Apache License, Version 2.0 (the "License"); - you may not use this file except in compliance with the License. - You may obtain a copy of the License at - - http://www.apache.org/licenses/LICENSE-2.0 - - Unless required by applicable law or agreed to in writing, software - distributed under the License is distributed on an "AS IS" BASIS, - WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. - See the License for the specific language governing permissions and - limitations under the License. diff --git a/vendor/gitee.com/chunanyong/zorm/Logger.go b/vendor/gitee.com/chunanyong/zorm/Logger.go deleted file mode 100644 index fe3a0033..00000000 --- a/vendor/gitee.com/chunanyong/zorm/Logger.go +++ /dev/null @@ -1,63 +0,0 @@ -/* - * Licensed to the Apache Software Foundation (ASF) under one or more - * contributor license agreements. See the NOTICE file distributed with - * this work for additional information regarding copyright ownership. - * The ASF licenses this file to You under the Apache License, Version 2.0 - * (the "License"); you may not use this file except in compliance with - * the License. 
You may obtain a copy of the License at - * - * http://www.apache.org/licenses/LICENSE-2.0 - * - * Unless required by applicable law or agreed to in writing, software - * distributed under the License is distributed on an "AS IS" BASIS, - * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. - * See the License for the specific language governing permissions and - * limitations under the License. - * - */ - -package zorm - -import ( - "context" - "fmt" - "log" -) - -func init() { - // 设置默认的日志显示信息,显示文件和行号 - // Set the default log display information, display file and line number. - log.SetFlags(log.Llongfile | log.LstdFlags) -} - -// LogCallDepth 记录日志调用层级,用于定位到业务层代码 -// Log Call Depth Record the log call level, used to locate the business layer code -var LogCallDepth = 4 - -// FuncLogError 记录error日志.NewDBDao方法里的异常,ctx为nil,扩展时请注意 -// FuncLogError Record error log -var FuncLogError func(ctx context.Context, err error) = defaultLogError - -// FuncLogPanic 记录panic日志,默认使用"defaultLogError"实现 -// FuncLogPanic Record panic log, using "defaultLogError" by default -var FuncLogPanic func(ctx context.Context, err error) = defaultLogPanic - -// FuncPrintSQL 打印sql语句,参数和执行时间,小于0是禁用日志输出;等于0是只输出日志,不计算SQ执行时间;大于0是计算执行时间,并且大于指定值 -// FuncPrintSQL Print sql statement and parameters -var FuncPrintSQL func(ctx context.Context, sqlstr string, args []interface{}, execSQLMillis int64) = defaultPrintSQL - -func defaultLogError(ctx context.Context, err error) { - log.Output(LogCallDepth, fmt.Sprintln(err)) -} - -func defaultLogPanic(ctx context.Context, err error) { - defaultLogError(ctx, err) -} - -func defaultPrintSQL(ctx context.Context, sqlstr string, args []interface{}, execSQLMillis int64) { - if args != nil { - log.Output(LogCallDepth, fmt.Sprintln("sql:", sqlstr, ",args:", args, ",execSQLMillis:", execSQLMillis)) - } else { - log.Output(LogCallDepth, fmt.Sprintln("sql:", sqlstr, ",args: [] ", ",execSQLMillis:", execSQLMillis)) - } -} diff --git 
a/vendor/gitee.com/chunanyong/zorm/Page.go b/vendor/gitee.com/chunanyong/zorm/Page.go deleted file mode 100644 index ba6e42fa..00000000 --- a/vendor/gitee.com/chunanyong/zorm/Page.go +++ /dev/null @@ -1,81 +0,0 @@ -/* - * Licensed to the Apache Software Foundation (ASF) under one or more - * contributor license agreements. See the NOTICE file distributed with - * this work for additional information regarding copyright ownership. - * The ASF licenses this file to You under the Apache License, Version 2.0 - * (the "License"); you may not use this file except in compliance with - * the License. You may obtain a copy of the License at - * - * http://www.apache.org/licenses/LICENSE-2.0 - * - * Unless required by applicable law or agreed to in writing, software - * distributed under the License is distributed on an "AS IS" BASIS, - * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. - * See the License for the specific language governing permissions and - * limitations under the License. 
- * - */ - -package zorm - -// Page 分页对象 -// Page Pagination object -type Page struct { - // 当前页码,从1开始 - // Current page number, starting from 1 - PageNo int - - // 每页多少条,默认20条 - // How many items per page, 20 items by default - PageSize int - - // 数据总条数 - // Total number of data - TotalCount int - - // 共多少页 - // How many pages - PageCount int - - // 是否是第一页 - // Is it the first page - FirstPage bool - - // 是否有上一页 - // Whether there is a previous page - HasPrev bool - - // 是否有下一页 - // Is there a next page - HasNext bool - - // 是否是最后一页 - // Is it the last page - LastPage bool -} - -// NewPage 创建Page对象 -// NewPage Create Page object -func NewPage() *Page { - page := Page{} - page.PageNo = 1 - page.PageSize = 20 - return &page -} - -// setTotalCount 设置总条数,计算其他值 -// setTotalCount Set the total number of bars, calculate other values -func (page *Page) setTotalCount(total int) { - page.TotalCount = total - page.PageCount = (page.TotalCount + page.PageSize - 1) / page.PageSize - if page.PageNo >= page.PageCount { - page.LastPage = true - } else { - page.HasNext = true - } - if page.PageNo > 1 { - page.HasPrev = true - } else { - page.FirstPage = true - } -} diff --git a/vendor/gitee.com/chunanyong/zorm/README.md b/vendor/gitee.com/chunanyong/zorm/README.md deleted file mode 100644 index e94bae82..00000000 --- a/vendor/gitee.com/chunanyong/zorm/README.md +++ /dev/null @@ -1,1094 +0,0 @@ -## Introduction -![zorm logo](zorm-logo.png) -This is a lightweight ORM,zero dependency, that supports DM,Kingbase,shentong,TDengine,mysql,postgresql,oracle,mssql,sqlite,db2,clickhouse... 
- -Official website: https://zorm.cn -Source code: https://gitee.com/chunanyong/zorm -Test cases: https://gitee.com/wuxiangege/zorm-examples/ -Video tutorial: https://www.bilibili.com/video/BV1L24y1976U/ - - -``` -go get gitee.com/chunanyong/zorm -``` - -* Based on native SQL statements, so the learning curve is low -* [Code generator](https://gitee.com/zhou-a-xing/zorm-generate-struct) -* Concise code: about 2,500 lines in the core and 4,000 lines in total with zero dependencies; detailed comments make it easy to customize and modify -* Supports transaction propagation, the main reason zorm was created -* Supports dm (Dameng), kingbase (Kingbase), shentong (Shentong), gbase (Nantong), TDengine, mysql, postgresql, oracle, mssql, sqlite, db2, clickhouse... -* Supports multiple databases and read/write splitting -* Composite primary keys are not supported; the workaround is to treat the table as having no primary key and control it in business logic (a hard trade-off) -* Supports seata, hptx, and dbpack distributed transactions with managed global transactions: no business-code changes, zero-intrusion distributed transactions -* Supports clickhouse; update and delete statements use standard SQL92 syntax. The official clickhouse-go driver does not support batch insert syntax, so https://github.com/mailru/go-clickhouse is recommended - -## Transaction propagation -Transaction propagation is the core feature of zorm and the main reason every zorm method takes a ctx parameter. -zorm transactions must be enabled explicitly with ```zorm.Transaction(ctx, func(ctx context.Context) (interface{}, error) {})```. Before executing the closure, zorm checks the ctx: if it already carries a transaction, the closure joins it; if not, a new transaction is created. Passing the same ctx object is therefore all that is needed to propagate a transaction.
In special scenarios where transaction synchronization is not wanted, declare a new ctx object to isolate the transaction. - -## Description of the source repository -The main repositories of the open source projects I lead are on Gitee; the GitHub mirrors carry descriptions that point back to Gitee. This slows the project's growth, since GitHub simply has more users. -**Open source has no borders, but developers have their own homeland.** -Strictly speaking, GitHub is governed by US law: https://www.infoq.cn/article/SA72SsSeZBpUSH_ZH8XB -I do my best to support the domestic open source community; if you don't like that, please don't flame. Thank you! - -## Support for domestic databases -zorm spares no effort adapting to domestic (Chinese) databases. If you find one that is not adapted, or run into problems, please report it to the community so we can build the domestic software ecosystem together. - -### Dameng (DM) -- Configure zorm.DataSourceConfig ```DriverName:dm ,Dialect:dm``` -- Dameng database driver: gitee.com/chunanyong/dm -- Dameng's TEXT type is mapped to ```dm.DmClob``` and cannot be received as a string; implement zorm's ```ICustomDriverValueConver``` interface to handle it with a custom extension -```go -import ( - "context" - "database/sql" - "database/sql/driver" - "errors" - "io" - "reflect" - "strconv" - - // 00. Introduce the database driver - "gitee.com/chunanyong/dm" - "gitee.com/chunanyong/zorm" -) - -// CustomDMText implements the ICustomDriverValueConver interface to extend custom types.
For example, the TEXT type is mapped to dm.DmClob and cannot be directly received using string -type CustomDMText struct{} - -// GetDriverValue Returns an instance of driver.Value, the struct attribute type, based on the database column type -// The structFieldType is passed nil because the map received or field does not exist -func (dmtext CustomDMText) GetDriverValue(ctx context.Context, columnType *sql.ColumnType, structFieldType *reflect.Type) (driver.Value, error) { - // If you want to use the structFieldType, you need to determine if it is nil - // if structFieldType != nil { - // } - - return &dm.DmClob{}, nil -} - -// ConverDriverValue database column type, temporary received Value of driver. value returned by GetDriverValue,struct attribute type -// The structFieldType is passed nil because the map received or field does not exist -// Returns a pointer, pointer, pointer that matches the received type value!!!! -func (dmtext CustomDMText) ConverDriverValue(ctx context.Context, columnType *sql.ColumnType, tempDriverValue driver.Value, structFieldType *reflect.Type) (interface{}, error) { - // If you want to use the structFieldType, you need to determine if it is nil - // if structFieldType != nil { - // } - - // Type conversion - dmClob, isok := tempDriverValue.(*dm.DmClob) - if !isok { - return tempDriverValue, errors.New("->ConverDriverValue--> Failed to convert to *dm.DmClob") - } - if dmClob == nil || !dmClob.Valid { - return new(string), nil - } - // Get the length - dmlen, errLength := dmClob.GetLength() - if errLength != nil { - return dmClob, errLength - } - - // int64 is converted to an int - strInt64 := strconv.FormatInt(dmlen, 10) - dmlenInt, errAtoi := strconv.Atoi(strInt64) - if errAtoi != nil { - return dmClob, errAtoi - } - - // Read the string - str, errReadString := dmClob.ReadString(1, dmlenInt) - - // Handle EOF errors caused by empty strings or NULL value - if errReadString == io.EOF { - return new(string), nil - } - - return &str, 
errReadString -} - -// RegisterCustomDriverValueConver registers custom field handling for cases the driver cannot convert directly, such as Dameng's TEXT, which cannot be received as a string -// It is usually registered in an init function -func init() { - // dialectColumnType is Dialect.FieldType, such as dm.TEXT - zorm.RegisterCustomDriverValueConver("dm.TEXT", CustomDMText{}) -} -``` -### Kingbase -- Configure zorm.DataSourceConfig ```DriverName:kingbase ,Dialect:kingbase``` -- Kingbase official driver: https://www.kingbase.com.cn/qd/index.htm and https://bbs.kingbase.com.cn/thread-14457-1-1.html?_dsign=87f12756 -- The Kingbase 8 core is based on PostgreSQL 9.6 and can be tested with https://github.com/lib/pq; the official driver is recommended for production environments -- Note: set ora_input_emptystr_isnull = false (or ora_input_emptystr_isnull = off, depending on the version) in the database's data/kingbase.conf. Go has no null value, database columns are usually not null, and a Go string defaults to ''; if this option is true, the database stores such empty strings as null, which conflicts with not null column constraints and causes an error. - Restart the database after modifying the configuration file. -- Thanks to [@Jin](https://gitee.com/GOODJIN) for testing and suggestions. - -### Shentong (shentong) -The official driver is recommended; configure zorm.DataSourceConfig ```DriverName:aci ,Dialect:shentong``` - -### Nantong (gbase) -~~The official Go driver has not been found yet.
Please configure zorm.DataSourceConfig DriverName:gbase ,Dialect:gbase~~ -Use the odbc driver for the time being: ```DriverName:odbc ,Dialect:gbase``` - -### TDengine -- Since the TDengine driver does not support transactions, set ```DisableTransaction=true``` -- Configure zorm.DataSourceConfig ```DriverName:taosSql/taosRestful, Dialect:tdengine``` -- zorm.DataSourceConfig ```TDengineInsertsColumnName``` controls whether TDengine batch insert statements include column names. The default false omits column names; the inserted values must then match the database column order, which shortens the statement -- Test case: https://www.yuque.com/u27016943/nrgi00/dnru3f -- zorm is listed at: https://github.com/taosdata/awesome-tdengine/#orm - -## Database scripts and entity classes -Generate entity classes or write them by hand; we recommend the code generator https://gitee.com/zhou-a-xing/zorm-generate-struct - -```go - -package testzorm - -import ( - "context" - "time" - - "gitee.com/chunanyong/zorm" -) - -// Table creation statement - -/* - -DROP TABLE IF EXISTS `t_demo`; -CREATE TABLE `t_demo` ( -`id` varchar(50) NOT NULL COMMENT 'primary key', -`userName` varchar(30) NOT NULL COMMENT 'name', -`password` varchar(50) NOT NULL COMMENT 'password', -`createTime` datetime NOT NULL DEFAULT CURRENT_TIMESTAMP(0), -`active` int COMMENT 'Whether it is valid (0 no,1 yes)', - PRIMARY KEY (`id`) -) ENGINE = InnoDB CHARACTER SET = utf8mb4 COMMENT = 'example'; - -*/ - -// demoStructTableName Table name constant, for direct use -const demoStructTableName = "t_demo" - -// demoStruct example -type demoStruct struct { - // Embed the default struct to isolate changes to the IEntityStruct interface methods - zorm.EntityStruct - - // Id Primary key - Id string `column:"id"` - - // UserName The name - UserName string `column:"userName"` - - // Password Password - Password string `column:"password"` - - // CreateTime - CreateTime time.Time `column:"createTime"` - - // Active Whether it is valid (0 no, 1 yes) - // Active int `column:"active"` - - // ---------- database fields end here; write custom fields below ---------- // - // If a queried column is not found in any column tag, it is mapped to the struct property by name (case-insensitive; underscore-to-camelCase is supported) - - // Simulates the custom field Active - Active int -} - -// GetTableName Gets the table name -// IEntityStruct interface method; entity classes must implement it!! -func (entity *demoStruct) GetTableName() string { - return demoStructTableName -} - -// GetPKColumnName Gets the primary key column name of the database table. To stay compatible with Map, it can only be the database column name -// Composite primary keys are not supported; treat the table as having no primary key and control it in business logic (a hard trade-off). -// If there is no primary key, this method still needs to be implemented -// IEntityStruct interface method; entity classes must implement it!! -func (entity *demoStruct) GetPKColumnName() string { - // If there is no primary key - // return "" - return "id" -} - -// newDemoStruct creates a default object -func newDemoStruct() demoStruct { - // The request ctx should normally be passed in; context.Background() is used here for the example - ctx := context.Background() - demo := demoStruct{ - // If Id == "" on save, zorm calls zorm.FuncGenerateStringID(ctx); the default is timestamp + random number. You can supply your own implementation, e.g. zorm.FuncGenerateStringID = funcmyId - Id: zorm.FuncGenerateStringID(ctx), - UserName: "defaultUserName", - Password: "defaultPassword", - Active: 1, - CreateTime: time.Now(), - } - return demo -} -``` - -## The test cases are the documentation -https://gitee.com/wuxiangege/zorm-examples -```go - -// testzorm uses native sql statements with no restrictions on sql syntax. Statements use Finder as the carrier -// Always use the ? placeholder;
zorm automatically replaces placeholders based on the database type, such as the postgresql database? Replace it with $1,$2... -// zorm uses the ctx context.Context parameter to propagate the transaction. ctx is passed in from the web layer. For example, gin's c.Request.Context() -// Transaction must be explicitly enabled using zorm.Transaction(ctx, func(ctx context.context) (interface{}, error) {}) -package testzorm - -import ( - "context" - "fmt" - "testing" - "time" - - "gitee.com/chunanyong/zorm" - - // 00. Introduce the database driver - _ "github.com/go-sql-driver/mysql" -) - -// DBDAOs represent one database. If there are multiple databases, multiple DBDAOs are declared -var dbDao *zorm.DBDao - -// 01. Initialize the DBDao -func init() { - - // Customize zorm log output - // zorm.LogCallDepth = 4 // Level of log calls - // zorm.FuncLogError = myFuncLogError // Function to log exceptions - // zorm.FuncLogPanic = myFuncLogPanic // To log panic, the default is defaultLogError - // zorm.FuncPrintSQL = myFuncPrintSQL // A function that prints sql - - // Reassign the FuncPrintSQL function to a custom log output format - // log.SetFlags(log.LstdFlags) - // zorm.FuncPrintSQL = zorm.FuncPrintSQL - - // Custom primary key generation - // zorm.FuncGenerateStringID=funcmyId - - // Customize the Tag column name - // zorm.FuncWrapFieldTagName=funcmyTagName - - // Custom decimal type implementation - // zorm.FuncDecimalValue=funcmyDecimal - - // the Go database driver list: https://github.com/golang/go/wiki/SQLDrivers - - // dbDaoConfig Configure the database. This is just a simulation, the production should be reading the configuration configuration file and constructing the DataSourceConfig - dbDaoConfig := zorm.DataSourceConfig{ - // DSN database connection string. parseTime=true is automatically converted to time format. 
The default query is the []byte array - DSN: "root:root@tcp(127.0.0.1:3306)/zorm?charset=utf8&parseTime=true", - // DriverName database driver name: mysql, postgres, oracle(go-ora), essentially, sqlite3, go_ibm_db, clickhouse, dm, kingbase, aci, taosSql | taosRestful Correspond to Dialect - // sql.Open(DriverName,DSN) DriverName is the first string parameter of the sql.Open of the driver. The value can be obtained according to the actual conditions of the driver - DriverName: "mysql", - // the Dialect database Dialect: mysql, postgresql, oracle, MSSQL, sqlite, db2, clickhouse, dm, kingbase, shentong, tdengine and DriverName corresponding - Dialect: "mysql", - // MaxOpenConns The default maximum number of database connections is 50 - MaxOpenConns: 50, - // MaxIdleConns The default maximum number of idle connections is 50 - MaxIdleConns: 50, - // ConnMaxLifetimeSecond Connection survival seconds. Default 600(10 minutes) after the connection is destroyed and rebuilt. Prevent the database from voluntarily disconnecting, resulting in dead connections. MySQL default wait_timeout 28800 seconds (8 hours) - ConnMaxLifetimeSecond: 600, - // SlowSQLMillis slow sql time threshold, in milliseconds. A value less than 0 disables SQL statement output. If the value is equal to 0, only SQL statements are output and the execution time is not calculated. 
A value greater than 0 logs only statements whose execution time is >= SlowSQLMillis - SlowSQLMillis: 0, - // DefaultTxOptions Default transaction isolation level configuration; defaults to nil - // DefaultTxOptions: nil, - // If distributed transactions are used, the default configuration is recommended - // DefaultTxOptions: &sql.TxOptions{Isolation: sql.LevelDefault, ReadOnly: false}, - - // FuncGlobalTransaction seata/hptx adapter function for global distributed transactions; returns an implementation of the IGlobalTransaction interface - // the business code must call ctx, _ = zorm.BindContextEnableGlobalTransaction(ctx) to enable the global distributed transaction - // FuncGlobalTransaction : MyFuncGlobalTransaction, - - // SQLDB uses an existing database connection; takes priority over DSN - // SQLDB : nil, - - // DisableTransaction disables transactions. The default value is false. If DisableTransaction=true is set, the Transaction method becomes a no-op and no transaction is used. Some databases, such as TDengine, do not support transactions - // Disabling transactions should be done by the driver faking the transaction API, not by the ORM; clickhouse's driver does exactly that - // DisableTransaction :false, - - // TDengineInsertsColumnName Whether TDengine batch insert statements include column names. The default false omits column names; the inserted values must then match the database column order, which shortens the statement - // TDengineInsertsColumnName :false, - } - - // Create dbDao from dbDaoConfig. Do this once per database. The first database becomes defaultDao, and subsequent zorm.xxx calls use defaultDao by default - dbDao, _ = zorm.NewDBDao(&dbDaoConfig) -} - -// TestInsert 02. Test saving a Struct object -func TestInsert(t *testing.T) { - // ctx is normally one per request, passed in from the web layer, such as gin's c.
Request.Context() - var ctx = context.Background() - - // The transaction must be started manually. If the error returned by the anonymous function is not nil, the transaction is rolled back. If DisableTransaction=true is set, the Transaction method becomes a no-op and no transaction is used - // If zorm.DataSourceConfig.DefaultTxOptions does not meet your needs, set the transaction isolation level before calling zorm.Transaction, - // e.g. ctx, _ := dbDao.BindContextTxOptions(ctx, &sql.TxOptions{Isolation: sql.LevelDefault, ReadOnly: false}); if txOptions is nil, zorm.DataSourceConfig.DefaultTxOptions is used - _, err := zorm.Transaction(ctx, func(ctx context.Context) (interface{}, error) { - // Create a demo object - demo := newDemoStruct() - - // Save the object. The parameter is a pointer to the object. If the primary key is auto-increment, the generated value is assigned to the object's primary key property - _, err := zorm.Insert(ctx, &demo) - - // If err is not nil, the transaction is rolled back - return nil, err - }) - // Mark the test as failed - if err != nil { - t.Errorf("Error:%v", err) - } -} - -// TestInsertSlice 03. Test batch saving a slice of Struct objects -// If the primary key is auto-increment, do not assign the primary key property of the Struct objects -func TestInsertSlice(t *testing.T) { - // ctx is normally one per request, passed in from the web layer, such as gin's c. Request.Context() - var ctx = context.Background() - - // The transaction must be started manually. If the error returned by the anonymous function is not nil, the transaction will be rolled back.
If DisableTransaction=true is set, the Transaction method becomes a no-op and no transaction is used - // If zorm.DataSourceConfig.DefaultTxOptions does not meet your needs, set the transaction isolation level before calling zorm.Transaction, - // e.g. ctx, _ := dbDao.BindContextTxOptions(ctx, &sql.TxOptions{Isolation: sql.LevelDefault, ReadOnly: false}); if txOptions is nil, zorm.DataSourceConfig.DefaultTxOptions is used - _, err := zorm.Transaction(ctx, func(ctx context.Context) (interface{}, error) { - - // The slice element type must be zorm.IEntityStruct!!! Using the IEntityStruct interface keeps it compatible with Struct entity classes - demoSlice := make([]zorm.IEntityStruct, 0) - - // Create object 1 - demo1 := newDemoStruct() - demo1.UserName = "demo1" - // Create object 2 - demo2 := newDemoStruct() - demo2.UserName = "demo2" - - demoSlice = append(demoSlice, &demo1, &demo2) - - // Batch save objects. If the primary key is auto-increment, the auto-increment IDs cannot be written back to the objects. - _, err := zorm.InsertSlice(ctx, demoSlice) - - // If err is not nil, the transaction is rolled back - return nil, err - }) - // Mark the test as failed - if err != nil { - t.Errorf("Error:%v", err) - } -} - -// TestInsertEntityMap 04. Test saving an EntityMap object for scenarios where a struct is inconvenient, using a Map as the carrier -func TestInsertEntityMap(t *testing.T) { - // ctx is normally one per request, passed in from the web layer, such as gin's c. Request.Context() - var ctx = context.Background() - - // The transaction must be started manually. If the error returned by the anonymous function is not nil, the transaction will be rolled back.
If the DisableTransaction=true parameter is set, the Transaction method becomes invalid and no transaction is required - // If zorm.DataSourceConfig.DefaultTxOptions does not meet the requirements, set the transaction options before calling zorm.Transaction, - // e.g. ctx, _ := dbDao.BindContextTxOptions(ctx, &sql.TxOptions{Isolation: sql.LevelDefault, ReadOnly: false}); if txOptions is nil, zorm.DataSourceConfig.DefaultTxOptions is used - _, err := zorm.Transaction(ctx, func(ctx context.Context) (interface{}, error) { - // To create an EntityMap, pass in the table name - entityMap := zorm.NewEntityMap(demoStructTableName) - // Set the primary key name - entityMap.PkColumnName = "id" - // If the primary key uses a sequence, set the sequence - // entityMap.PkSequence = "mySequence" - - // Set sets the database field values - // If the primary key is auto-increment or a sequence, do not Set a value for the primary key - entityMap.Set("id", zorm.FuncGenerateStringID(ctx)) - entityMap.Set("userName", "entityMap-userName") - entityMap.Set("password", "entityMap-password") - entityMap.Set("createTime", time.Now()) - entityMap.Set("active", 1) - - // Execute - _, err := zorm.InsertEntityMap(ctx, entityMap) - - // If the returned err is not nil, the transaction is rolled back - return nil, err - }) - // Mark the test failed - if err != nil { - t.Errorf("Error:%v", err) - } -} - - -// TestInsertEntityMapSlice 05. Tests batch saving []zorm.IEntityMap for scenarios where a struct is not convenient, using a Map as the carrier -func TestInsertEntityMapSlice(t *testing.T) { - // ctx is generally one ctx per request, normally passed in from the web layer, such as gin's c.
Request.Context() - var ctx = context.Background() - - _, err := zorm.Transaction(ctx, func(ctx context.Context) (interface{}, error) { - entityMapSlice := make([]zorm.IEntityMap, 0) - entityMap1 := zorm.NewEntityMap(demoStructTableName) - entityMap1.PkColumnName = "id" - entityMap1.Set("id", zorm.FuncGenerateStringID(ctx)) - entityMap1.Set("userName", "entityMap-userName1") - entityMap1.Set("password", "entityMap-password1") - entityMap1.Set("createTime", time.Now()) - entityMap1.Set("active", 1) - - entityMap2 := zorm.NewEntityMap(demoStructTableName) - entityMap2.PkColumnName = "id" - entityMap2.Set("id", zorm.FuncGenerateStringID(ctx)) - entityMap2.Set("userName", "entityMap-userName2") - entityMap2.Set("password", "entityMap-password2") - entityMap2.Set("createTime", time.Now()) - entityMap2.Set("active", 2) - - entityMapSlice = append(entityMapSlice, entityMap1, entityMap2) - - // Execute - _, err := zorm.InsertEntityMapSlice(ctx, entityMapSlice) - - // If the returned err is not nil, the transaction is rolled back - return nil, err - }) - // Mark the test failed - if err != nil { - t.Errorf("Error:%v", err) - } -} - -// TestQueryRow 06. Tests querying a single struct object -func TestQueryRow(t *testing.T) { - // ctx is generally one ctx per request, normally passed in from the web layer, such as gin's c.Request.Context() - var ctx = context.Background() - - // Declare an object to hold the returned data; a pointer to it is passed to the query - demo := demoStruct{} - - // finder is used to construct the query - // finder := zorm.NewSelectFinder(demoStructTableName) // select * from t_demo - // finder := zorm.NewSelectFinder(demoStructTableName, "id,userName") // select id,userName from t_demo - finder := zorm.NewFinder().Append("SELECT * FROM " + demoStructTableName) // select * from t_demo - // By default the finder enables sql injection checking, which disallows concatenating ' single quotes in statements.
You can set finder.InjectionCheck = false to lift the restriction - - // finder.Append: the first argument is the statement, the following arguments are the corresponding values in order. Statements uniformly use ?, zorm handles the database differences - // in (?) arguments must include the () parentheses; in ? is not allowed - finder.Append("WHERE id=? and active in(?) ", "20210630163227149563000042432429", []int{0, 1}) - - // How to use like - // finder.Append("WHERE id like ? ", "20210630163227149563000042432429%") - - // If the value of "has" is true, the database has data - has, err := zorm.QueryRow(ctx, finder, &demo) - - if err != nil { // Mark the test failed - t.Errorf("Error:%v", err) - } - // Print the result - fmt.Println(has, demo) -} - -// TestQueryRowMap 07. Tests receiving a query result into a map, flexible for scenarios that are not suitable for structs -func TestQueryRowMap(t *testing.T) { - // ctx is generally one ctx per request, normally passed in from the web layer, such as gin's c.Request.Context() - var ctx = context.Background() - - // finder is used to construct the query - // finder := zorm.NewSelectFinder(demoStructTableName) // select * from t_demo - finder := zorm.NewFinder().Append("SELECT * FROM " + demoStructTableName) // select * from t_demo - // finder.Append: the first argument is the statement, the following arguments are the corresponding values in order. Statements uniformly use ?, zorm handles the database differences - // in (?) arguments must include the () parentheses; in ? is not allowed - finder.Append("WHERE id=? and active in(?) ", "20210630163227149563000042432429", []int{0, 1}) - // Run the query - resultMap, err := zorm.QueryRowMap(ctx, finder) - - if err != nil { // Mark the test failed - t.Errorf("Error:%v", err) - } - // Print the result - fmt.Println(resultMap) -} - -// TestQuery 08.
Tests querying a list of objects -func TestQuery(t *testing.T) { - // ctx is generally one ctx per request, normally passed in from the web layer, such as gin's c.Request.Context() - var ctx = context.Background() - - // Create a slice for receiving results - list := make([]demoStruct, 0) - - // finder is used to construct the query - // finder := zorm.NewSelectFinder(demoStructTableName) // select * from t_demo - finder := zorm.NewFinder().Append("SELECT id FROM " + demoStructTableName) // select id from t_demo - // Create a paging object. After the query is complete, the page object can be directly used by the front-end paging component - page := zorm.NewPage() - page.PageNo = 1 // Query page 1. The default value is 1 - page.PageSize = 20 // 20 per page. The default is 20 - - // To skip querying the total number of entries: - // finder.SelectTotalCount = false - - // You can manually specify the count statement if a particularly complex statement causes count statement construction to fail - // countFinder := zorm.NewFinder().Append("select count(*) from (") - // countFinder.AppendFinder(finder) - // countFinder.Append(") tempcountfinder") - // finder.CountFinder = countFinder - - // Run the query - err := zorm.Query(ctx, finder, &list, page) - if err != nil { // Mark the test failed - t.Errorf("Error:%v", err) - } - // Print the result - fmt.Println("Total number of items :", page.TotalCount, "List :", list) -} - -// TestQueryMap 09. Tests querying a list of maps, used in scenarios where a struct is not convenient -func TestQueryMap(t *testing.T) { - // ctx is generally one ctx per request, normally passed in from the web layer, such as gin's c.Request.Context() - var ctx = context.Background() - - // finder is used to construct the query - // finder := zorm.NewSelectFinder(demoStructTableName) // select * from t_demo - finder := zorm.NewFinder().Append("SELECT * FROM " + demoStructTableName) // select * from t_demo - // Create a paging object.
After the query is complete, the page object can be directly used by the front-end paging component - page := zorm.NewPage() - page.PageNo = 1 // Query page 1. The default value is 1 - page.PageSize = 20 // 20 per page. The default is 20 - - // The total number of entries is not queried - // finder.SelectTotalCount = false - - // You can manually specify paging statements if they are particularly complex statements that cause count statement construction to fail - // countFinder := zorm.NewFinder().Append("select count(*) from (") - // countFinder.AppendFinder(finder) - // countFinder.Append(") tempcountfinder") - // finder.CountFinder = countFinder - - // Run the query - listMap, err := zorm.QueryMap(ctx, finder, page) - if err != nil { // Mark the test failed - t.Errorf("Error:%v", err) - } - // Print the result - fmt.Println("Total number of items :", page.TotalCount, "List :", listMap) -} - -// TestUpdateNotZeroValue 10. Update the struct object with only the non-zero fields. The primary key must have a value -func TestUpdateNotZeroValue(t *testing.T) { - // ctx is generally a request for one ctx, normally there should be a web layer in, such as gin's c. Request.Context() - var ctx = context.Background() - - // You need to start the transaction manually. If the error returned by the anonymous function is not nil, the transaction will be rolled back. 
If the DisableTransaction=true parameter is set, the Transaction method becomes invalid and no transaction is required - // If zorm.DataSourceConfig.DefaultTxOptions does not meet the requirements, set the transaction options before calling zorm.Transaction, - // e.g. ctx, _ := dbDao.BindContextTxOptions(ctx, &sql.TxOptions{Isolation: sql.LevelDefault, ReadOnly: false}); if txOptions is nil, zorm.DataSourceConfig.DefaultTxOptions is used - _, err := zorm.Transaction(ctx, func(ctx context.Context) (interface{}, error) { - // Declare an object used to update data; a pointer to it is passed to the update - demo := demoStruct{} - demo.Id = "20210630163227149563000042432429" - demo.UserName = "UpdateNotZeroValue" - - // UPDATE "sql":"UPDATE t_demo SET userName=? WHERE id=?" ,"args":["UpdateNotZeroValue","20210630163227149563000042432429"] - _, err := zorm.UpdateNotZeroValue(ctx, &demo) - - // If the returned err is not nil, the transaction is rolled back - return nil, err - }) - if err != nil { // Mark the test failed - t.Errorf("Error:%v", err) - } - -} - -// TestUpdate 11. Updates the struct object, updating all fields. The primary key must have a value -func TestUpdate(t *testing.T) { - // ctx is generally one ctx per request, normally passed in from the web layer, such as gin's c.Request.Context() - var ctx = context.Background() - - // You need to start the transaction manually. If the error returned by the anonymous function is not nil, the transaction will be rolled back.
If the DisableTransaction=true parameter is set, the Transaction method becomes invalid and no transaction is required - // If zorm.DataSourceConfig.DefaultTxOptions does not meet the requirements, set the transaction options before calling zorm.Transaction, - // e.g. ctx, _ := dbDao.BindContextTxOptions(ctx, &sql.TxOptions{Isolation: sql.LevelDefault, ReadOnly: false}); if txOptions is nil, zorm.DataSourceConfig.DefaultTxOptions is used - _, err := zorm.Transaction(ctx, func(ctx context.Context) (interface{}, error) { - - // Declare an object used to update data; a pointer to it is passed to the update - demo := demoStruct{} - demo.Id = "20210630163227149563000042432429" - demo.UserName = "TestUpdate" - - _, err := zorm.Update(ctx, &demo) - - // If the returned err is not nil, the transaction is rolled back - return nil, err - }) - if err != nil { // Mark the test failed - t.Errorf("Error:%v", err) - } -} - -// TestUpdateFinder 12. Update with a finder, zorm's most flexible way of writing any update statement, even hand-written insert statements -func TestUpdateFinder(t *testing.T) { - // ctx is generally one ctx per request, normally passed in from the web layer, such as gin's c.Request.Context() - var ctx = context.Background() - - // You need to start the transaction manually. If the error returned by the anonymous function is not nil, the transaction will be rolled back.
If the DisableTransaction=true parameter is set, the Transaction method becomes invalid and no transaction is required - // If zorm.DataSourceConfig.DefaultTxOptions does not meet the requirements, set the transaction options before calling zorm.Transaction, - // e.g. ctx, _ := dbDao.BindContextTxOptions(ctx, &sql.TxOptions{Isolation: sql.LevelDefault, ReadOnly: false}); if txOptions is nil, zorm.DataSourceConfig.DefaultTxOptions is used - _, err := zorm.Transaction(ctx, func(ctx context.Context) (interface{}, error) { - // finder := zorm.NewUpdateFinder(demoStructTableName) // UPDATE t_demo SET - // finder := zorm.NewDeleteFinder(demoStructTableName) // DELETE FROM t_demo - finder := zorm.NewFinder().Append("UPDATE").Append(demoStructTableName).Append("SET") // UPDATE t_demo SET - finder.Append("userName=? ,active=?", "TestUpdateFinder", 1).Append("WHERE id=?", "20210630163227149563000042432429") - - // UPDATE "sql":"UPDATE t_demo SET userName=? ,active=? WHERE id=?" ,"args":["TestUpdateFinder",1,"20210630163227149563000042432429"] - _, err := zorm.UpdateFinder(ctx, finder) - - // If the returned err is not nil, the transaction is rolled back - return nil, err - }) - if err != nil { // Mark the test failed - t.Errorf("Error:%v", err) - } - -} - -// TestUpdateEntityMap 13. Updates an EntityMap. The primary key must have a value -func TestUpdateEntityMap(t *testing.T) { - // ctx is generally one ctx per request, normally passed in from the web layer, such as gin's c.Request.Context() - var ctx = context.Background() - - // You need to start the transaction manually. If the error returned by the anonymous function is not nil, the transaction will be rolled back.
If the DisableTransaction=true parameter is set, the Transaction method becomes invalid and no transaction is required - // If zorm.DataSourceConfig.DefaultTxOptions does not meet the requirements, set the transaction options before calling zorm.Transaction, - // e.g. ctx, _ := dbDao.BindContextTxOptions(ctx, &sql.TxOptions{Isolation: sql.LevelDefault, ReadOnly: false}); if txOptions is nil, zorm.DataSourceConfig.DefaultTxOptions is used - _, err := zorm.Transaction(ctx, func(ctx context.Context) (interface{}, error) { - // To create an EntityMap, pass in the table name - entityMap := zorm.NewEntityMap(demoStructTableName) - // Set the primary key name - entityMap.PkColumnName = "id" - // Set the database field values. The primary key must have a value - entityMap.Set("id", "20210630163227149563000042432429") - entityMap.Set("userName", "TestUpdateEntityMap") - // UPDATE "sql":"UPDATE t_demo SET userName=? WHERE id=?" ,"args":["TestUpdateEntityMap","20210630163227149563000042432429"] - _, err := zorm.UpdateEntityMap(ctx, entityMap) - - // If the returned err is not nil, the transaction is rolled back - return nil, err - }) - if err != nil { // Mark the test failed - t.Errorf("Error:%v", err) - } - -} - -// TestDelete 14. Deletes a struct object. The primary key must have a value -func TestDelete(t *testing.T) { - // ctx is generally one ctx per request, normally passed in from the web layer, such as gin's c.Request.Context() - var ctx = context.Background() - - // You need to start the transaction manually. If the error returned by the anonymous function is not nil, the transaction will be rolled back.
If the DisableTransaction=true parameter is set, the Transaction method becomes invalid and no transaction is required - // If zorm.DataSourceConfig.DefaultTxOptions does not meet the requirements, set the transaction options before calling zorm.Transaction, - // e.g. ctx, _ := dbDao.BindContextTxOptions(ctx, &sql.TxOptions{Isolation: sql.LevelDefault, ReadOnly: false}); if txOptions is nil, zorm.DataSourceConfig.DefaultTxOptions is used - _, err := zorm.Transaction(ctx, func(ctx context.Context) (interface{}, error) { - demo := demoStruct{} - demo.Id = "20210630163227149563000042432429" - - // "sql":"DELETE FROM t_demo WHERE id=?" ,"args":["20210630163227149563000042432429"] - _, err := zorm.Delete(ctx, &demo) - - // If the returned err is not nil, the transaction is rolled back - return nil, err - }) - if err != nil { // Mark the test failed - t.Errorf("Error:%v", err) - } - -} - -// TestProc 15. Tests calling a stored procedure -func TestProc(t *testing.T) { - // ctx is generally one ctx per request, normally passed in from the web layer, such as gin's c.Request.Context() - var ctx = context.Background() - - demo := demoStruct{} - finder := zorm.NewFinder().Append("call testproc(?)", "u_10001") - zorm.QueryRow(ctx, finder, &demo) - fmt.Println(demo) -} - -// TestFunc 16. Tests calling a custom database function -func TestFunc(t *testing.T) { - // ctx is generally one ctx per request, normally passed in from the web layer, such as gin's c.Request.Context() - var ctx = context.Background() - - userName := "" - finder := zorm.NewFinder().Append("select testfunc(?)", "u_10001") - zorm.QueryRow(ctx, finder, &userName) - fmt.Println(userName) -} - -// TestOther 17. Some other notes. Thank you very much for reading this far -func TestOther(t *testing.T) { - // ctx is generally one ctx per request, normally passed in from the web layer, such as gin's c.
Request.Context() - var ctx = context.Background() - - // Scenario 1. Multiple databases. The dbDao of the corresponding database calls BindContextDBConnection, binds the database connection to the returned ctx, and passes ctx to zorm's function - // You can also rewrite the FuncReadWriteStrategy function to return the DBDao of the specified database by setting a different key via ctx - newCtx, err := dbDao.BindContextDBConnection(ctx) - if err != nil { // Mark the test failed - t.Errorf("Error:%v", err) - } - - finder := zorm.NewFinder().Append("SELECT * FROM " + demoStructTableName) // select * from t_demo - // Pass the new newCtx to zorm's function - list, _ := zorm.QueryMap(newCtx, finder, nil) - fmt.Println(list) - - // Scenario 2. Read/write separation of a single database. Set the read-write separation policy function. - zorm.FuncReadWriteStrategy = myReadWriteStrategy - - // Scenario 3. If multiple databases exist and read and write data are separated from each other, perform this operation according to Scenario 1. - // You can also rewrite the FuncReadWriteStrategy function to return the DBDao of the specified database by setting a different key via ctx - -} - -// myReadWriteStrategy Database read-write strategy rwType=0 read,rwType=1 write -// You can also set different keys through ctx to return the DBDao of the specified database -func myReadWriteStrategy(ctx context.Context, rwType int) (*zorm.DBDao, error) { - // Return the required read/write dao based on your business scenario. 
This function is called every time a database connection is needed - // if rwType == 0 { - // return dbReadDao - // } - // return dbWriteDao - - return dbDao, nil -} - -// -------------------------------------------- -// ICustomDriverValueConver interface, see examples of DaMeng - -// -------------------------------------------- -// OverrideFunc Rewrite the functions of ZORM, when you use this function, you have to know what you are doing - -``` -## Global transaction -### seata-go CallbackWithCtx function mode -```go -// DataSourceConfig configures DefaultTxOptions -// DefaultTxOptions: &sql.TxOptions{Isolation: sql.LevelDefault, ReadOnly: false}, - -// Import the seata-go dependency package -import ( - "context" - "fmt" - "time" - - "github.com/seata/seata-go/pkg/client" - "github.com/seata/seata-go/pkg/tm" - seataSQL "github.com/seata/seata-go/pkg/datasource/sql" //Note: zorm's DriverName: seataSQL.SeataATMySQLDriver, !!!! -) - -// Path of the configuration file -var configPath = "./conf/client.yml" - -func main() { - - // Initialize the configuration - conf := config.InitConf(configPath) - // Initialize the zorm database - // note: zorm DriverName: seataSQL SeataATMySQLDriver,!!!!!!!!!! - initZorm() - - // Start distributed transactions - tm.WithGlobalTx(context.Background(), &tm.GtxConfig{ - Name: "ATSampleLocalGlobalTx", - Timeout: time.Second * 30, - }, CallbackWithCtx) - // CallbackWithCtx business callback definition - // type CallbackWithCtx func(ctx context.Context) error - - - // Get the XID after the transaction is started. 
This can be passed through gin's header, or otherwise - // xid:=tm.GetXID(ctx) - // tm.SetXID(ctx, xid) - - // If the gin framework is used, middleware binding parameters can be used - // r.Use(ginmiddleware.TransactionMiddleware()) -} - -``` - -### seata-go transaction hosting mode - -```go -// Do not use CallbackWithCtx function,zorm to achieve transaction management, no modification of business code, zero intrusion to achieve distributed transactions - - -// The distributed transaction must be started manually and must be invoked before the local transaction is started -ctx,_ = zorm.BindContextEnableGlobalTransaction(ctx) -// Distributed transaction sample code -_, err := zorm.Transaction(ctx, func(ctx context.Context) (interface{}, error) { - - // Get the XID of the current distributed transaction. Don't worry about how, if it is a distributed transaction environment, the value will be set automatically - // xid := ctx.Value("XID").(string) - - // Pass the xid to the third party application - // req.Header.Set("XID", xid) - - // If err is not returned nil, local and distributed transactions are rolled back - return nil, err -}) - -// /---------- Third-party application -------/ // - - // Do not use the middleware provided by seata-go by default, just ctx binding XID!!! - //// r.Use(ginmiddleware.TransactionMiddleware()) - xid := c.GetHeader(constant.XidKey) - ctx = context.WithValue(ctx, "XID", xid) - - // The distributed transaction must be started manually and must be invoked before the local transaction is started - ctx,_ = zorm.BindContextEnableGlobalTransaction(ctx) - // ctx invokes the business transaction after binding the XID - _, err := zorm.Transaction(ctx, func(ctx context.Context) (interface{}, error) { - - // Business code...... - - // If err is not returned nil, local and distributed transactions are rolled back - return nil, err -}) - -// It is recommended that the following code be placed in a separate file -// ... 
// - -// ZormGlobalTransaction wraps seata's *tm.GlobalTransactionManager and implements the zorm.IGlobalTransaction interface -type ZormGlobalTransaction struct { - *tm.GlobalTransactionManager -} - -// MyFuncGlobalTransaction adapts zorm to seata global distributed transactions -// important!!! You must configure zorm.DataSourceConfig.FuncGlobalTransaction = MyFuncGlobalTransaction important!!! -func MyFuncGlobalTransaction(ctx context.Context) (zorm.IGlobalTransaction, context.Context, context.Context, error) { - // Create a seata-go transaction - globalTx := tm.GetGlobalTransactionManager() - // Use the zorm.IGlobalTransaction interface object to wrap the distributed transaction and isolate the seata-go dependency - globalTransaction := &ZormGlobalTransaction{globalTx} - - if tm.IsSeataContext(ctx) { - return globalTransaction, ctx, ctx, nil - } - // Open the global transaction for the first time - ctx = tm.InitSeataContext(ctx) - // A request has come in; get the XID manually - xidObj := ctx.Value("XID") - if xidObj != nil { - xid := xidObj.(string) - tm.SetXID(ctx, xid) - } - tm.SetTxName(ctx, "ATSampleLocalGlobalTx") - - // Use a new context to process the current global transaction. - if tm.IsGlobalTx(ctx) { - globalRootContext := transferTx(ctx) - return globalTransaction, ctx, globalRootContext, nil - } - return globalTransaction, ctx, ctx, nil -} - -// IGlobalTransaction is the managed global distributed transaction interface (zorm.IGlobalTransaction).
seata and hptx currently implement the same code; only the imported implementation package differs - -// BeginGTX starts the global distributed transaction -func (gtx *ZormGlobalTransaction) BeginGTX(ctx context.Context, globalRootContext context.Context) error { - //tm.SetTxStatus(globalRootContext, message.GlobalStatusBegin) - err := gtx.Begin(globalRootContext, time.Second*30) - return err -} - -// CommitGTX commits the global distributed transaction -func (gtx *ZormGlobalTransaction) CommitGTX(ctx context.Context, globalRootContext context.Context) error { - gtr := tm.GetTx(globalRootContext) - return gtx.Commit(globalRootContext, gtr) -} - -// RollbackGTX rolls back the global distributed transaction -func (gtx *ZormGlobalTransaction) RollbackGTX(ctx context.Context, globalRootContext context.Context) error { - gtr := tm.GetTx(globalRootContext) - // If it is in the Participant role, change it to the Launcher role so the branch transaction is allowed to roll back the global transaction. - if gtr.TxRole != tm.Launcher { - gtr.TxRole = tm.Launcher - } - return gtx.Rollback(globalRootContext, gtr) -} -// GetGTXID gets the XID of the global distributed transaction -func (gtx *ZormGlobalTransaction) GetGTXID(ctx context.Context, globalRootContext context.Context) (string, error) { - return tm.GetXID(globalRootContext), nil -} - -// transferTx transfers the gtx from the old ctx into a new ctx. - // Use it to implement suspend and resume, as seata java does -func transferTx(ctx context.Context) context.Context { - newCtx := tm.InitSeataContext(context.Background()) - tm.SetXID(newCtx, tm.GetXID(ctx)) - return newCtx -} - -// ...
// -``` - - -### hptx proxy mode -[in hptx proxy mode for zorm use example](https://github.com/CECTC/hptx-samples/tree/main/http_proxy_zorm) -```go -// DataSourceConfig configures DefaultTxOptions -// DefaultTxOptions: &sql.TxOptions{Isolation: sql.LevelDefault, ReadOnly: false}, - -// Introduce the hptx dependency package -import ( - "github.com/cectc/hptx" - "github.com/cectc/hptx/pkg/config" - "github.com/cectc/hptx/pkg/resource" - "github.com/cectc/mysql" - "github.com/cectc/hptx/pkg/tm" - - gtxContext "github.com/cectc/hptx/pkg/base/context" -) - -// Path of the configuration file -var configPath = "./conf/config.yml" - -func main() { - - // Initialize the configuration - hptx.InitFromFile(configPath) - - // Register the mysql driver - mysql.RegisterResource(config.GetATConfig().DSN) - resource.InitATBranchResource(mysql.GetDataSourceManager()) - // sqlDB, err := sql.Open("mysql", config.GetATConfig().DSN) - - - // After the normal initialization of zorm, be sure to put it after the hptx mysql initialization!! - - // ... // - // tm register transaction service, refer to the official example (transaction hosting is mainly to remove proxy, zero intrusion on the business) - tm.Implement(svc.ProxySvc) - // ... // - - - // Get the hptx rootContext - // rootContext := gtxContext.NewRootContext(ctx) - // rootContext := ctx.(*gtxContext.RootContext) - - // Create an hptx transaction - // globalTx := tm.GetCurrentOrCreate(rootContext) - - // Start the transaction - // globalTx. BeginWithTimeoutAndName (int32 (6000), "name of the transaction," rootContext) - - // Get the XID after the transaction is started. 
This can be passed through the gin header, or otherwise - // xid:=rootContext.GetXID() - - // If using gin frame, get ctx - // ctx := c.Request.Context() - - // Accept the XID passed and bind it to the local ctx - // ctx =context.WithValue(ctx,mysql.XID,xid) -} -``` - -### hptx transaction hosting mode -[zorm transaction hosting hptx example](https://github.com/CECTC/hptx-samples/tree/main/http_zorm) -```go -// Do not use proxy proxy mode,zorm to achieve transaction management, no modification of business code, zero intrusion to achieve distributed transactions -// tm.Implement(svc.ProxySvc) - -// The distributed transaction must be started manually and must be invoked before the local transaction is started -ctx,_ = zorm.BindContextEnableGlobalTransaction(ctx) -// Distributed transaction sample code -_, err := zorm.Transaction(ctx, func(ctx context.Context) (interface{}, error) { - - // Get the XID of the current distributed transaction. Don't worry about how, if it is a distributed transaction environment, the value will be set automatically - // xid := ctx.Value("XID").(string) - - // Pass the xid to the third party application - // req.Header.Set("XID", xid) - - // If err is not returned nil, local and distributed transactions are rolled back - return nil, err -}) - -// /---------- Third-party application -------// / - -// Before third-party applications can start transactions,ctx needs to bind Xids, such as gin framework - -// Accept the XID passed and bind it to the local ctx -// xid:=c.Request.Header.Get("XID") -// ctx is obtained -// ctx := c.Request.Context() -// ctx = context.WithValue(ctx,"XID",xid) - -// The distributed transaction must be started manually and must be invoked before the local transaction is started -ctx,_ = zorm.BindContextEnableGlobalTransaction(ctx) -// ctx invokes the business transaction after binding the XID -_, err := zorm.Transaction(ctx, func(ctx context.Context) (interface{}, error) { - - // Business code...... 
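// (Illustrative sketch only - the table name and id value below are hypothetical.)
// Any zorm call made with this ctx joins the same hosted global transaction, e.g.:
// finder := zorm.NewUpdateFinder("t_demo").Append("active=?", 0).Append("WHERE id=?", "id_1")
// _, err := zorm.UpdateFinder(ctx, finder)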
- - // If err is not returned nil, local and distributed transactions are rolled back - return nil, err -}) - - - -// It is recommended that the following code be placed in a separate file -// ... // - -// ZormGlobalTransaction packaging hptx *tm.DefaultGlobalTransaction, zorm.IGlobalTransaction interface -type ZormGlobalTransaction struct { - *tm.DefaultGlobalTransaction -} - -// MyFuncGlobalTransaction zorm A function that ADAPTS a hptx globally distributed transaction -// important!!!! Need to configure the zorm.DataSourceConfig.FuncGlobalTransaction = MyFuncGlobalTransaction important!!!!!! -func MyFuncGlobalTransaction(ctx context.Context) (zorm.IGlobalTransaction, context.Context, context.Context, error) { - // Obtain the hptx rootContext - rootContext := gtxContext.NewRootContext(ctx) - // Create a hptx transaction - globalTx := tm.GetCurrentOrCreate(rootContext) - // Use the zorm.IGlobalTransaction interface object to wrap distributed transactions and isolate hptx dependencies - globalTransaction := &ZormGlobalTransaction{globalTx} - - return globalTransaction, ctx, rootContext, nil -} - -// IGlobalTransaction managed global distributed transaction interface (zorm.IGlobalTransaction). 
seata and hptx currently implement the same code; only the imported implementation package differs - -// BeginGTX starts the global distributed transaction -func (gtx *ZormGlobalTransaction) BeginGTX(ctx context.Context, globalRootContext context.Context) error { - rootContext := globalRootContext.(*gtxContext.RootContext) - return gtx.BeginWithTimeout(int32(6000), rootContext) -} - -// CommitGTX commits the global distributed transaction -func (gtx *ZormGlobalTransaction) CommitGTX(ctx context.Context, globalRootContext context.Context) error { - rootContext := globalRootContext.(*gtxContext.RootContext) - return gtx.Commit(rootContext) -} - -// RollbackGTX rolls back the global distributed transaction -func (gtx *ZormGlobalTransaction) RollbackGTX(ctx context.Context, globalRootContext context.Context) error { - rootContext := globalRootContext.(*gtxContext.RootContext) - // If it is in the Participant role, change it to the Launcher role so the branch transaction is allowed to roll back the global transaction. - if gtx.Role != tm.Launcher { - gtx.Role = tm.Launcher - } - return gtx.Rollback(rootContext) -} -// GetGTXID gets the XID of the global distributed transaction -func (gtx *ZormGlobalTransaction) GetGTXID(ctx context.Context, globalRootContext context.Context) (string, error) { - rootContext := globalRootContext.(*gtxContext.RootContext) - return rootContext.GetXID(), nil -} - -// ...
// -``` -### dbpack distributed transactions -```dbpack``` documentation: https://cectc.github.io/dbpack-doc/#/README dbpack is deployed as a mesh, so application integration is simple: just obtain the xid and inject it into the SQL statement as a hint -```go -// Before starting a dbpack transaction, ctx needs to bind the sql hint, e.g. using the gin framework to obtain the xid passed in the header -xid := c.Request.Header.Get("xid") -// Generate the sql hint content from the xid, then bind the hint to ctx -hint := fmt.Sprintf("/*+ XID('%s') */", xid) -// Obtain ctx -ctx := c.Request.Context() -// Bind the hint to ctx -ctx,_ = zorm.BindContextSQLHint(ctx, hint) - -// After ctx binds the sql hint, invoke the business transaction and pass ctx along to propagate the distributed transaction -_, err := zorm.Transaction(ctx, func(ctx context.Context) (interface{}, error) { - - // Business code...... - - // If the returned err is not nil, the local and distributed transactions are rolled back - return nil, err -}) -``` diff --git a/vendor/gitee.com/chunanyong/zorm/README_zh.md b/vendor/gitee.com/chunanyong/zorm/README_zh.md deleted file mode 100644 index d3e36d96..00000000 --- a/vendor/gitee.com/chunanyong/zorm/README_zh.md +++ /dev/null @@ -1,1111 +0,0 @@ -## Introduction -![zorm logo](zorm-logo.png) -A lightweight Go ORM: zero dependencies, zero-intrusion distributed transactions, supporting DaMeng (dm), KingBase (kingbase), ShenTong (shentong), GBase (gbase), TDengine, mysql, postgresql, oracle, mssql, sqlite, db2, clickhouse... - -Website: https://zorm.cn -Source: https://gitee.com/chunanyong/zorm -Test cases: https://gitee.com/wuxiangege/zorm-examples/ -Video tutorial: https://www.bilibili.com/video/BV1L24y1976U/ - -Community QQ group: [727723736]() join the group chat for questions and technical discussion -Community WeChat: [LAUV927]() - -``` -go get gitee.com/chunanyong/zorm -``` -* Based on native sql statements, with a lower learning cost -* [Code generator](https://gitee.com/zhou-a-xing/zorm-generate-struct) -* Compact code: a core of about 2500 lines, 4000 lines with zero dependencies, detailed comments, easy to customize and modify -* Supports transaction propagation, the main reason zorm was created -* Supports dm (DaMeng), kingbase (KingBase), shentong (ShenTong), gbase (GBase), TDengine, mysql, postgresql, oracle, mssql, sqlite, db2, clickhouse...
-* 支持多库和读写分离
-* 不支持联合主键,变通认为无主键,业务控制实现(艰难取舍)
-* 支持seata,hptx,dbpack分布式事务,支持全局事务托管,不修改业务代码,零侵入分布式事务
-* 支持clickhouse,更新,删除语句使用SQL92标准语法.clickhouse-go官方驱动不支持批量insert语法,建议使用https://github.com/mailru/go-clickhouse
-
-## 事务传播
-事务传播是zorm的核心功能,也是zorm所有方法都有ctx入参的主要原因.
-zorm的事务操作需要显式使用```zorm.Transaction(ctx, func(ctx context.Context) (interface{}, error) {})```开启,在执行闭包函数前检查事务,如果ctx里有事务就加入事务,如果ctx里没事务就创建新的事务,所以只需要传递同一个ctx对象,就可以实现事务传播.特殊场景如果不想事务同步,就可以声明一个新的ctx对象,做事务隔离.
-
-## 源码仓库说明
-我主导的开源项目主库都在gitee,github上留有项目说明,引导跳转到gitee,这样也造成了项目star增长缓慢,毕竟github用户多些.
-**开源没有国界,开发者却有自己的祖国.**
-严格意义上,github是受美国法律管辖的 https://www.infoq.cn/article/SA72SsSeZBpUSH_ZH8XB
-尽我所能,支持国内开源社区,不喜勿喷,谢谢!
-
-## 支持国产数据库
-zorm对国产数据库的适配不遗余力,遇到没有适配或者有问题的国产数据库,请反馈到社区,携手共建国产软件生态.
-### 达梦(dm)
-- 配置zorm.DataSourceConfig的 ```DriverName:dm ,Dialect:dm```
-- 达梦数据库驱动: gitee.com/chunanyong/dm
-- 达梦的TEXT类型会映射为dm.DmClob,string不能接收,需要实现zorm.ICustomDriverValueConver接口,自定义扩展处理
-- 达梦开启等保参数 COMM_ENCRYPT_NAME = AES128_ECB , 会导致驱动连接异常
-```go
-import (
- "context"
- "database/sql"
- "database/sql/driver"
- "errors"
- "io"
- "reflect"
- "strconv"
-
- // 00.引入数据库驱动
- "gitee.com/chunanyong/dm"
-)
-
-// CustomDMText 实现ICustomDriverValueConver接口,扩展自定义类型,例如 达梦数据库TEXT类型,映射出来的是dm.DmClob类型,无法使用string类型直接接收
-type CustomDMText struct{}
-
-// GetDriverValue 根据数据库列类型,返回driver.Value的实例,struct属性类型
-// map接收或者字段不存在,无法获取到structFieldType,会传入nil
-func (dmtext CustomDMText) GetDriverValue(ctx context.Context, columnType *sql.ColumnType, structFieldType *reflect.Type) (driver.Value, error) {
- // 如果需要使用structFieldType,需要先判断是否为nil
- // if structFieldType != nil {
- // }
-
- return &dm.DmClob{}, nil
-}
-
-// ConverDriverValue 数据库列类型,GetDriverValue返回的driver.Value的临时接收值,struct属性类型
-// map接收或者字段不存在,无法获取到structFieldType,会传入nil
-// 返回符合接收类型值的指针,指针,指针!!!!
-func (dmtext CustomDMText) ConverDriverValue(ctx context.Context, columnType *sql.ColumnType, tempDriverValue driver.Value, structFieldType *reflect.Type) (interface{}, error) { - // 如果需要使用structFieldType,需要先判断是否为nil - // if structFieldType != nil { - // } - - // 类型转换 - dmClob, isok := tempDriverValue.(*dm.DmClob) - if !isok { - return tempDriverValue, errors.New("->ConverDriverValue-->转换至*dm.DmClob类型失败") - } - if dmClob == nil || !dmClob.Valid { - return new(string), nil - } - // 获取长度 - dmlen, errLength := dmClob.GetLength() - if errLength != nil { - return dmClob, errLength - } - - // int64转成int类型 - strInt64 := strconv.FormatInt(dmlen, 10) - dmlenInt, errAtoi := strconv.Atoi(strInt64) - if errAtoi != nil { - return dmClob, errAtoi - } - - // 读取字符串 - str, errReadString := dmClob.ReadString(1, dmlenInt) - - // 处理空字符串或NULL造成的EOF错误 - if errReadString == io.EOF { - return new(string), nil - } - - return &str, errReadString -} -// RegisterCustomDriverValueConver 注册自定义的字段处理逻辑,用于驱动无法直接转换的场景,例如达梦的 TEXT 无法直接转化成 string -// 一般是放到init方法里进行注册 -func init() { - // dialectColumnType 值是 Dialect.字段类型 ,例如 dm.TEXT - zorm.RegisterCustomDriverValueConver("dm.TEXT", CustomDMText{}) -} -``` - -### 金仓(kingbase) -- 配置zorm.DataSourceConfig的 ```DriverName:kingbase ,Dialect:kingbase``` -- 金仓官方驱动: https://www.kingbase.com.cn/qd/index.htm https://bbs.kingbase.com.cn/thread-14457-1-1.html?_dsign=87f12756 -- 金仓kingbase 8核心是基于postgresql 9.6,可以使用 https://github.com/lib/pq 进行测试,生产环境建议使用官方驱动. -- 注意修改数据库的 data/kingbase.conf中 ora_input_emptystr_isnull = false 或者是ora_input_emptystr_isnull = on (根据版本进行区分),因为golang没有null值,一般数据库都是not null,golang的string默认是'',如果这个设置为true,数据库就会把值设置为null,和字段属性not null 冲突,因此报错. - 配置文件修改后,进行数据库的重启. 
-- 感谢[@Jin](https://gitee.com/GOODJIN) 的测试与建议。 - -### 神通(shentong) -建议使用官方驱动,配置zorm.DataSourceConfig的 ```DriverName:aci ,Dialect:shentong``` - -### 南通(gbase) -~~暂时还未找到官方Go驱动,配置zorm.DataSourceConfig的 DriverName:gbase ,Dialect:gbase~~ -暂时先使用odbc驱动,```DriverName:odbc ,Dialect:gbase``` - -### TDengine -- 因TDengine驱动不支持事务,需要设置```DisableTransaction=true``` -- 配置zorm.DataSourceConfig的 ```DriverName:taosSql或者taosRestful, Dialect:tdengine``` -- zorm.DataSourceConfig的```TDengineInsertsColumnName ```TDengine批量insert语句中是否有列名.默认false没有列名,插入值和数据库列顺序保持一致,减少语句长度 -- 测试用例: https://www.yuque.com/u27016943/nrgi00/dnru3f -- TDengine已收录: https://github.com/taosdata/awesome-tdengine/#orm - -## 数据库脚本和实体类 -生成实体类或手动编写,建议使用代码生成器 https://gitee.com/zhou-a-xing/zorm-generate-struct -```go - -package testzorm - -import ( - "time" - - "gitee.com/chunanyong/zorm" -) - -// 建表语句 - -/* - -DROP TABLE IF EXISTS `t_demo`; -CREATE TABLE `t_demo` ( - `id` varchar(50) NOT NULL COMMENT '主键', - `userName` varchar(30) NOT NULL COMMENT '姓名', - `password` varchar(50) NOT NULL COMMENT '密码', - `createTime` datetime NOT NULL DEFAULT CURRENT_TIMESTAMP(0), - `active` int COMMENT '是否有效(0否,1是)', - PRIMARY KEY (`id`) -) ENGINE = InnoDB CHARACTER SET = utf8mb4 COMMENT = '例子' ; - -*/ - -// demoStructTableName 表名常量,方便直接调用 -const demoStructTableName = "t_demo" - -// demoStruct 例子 -type demoStruct struct { - // 引入默认的struct,隔离IEntityStruct的方法改动 - zorm.EntityStruct - - // Id 主键 - Id string `column:"id"` - - // UserName 姓名 - UserName string `column:"userName"` - - // Password 密码 - Password string `column:"password"` - - // CreateTime - CreateTime time.Time `column:"createTime"` - - // Active 是否有效(0否,1是) - // Active int `column:"active"` - - // ------------------数据库字段结束,自定义字段写在下面---------------// - // 如果查询的字段在column tag中没有找到,就会根据名称(不区分大小写,支持 _ 下划线转驼峰)映射到struct的属性上 - - // 模拟自定义的字段Active - Active int -} - -// GetTableName 获取表名称 -// IEntityStruct 接口的方法,实体类需要实现!!! 
func (entity *demoStruct) GetTableName() string {
- return demoStructTableName
-}
-
-// GetPKColumnName 获取数据库表的主键字段名称.因为要兼容Map,只能是数据库的字段名称
-// 不支持联合主键,变通认为无主键,业务控制实现(艰难取舍)
-// 如果没有主键,也需要实现这个方法, return "" 即可
-// IEntityStruct 接口的方法,实体类需要实现!!!
-func (entity *demoStruct) GetPKColumnName() string {
- // 如果没有主键
- // return ""
- return "id"
-}
-
-// newDemoStruct 创建一个默认对象
-func newDemoStruct() demoStruct {
- // 这里模拟生成主键用的ctx,正常应由调用方传入请求的ctx;注意import中需要引入 "context"
- ctx := context.Background()
- demo := demoStruct{
- // 如果Id=="",保存时zorm会调用zorm.FuncGenerateStringID(ctx),默认时间戳+随机数,也可以自己定义实现方式,例如 zorm.FuncGenerateStringID=funcmyId
- Id: zorm.FuncGenerateStringID(ctx),
- UserName: "defaultUserName",
- Password: "defaultPassword",
- Active: 1,
- CreateTime: time.Now(),
- }
- return demo
-}
-
-
-```
-
-## 测试用例即文档
-测试用例: https://gitee.com/wuxiangege/zorm-examples
-
-```go
-
-// testzorm 使用原生的sql语句,没有对sql语法做限制.语句使用Finder作为载体
-// 占位符统一使用?,zorm会根据数据库类型,自动替换占位符,例如postgresql数据库把?替换成$1,$2...
-// zorm使用 ctx context.Context 参数实现事务传播,ctx从web层传递进来即可,例如gin的c.Request.Context()
-// zorm的事务操作需要显式使用zorm.Transaction(ctx, func(ctx context.Context) (interface{}, error) {})开启
-package testzorm
-
-import (
- "context"
- "fmt"
- "testing"
- "time"
-
- "gitee.com/chunanyong/zorm"
-
- // 00.引入数据库驱动
- _ "github.com/go-sql-driver/mysql"
-)
-
-// dbDao 代表一个数据库,如果有多个数据库,就对应声明多个DBDao
-var dbDao *zorm.DBDao
-
-// 01.初始化DBDao
-func init() {
-
- // 自定义zorm日志输出
- // zorm.LogCallDepth = 4 // 日志调用的层级
- // zorm.FuncLogError = myFuncLogError // 记录异常日志的函数
- // zorm.FuncLogPanic = myFuncLogPanic // 记录panic日志,默认使用defaultLogError实现
- // zorm.FuncPrintSQL = myFuncPrintSQL // 打印sql的函数
-
- // 自定义日志输出格式,把FuncPrintSQL函数重新赋值
- // log.SetFlags(log.LstdFlags)
- // zorm.FuncPrintSQL = zorm.FuncPrintSQL
-
- // 自定义主键生成
- // zorm.FuncGenerateStringID=funcmyId
-
- // 自定义Tag列名
- // zorm.FuncWrapFieldTagName=funcmyTagName
-
- // 自定义decimal类型实现,例如github.com/shopspring/decimal
- // zorm.FuncDecimalValue=funcmyDecimal
-
- // Go数据库驱动列表:https://github.com/golang/go/wiki/SQLDrivers
-
- // dbDaoConfig
数据库的配置.这里只是模拟,生产应该是读取配置配置文件,构造DataSourceConfig - dbDaoConfig := zorm.DataSourceConfig{ - // DSN 数据库的连接字符串,parseTime=true会自动转换为time格式,默认查询出来的是[]byte数组 - DSN: "root:root@tcp(127.0.0.1:3306)/zorm?charset=utf8&parseTime=true", - // DriverName 数据库驱动名称:mysql,postgres,oracle(go-ora),sqlserver,sqlite3,go_ibm_db,clickhouse,dm,kingbase,aci,taosSql|taosRestful 和Dialect对应 - // sql.Open(DriverName,DSN) DriverName就是驱动的sql.Open第一个字符串参数,根据驱动实际情况获取 - DriverName: "mysql", - // Dialect 数据库方言:mysql,postgresql,oracle,mssql,sqlite,db2,clickhouse,dm,kingbase,shentong,tdengine 和 DriverName 对应 - Dialect: "mysql", - // MaxOpenConns 数据库最大连接数 默认50 - MaxOpenConns: 50, - // MaxIdleConns 数据库最大空闲连接数 默认50 - MaxIdleConns: 50, - // ConnMaxLifetimeSecond 连接存活秒时间. 默认600(10分钟)后连接被销毁重建.避免数据库主动断开连接,造成死连接.MySQL默认wait_timeout 28800秒(8小时) - ConnMaxLifetimeSecond: 600, - // SlowSQLMillis 慢sql的时间阈值,单位毫秒.小于0是禁用SQL语句输出;等于0是只输出SQL语句,不计算执行时间;大于0是计算SQL执行时间,并且>=SlowSQLMillis值 - SlowSQLMillis: 0, - // DefaultTxOptions 事务隔离级别的默认配置,默认为nil - // DefaultTxOptions: nil, - // 如果是使用分布式事务,建议使用默认配置 - // DefaultTxOptions: &sql.TxOptions{Isolation: sql.LevelDefault, ReadOnly: false}, - - // FuncGlobalTransaction seata/hptx全局分布式事务的适配函数,返回IGlobalTransaction接口的实现 - // 业务必须调用 ctx,_=zorm.BindContextEnableGlobalTransaction(ctx) 开启全局分布事务 - // FuncGlobalTransaction : MyFuncGlobalTransaction, - - // SQLDB 使用现有的数据库连接,优先级高于DSN - // SQLDB : nil, - - // DisableTransaction 禁用事务,默认false,如果设置了DisableTransaction=true,Transaction方法失效,不再要求有事务,为了处理某些数据库不支持事务,比如TDengine - // 禁用事务应该有驱动伪造事务API,不应该有orm实现,clickhouse的驱动就是这样做的 - // DisableTransaction :false, - - // TDengineInsertsColumnName TDengine批量insert语句中是否有列名.默认false没有列名,插入值和数据库列顺序保持一致,减少语句长度 - // TDengineInsertsColumnName :false, - } - - // 根据dbDaoConfig创建dbDao, 一个数据库只执行一次,第一个执行的数据库为 defaultDao,后续zorm.xxx方法,默认使用的就是defaultDao - dbDao, _ = zorm.NewDBDao(&dbDaoConfig) -} - -// TestInsert 02.测试保存Struct对象 -func TestInsert(t *testing.T) { - // ctx 一般一个请求一个ctx,正常应该有web层传入,例如gin的c.Request.Context().这里只是模拟 
- var ctx = context.Background() - - // 需要手动开启事务,匿名函数返回的error如果不是nil,事务就会回滚.如果设置了DisableTransaction=true,Transaction方法失效,不再要求有事务 - // 如果zorm.DataSourceConfig.DefaultTxOptions配置不满足需求,可以在zorm.Transaction事务方法前设置事务的隔离级别 - // 例如 ctx, _ := dbDao.BindContextTxOptions(ctx, &sql.TxOptions{Isolation: sql.LevelDefault, ReadOnly: false}),如果txOptions为nil,使用zorm.DataSourceConfig.DefaultTxOptions - _, err := zorm.Transaction(ctx, func(ctx context.Context) (interface{}, error) { - // 创建一个demo对象 - demo := newDemoStruct() - - // 保存对象,参数是对象指针.如果主键是自增,会赋值到对象的主键属性 - _, err := zorm.Insert(ctx, &demo) - - // 如果返回的err不是nil,事务就会回滚 - return nil, err - }) - // 标记测试失败 - if err != nil { - t.Errorf("错误:%v", err) - } -} - -// TestInsertSlice 03.测试批量保存Struct对象的Slice -// 如果是自增主键,无法对Struct对象里的主键属性赋值 -func TestInsertSlice(t *testing.T) { - // ctx 一般一个请求一个ctx,正常应该有web层传入,例如gin的c.Request.Context().这里只是模拟 - var ctx = context.Background() - - // 需要手动开启事务,匿名函数返回的error如果不是nil,事务就会回滚.如果设置了DisableTransaction=true,Transaction方法失效,不再要求有事务 - // 如果zorm.DataSourceConfig.DefaultTxOptions配置不满足需求,可以在zorm.Transaction事务方法前设置事务的隔离级别 - // 例如 ctx, _ := dbDao.BindContextTxOptions(ctx, &sql.TxOptions{Isolation: sql.LevelDefault, ReadOnly: false}),如果txOptions为nil,使用zorm.DataSourceConfig.DefaultTxOptions - _, err := zorm.Transaction(ctx, func(ctx context.Context) (interface{}, error) { - - // slice存放的类型是zorm.IEntityStruct!!!使用IEntityStruct接口,兼容Struct实体类 - demoSlice := make([]zorm.IEntityStruct, 0) - - // 创建对象1 - demo1 := newDemoStruct() - demo1.UserName = "demo1" - // 创建对象2 - demo2 := newDemoStruct() - demo2.UserName = "demo2" - - demoSlice = append(demoSlice, &demo1, &demo2) - - // 批量保存对象,如果主键是自增,无法保存自增的ID到对象里. 
- _, err := zorm.InsertSlice(ctx, demoSlice)
-
- // 如果返回的err不是nil,事务就会回滚
- return nil, err
- })
- // 标记测试失败
- if err != nil {
- t.Errorf("错误:%v", err)
- }
-}
-
-// TestInsertEntityMap 04.测试保存EntityMap对象,用于不方便使用struct的场景,使用Map作为载体
-func TestInsertEntityMap(t *testing.T) {
- // ctx 一般一个请求一个ctx,正常应该有web层传入,例如gin的c.Request.Context().这里只是模拟
- var ctx = context.Background()
-
- // 需要手动开启事务,匿名函数返回的error如果不是nil,事务就会回滚.如果设置了DisableTransaction=true,Transaction方法失效,不再要求有事务
- // 如果zorm.DataSourceConfig.DefaultTxOptions配置不满足需求,可以在zorm.Transaction事务方法前设置事务的隔离级别
- // 例如 ctx, _ := dbDao.BindContextTxOptions(ctx, &sql.TxOptions{Isolation: sql.LevelDefault, ReadOnly: false}),如果txOptions为nil,使用zorm.DataSourceConfig.DefaultTxOptions
- _, err := zorm.Transaction(ctx, func(ctx context.Context) (interface{}, error) {
- // 创建一个EntityMap,需要传入表名
- entityMap := zorm.NewEntityMap(demoStructTableName)
- // 设置主键名称
- entityMap.PkColumnName = "id"
- // 如果是自增序列,设置序列的值
- // entityMap.PkSequence = "mySequence"
-
- // Set 设置数据库的字段值
- // 如果主键是自增或者序列,不要entityMap.Set主键的值
- entityMap.Set("id", zorm.FuncGenerateStringID(ctx))
- entityMap.Set("userName", "entityMap-userName")
- entityMap.Set("password", "entityMap-password")
- entityMap.Set("createTime", time.Now())
- entityMap.Set("active", 1)
-
- // 执行
- _, err := zorm.InsertEntityMap(ctx, entityMap)
-
- // 如果返回的err不是nil,事务就会回滚
- return nil, err
- })
- // 标记测试失败
- if err != nil {
- t.Errorf("错误:%v", err)
- }
-}
-
-
-// TestInsertEntityMapSlice 05.测试批量保存[]IEntityMap,用于不方便使用struct的场景,使用Map作为载体
-func TestInsertEntityMapSlice(t *testing.T) {
- // ctx 一般一个请求一个ctx,正常应该有web层传入,例如gin的c.Request.Context().这里只是模拟
- var ctx = context.Background()
-
- _, err := zorm.Transaction(ctx, func(ctx context.Context) (interface{}, error) {
- entityMapSlice := make([]zorm.IEntityMap, 0)
- entityMap1 := zorm.NewEntityMap(demoStructTableName)
- entityMap1.PkColumnName = "id"
- entityMap1.Set("id", zorm.FuncGenerateStringID(ctx))
- entityMap1.Set("userName", "entityMap-userName1")
-
entityMap1.Set("password", "entityMap-password1")
- entityMap1.Set("createTime", time.Now())
- entityMap1.Set("active", 1)
-
- entityMap2 := zorm.NewEntityMap(demoStructTableName)
- entityMap2.PkColumnName = "id"
- entityMap2.Set("id", zorm.FuncGenerateStringID(ctx))
- entityMap2.Set("userName", "entityMap-userName2")
- entityMap2.Set("password", "entityMap-password2")
- entityMap2.Set("createTime", time.Now())
- entityMap2.Set("active", 2)
-
- entityMapSlice = append(entityMapSlice, entityMap1, entityMap2)
-
- // 执行
- _, err := zorm.InsertEntityMapSlice(ctx, entityMapSlice)
-
- // 如果返回的err不是nil,事务就会回滚
- return nil, err
- })
- // 标记测试失败
- if err != nil {
- t.Errorf("错误:%v", err)
- }
-}
-
-// TestQueryRow 06.测试查询一个struct对象
-func TestQueryRow(t *testing.T) {
- // ctx 一般一个请求一个ctx,正常应该有web层传入,例如gin的c.Request.Context().这里只是模拟
- var ctx = context.Background()
-
- // 声明一个对象的指针,用于承载返回的数据
- demo := demoStruct{}
-
- // 构造查询用的finder
- // finder := zorm.NewSelectFinder(demoStructTableName) // select * from t_demo
- // finder := zorm.NewSelectFinder(demoStructTableName, "id,userName") // select id,userName from t_demo
- finder := zorm.NewFinder().Append("SELECT * FROM " + demoStructTableName) // select * from t_demo
- // finder默认启用了sql注入检查,禁止语句中拼接 ' 单引号,可以设置 finder.InjectionCheck = false 解开限制
-
- // finder.Append 第一个参数是语句,后面的参数是对应的值,值的顺序要正确.语句统一使用?,zorm会处理数据库的差异
- // in (?) 参数必须有()括号,不能 in ?
- finder.Append("WHERE id=? and active in(?)", "20210630163227149563000042432429", []int{0, 1})
-
- // 如何使用like
- // finder.Append("WHERE id like ?
", "20210630163227149563000042432429%")
-
- // 执行查询,has为true表示数据库有数据
- has, err := zorm.QueryRow(ctx, finder, &demo)
-
- if err != nil { // 标记测试失败
- t.Errorf("错误:%v", err)
- }
- // 打印结果
- fmt.Println(has, demo)
-}
-
-// TestQueryRowMap 07.测试查询map接收结果,用于不太适合struct的场景,比较灵活
-func TestQueryRowMap(t *testing.T) {
- // ctx 一般一个请求一个ctx,正常应该有web层传入,例如gin的c.Request.Context().这里只是模拟
- var ctx = context.Background()
-
- // 构造查询用的finder
- // finder := zorm.NewSelectFinder(demoStructTableName) // select * from t_demo
- finder := zorm.NewFinder().Append("SELECT * FROM " + demoStructTableName) // select * from t_demo
- // finder.Append 第一个参数是语句,后面的参数是对应的值,值的顺序要正确.语句统一使用?,zorm会处理数据库的差异
- // in (?) 参数必须有()括号,不能 in ?
- finder.Append("WHERE id=? and active in(?)", "20210630163227149563000042432429", []int{0, 1})
- // 执行查询
- resultMap, err := zorm.QueryRowMap(ctx, finder)
-
- if err != nil { // 标记测试失败
- t.Errorf("错误:%v", err)
- }
- // 打印结果
- fmt.Println(resultMap)
-}
-
-// TestQuery 08.测试查询对象列表
-func TestQuery(t *testing.T) {
- // ctx 一般一个请求一个ctx,正常应该有web层传入,例如gin的c.Request.Context().这里只是模拟
- var ctx = context.Background()
-
- // 创建用于接收结果的slice
- list := make([]demoStruct, 0)
-
- // 构造查询用的finder
- // finder := zorm.NewSelectFinder(demoStructTableName) // select * from t_demo
- finder := zorm.NewFinder().Append("SELECT id FROM " + demoStructTableName) // select id from t_demo
- // 创建分页对象,查询完成后,page对象可以直接给前端分页组件使用
- page := zorm.NewPage()
- page.PageNo = 1 // 查询第1页,默认是1
- page.PageSize = 20 // 每页20条,默认是20
-
- // 不查询总条数
- // finder.SelectTotalCount = false
-
- // 如果是特别复杂的语句,造成count语句构造失败,可以手动指定分页语句
- // countFinder := zorm.NewFinder().Append("select count(*) from (")
- // countFinder.AppendFinder(finder)
- // countFinder.Append(") tempcountfinder")
- // finder.CountFinder = countFinder
-
- // 执行查询
- err := zorm.Query(ctx, finder, &list, page)
- if err != nil { // 标记测试失败
- t.Errorf("错误:%v", err)
- }
- // 打印结果
- fmt.Println("总条数:", page.TotalCount, " 列表:", list)
-}
-
-// TestQueryMap
09.测试查询map列表,用于不方便使用struct的场景,一条记录是一个map对象 -func TestQueryMap(t *testing.T) { - // ctx 一般一个请求一个ctx,正常应该有web层传入,例如gin的c.Request.Context().这里只是模拟 - var ctx = context.Background() - - // 构造查询用的finder - // finder := zorm.NewSelectFinder(demoStructTableName) // select * from t_demo - finder := zorm.NewFinder().Append("SELECT * FROM " + demoStructTableName) // select * from t_demo - // 创建分页对象,查询完成后,page对象可以直接给前端分页组件使用 - page := zorm.NewPage() - page.PageNo = 1 // 查询第1页,默认是1 - page.PageSize = 20 // 每页20条,默认是20 - - // 不查询总条数 - // finder.SelectTotalCount = false - - // 如果是特别复杂的语句,造成count语句构造失败,可以手动指定分页语句 - // countFinder := zorm.NewFinder().Append("select count(*) from (") - // countFinder.AppendFinder(finder) - // countFinder.Append(") tempcountfinder") - // finder.CountFinder = countFinder - - // 执行查询 - listMap, err := zorm.QueryMap(ctx, finder, page) - if err != nil { // 标记测试失败 - t.Errorf("错误:%v", err) - } - // 打印结果 - fmt.Println("总条数:", page.TotalCount, " 列表:", listMap) -} - -// TestUpdateNotZeroValue 10.更新struct对象,只更新不为零值的字段.主键必须有值 -func TestUpdateNotZeroValue(t *testing.T) { - // ctx 一般一个请求一个ctx,正常应该有web层传入,例如gin的c.Request.Context().这里只是模拟 - var ctx = context.Background() - - // 需要手动开启事务,匿名函数返回的error如果不是nil,事务就会回滚.如果设置了DisableTransaction=true,Transaction方法失效,不再要求有事务 - // 如果zorm.DataSourceConfig.DefaultTxOptions配置不满足需求,可以在zorm.Transaction事务方法前设置事务的隔离级别 - // 例如 ctx, _ := dbDao.BindContextTxOptions(ctx, &sql.TxOptions{Isolation: sql.LevelDefault, ReadOnly: false}),如果txOptions为nil,使用zorm.DataSourceConfig.DefaultTxOptions - _, err := zorm.Transaction(ctx, func(ctx context.Context) (interface{}, error) { - // 声明一个对象的指针,用于更新数据 - demo := demoStruct{} - demo.Id = "20210630163227149563000042432429" - demo.UserName = "UpdateNotZeroValue" - - // 更新 "sql":"UPDATE t_demo SET userName=? 
WHERE id=?","args":["UpdateNotZeroValue","20210630163227149563000042432429"] - _, err := zorm.UpdateNotZeroValue(ctx, &demo) - - // 如果返回的err不是nil,事务就会回滚 - return nil, err - }) - if err != nil { // 标记测试失败 - t.Errorf("错误:%v", err) - } - -} - -// TestUpdate 11.更新struct对象,更新所有字段.主键必须有值 -func TestUpdate(t *testing.T) { - // ctx 一般一个请求一个ctx,正常应该有web层传入,例如gin的c.Request.Context().这里只是模拟 - var ctx = context.Background() - - // 需要手动开启事务,匿名函数返回的error如果不是nil,事务就会回滚.如果设置了DisableTransaction=true,Transaction方法失效,不再要求有事务 - // 如果zorm.DataSourceConfig.DefaultTxOptions配置不满足需求,可以在zorm.Transaction事务方法前设置事务的隔离级别 - // 例如 ctx, _ := dbDao.BindContextTxOptions(ctx, &sql.TxOptions{Isolation: sql.LevelDefault, ReadOnly: false}),如果txOptions为nil,使用zorm.DataSourceConfig.DefaultTxOptions - _, err := zorm.Transaction(ctx, func(ctx context.Context) (interface{}, error) { - - // 声明一个对象的指针,用于更新数据 - demo := demoStruct{} - demo.Id = "20210630163227149563000042432429" - demo.UserName = "TestUpdate" - - _, err := zorm.Update(ctx, &demo) - - // 如果返回的err不是nil,事务就会回滚 - return nil, err - }) - if err != nil { // 标记测试失败 - t.Errorf("错误:%v", err) - } -} - -// TestUpdateFinder 12.通过finder更新,zorm最灵活的方式,可以编写任何更新语句,甚至手动编写insert语句 -func TestUpdateFinder(t *testing.T) { - // ctx 一般一个请求一个ctx,正常应该有web层传入,例如gin的c.Request.Context().这里只是模拟 - var ctx = context.Background() - - // 需要手动开启事务,匿名函数返回的error如果不是nil,事务就会回滚.如果设置了DisableTransaction=true,Transaction方法失效,不再要求有事务 - // 如果zorm.DataSourceConfig.DefaultTxOptions配置不满足需求,可以在zorm.Transaction事务方法前设置事务的隔离级别 - // 例如 ctx, _ := dbDao.BindContextTxOptions(ctx, &sql.TxOptions{Isolation: sql.LevelDefault, ReadOnly: false}),如果txOptions为nil,使用zorm.DataSourceConfig.DefaultTxOptions - _, err := zorm.Transaction(ctx, func(ctx context.Context) (interface{}, error) { - // finder := zorm.NewUpdateFinder(demoStructTableName) // UPDATE t_demo SET - // finder := zorm.NewDeleteFinder(demoStructTableName) // DELETE FROM t_demo - finder := 
zorm.NewFinder().Append("UPDATE").Append(demoStructTableName).Append("SET") // UPDATE t_demo SET - finder.Append("userName=?,active=?", "TestUpdateFinder", 1).Append("WHERE id=?", "20210630163227149563000042432429") - - // 更新 "sql":"UPDATE t_demo SET userName=?,active=? WHERE id=?","args":["TestUpdateFinder",1,"20210630163227149563000042432429"] - _, err := zorm.UpdateFinder(ctx, finder) - - // 如果返回的err不是nil,事务就会回滚 - return nil, err - }) - if err != nil { // 标记测试失败 - t.Errorf("错误:%v", err) - } - -} - -// TestUpdateEntityMap 13.更新一个EntityMap,主键必须有值 -func TestUpdateEntityMap(t *testing.T) { - // ctx 一般一个请求一个ctx,正常应该有web层传入,例如gin的c.Request.Context().这里只是模拟 - var ctx = context.Background() - - // 需要手动开启事务,匿名函数返回的error如果不是nil,事务就会回滚.如果设置了DisableTransaction=true,Transaction方法失效,不再要求有事务 - // 如果zorm.DataSourceConfig.DefaultTxOptions配置不满足需求,可以在zorm.Transaction事务方法前设置事务的隔离级别 - // 例如 ctx, _ := dbDao.BindContextTxOptions(ctx, &sql.TxOptions{Isolation: sql.LevelDefault, ReadOnly: false}),如果txOptions为nil,使用zorm.DataSourceConfig.DefaultTxOptions - _, err := zorm.Transaction(ctx, func(ctx context.Context) (interface{}, error) { - // 创建一个EntityMap,需要传入表名 - entityMap := zorm.NewEntityMap(demoStructTableName) - // 设置主键名称 - entityMap.PkColumnName = "id" - // Set 设置数据库的字段值,主键必须有值 - entityMap.Set("id", "20210630163227149563000042432429") - entityMap.Set("userName", "TestUpdateEntityMap") - // 更新 "sql":"UPDATE t_demo SET userName=? 
WHERE id=?","args":["TestUpdateEntityMap","20210630163227149563000042432429"] - _, err := zorm.UpdateEntityMap(ctx, entityMap) - - // 如果返回的err不是nil,事务就会回滚 - return nil, err - }) - if err != nil { // 标记测试失败 - t.Errorf("错误:%v", err) - } - -} - -// TestDelete 14.删除一个struct对象,主键必须有值 -func TestDelete(t *testing.T) { - // ctx 一般一个请求一个ctx,正常应该有web层传入,例如gin的c.Request.Context().这里只是模拟 - var ctx = context.Background() - - // 需要手动开启事务,匿名函数返回的error如果不是nil,事务就会回滚.如果设置了DisableTransaction=true,Transaction方法失效,不再要求有事务 - // 如果zorm.DataSourceConfig.DefaultTxOptions配置不满足需求,可以在zorm.Transaction事务方法前设置事务的隔离级别 - // 例如 ctx, _ := dbDao.BindContextTxOptions(ctx, &sql.TxOptions{Isolation: sql.LevelDefault, ReadOnly: false}),如果txOptions为nil,使用zorm.DataSourceConfig.DefaultTxOptions - _, err := zorm.Transaction(ctx, func(ctx context.Context) (interface{}, error) { - demo := demoStruct{} - demo.Id = "20210630163227149563000042432429" - - // 删除 "sql":"DELETE FROM t_demo WHERE id=?","args":["20210630163227149563000042432429"] - _, err := zorm.Delete(ctx, &demo) - - // 如果返回的err不是nil,事务就会回滚 - return nil, err - }) - if err != nil { // 标记测试失败 - t.Errorf("错误:%v", err) - } - -} - -// TestProc 15.测试调用存储过程 -func TestProc(t *testing.T) { - // ctx 一般一个请求一个ctx,正常应该有web层传入,例如gin的c.Request.Context().这里只是模拟 - var ctx = context.Background() - - demo := demoStruct{} - finder := zorm.NewFinder().Append("call testproc(?) ", "u_10001") - zorm.QueryRow(ctx, finder, &demo) - fmt.Println(demo) -} - -// TestFunc 16.测试调用自定义函数 -func TestFunc(t *testing.T) { - // ctx 一般一个请求一个ctx,正常应该有web层传入,例如gin的c.Request.Context().这里只是模拟 - var ctx = context.Background() - - userName := "" - finder := zorm.NewFinder().Append("select testfunc(?) 
", "u_10001")
- zorm.QueryRow(ctx, finder, &userName)
- fmt.Println(userName)
-}
-
-// TestOther 17.其他的一些说明.非常感谢您能看到这一行
-func TestOther(t *testing.T) {
- // ctx 一般一个请求一个ctx,正常应该有web层传入,例如gin的c.Request.Context().这里只是模拟
- var ctx = context.Background()
-
- // 场景1.多个数据库.通过对应数据库的dbDao,调用BindContextDBConnection函数,把这个数据库的连接绑定到返回的ctx上,然后把ctx传递到zorm的函数即可
- // 也可以重写FuncReadWriteStrategy函数,通过ctx设置不同的key,返回指定数据库的DBDao
- newCtx, err := dbDao.BindContextDBConnection(ctx)
- if err != nil { // 标记测试失败
- t.Errorf("错误:%v", err)
- }
-
- finder := zorm.NewFinder().Append("SELECT * FROM " + demoStructTableName) // select * from t_demo
- // 把新产生的newCtx传递到zorm的函数
- list, _ := zorm.QueryMap(newCtx, finder, nil)
- fmt.Println(list)
-
- // 场景2.单个数据库的读写分离.设置读写分离的策略函数.
- zorm.FuncReadWriteStrategy = myReadWriteStrategy
-
- // 场景3.如果是多个数据库,每个数据库还读写分离,按照 场景1 处理.
- // 也可以重写FuncReadWriteStrategy函数,通过ctx设置不同的key,返回指定数据库的DBDao
-
-}
-
-// myReadWriteStrategy 数据库的读写分离的策略 rwType=0 read,rwType=1 write
-// 也可以通过ctx设置不同的key,返回指定数据库的DBDao
-func myReadWriteStrategy(ctx context.Context, rwType int) (*zorm.DBDao, error) {
- // 根据自己的业务场景,返回需要的读写dao,每次需要数据库的连接的时候,会调用这个函数
- // if rwType == 0 {
- // return dbReadDao
- // }
- // return dbWriteDao
-
- return dbDao, nil
-}
-
-// --------------------------------------------
-// ICustomDriverValueConver接口,参见达梦的例子
-
-// --------------------------------------------
-// OverrideFunc 重写ZORM的函数,当你使用这个函数时,你必须知道自己在做什么
-
-```
-## 分布式事务
-### seata-go CallbackWithCtx函数模式
-```golang
-// DataSourceConfig 配置 DefaultTxOptions
-// DefaultTxOptions: &sql.TxOptions{Isolation: sql.LevelDefault, ReadOnly: false},
-
-// 引入seata-go 依赖包
-import (
- "context"
- "fmt"
- "time"
-
- "github.com/seata/seata-go/pkg/client"
- "github.com/seata/seata-go/pkg/tm"
- seataSQL "github.com/seata/seata-go/pkg/datasource/sql" //注意:zorm的 DriverName: seataSQL.SeataATMySQLDriver, !!!!
-) - -// 配置文件路径 -var configPath = "./conf/seatago.yml" - -func main() { - // 加载配置文件 - client.InitPath(configPath) - - //初始化zorm数据库 - //注意:zorm的 DriverName: seataSQL.SeataATMySQLDriver, !!!! - initZorm() - - //开启分布式事务 - tm.WithGlobalTx(context.Background(), &tm.GtxConfig{ - Name: "ATSampleLocalGlobalTx", - Timeout: time.Second * 30, - }, CallbackWithCtx) - // CallbackWithCtx business callback definition - // type CallbackWithCtx func(ctx context.Context) error - - - // 事务开启之后获取XID.可以通过gin的header传递,或者其他方式传递 - // xid:=tm.GetXID(ctx) - // tm.SetXID(ctx, xid) - - // 如果使用的gin框架,可以使用中间件绑定参数 - // r.Use(ginmiddleware.TransactionMiddleware()) - -} -``` - -### seata-go 事务托管模式 - -```golang - -// 不使用seata-go CallbackWithCtx函数,zorm实现事务托管,不修改业务代码,零侵入实现分布式事务 - -// 必须手动开启分布式事务,必须放到本地事务开启之前调用 -ctx,_ = zorm.BindContextEnableGlobalTransaction(ctx) -// 分布式事务示例代码 -_, err := zorm.Transaction(ctx, func(ctx context.Context) (interface{}, error) { - - // 获取当前分布式事务的XID.不用考虑怎么来的,如果是分布式事务环境,会自动设置值 - // xid := ctx.Value("XID").(string) - - // 把xid传递到第三方应用 - // req.Header.Set("XID", xid) - - // 如果返回的err不是nil,本地事务和分布式事务就会回滚 - return nil, err -}) - -// /----------第三方应用-------/ // - - // 不要使用seata-go默认提供的中间件,只需要ctx绑定XID即可 !!! - //// r.Use(ginmiddleware.TransactionMiddleware()) - xid := c.GetHeader(constant.XidKey) - ctx = context.WithValue(ctx, "XID", xid) - - // 必须手动开启分布式事务,必须放到本地事务开启之前调用 - ctx,_ = zorm.BindContextEnableGlobalTransaction(ctx) - // ctx绑定XID之后,调用业务事务 -_, err := zorm.Transaction(ctx, func(ctx context.Context) (interface{}, error) { - - // 业务代码...... - - // 如果返回的err不是nil,本地事务和分布式事务就会回滚 - return nil, err -}) - - - -// 建议以下代码放到单独的文件里 -// ................// - - -// ZormGlobalTransaction 包装seata-go的*tm.GlobalTransactionManager,实现zorm.IGlobalTransaction接口 -type ZormGlobalTransaction struct { - *tm.GlobalTransactionManager -} - -// MyFuncGlobalTransaction zorm适配seata-go全局分布式事务的函数 -// 重要!!!!需要配置zorm.DataSourceConfig.FuncGlobalTransaction=MyFuncGlobalTransaction 重要!!! 
-func MyFuncGlobalTransaction(ctx context.Context) (zorm.IGlobalTransaction, context.Context, context.Context, error) { - // 创建seata-go事务 - globalTx := tm.GetGlobalTransactionManager() - // 使用zorm.IGlobalTransaction接口对象包装分布式事务,隔离seata-go依赖 - globalTransaction := &ZormGlobalTransaction{globalTx} - - if tm.IsSeataContext(ctx) { - return globalTransaction, ctx, ctx, nil - } - // open global transaction for the first time - ctx = tm.InitSeataContext(ctx) - // 有请求传入,手动获取的XID - xidObj := ctx.Value("XID") - if xidObj != nil { - xid := xidObj.(string) - tm.SetXID(ctx, xid) - } - tm.SetTxName(ctx, "ATSampleLocalGlobalTx") - - // use new context to process current global transaction. - if tm.IsGlobalTx(ctx) { - globalRootContext := transferTx(ctx) - return globalTransaction, ctx, globalRootContext, nil - } - return globalTransaction, ctx, ctx, nil -} - -// 实现zorm.IGlobalTransaction 托管全局分布式事务接口 -// BeginGTX 开启全局分布式事务 -func (gtx *ZormGlobalTransaction) BeginGTX(ctx context.Context, globalRootContext context.Context) error { - //tm.SetTxStatus(globalRootContext, message.GlobalStatusBegin) - err := gtx.Begin(globalRootContext, time.Second*30) - return err -} - -// CommitGTX 提交全局分布式事务 -func (gtx *ZormGlobalTransaction) CommitGTX(ctx context.Context, globalRootContext context.Context) error { - gtr := tm.GetTx(globalRootContext) - return gtx.Commit(globalRootContext, gtr) -} - -// RollbackGTX 回滚全局分布式事务 -func (gtx *ZormGlobalTransaction) RollbackGTX(ctx context.Context, globalRootContext context.Context) error { - gtr := tm.GetTx(globalRootContext) - // 如果是Participant角色,修改为Launcher角色,允许分支事务提交全局事务. - if gtr.TxRole != tm.Launcher { - gtr.TxRole = tm.Launcher - } - return gtx.Rollback(globalRootContext, gtr) -} - -// GetGTXID 获取全局分布式事务的XID -func (gtx *ZormGlobalTransaction) GetGTXID(ctx context.Context, globalRootContext context.Context) (string, error) { - return tm.GetXID(globalRootContext), nil -} - -// transferTx transfer the gtx into a new ctx from old ctx. 
-// use it to implement suspend and resume instead of seata java -func transferTx(ctx context.Context) context.Context { - newCtx := tm.InitSeataContext(context.Background()) - tm.SetXID(newCtx, tm.GetXID(ctx)) - return newCtx -} - -// ................// -``` - -### hptx proxy模式 - -hptx已合并[@小口天](https://gitee.com/wuxiangege)的pr, [在hptx代理模式下的zorm使用示例](https://github.com/CECTC/hptx-samples/tree/main/http_proxy_zorm) - -```golang -// DataSourceConfig 配置 DefaultTxOptions -// DefaultTxOptions: &sql.TxOptions{Isolation: sql.LevelDefault, ReadOnly: false}, - -// 引入hptx 依赖包 -import ( - "github.com/cectc/hptx" - "github.com/cectc/hptx/pkg/config" - "github.com/cectc/hptx/pkg/resource" - "github.com/cectc/mysql" - "github.com/cectc/hptx/pkg/tm" - - gtxContext "github.com/cectc/hptx/pkg/base/context" -) - -// 配置文件路径 -var configPath = "./conf/config.yml" - -func main() { - - // 初始化配置 - hptx.InitFromFile(configPath) - - // 注册mysql驱动 - mysql.RegisterResource(config.GetATConfig().DSN) - resource.InitATBranchResource(mysql.GetDataSourceManager()) - // sqlDB, err := sql.Open("mysql", config.GetATConfig().DSN) - - - // 后续正常初始化zorm,一定要放到hptx mysql 初始化后面!!! 
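-
- // 示意代码(DSN等均为假设值,仅供参考):此处初始化zorm,字段取自本文档前面的DataSourceConfig说明
- // dbDao, _ = zorm.NewDBDao(&zorm.DataSourceConfig{
- //  DSN:        "root:root@tcp(127.0.0.1:3306)/zorm?charset=utf8&parseTime=true",
- //  DriverName: "mysql",
- //  Dialect:    "mysql",
- //  FuncGlobalTransaction: MyFuncGlobalTransaction, // zorm事务托管hptx时需要配置
- // })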
- - // ................// - // tm注册事务服务,参照官方例子.(事务托管主要是去掉proxy,对业务零侵入) - tm.Implement(svc.ProxySvc) - // ................// - - - // 获取hptx的rootContext - // rootContext := gtxContext.NewRootContext(ctx) - // rootContext := ctx.(*gtxContext.RootContext) - - // 创建hptx事务 - // globalTx := tm.GetCurrentOrCreate(rootContext) - - // 开始事务 - // globalTx.BeginWithTimeoutAndName(int32(6000), "事务名称", rootContext) - - // 事务开启之后获取XID.可以通过gin的header传递,或者其他方式传递 - // xid:=rootContext.GetXID() - - // 如果使用的gin框架,获取到ctx - // ctx := c.Request.Context() - - // 接受传递过来的XID,绑定到本地ctx - // ctx =context.WithValue(ctx,mysql.XID,xid) -} -``` - -### hptx 事务托管模式 - -hptx已合并[@小口天](https://gitee.com/wuxiangege)的pr, [zorm事务托管hptx示例](https://github.com/CECTC/hptx-samples/tree/main/http_zorm) - -```golang - -// 不使用proxy代理模式,zorm实现事务托管,不修改业务代码,零侵入实现分布式事务 -// tm.Implement(svc.ProxySvc) - -// 必须手动开启分布式事务,必须放到本地事务开启之前调用 -ctx,_ = zorm.BindContextEnableGlobalTransaction(ctx) -// 分布式事务示例代码 -_, err := zorm.Transaction(ctx, func(ctx context.Context) (interface{}, error) { - - // 获取当前分布式事务的XID.不用考虑怎么来的,如果是分布式事务环境,会自动设置值 - // xid := ctx.Value("XID").(string) - - // 把xid传递到第三方应用 - // req.Header.Set("XID", xid) - - // 如果返回的err不是nil,本地事务和分布式事务就会回滚 - return nil, err -}) - -// /----------第三方应用-------// / - -// 第三方应用开启事务前,ctx需要绑定XID,例如使用了gin框架 - -// 接受传递过来的XID,绑定到本地ctx -// xid:=c.Request.Header.Get("XID") -// 获取到ctx -// ctx := c.Request.Context() -// ctx = context.WithValue(ctx,"XID",xid) - -// 必须手动开启分布式事务,必须放到本地事务开启之前调用 -ctx,_ = zorm.BindContextEnableGlobalTransaction(ctx) -// ctx绑定XID之后,调用业务事务 -_, err := zorm.Transaction(ctx, func(ctx context.Context) (interface{}, error) { - - // 业务代码...... 
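-
- // 示意代码(仅供参考,t_demo和id值为本文档前面示例中的假设值):闭包内可以执行任意zorm操作
- // finder := zorm.NewUpdateFinder("t_demo") // UPDATE t_demo SET
- // finder.Append("userName=?", "hptx-demo").Append("WHERE id=?", "20210630163227149563000042432429")
- // _, err := zorm.UpdateFinder(ctx, finder)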
-
-	// If the returned err is not nil, both the local transaction and the distributed transaction roll back
-	return nil, err
-})
-
-
-// The following code is best kept in a separate file
-// ................//
-
-// ZormGlobalTransaction wraps hptx's *tm.DefaultGlobalTransaction and implements the zorm.IGlobalTransaction interface
-type ZormGlobalTransaction struct {
-	*tm.DefaultGlobalTransaction
-}
-
-// MyFuncGlobalTransaction adapts zorm to hptx global distributed transactions
-// Important!!!! You must configure zorm.DataSourceConfig.FuncGlobalTransaction=MyFuncGlobalTransaction Important!!!
-func MyFuncGlobalTransaction(ctx context.Context) (zorm.IGlobalTransaction, context.Context, context.Context, error) {
-	// Get the hptx rootContext
-	rootContext := gtxContext.NewRootContext(ctx)
-	// Create an hptx transaction
-	globalTx := tm.GetCurrentOrCreate(rootContext)
-	// Wrap the distributed transaction in a zorm.IGlobalTransaction interface object, to isolate the hptx dependency
-	globalTransaction := &ZormGlobalTransaction{globalTx}
-
-	return globalTransaction, ctx, rootContext, nil
-}
-
-
-// Implement the zorm.IGlobalTransaction interface for managed global distributed transactions
-// BeginGTX begins the global distributed transaction
-func (gtx *ZormGlobalTransaction) BeginGTX(ctx context.Context, globalRootContext context.Context) error {
-	rootContext := globalRootContext.(*gtxContext.RootContext)
-	return gtx.BeginWithTimeout(int32(6000), rootContext)
-}
-
-// CommitGTX commits the global distributed transaction
-func (gtx *ZormGlobalTransaction) CommitGTX(ctx context.Context, globalRootContext context.Context) error {
-	rootContext := globalRootContext.(*gtxContext.RootContext)
-	return gtx.Commit(rootContext)
-}
-
-// RollbackGTX rolls back the global distributed transaction
-func (gtx *ZormGlobalTransaction) RollbackGTX(ctx context.Context, globalRootContext context.Context) error {
-	rootContext := globalRootContext.(*gtxContext.RootContext)
-	// If the role is Participant, change it to Launcher, so the branch transaction is allowed to commit the global transaction.
-	if gtx.Role != tm.Launcher {
-		gtx.Role = tm.Launcher
-	}
-	return gtx.Rollback(rootContext)
-}
-// GetGTXID gets the XID of the global distributed transaction
-func (gtx *ZormGlobalTransaction) GetGTXID(ctx context.Context, globalRootContext context.Context) (string, error) {
-	rootContext := globalRootContext.(*gtxContext.RootContext)
-	return rootContext.GetXID(), nil
-}
-
-// ................//
-```
-
-
-
-### dbpack distributed transactions
-
-```dbpack``` documentation: https://cectc.github.io/dbpack-doc/#/README
-dbpack is deployed as a Mesh, so application integration is simple: just get the xid and put it into a SQL hint
-```golang
-// Before starting a dbpack transaction, bind the sql hint to ctx; for example, use the gin framework to read the xid passed in a header
-xid := c.Request.Header.Get("xid")
-// Build the sql hint content from the xid, then bind the hint to ctx
-hint := fmt.Sprintf("/*+ XID('%s') */", xid)
-// Get the ctx
-ctx := c.Request.Context()
-// Bind the hint to ctx
-ctx,_ = zorm.BindContextSQLHint(ctx,hint)
-
-// After binding the sql hint to ctx, call the business transaction and pass ctx on to propagate the distributed transaction
-_, err := zorm.Transaction(ctx, func(ctx context.Context) (interface{}, error) {
-
-	// business code......
-
-	// If the returned err is not nil, both the local transaction and the distributed transaction roll back
-	return nil, err
-})
-
-```
-
-
diff --git a/vendor/gitee.com/chunanyong/zorm/dataSource.go b/vendor/gitee.com/chunanyong/zorm/dataSource.go
deleted file mode 100644
index b9d09142..00000000
--- a/vendor/gitee.com/chunanyong/zorm/dataSource.go
+++ /dev/null
@@ -1,318 +0,0 @@
-/*
- * Licensed to the Apache Software Foundation (ASF) under one or more
- * contributor license agreements. See the NOTICE file distributed with
- * this work for additional information regarding copyright ownership.
- * The ASF licenses this file to You under the Apache License, Version 2.0
- * (the "License"); you may not use this file except in compliance with
- * the License. You may obtain a copy of the License at
- *
- *     http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- *
- */
-
-package zorm
-
-import (
-	"context"
-	"database/sql"
-	"errors"
-	"fmt"
-	"time"
-)
-
-// dataSource object; isolates the native sql objects
-type dataSource struct {
-	*sql.DB
-	// config *DataSourceConfig
-}
-
-// newDataSource creates a new dataSource. It is called internally, to avoid direct external use of the dataSource
-func newDataSource(config *DataSourceConfig) (*dataSource, error) {
-	if config == nil {
-		return nil, errors.New("->newDataSource-->config cannot be nil")
-	}
-
-	if config.DriverName == "" {
-		return nil, errors.New("->newDataSource-->DriverName cannot be empty")
-	}
-	// Compatibility handling: DBType is about to be deprecated, use the Dialect property instead
-	if config.DBType != "" && config.Dialect == "" {
-		FuncLogError(nil, errors.New("->newDataSource-->DataSourceConfig的DBType即将废弃,请使用Dialect属性"))
-		config.Dialect = config.DBType
-	}
-	if config.Dialect == "" {
-		return nil, errors.New("->newDataSource-->Dialect cannot be empty")
-	}
-	var db *sql.DB
-	var errSQLOpen error
-
-	if config.SQLDB == nil { // No existing database connection; initialize with the DSN
-		if config.DSN == "" {
-			return nil, errors.New("->newDataSource-->DSN cannot be empty")
-		}
-		db, errSQLOpen = sql.Open(config.DriverName, config.DSN)
-		if errSQLOpen != nil {
-			errSQLOpen = fmt.Errorf("->newDataSource-->open数据库打开失败:%w", errSQLOpen)
-			FuncLogError(nil, errSQLOpen)
-			return nil, errSQLOpen
-		}
-	} else { // Use the existing database connection
-		db = config.SQLDB
-	}
-
-	if config.MaxOpenConns == 0 {
-		config.MaxOpenConns = 50
-	}
-	if config.MaxIdleConns == 0 {
-		config.MaxIdleConns = 50
-	}
-
-	if config.ConnMaxLifetimeSecond == 0 {
-		config.ConnMaxLifetimeSecond = 600
-	}
-
-	// Set the maximum number of database connections
-	db.SetMaxOpenConns(config.MaxOpenConns)
-	// Set the maximum number of idle database connections
-	
db.SetMaxIdleConns(config.MaxIdleConns)
-	// Connection lifetime in seconds. By default a connection is destroyed and rebuilt after 600 seconds (10 minutes).
-	// This prevents the database from actively disconnecting and leaving dead connections. MySQL's default wait_timeout is 28800 seconds (8 hours)
-	db.SetConnMaxLifetime(time.Second * time.Duration(config.ConnMaxLifetimeSecond))
-
-	// Verify the connection
-	if pingerr := db.Ping(); pingerr != nil {
-		pingerr = fmt.Errorf("->newDataSource-->ping数据库失败:%w", pingerr)
-		FuncLogError(nil, pingerr)
-		db.Close()
-		return nil, pingerr
-	}
-
-	return &dataSource{db}, nil
-}
-
-// Transaction reference: https://www.jianshu.com/p/2a144332c3db
-
-// dataBaseConnection is a database session; it can run native queries or transactions
-type dataBaseConnection struct {
-	// native db
-	db *sql.DB
-
-	// native transaction
-	tx *sql.Tx
-
-	// database configuration
-	config *DataSourceConfig
-}
-
-// beginTx opens a transaction
-func (dbConnection *dataBaseConnection) beginTx(ctx context.Context) error {
-	if dbConnection.tx != nil {
-		return nil
-	}
-	// Set the transaction options, mainly the isolation level
-	var txOptions *sql.TxOptions
-	contextTxOptions := ctx.Value(contextTxOptionsKey)
-	if contextTxOptions != nil {
-		txOptions, _ = contextTxOptions.(*sql.TxOptions)
-	} else {
-		txOptions = dbConnection.config.DefaultTxOptions
-	}
-
-	tx, err := dbConnection.db.BeginTx(ctx, txOptions)
-	if err != nil {
-		err = fmt.Errorf("->beginTx事务开启失败:%w", err)
-		return err
-	}
-	dbConnection.tx = tx
-	return nil
-}
-
-// rollback rolls back the transaction
-func (dbConnection *dataBaseConnection) rollback() error {
-	if dbConnection.tx == nil {
-		return nil
-	}
-
-	err := dbConnection.tx.Rollback()
-	if err != nil {
-		err = fmt.Errorf("->rollback事务回滚失败:%w", err)
-		return err
-	}
-	dbConnection.tx = nil
-	return nil
-}
-
-// commit commits the transaction
-func (dbConnection *dataBaseConnection) commit() error {
-	if dbConnection.tx == nil {
-		return errors.New("->dbConnection.commit()事务为空")
-	}
-
-	err := dbConnection.tx.Commit()
-	if err != nil {
-		err = fmt.Errorf("->dbConnection.commit()事务提交失败:%w", err)
-		return err
-	}
-	dbConnection.tx = nil
-	return nil
-}
-
-// execContext executes a sql statement. If a transaction has been opened it executes in transaction mode, otherwise in non-transactional mode
-func (dbConnection *dataBaseConnection) execContext(ctx context.Context, sqlstr *string, argsValues *[]interface{}) (*sql.Result, error) {
-	// reBindSQL rewrites the parameter placeholders
-	execsql, args, err := reBindSQL(dbConnection.config.Dialect, sqlstr, argsValues)
-	if err != nil {
-		return nil, err
-	}
-	// Handle ClickHouse's special syntax for update statements
-	err = reUpdateSQL(dbConnection.config.Dialect, execsql)
-	if err != nil {
-		return nil, err
-	}
-	// Add the hint before execution
-	err = wrapSQLHint(ctx, execsql)
-	if err != nil {
-		return nil, err
-	}
-	var start *time.Time
-	var res sql.Result
-	// Less than 0 disables logging; equal to 0 logs every SQL without timing it; greater than 0 times the SQL and logs it only when it exceeds the threshold
-	slowSQLMillis := dbConnection.config.SlowSQLMillis
-	if slowSQLMillis == 0 {
-		FuncPrintSQL(ctx, *execsql, *args, 0)
-	} else if slowSQLMillis > 0 {
-		now := time.Now() // Get the current time
-		start = &now
-	}
-	if dbConnection.tx != nil {
-		res, err = dbConnection.tx.ExecContext(ctx, *execsql, *args...)
-	} else {
-		res, err = dbConnection.db.ExecContext(ctx, *execsql, *args...)
-	}
-	if slowSQLMillis > 0 {
-		slow := time.Since(*start).Milliseconds()
-		if slow-int64(slowSQLMillis) >= 0 {
-			FuncPrintSQL(ctx, *execsql, *args, slow)
-		}
-	}
-	if err != nil {
-		err = fmt.Errorf("->execContext执行错误:%w,-->zormErrorExecSQL:%s,-->zormErrorSQLValues:%v", err, *execsql, *args)
-	}
-	return &res, err
-}
-
-// queryRowContext executes in transaction mode if a transaction has been opened, otherwise in non-transactional mode
-func (dbConnection *dataBaseConnection) queryRowContext(ctx context.Context, sqlstr *string, argsValues *[]interface{}) (*sql.Row, error) {
-	// reBindSQL rewrites the parameter placeholders
-	query, args, err := reBindSQL(dbConnection.config.Dialect, sqlstr, argsValues)
-	if err != nil {
-		return nil, err
-	}
-	// Add the hint before execution
-	err = wrapSQLHint(ctx, query)
-	if err != nil {
-		return nil, err
-	}
-	var start *time.Time
-	var row *sql.Row
-	// Less than 0 disables logging; equal to 0 logs every SQL without timing it; greater than 0 times the SQL and logs it only when it exceeds the threshold
-	slowSQLMillis := dbConnection.config.SlowSQLMillis
-	if slowSQLMillis == 0 {
-		FuncPrintSQL(ctx, *query, *args, 0)
-	} else if slowSQLMillis > 0 {
-		now := time.Now() // Get the current time
-		start = &now
-	}
-
-	if dbConnection.tx != nil {
-		row = dbConnection.tx.QueryRowContext(ctx, *query, *args...)
-	} else {
-		row = dbConnection.db.QueryRowContext(ctx, *query, *args...)
-	}
-	if slowSQLMillis > 0 {
-		slow := time.Since(*start).Milliseconds()
-		if slow-int64(slowSQLMillis) >= 0 {
-			FuncPrintSQL(ctx, *query, *args, slow)
-		}
-	}
-	return row, nil
-}
-
-// queryContext queries data. If a transaction has been opened it executes in transaction mode, otherwise in non-transactional mode
-func (dbConnection *dataBaseConnection) queryContext(ctx context.Context, sqlstr *string, argsValues *[]interface{}) (*sql.Rows, error) {
-	// reBindSQL rewrites the parameter placeholders
-	query, args, err := reBindSQL(dbConnection.config.Dialect, sqlstr, argsValues)
-	if err != nil {
-		return nil, err
-	}
-	// Add the hint before execution
-	err = wrapSQLHint(ctx, query)
-	if err != nil {
-		return nil, err
-	}
-	var start *time.Time
-	var rows *sql.Rows
-	// Less than 0 disables logging; equal to 0 logs every SQL without timing it; greater than 0 times the SQL and logs it only when it exceeds the threshold
-	slowSQLMillis := dbConnection.config.SlowSQLMillis
-	if slowSQLMillis == 0 {
-		FuncPrintSQL(ctx, *query, *args, 0)
-	} else if slowSQLMillis > 0 {
-		now := time.Now() // Get the current time
-		start = &now
-	}
-
-	if dbConnection.tx != nil {
-		rows, err = dbConnection.tx.QueryContext(ctx, *query, *args...)
-	} else {
-		rows, err = dbConnection.db.QueryContext(ctx, *query, *args...)
-	}
-	if slowSQLMillis > 0 {
-		slow := time.Since(*start).Milliseconds()
-		if slow-int64(slowSQLMillis) >= 0 {
-			FuncPrintSQL(ctx, *query, *args, slow)
-		}
-	}
-	if err != nil {
-		err = fmt.Errorf("->queryContext执行错误:%w,-->zormErrorExecSQL:%s,-->zormErrorSQLValues:%v", err, *query, *args)
-	}
-	return rows, err
-}
-
-/*
-// prepareContext prepares a statement. If a transaction has been opened it executes in transaction mode, otherwise in non-transactional mode
-func (dbConnection *dataBaseConnection) prepareContext(ctx context.Context, query *string) (*sql.Stmt, error) {
-	// print SQL
-	if dbConnection.config.PrintSQL {
-		//logger.Info("printSQL", logger.String("sql", query))
-		FuncPrintSQL(ctx,*query, nil)
-	}
-
-	if dbConnection.tx != nil {
-		return dbConnection.tx.PrepareContext(ctx, *query)
-	}
-
-	return dbConnection.db.PrepareContext(ctx, *query)
-}
-*/
diff --git a/vendor/gitee.com/chunanyong/zorm/decimal/decimal-go.go b/vendor/gitee.com/chunanyong/zorm/decimal/decimal-go.go
deleted file mode 100644
index 9958d690..00000000
--- a/vendor/gitee.com/chunanyong/zorm/decimal/decimal-go.go
+++ /dev/null
@@ -1,415 +0,0 @@
-// Copyright 2009 The Go Authors. All rights reserved.
-// Use of this source code is governed by a BSD-style
-// license that can be found in the LICENSE file.
-
-// Multiprecision decimal numbers.
-// For floating-point formatting only; not general purpose.
-// Only operations are assign and (binary) left/right shift.
-// Can do binary floating point in multiprecision decimal precisely
-// because 2 divides 10; cannot do decimal floating point
-// in multiprecision binary precisely.
-
-package decimal
-
-type decimal struct {
-	d     [800]byte // digits, big-endian representation
-	nd    int       // number of digits used
-	dp    int       // decimal point
-	neg   bool      // negative flag
-	trunc bool      // discarded nonzero digits beyond d[:nd]
-}
-
-func (a *decimal) String() string {
-	n := 10 + a.nd
-	if a.dp > 0 {
-		n += a.dp
-	}
-	if a.dp < 0 {
-		n += -a.dp
-	}
-
-	buf := make([]byte, n)
-	w := 0
-	switch {
-	case a.nd == 0:
-		return "0"
-
-	case a.dp <= 0:
-		// zeros fill space between decimal point and digits
-		buf[w] = '0'
-		w++
-		buf[w] = '.'
-		w++
-		w += digitZero(buf[w : w+-a.dp])
-		w += copy(buf[w:], a.d[0:a.nd])
-
-	case a.dp < a.nd:
-		// decimal point in middle of digits
-		w += copy(buf[w:], a.d[0:a.dp])
-		buf[w] = '.'
-		w++
-		w += copy(buf[w:], a.d[a.dp:a.nd])
-
-	default:
-		// zeros fill space between digits and decimal point
-		w += copy(buf[w:], a.d[0:a.nd])
-		w += digitZero(buf[w : w+a.dp-a.nd])
-	}
-	return string(buf[0:w])
-}
-
-func digitZero(dst []byte) int {
-	for i := range dst {
-		dst[i] = '0'
-	}
-	return len(dst)
-}
-
-// trim trailing zeros from number.
-// (They are meaningless; the decimal point is tracked
-// independent of the number of digits.)
-func trim(a *decimal) {
-	for a.nd > 0 && a.d[a.nd-1] == '0' {
-		a.nd--
-	}
-	if a.nd == 0 {
-		a.dp = 0
-	}
-}
-
-// Assign v to a.
-func (a *decimal) Assign(v uint64) {
-	var buf [24]byte
-
-	// Write reversed decimal in buf.
-	n := 0
-	for v > 0 {
-		v1 := v / 10
-		v -= 10 * v1
-		buf[n] = byte(v + '0')
-		n++
-		v = v1
-	}
-
-	// Reverse again to produce forward decimal in a.d.
-	a.nd = 0
-	for n--; n >= 0; n-- {
-		a.d[a.nd] = buf[n]
-		a.nd++
-	}
-	a.dp = a.nd
-	trim(a)
-}
-
-// Maximum shift that we can do in one pass without overflow.
-// A uint has 32 or 64 bits, and we have to be able to accommodate 9<<k.
-const uintSize = 32 << (^uint(0) >> 63)
-const maxShift = uintSize - 4
-
-// Binary shift right (/ 2) by k bits. k <= maxShift to avoid overflow.
-func rightShift(a *decimal, k uint) { - r := 0 // read pointer - w := 0 // write pointer - - // Pick up enough leading digits to cover first shift. - var n uint - for ; n>>k == 0; r++ { - if r >= a.nd { - if n == 0 { - // a == 0; shouldn't get here, but handle anyway. - a.nd = 0 - return - } - for n>>k == 0 { - n = n * 10 - r++ - } - break - } - c := uint(a.d[r]) - n = n*10 + c - '0' - } - a.dp -= r - 1 - - var mask uint = (1 << k) - 1 - - // Pick up a digit, put down a digit. - for ; r < a.nd; r++ { - c := uint(a.d[r]) - dig := n >> k - n &= mask - a.d[w] = byte(dig + '0') - w++ - n = n*10 + c - '0' - } - - // Put down extra digits. - for n > 0 { - dig := n >> k - n &= mask - if w < len(a.d) { - a.d[w] = byte(dig + '0') - w++ - } else if dig > 0 { - a.trunc = true - } - n = n * 10 - } - - a.nd = w - trim(a) -} - -// Cheat sheet for left shift: table indexed by shift count giving -// number of new digits that will be introduced by that shift. -// -// For example, leftcheats[4] = {2, "625"}. That means that -// if we are shifting by 4 (multiplying by 16), it will add 2 digits -// when the string prefix is "625" through "999", and one fewer digit -// if the string prefix is "000" through "624". -// -// Credit for this trick goes to Ken. - -type leftCheat struct { - delta int // number of new digits - cutoff string // minus one digit if original < a. -} - -var leftcheats = []leftCheat{ - // Leading digits of 1/2^i = 5^i. - // 5^23 is not an exact 64-bit floating point number, - // so have to use bc for the math. - // Go up to 60 to be large enough for 32bit and 64bit platforms. 
- /* - seq 60 | sed 's/^/5^/' | bc | - awk 'BEGIN{ print "\t{ 0, \"\" }," } - { - log2 = log(2)/log(10) - printf("\t{ %d, \"%s\" },\t// * %d\n", - int(log2*NR+1), $0, 2**NR) - }' - */ - {0, ""}, - {1, "5"}, // * 2 - {1, "25"}, // * 4 - {1, "125"}, // * 8 - {2, "625"}, // * 16 - {2, "3125"}, // * 32 - {2, "15625"}, // * 64 - {3, "78125"}, // * 128 - {3, "390625"}, // * 256 - {3, "1953125"}, // * 512 - {4, "9765625"}, // * 1024 - {4, "48828125"}, // * 2048 - {4, "244140625"}, // * 4096 - {4, "1220703125"}, // * 8192 - {5, "6103515625"}, // * 16384 - {5, "30517578125"}, // * 32768 - {5, "152587890625"}, // * 65536 - {6, "762939453125"}, // * 131072 - {6, "3814697265625"}, // * 262144 - {6, "19073486328125"}, // * 524288 - {7, "95367431640625"}, // * 1048576 - {7, "476837158203125"}, // * 2097152 - {7, "2384185791015625"}, // * 4194304 - {7, "11920928955078125"}, // * 8388608 - {8, "59604644775390625"}, // * 16777216 - {8, "298023223876953125"}, // * 33554432 - {8, "1490116119384765625"}, // * 67108864 - {9, "7450580596923828125"}, // * 134217728 - {9, "37252902984619140625"}, // * 268435456 - {9, "186264514923095703125"}, // * 536870912 - {10, "931322574615478515625"}, // * 1073741824 - {10, "4656612873077392578125"}, // * 2147483648 - {10, "23283064365386962890625"}, // * 4294967296 - {10, "116415321826934814453125"}, // * 8589934592 - {11, "582076609134674072265625"}, // * 17179869184 - {11, "2910383045673370361328125"}, // * 34359738368 - {11, "14551915228366851806640625"}, // * 68719476736 - {12, "72759576141834259033203125"}, // * 137438953472 - {12, "363797880709171295166015625"}, // * 274877906944 - {12, "1818989403545856475830078125"}, // * 549755813888 - {13, "9094947017729282379150390625"}, // * 1099511627776 - {13, "45474735088646411895751953125"}, // * 2199023255552 - {13, "227373675443232059478759765625"}, // * 4398046511104 - {13, "1136868377216160297393798828125"}, // * 8796093022208 - {14, "5684341886080801486968994140625"}, // * 17592186044416 - {14, 
"28421709430404007434844970703125"}, // * 35184372088832 - {14, "142108547152020037174224853515625"}, // * 70368744177664 - {15, "710542735760100185871124267578125"}, // * 140737488355328 - {15, "3552713678800500929355621337890625"}, // * 281474976710656 - {15, "17763568394002504646778106689453125"}, // * 562949953421312 - {16, "88817841970012523233890533447265625"}, // * 1125899906842624 - {16, "444089209850062616169452667236328125"}, // * 2251799813685248 - {16, "2220446049250313080847263336181640625"}, // * 4503599627370496 - {16, "11102230246251565404236316680908203125"}, // * 9007199254740992 - {17, "55511151231257827021181583404541015625"}, // * 18014398509481984 - {17, "277555756156289135105907917022705078125"}, // * 36028797018963968 - {17, "1387778780781445675529539585113525390625"}, // * 72057594037927936 - {18, "6938893903907228377647697925567626953125"}, // * 144115188075855872 - {18, "34694469519536141888238489627838134765625"}, // * 288230376151711744 - {18, "173472347597680709441192448139190673828125"}, // * 576460752303423488 - {19, "867361737988403547205962240695953369140625"}, // * 1152921504606846976 -} - -// Is the leading prefix of b lexicographically less than s? -func prefixIsLessThan(b []byte, s string) bool { - for i := 0; i < len(s); i++ { - if i >= len(b) { - return true - } - if b[i] != s[i] { - return b[i] < s[i] - } - } - return false -} - -// Binary shift left (* 2) by k bits. k <= maxShift to avoid overflow. -func leftShift(a *decimal, k uint) { - delta := leftcheats[k].delta - if prefixIsLessThan(a.d[0:a.nd], leftcheats[k].cutoff) { - delta-- - } - - r := a.nd // read index - w := a.nd + delta // write index - - // Pick up a digit, put down a digit. - var n uint - for r--; r >= 0; r-- { - n += (uint(a.d[r]) - '0') << k - quo := n / 10 - rem := n - 10*quo - w-- - if w < len(a.d) { - a.d[w] = byte(rem + '0') - } else if rem != 0 { - a.trunc = true - } - n = quo - } - - // Put down extra digits. 
- for n > 0 { - quo := n / 10 - rem := n - 10*quo - w-- - if w < len(a.d) { - a.d[w] = byte(rem + '0') - } else if rem != 0 { - a.trunc = true - } - n = quo - } - - a.nd += delta - if a.nd >= len(a.d) { - a.nd = len(a.d) - } - a.dp += delta - trim(a) -} - -// Binary shift left (k > 0) or right (k < 0). -func (a *decimal) Shift(k int) { - switch { - case a.nd == 0: - // nothing to do: a == 0 - case k > 0: - for k > maxShift { - leftShift(a, maxShift) - k -= maxShift - } - leftShift(a, uint(k)) - case k < 0: - for k < -maxShift { - rightShift(a, maxShift) - k += maxShift - } - rightShift(a, uint(-k)) - } -} - -// If we chop a at nd digits, should we round up? -func shouldRoundUp(a *decimal, nd int) bool { - if nd < 0 || nd >= a.nd { - return false - } - if a.d[nd] == '5' && nd+1 == a.nd { // exactly halfway - round to even - // if we truncated, a little higher than what's recorded - always round up - if a.trunc { - return true - } - return nd > 0 && (a.d[nd-1]-'0')%2 != 0 - } - // not halfway - digit tells all - return a.d[nd] >= '5' -} - -// Round a to nd digits (or fewer). -// If nd is zero, it means we're rounding -// just to the left of the digits, as in -// 0.09 -> 0.1. -func (a *decimal) Round(nd int) { - if nd < 0 || nd >= a.nd { - return - } - if shouldRoundUp(a, nd) { - a.RoundUp(nd) - } else { - a.RoundDown(nd) - } -} - -// Round a down to nd digits (or fewer). -func (a *decimal) RoundDown(nd int) { - if nd < 0 || nd >= a.nd { - return - } - a.nd = nd - trim(a) -} - -// Round a up to nd digits (or fewer). -func (a *decimal) RoundUp(nd int) { - if nd < 0 || nd >= a.nd { - return - } - - // round up - for i := nd - 1; i >= 0; i-- { - c := a.d[i] - if c < '9' { // can stop after this digit - a.d[i]++ - a.nd = i + 1 - return - } - } - - // Number is all 9s. - // Change to single 1 with adjusted decimal point. - a.d[0] = '1' - a.nd = 1 - a.dp++ -} - -// Extract integer part, rounded appropriately. -// No guarantees about overflow. 
-func (a *decimal) RoundedInteger() uint64 { - if a.dp > 20 { - return 0xFFFFFFFFFFFFFFFF - } - var i int - n := uint64(0) - for i = 0; i < a.dp && i < a.nd; i++ { - n = n*10 + uint64(a.d[i]-'0') - } - for ; i < a.dp; i++ { - n *= 10 - } - if shouldRoundUp(a, a.dp) { - n++ - } - return n -} diff --git a/vendor/gitee.com/chunanyong/zorm/decimal/decimal.go b/vendor/gitee.com/chunanyong/zorm/decimal/decimal.go deleted file mode 100644 index c614ea79..00000000 --- a/vendor/gitee.com/chunanyong/zorm/decimal/decimal.go +++ /dev/null @@ -1,1904 +0,0 @@ -// Package decimal implements an arbitrary precision fixed-point decimal. -// -// The zero-value of a Decimal is 0, as you would expect. -// -// The best way to create a new Decimal is to use decimal.NewFromString, ex: -// -// n, err := decimal.NewFromString("-123.4567") -// n.String() // output: "-123.4567" -// -// To use Decimal as part of a struct: -// -// type Struct struct { -// Number Decimal -// } -// -// Note: This can "only" represent numbers with a maximum of 2^31 digits after the decimal point. -package decimal - -import ( - "database/sql/driver" - "encoding/binary" - "fmt" - "math" - "math/big" - "regexp" - "strconv" - "strings" -) - -// DivisionPrecision is the number of decimal places in the result when it -// doesn't divide exactly. -// -// Example: -// -// d1 := decimal.NewFromFloat(2).Div(decimal.NewFromFloat(3)) -// d1.String() // output: "0.6666666666666667" -// d2 := decimal.NewFromFloat(2).Div(decimal.NewFromFloat(30000)) -// d2.String() // output: "0.0000666666666667" -// d3 := decimal.NewFromFloat(20000).Div(decimal.NewFromFloat(3)) -// d3.String() // output: "6666.6666666666666667" -// decimal.DivisionPrecision = 3 -// d4 := decimal.NewFromFloat(2).Div(decimal.NewFromFloat(3)) -// d4.String() // output: "0.667" -// -var DivisionPrecision = 16 - -// MarshalJSONWithoutQuotes should be set to true if you want the decimal to -// be JSON marshaled as a number, instead of as a string. 
-// WARNING: this is dangerous for decimals with many digits, since many JSON -// unmarshallers (ex: Javascript's) will unmarshal JSON numbers to IEEE 754 -// double-precision floating point numbers, which means you can potentially -// silently lose precision. -var MarshalJSONWithoutQuotes = false - -// ExpMaxIterations specifies the maximum number of iterations needed to calculate -// precise natural exponent value using ExpHullAbrham method. -var ExpMaxIterations = 1000 - -// Zero constant, to make computations faster. -// Zero should never be compared with == or != directly, please use decimal.Equal or decimal.Cmp instead. -var Zero = New(0, 1) - -var zeroInt = big.NewInt(0) -var oneInt = big.NewInt(1) -var twoInt = big.NewInt(2) -var fourInt = big.NewInt(4) -var fiveInt = big.NewInt(5) -var tenInt = big.NewInt(10) -var twentyInt = big.NewInt(20) - -var factorials = []Decimal{New(1, 0)} - -// Decimal represents a fixed-point decimal. It is immutable. -// number = value * 10 ^ exp -type Decimal struct { - value *big.Int - - // NOTE(vadim): this must be an int32, because we cast it to float64 during - // calculations. If exp is 64 bit, we might lose precision. - // If we cared about being able to represent every possible decimal, we - // could make exp a *big.Int but it would hurt performance and numbers - // like that are unrealistic. - exp int32 -} - -// New returns a new fixed-point decimal, value * 10 ^ exp. -func New(value int64, exp int32) Decimal { - return Decimal{ - value: big.NewInt(value), - exp: exp, - } -} - -// NewFromInt converts a int64 to Decimal. -// -// Example: -// -// NewFromInt(123).String() // output: "123" -// NewFromInt(-10).String() // output: "-10" -func NewFromInt(value int64) Decimal { - return Decimal{ - value: big.NewInt(value), - exp: 0, - } -} - -// NewFromInt32 converts a int32 to Decimal. 
-// -// Example: -// -// NewFromInt(123).String() // output: "123" -// NewFromInt(-10).String() // output: "-10" -func NewFromInt32(value int32) Decimal { - return Decimal{ - value: big.NewInt(int64(value)), - exp: 0, - } -} - -// NewFromBigInt returns a new Decimal from a big.Int, value * 10 ^ exp -func NewFromBigInt(value *big.Int, exp int32) Decimal { - return Decimal{ - value: new(big.Int).Set(value), - exp: exp, - } -} - -// NewFromString returns a new Decimal from a string representation. -// Trailing zeroes are not trimmed. -// -// Example: -// -// d, err := NewFromString("-123.45") -// d2, err := NewFromString(".0001") -// d3, err := NewFromString("1.47000") -// -func NewFromString(value string) (Decimal, error) { - originalInput := value - var intString string - var exp int64 - - // Check if number is using scientific notation - eIndex := strings.IndexAny(value, "Ee") - if eIndex != -1 { - expInt, err := strconv.ParseInt(value[eIndex+1:], 10, 32) - if err != nil { - if e, ok := err.(*strconv.NumError); ok && e.Err == strconv.ErrRange { - return Decimal{}, fmt.Errorf("can't convert %s to decimal: fractional part too long", value) - } - return Decimal{}, fmt.Errorf("can't convert %s to decimal: exponent is not numeric", value) - } - value = value[:eIndex] - exp = expInt - } - - pIndex := -1 - vLen := len(value) - for i := 0; i < vLen; i++ { - if value[i] == '.' 
{ - if pIndex > -1 { - return Decimal{}, fmt.Errorf("can't convert %s to decimal: too many .s", value) - } - pIndex = i - } - } - - if pIndex == -1 { - // There is no decimal point, we can just parse the original string as - // an int - intString = value - } else { - if pIndex+1 < vLen { - intString = value[:pIndex] + value[pIndex+1:] - } else { - intString = value[:pIndex] - } - expInt := -len(value[pIndex+1:]) - exp += int64(expInt) - } - - var dValue *big.Int - // strconv.ParseInt is faster than new(big.Int).SetString so this is just a shortcut for strings we know won't overflow - if len(intString) <= 18 { - parsed64, err := strconv.ParseInt(intString, 10, 64) - if err != nil { - return Decimal{}, fmt.Errorf("can't convert %s to decimal", value) - } - dValue = big.NewInt(parsed64) - } else { - dValue = new(big.Int) - _, ok := dValue.SetString(intString, 10) - if !ok { - return Decimal{}, fmt.Errorf("can't convert %s to decimal", value) - } - } - - if exp < math.MinInt32 || exp > math.MaxInt32 { - // NOTE(vadim): I doubt a string could realistically be this long - return Decimal{}, fmt.Errorf("can't convert %s to decimal: fractional part too long", originalInput) - } - - return Decimal{ - value: dValue, - exp: int32(exp), - }, nil -} - -// NewFromFormattedString returns a new Decimal from a formatted string representation. -// The second argument - replRegexp, is a regular expression that is used to find characters that should be -// removed from given decimal string representation. All matched characters will be replaced with an empty string. 
-// -// Example: -// -// r := regexp.MustCompile("[$,]") -// d1, err := NewFromFormattedString("$5,125.99", r) -// -// r2 := regexp.MustCompile("[_]") -// d2, err := NewFromFormattedString("1_000_000", r2) -// -// r3 := regexp.MustCompile("[USD\\s]") -// d3, err := NewFromFormattedString("5000 USD", r3) -// -func NewFromFormattedString(value string, replRegexp *regexp.Regexp) (Decimal, error) { - parsedValue := replRegexp.ReplaceAllString(value, "") - d, err := NewFromString(parsedValue) - if err != nil { - return Decimal{}, err - } - return d, nil -} - -// RequireFromString returns a new Decimal from a string representation -// or panics if NewFromString would have returned an error. -// -// Example: -// -// d := RequireFromString("-123.45") -// d2 := RequireFromString(".0001") -// -func RequireFromString(value string) Decimal { - dec, err := NewFromString(value) - if err != nil { - panic(err) - } - return dec -} - -// NewFromFloat converts a float64 to Decimal. -// -// The converted number will contain the number of significant digits that can be -// represented in a float with reliable roundtrip. -// This is typically 15 digits, but may be more in some cases. -// See https://www.exploringbinary.com/decimal-precision-of-binary-floating-point-numbers/ for more information. -// -// For slightly faster conversion, use NewFromFloatWithExponent where you can specify the precision in absolute terms. -// -// NOTE: this will panic on NaN, +/-inf -func NewFromFloat(value float64) Decimal { - if value == 0 { - return New(0, 0) - } - return newFromFloat(value, math.Float64bits(value), &float64info) -} - -// NewFromFloat32 converts a float32 to Decimal. -// -// The converted number will contain the number of significant digits that can be -// represented in a float with reliable roundtrip. -// This is typically 6-8 digits depending on the input. -// See https://www.exploringbinary.com/decimal-precision-of-binary-floating-point-numbers/ for more information. 
-
-// For slightly faster conversion, use NewFromFloatWithExponent where you can specify the precision in absolute terms.
-//
-// NOTE: this will panic on NaN, +/-inf
-func NewFromFloat32(value float32) Decimal {
-	if value == 0 {
-		return New(0, 0)
-	}
-	// XOR is workaround for https://github.com/golang/go/issues/26285
-	a := math.Float32bits(value) ^ 0x80808080
-	return newFromFloat(float64(value), uint64(a)^0x80808080, &float32info)
-}
-
-func newFromFloat(val float64, bits uint64, flt *floatInfo) Decimal {
-	if math.IsNaN(val) || math.IsInf(val, 0) {
-		panic(fmt.Sprintf("Cannot create a Decimal from %v", val))
-	}
-	exp := int(bits>>flt.mantbits) & (1<<flt.expbits - 1)
-	mant := bits & (uint64(1)<<flt.mantbits - 1)
-
-	switch exp {
-	case 0:
-		// denormalized
-		exp++
-	default:
-		// add implicit top bit
-		mant |= uint64(1) << flt.mantbits
-	}
-	exp += flt.bias
-
-	var d decimal
-	d.Assign(mant)
-	d.Shift(exp - int(flt.mantbits))
-	d.neg = bits>>(flt.expbits+flt.mantbits) != 0
-
-	roundShortest(&d, mant, exp, flt)
-	// If less than 19 digits, we can do calculation in an int64.
-	if d.nd < 19 {
-		tmp := int64(0)
-		m := int64(1)
-		for i := d.nd - 1; i >= 0; i-- {
-			tmp += m * int64(d.d[i]-'0')
-			m *= 10
-		}
-		if d.neg {
-			tmp *= -1
-		}
-		return Decimal{value: big.NewInt(tmp), exp: int32(d.dp) - int32(d.nd)}
-	}
-	dValue := new(big.Int)
-	dValue, ok := dValue.SetString(string(d.d[:d.nd]), 10)
-	if ok {
-		return Decimal{value: dValue, exp: int32(d.dp) - int32(d.nd)}
-	}
-
-	return NewFromFloatWithExponent(val, int32(d.dp)-int32(d.nd))
-}
-
-// NewFromFloatWithExponent converts a float64 to Decimal, with an arbitrary
-// number of fractional digits.
-// -// Example: -// -// NewFromFloatWithExponent(123.456, -2).String() // output: "123.46" -// -func NewFromFloatWithExponent(value float64, exp int32) Decimal { - if math.IsNaN(value) || math.IsInf(value, 0) { - panic(fmt.Sprintf("Cannot create a Decimal from %v", value)) - } - - bits := math.Float64bits(value) - mant := bits & (1<<52 - 1) - exp2 := int32((bits >> 52) & (1<<11 - 1)) - sign := bits >> 63 - - if exp2 == 0 { - // specials - if mant == 0 { - return Decimal{} - } - // subnormal - exp2++ - } else { - // normal - mant |= 1 << 52 - } - - exp2 -= 1023 + 52 - - // normalizing base-2 values - for mant&1 == 0 { - mant = mant >> 1 - exp2++ - } - - // maximum number of fractional base-10 digits to represent 2^N exactly cannot be more than -N if N<0 - if exp < 0 && exp < exp2 { - if exp2 < 0 { - exp = exp2 - } else { - exp = 0 - } - } - - // representing 10^M * 2^N as 5^M * 2^(M+N) - exp2 -= exp - - temp := big.NewInt(1) - dMant := big.NewInt(int64(mant)) - - // applying 5^M - if exp > 0 { - temp = temp.SetInt64(int64(exp)) - temp = temp.Exp(fiveInt, temp, nil) - } else if exp < 0 { - temp = temp.SetInt64(-int64(exp)) - temp = temp.Exp(fiveInt, temp, nil) - dMant = dMant.Mul(dMant, temp) - temp = temp.SetUint64(1) - } - - // applying 2^(M+N) - if exp2 > 0 { - dMant = dMant.Lsh(dMant, uint(exp2)) - } else if exp2 < 0 { - temp = temp.Lsh(temp, uint(-exp2)) - } - - // rounding and downscaling - if exp > 0 || exp2 < 0 { - halfDown := new(big.Int).Rsh(temp, 1) - dMant = dMant.Add(dMant, halfDown) - dMant = dMant.Quo(dMant, temp) - } - - if sign == 1 { - dMant = dMant.Neg(dMant) - } - - return Decimal{ - value: dMant, - exp: exp, - } -} - -// Copy returns a copy of decimal with the same value and exponent, but a different pointer to value. -func (d Decimal) Copy() Decimal { - d.ensureInitialized() - return Decimal{ - value: &(*d.value), - exp: d.exp, - } -} - -// rescale returns a rescaled version of the decimal. 
Returned
-// decimal may be less precise if the given exponent is bigger
-// than the initial exponent of the Decimal.
-// NOTE: this will truncate, NOT round
-//
-// Example:
-//
-//	d := New(12345, -4)
-//	d2 := d.rescale(-1)
-//	d3 := d2.rescale(-4)
-//	println(d)
-//	println(d2)
-//	println(d3)
-//
-// Output:
-//
-//	1.2345
-//	1.2
-//	1.2000
-//
-func (d Decimal) rescale(exp int32) Decimal {
-	d.ensureInitialized()
-
-	if d.exp == exp {
-		return Decimal{
-			new(big.Int).Set(d.value),
-			d.exp,
-		}
-	}
-
-	// NOTE(vadim): must convert exps to float64 before - to prevent overflow
-	diff := math.Abs(float64(exp) - float64(d.exp))
-	value := new(big.Int).Set(d.value)
-
-	expScale := new(big.Int).Exp(tenInt, big.NewInt(int64(diff)), nil)
-	if exp > d.exp {
-		value = value.Quo(value, expScale)
-	} else if exp < d.exp {
-		value = value.Mul(value, expScale)
-	}
-
-	return Decimal{
-		value: value,
-		exp:   exp,
-	}
-}
-
-// Abs returns the absolute value of the decimal.
-func (d Decimal) Abs() Decimal {
-	if !d.IsNegative() {
-		return d
-	}
-	d.ensureInitialized()
-	d2Value := new(big.Int).Abs(d.value)
-	return Decimal{
-		value: d2Value,
-		exp:   d.exp,
-	}
-}
-
-// Add returns d + d2.
-func (d Decimal) Add(d2 Decimal) Decimal {
-	rd, rd2 := RescalePair(d, d2)
-
-	d3Value := new(big.Int).Add(rd.value, rd2.value)
-	return Decimal{
-		value: d3Value,
-		exp:   rd.exp,
-	}
-}
-
-// Sub returns d - d2.
-func (d Decimal) Sub(d2 Decimal) Decimal {
-	rd, rd2 := RescalePair(d, d2)
-
-	d3Value := new(big.Int).Sub(rd.value, rd2.value)
-	return Decimal{
-		value: d3Value,
-		exp:   rd.exp,
-	}
-}
-
-// Neg returns -d.
-func (d Decimal) Neg() Decimal {
-	d.ensureInitialized()
-	val := new(big.Int).Neg(d.value)
-	return Decimal{
-		value: val,
-		exp:   d.exp,
-	}
-}
-
-// Mul returns d * d2.
-func (d Decimal) Mul(d2 Decimal) Decimal { - d.ensureInitialized() - d2.ensureInitialized() - - expInt64 := int64(d.exp) + int64(d2.exp) - if expInt64 > math.MaxInt32 || expInt64 < math.MinInt32 { - // NOTE(vadim): better to panic than give incorrect results, as - // Decimals are usually used for money - panic(fmt.Sprintf("exponent %v overflows an int32!", expInt64)) - } - - d3Value := new(big.Int).Mul(d.value, d2.value) - return Decimal{ - value: d3Value, - exp: int32(expInt64), - } -} - -// Shift shifts the decimal in base 10. -// It shifts left when shift is positive and right if shift is negative. -// In simpler terms, the given value for shift is added to the exponent -// of the decimal. -func (d Decimal) Shift(shift int32) Decimal { - d.ensureInitialized() - return Decimal{ - value: new(big.Int).Set(d.value), - exp: d.exp + shift, - } -} - -// Div returns d / d2. If it doesn't divide exactly, the result will have -// DivisionPrecision digits after the decimal point. -func (d Decimal) Div(d2 Decimal) Decimal { - return d.DivRound(d2, int32(DivisionPrecision)) -} - -// QuoRem does division with remainder -// d.QuoRem(d2,precision) returns quotient q and remainder r such that -// d = d2 * q + r, q an integer multiple of 10^(-precision) -// 0 <= r < abs(d2) * 10 ^(-precision) if d>=0 -// 0 >= r > -abs(d2) * 10 ^(-precision) if d<0 -// Note that precision<0 is allowed as input. 
-func (d Decimal) QuoRem(d2 Decimal, precision int32) (Decimal, Decimal) { - d.ensureInitialized() - d2.ensureInitialized() - if d2.value.Sign() == 0 { - panic("decimal division by 0") - } - scale := -precision - e := int64(d.exp - d2.exp - scale) - if e > math.MaxInt32 || e < math.MinInt32 { - panic("overflow in decimal QuoRem") - } - var aa, bb, expo big.Int - var scalerest int32 - // d = a 10^ea - // d2 = b 10^eb - if e < 0 { - aa = *d.value - expo.SetInt64(-e) - bb.Exp(tenInt, &expo, nil) - bb.Mul(d2.value, &bb) - scalerest = d.exp - // now aa = a - // bb = b 10^(scale + eb - ea) - } else { - expo.SetInt64(e) - aa.Exp(tenInt, &expo, nil) - aa.Mul(d.value, &aa) - bb = *d2.value - scalerest = scale + d2.exp - // now aa = a ^ (ea - eb - scale) - // bb = b - } - var q, r big.Int - q.QuoRem(&aa, &bb, &r) - dq := Decimal{value: &q, exp: scale} - dr := Decimal{value: &r, exp: scalerest} - return dq, dr -} - -// DivRound divides and rounds to a given precision -// i.e. to an integer multiple of 10^(-precision) -// for a positive quotient digit 5 is rounded up, away from 0 -// if the quotient is negative then digit 5 is rounded down, away from 0 -// Note that precision<0 is allowed as input. -func (d Decimal) DivRound(d2 Decimal, precision int32) Decimal { - // QuoRem already checks initialization - q, r := d.QuoRem(d2, precision) - // the actual rounding decision is based on comparing r*10^precision and d2/2 - // instead compare 2 r 10 ^precision and d2 - var rv2 big.Int - rv2.Abs(r.value) - rv2.Lsh(&rv2, 1) - // now rv2 = abs(r.value) * 2 - r2 := Decimal{value: &rv2, exp: r.exp + precision} - // r2 is now 2 * r * 10 ^ precision - var c = r2.Cmp(d2.Abs()) - - if c < 0 { - return q - } - - if d.value.Sign()*d2.value.Sign() < 0 { - return q.Sub(New(1, -precision)) - } - - return q.Add(New(1, -precision)) -} - -// Mod returns d % d2. 
-func (d Decimal) Mod(d2 Decimal) Decimal { - quo := d.DivRound(d2, -d.exp+1).Truncate(0) - return d.Sub(d2.Mul(quo)) -} - -// Pow returns d to the power d2 -func (d Decimal) Pow(d2 Decimal) Decimal { - var temp Decimal - if d2.IntPart() == 0 { - return NewFromFloat(1) - } - temp = d.Pow(d2.Div(NewFromFloat(2))) - if d2.IntPart()%2 == 0 { - return temp.Mul(temp) - } - if d2.IntPart() > 0 { - return temp.Mul(temp).Mul(d) - } - return temp.Mul(temp).Div(d) -} - -// ExpHullAbrham calculates the natural exponent of decimal (e to the power of d) using Hull-Abraham algorithm. -// OverallPrecision argument specifies the overall precision of the result (integer part + decimal part). -// -// ExpHullAbrham is faster than ExpTaylor for small precision values, but it is much slower for large precision values. -// -// Example: -// -// NewFromFloat(26.1).ExpHullAbrham(2).String() // output: "220000000000" -// NewFromFloat(26.1).ExpHullAbrham(20).String() // output: "216314672147.05767284" -// -func (d Decimal) ExpHullAbrham(overallPrecision uint32) (Decimal, error) { - // Algorithm based on Variable precision exponential function. - // ACM Transactions on Mathematical Software by T. E. Hull & A. Abrham. - if d.IsZero() { - return Decimal{oneInt, 0}, nil - } - - currentPrecision := overallPrecision - - // Algorithm does not work if currentPrecision * 23 < |x|. - // Precision is automatically increased in such cases, so the value can be calculated precisely. - // If newly calculated precision is higher than ExpMaxIterations the currentPrecision will not be changed. 
- f := d.Abs().InexactFloat64() - if ncp := f / 23; ncp > float64(currentPrecision) && ncp < float64(ExpMaxIterations) { - currentPrecision = uint32(math.Ceil(ncp)) - } - - // fail if abs(d) beyond an over/underflow threshold - overflowThreshold := New(23*int64(currentPrecision), 0) - if d.Abs().Cmp(overflowThreshold) > 0 { - return Decimal{}, fmt.Errorf("over/underflow threshold, exp(x) cannot be calculated precisely") - } - - // Return 1 if abs(d) small enough; this also avoids later over/underflow - overflowThreshold2 := New(9, -int32(currentPrecision)-1) - if d.Abs().Cmp(overflowThreshold2) <= 0 { - return Decimal{oneInt, d.exp}, nil - } - - // t is the smallest integer >= 0 such that the corresponding abs(d/k) < 1 - t := d.exp + int32(d.NumDigits()) // Add d.NumDigits because the paper assumes that d.value [0.1, 1) - - if t < 0 { - t = 0 - } - - k := New(1, t) // reduction factor - r := Decimal{new(big.Int).Set(d.value), d.exp - t} // reduced argument - p := int32(currentPrecision) + t + 2 // precision for calculating the sum - - // Determine n, the number of therms for calculating sum - // use first Newton step (1.435p - 1.182) / log10(p/abs(r)) - // for solving appropriate equation, along with directed - // roundings and simple rational bound for log10(p/abs(r)) - rf := r.Abs().InexactFloat64() - pf := float64(p) - nf := math.Ceil((1.453*pf - 1.182) / math.Log10(pf/rf)) - if nf > float64(ExpMaxIterations) || math.IsNaN(nf) { - return Decimal{}, fmt.Errorf("exact value cannot be calculated in <=ExpMaxIterations iterations") - } - n := int64(nf) - - tmp := New(0, 0) - sum := New(1, 0) - one := New(1, 0) - for i := n - 1; i > 0; i-- { - tmp.value.SetInt64(i) - sum = sum.Mul(r.DivRound(tmp, p)) - sum = sum.Add(one) - } - - ki := k.IntPart() - res := New(1, 0) - for i := ki; i > 0; i-- { - res = res.Mul(sum) - } - - resNumDigits := int32(res.NumDigits()) - - var roundDigits int32 - if resNumDigits > abs(res.exp) { - roundDigits = int32(currentPrecision) - 
resNumDigits - res.exp
-	} else {
-		roundDigits = int32(currentPrecision)
-	}
-
-	res = res.Round(roundDigits)
-
-	return res, nil
-}
-
-// ExpTaylor calculates the natural exponent of decimal (e to the power of d) using Taylor series expansion.
-// Precision argument specifies how precise the result must be (number of digits after decimal point).
-// Negative precision is allowed.
-//
-// ExpTaylor is much faster for large precision values than ExpHullAbrham.
-//
-// Example:
-//
-//	d, err := NewFromFloat(26.1).ExpTaylor(2)
-//	d.String() // output: "216314672147.06"
-//
-//	d, err = NewFromFloat(26.1).ExpTaylor(20)
-//	d.String() // output: "216314672147.05767284062928674083"
-//
-//	d, err = NewFromFloat(26.1).ExpTaylor(-10)
-//	d.String() // output: "220000000000"
-//
-func (d Decimal) ExpTaylor(precision int32) (Decimal, error) {
-	// Note(mwoss): Implementation can be optimized by exclusively using big.Int API only
-	if d.IsZero() {
-		return Decimal{oneInt, 0}.Round(precision), nil
-	}
-
-	var epsilon Decimal
-	var divPrecision int32
-	if precision < 0 {
-		epsilon = New(1, -1)
-		divPrecision = 8
-	} else {
-		epsilon = New(1, -precision-1)
-		divPrecision = precision + 1
-	}
-
-	decAbs := d.Abs()
-	pow := d.Abs()
-	factorial := New(1, 0)
-
-	result := New(1, 0)
-
-	for i := int64(1); ; {
-		step := pow.DivRound(factorial, divPrecision)
-		result = result.Add(step)
-
-		// Stop Taylor series when current step is smaller than epsilon
-		if step.Cmp(epsilon) < 0 {
-			break
-		}
-
-		pow = pow.Mul(decAbs)
-
-		i++
-
-		// Calculate next factorial number or retrieve cached value
-		if len(factorials) >= int(i) && !factorials[i-1].IsZero() {
-			factorial = factorials[i-1]
-		} else {
-			// To avoid any race conditions, firstly the zero value is appended to a slice to create
-			// a spot for newly calculated factorial. After that, the zero value is replaced by calculated
-			// factorial using the index notation.
- factorial = factorials[i-2].Mul(New(i, 0)) - factorials = append(factorials, Zero) - factorials[i-1] = factorial - } - } - - if d.Sign() < 0 { - result = New(1, 0).DivRound(result, precision+1) - } - - result = result.Round(precision) - return result, nil -} - -// NumDigits returns the number of digits of the decimal coefficient (d.Value) -// Note: Current implementation is extremely slow for large decimals and/or decimals with large fractional part -func (d Decimal) NumDigits() int { - // Note(mwoss): It can be optimized, unnecessary cast of big.Int to string - if d.IsNegative() { - return len(d.value.String()) - 1 - } - return len(d.value.String()) -} - -// IsInteger returns true when decimal can be represented as an integer value, otherwise, it returns false. -func (d Decimal) IsInteger() bool { - // The most typical case, all decimal with exponent higher or equal 0 can be represented as integer - if d.exp >= 0 { - return true - } - // When the exponent is negative we have to check every number after the decimal place - // If all of them are zeroes, we are sure that given decimal can be represented as an integer - var r big.Int - q := new(big.Int).Set(d.value) - for z := abs(d.exp); z > 0; z-- { - q.QuoRem(q, tenInt, &r) - if r.Cmp(zeroInt) != 0 { - return false - } - } - return true -} - -// Abs calculates absolute value of any int32. Used for calculating absolute value of decimal's exponent. -func abs(n int32) int32 { - if n < 0 { - return -n - } - return n -} - -// Cmp compares the numbers represented by d and d2 and returns: -// -// -1 if d < d2 -// 0 if d == d2 -// +1 if d > d2 -// -func (d Decimal) Cmp(d2 Decimal) int { - d.ensureInitialized() - d2.ensureInitialized() - - if d.exp == d2.exp { - return d.value.Cmp(d2.value) - } - - rd, rd2 := RescalePair(d, d2) - - return rd.value.Cmp(rd2.value) -} - -// Equal returns whether the numbers represented by d and d2 are equal. 
-func (d Decimal) Equal(d2 Decimal) bool { - return d.Cmp(d2) == 0 -} - -// Equals is deprecated, please use Equal method instead -func (d Decimal) Equals(d2 Decimal) bool { - return d.Equal(d2) -} - -// GreaterThan (GT) returns true when d is greater than d2. -func (d Decimal) GreaterThan(d2 Decimal) bool { - return d.Cmp(d2) == 1 -} - -// GreaterThanOrEqual (GTE) returns true when d is greater than or equal to d2. -func (d Decimal) GreaterThanOrEqual(d2 Decimal) bool { - cmp := d.Cmp(d2) - return cmp == 1 || cmp == 0 -} - -// LessThan (LT) returns true when d is less than d2. -func (d Decimal) LessThan(d2 Decimal) bool { - return d.Cmp(d2) == -1 -} - -// LessThanOrEqual (LTE) returns true when d is less than or equal to d2. -func (d Decimal) LessThanOrEqual(d2 Decimal) bool { - cmp := d.Cmp(d2) - return cmp == -1 || cmp == 0 -} - -// Sign returns: -// -// -1 if d < 0 -// 0 if d == 0 -// +1 if d > 0 -// -func (d Decimal) Sign() int { - if d.value == nil { - return 0 - } - return d.value.Sign() -} - -// IsPositive return -// -// true if d > 0 -// false if d == 0 -// false if d < 0 -func (d Decimal) IsPositive() bool { - return d.Sign() == 1 -} - -// IsNegative return -// -// true if d < 0 -// false if d == 0 -// false if d > 0 -func (d Decimal) IsNegative() bool { - return d.Sign() == -1 -} - -// IsZero return -// -// true if d == 0 -// false if d > 0 -// false if d < 0 -func (d Decimal) IsZero() bool { - return d.Sign() == 0 -} - -// Exponent returns the exponent, or scale component of the decimal. -func (d Decimal) Exponent() int32 { - return d.exp -} - -// Coefficient returns the coefficient of the decimal. It is scaled by 10^Exponent() -func (d Decimal) Coefficient() *big.Int { - d.ensureInitialized() - // we copy the coefficient so that mutating the result does not mutate the Decimal. - return new(big.Int).Set(d.value) -} - -// CoefficientInt64 returns the coefficient of the decimal as int64. 
It is scaled by 10^Exponent() -// If coefficient cannot be represented in an int64, the result will be undefined. -func (d Decimal) CoefficientInt64() int64 { - d.ensureInitialized() - return d.value.Int64() -} - -// IntPart returns the integer component of the decimal. -func (d Decimal) IntPart() int64 { - scaledD := d.rescale(0) - return scaledD.value.Int64() -} - -// BigInt returns integer component of the decimal as a BigInt. -func (d Decimal) BigInt() *big.Int { - scaledD := d.rescale(0) - i := &big.Int{} - i.SetString(scaledD.String(), 10) - return i -} - -// BigFloat returns decimal as BigFloat. -// Be aware that casting decimal to BigFloat might cause a loss of precision. -func (d Decimal) BigFloat() *big.Float { - f := &big.Float{} - f.SetString(d.String()) - return f -} - -// Rat returns a rational number representation of the decimal. -func (d Decimal) Rat() *big.Rat { - d.ensureInitialized() - if d.exp <= 0 { - // NOTE(vadim): must negate after casting to prevent int32 overflow - denom := new(big.Int).Exp(tenInt, big.NewInt(-int64(d.exp)), nil) - return new(big.Rat).SetFrac(d.value, denom) - } - - mul := new(big.Int).Exp(tenInt, big.NewInt(int64(d.exp)), nil) - num := new(big.Int).Mul(d.value, mul) - return new(big.Rat).SetFrac(num, oneInt) -} - -// Float64 returns the nearest float64 value for d and a bool indicating -// whether f represents d exactly. -// For more details, see the documentation for big.Rat.Float64 -func (d Decimal) Float64() (f float64, exact bool) { - return d.Rat().Float64() -} - -// InexactFloat64 returns the nearest float64 value for d. -// It doesn't indicate if the returned value represents d exactly. -func (d Decimal) InexactFloat64() float64 { - f, _ := d.Float64() - return f -} - -// String returns the string representation of the decimal -// with the fixed point. 
-// -// Example: -// -// d := New(-12345, -3) -// println(d.String()) -// -// Output: -// -// -12.345 -// -func (d Decimal) String() string { - return d.string(true) -} - -// StringFixed returns a rounded fixed-point string with places digits after -// the decimal point. -// -// Example: -// -// NewFromFloat(0).StringFixed(2) // output: "0.00" -// NewFromFloat(0).StringFixed(0) // output: "0" -// NewFromFloat(5.45).StringFixed(0) // output: "5" -// NewFromFloat(5.45).StringFixed(1) // output: "5.5" -// NewFromFloat(5.45).StringFixed(2) // output: "5.45" -// NewFromFloat(5.45).StringFixed(3) // output: "5.450" -// NewFromFloat(545).StringFixed(-1) // output: "550" -// -func (d Decimal) StringFixed(places int32) string { - rounded := d.Round(places) - return rounded.string(false) -} - -// StringFixedBank returns a banker rounded fixed-point string with places digits -// after the decimal point. -// -// Example: -// -// NewFromFloat(0).StringFixedBank(2) // output: "0.00" -// NewFromFloat(0).StringFixedBank(0) // output: "0" -// NewFromFloat(5.45).StringFixedBank(0) // output: "5" -// NewFromFloat(5.45).StringFixedBank(1) // output: "5.4" -// NewFromFloat(5.45).StringFixedBank(2) // output: "5.45" -// NewFromFloat(5.45).StringFixedBank(3) // output: "5.450" -// NewFromFloat(545).StringFixedBank(-1) // output: "540" -// -func (d Decimal) StringFixedBank(places int32) string { - rounded := d.RoundBank(places) - return rounded.string(false) -} - -// StringFixedCash returns a Swedish/Cash rounded fixed-point string. For -// more details see the documentation at function RoundCash. -func (d Decimal) StringFixedCash(interval uint8) string { - rounded := d.RoundCash(interval) - return rounded.string(false) -} - -// Round rounds the decimal to places decimal places. -// If places < 0, it will round the integer part to the nearest 10^(-places). 
-//
-// Example:
-//
-//	NewFromFloat(5.45).Round(1).String() // output: "5.5"
-//	NewFromFloat(545).Round(-1).String() // output: "550"
-//
-func (d Decimal) Round(places int32) Decimal {
-	if d.exp == -places {
-		return d
-	}
-	// truncate to places + 1
-	ret := d.rescale(-places - 1)
-
-	// add sign(d) * 0.5
-	if ret.value.Sign() < 0 {
-		ret.value.Sub(ret.value, fiveInt)
-	} else {
-		ret.value.Add(ret.value, fiveInt)
-	}
-
-	// floor for positive numbers, ceil for negative numbers
-	_, m := ret.value.DivMod(ret.value, tenInt, new(big.Int))
-	ret.exp++
-	if ret.value.Sign() < 0 && m.Cmp(zeroInt) != 0 {
-		ret.value.Add(ret.value, oneInt)
-	}
-
-	return ret
-}
-
-// RoundCeil rounds the decimal towards +infinity.
-//
-// Example:
-//
-//	NewFromFloat(545).RoundCeil(-2).String()   // output: "600"
-//	NewFromFloat(500).RoundCeil(-2).String()   // output: "500"
-//	NewFromFloat(1.1001).RoundCeil(2).String() // output: "1.11"
-//	NewFromFloat(-1.454).RoundCeil(1).String() // output: "-1.4"
-//
-func (d Decimal) RoundCeil(places int32) Decimal {
-	if d.exp >= -places {
-		return d
-	}
-
-	rescaled := d.rescale(-places)
-	if d.Equal(rescaled) {
-		return d
-	}
-
-	if d.value.Sign() > 0 {
-		rescaled.value.Add(rescaled.value, oneInt)
-	}
-
-	return rescaled
-}
-
-// RoundFloor rounds the decimal towards -infinity.
-//
-// Example:
-//
-//	NewFromFloat(545).RoundFloor(-2).String()   // output: "500"
-//	NewFromFloat(-500).RoundFloor(-2).String()  // output: "-500"
-//	NewFromFloat(1.1001).RoundFloor(2).String() // output: "1.1"
-//	NewFromFloat(-1.454).RoundFloor(1).String() // output: "-1.5"
-//
-func (d Decimal) RoundFloor(places int32) Decimal {
-	if d.exp >= -places {
-		return d
-	}
-
-	rescaled := d.rescale(-places)
-	if d.Equal(rescaled) {
-		return d
-	}
-
-	if d.value.Sign() < 0 {
-		rescaled.value.Sub(rescaled.value, oneInt)
-	}
-
-	return rescaled
-}
-
-// RoundUp rounds the decimal away from zero.
-//
-// Example:
-//
-//	NewFromFloat(545).RoundUp(-2).String()   // output: "600"
-//	NewFromFloat(500).RoundUp(-2).String()   // output: "500"
-//	NewFromFloat(1.1001).RoundUp(2).String() // output: "1.11"
-//	NewFromFloat(-1.454).RoundUp(1).String() // output: "-1.5"
-//
-func (d Decimal) RoundUp(places int32) Decimal {
-	if d.exp >= -places {
-		return d
-	}
-
-	rescaled := d.rescale(-places)
-	if d.Equal(rescaled) {
-		return d
-	}
-
-	if d.value.Sign() > 0 {
-		rescaled.value.Add(rescaled.value, oneInt)
-	} else if d.value.Sign() < 0 {
-		rescaled.value.Sub(rescaled.value, oneInt)
-	}
-
-	return rescaled
-}
-
-// RoundDown rounds the decimal towards zero.
-//
-// Example:
-//
-//	NewFromFloat(545).RoundDown(-2).String()   // output: "500"
-//	NewFromFloat(-500).RoundDown(-2).String()  // output: "-500"
-//	NewFromFloat(1.1001).RoundDown(2).String() // output: "1.1"
-//	NewFromFloat(-1.454).RoundDown(1).String() // output: "-1.4"
-//
-func (d Decimal) RoundDown(places int32) Decimal {
-	if d.exp >= -places {
-		return d
-	}
-
-	rescaled := d.rescale(-places)
-	if d.Equal(rescaled) {
-		return d
-	}
-	return rescaled
-}
-
-// RoundBank rounds the decimal to places decimal places.
-// If the final digit to round is equidistant from the nearest two integers the
-// rounded value is taken as the even number
-//
-// If places < 0, it will round the integer part to the nearest 10^(-places).
-// -// Examples: -// -// NewFromFloat(5.45).RoundBank(1).String() // output: "5.4" -// NewFromFloat(545).RoundBank(-1).String() // output: "540" -// NewFromFloat(5.46).RoundBank(1).String() // output: "5.5" -// NewFromFloat(546).RoundBank(-1).String() // output: "550" -// NewFromFloat(5.55).RoundBank(1).String() // output: "5.6" -// NewFromFloat(555).RoundBank(-1).String() // output: "560" -// -func (d Decimal) RoundBank(places int32) Decimal { - - round := d.Round(places) - remainder := d.Sub(round).Abs() - - half := New(5, -places-1) - if remainder.Cmp(half) == 0 && round.value.Bit(0) != 0 { - if round.value.Sign() < 0 { - round.value.Add(round.value, oneInt) - } else { - round.value.Sub(round.value, oneInt) - } - } - - return round -} - -// RoundCash aka Cash/Penny/öre rounding rounds decimal to a specific -// interval. The amount payable for a cash transaction is rounded to the nearest -// multiple of the minimum currency unit available. The following intervals are -// available: 5, 10, 25, 50 and 100; any other number throws a panic. -// 5: 5 cent rounding 3.43 => 3.45 -// 10: 10 cent rounding 3.45 => 3.50 (5 gets rounded up) -// 25: 25 cent rounding 3.41 => 3.50 -// 50: 50 cent rounding 3.75 => 4.00 -// 100: 100 cent rounding 3.50 => 4.00 -// For more details: https://en.wikipedia.org/wiki/Cash_rounding -func (d Decimal) RoundCash(interval uint8) Decimal { - var iVal *big.Int - switch interval { - case 5: - iVal = twentyInt - case 10: - iVal = tenInt - case 25: - iVal = fourInt - case 50: - iVal = twoInt - case 100: - iVal = oneInt - default: - panic(fmt.Sprintf("Decimal does not support this Cash rounding interval `%d`. Supported: 5, 10, 25, 50, 100", interval)) - } - dVal := Decimal{ - value: iVal, - } - - // TODO: optimize those calculations to reduce the high allocations (~29 allocs). - return d.Mul(dVal).Round(0).Div(dVal).Truncate(2) -} - -// Floor returns the nearest integer value less than or equal to d. 
-func (d Decimal) Floor() Decimal { - d.ensureInitialized() - - if d.exp >= 0 { - return d - } - - exp := big.NewInt(10) - - // NOTE(vadim): must negate after casting to prevent int32 overflow - exp.Exp(exp, big.NewInt(-int64(d.exp)), nil) - - z := new(big.Int).Div(d.value, exp) - return Decimal{value: z, exp: 0} -} - -// Ceil returns the nearest integer value greater than or equal to d. -func (d Decimal) Ceil() Decimal { - d.ensureInitialized() - - if d.exp >= 0 { - return d - } - - exp := big.NewInt(10) - - // NOTE(vadim): must negate after casting to prevent int32 overflow - exp.Exp(exp, big.NewInt(-int64(d.exp)), nil) - - z, m := new(big.Int).DivMod(d.value, exp, new(big.Int)) - if m.Cmp(zeroInt) != 0 { - z.Add(z, oneInt) - } - return Decimal{value: z, exp: 0} -} - -// Truncate truncates off digits from the number, without rounding. -// -// NOTE: precision is the last digit that will not be truncated (must be >= 0). -// -// Example: -// -// decimal.NewFromString("123.456").Truncate(2).String() // "123.45" -// -func (d Decimal) Truncate(precision int32) Decimal { - d.ensureInitialized() - if precision >= 0 && -precision > d.exp { - return d.rescale(-precision) - } - return d -} - -// UnmarshalJSON implements the json.Unmarshaler interface. -func (d *Decimal) UnmarshalJSON(decimalBytes []byte) error { - if string(decimalBytes) == "null" { - return nil - } - - str, err := unquoteIfQuoted(decimalBytes) - if err != nil { - return fmt.Errorf("error decoding string '%s': %s", decimalBytes, err) - } - - decimal, err := NewFromString(str) - *d = decimal - if err != nil { - return fmt.Errorf("error decoding string '%s': %s", str, err) - } - return nil -} - -// MarshalJSON implements the json.Marshaler interface. 
-func (d Decimal) MarshalJSON() ([]byte, error) { - var str string - if MarshalJSONWithoutQuotes { - str = d.String() - } else { - str = "\"" + d.String() + "\"" - } - return []byte(str), nil -} - -// UnmarshalBinary implements the encoding.BinaryUnmarshaler interface. As a string representation -// is already used when encoding to text, this method stores that string as []byte -func (d *Decimal) UnmarshalBinary(data []byte) error { - // Verify we have at least 4 bytes for the exponent. The GOB encoded value - // may be empty. - if len(data) < 4 { - return fmt.Errorf("error decoding binary %v: expected at least 4 bytes, got %d", data, len(data)) - } - - // Extract the exponent - d.exp = int32(binary.BigEndian.Uint32(data[:4])) - - // Extract the value - d.value = new(big.Int) - if err := d.value.GobDecode(data[4:]); err != nil { - return fmt.Errorf("error decoding binary %v: %s", data, err) - } - - return nil -} - -// MarshalBinary implements the encoding.BinaryMarshaler interface. -func (d Decimal) MarshalBinary() (data []byte, err error) { - // Write the exponent first since it's a fixed size - v1 := make([]byte, 4) - binary.BigEndian.PutUint32(v1, uint32(d.exp)) - - // Add the value - var v2 []byte - if v2, err = d.value.GobEncode(); err != nil { - return - } - - // Return the byte array - data = append(v1, v2...) - return -} - -// Scan implements the sql.Scanner interface for database deserialization. -func (d *Decimal) Scan(value interface{}) error { - // first try to see if the data is stored in database as a Numeric datatype - switch v := value.(type) { - - case float32: - *d = NewFromFloat(float64(v)) - return nil - - case float64: - // numeric in sqlite3 sends us float64 - *d = NewFromFloat(v) - return nil - - case int64: - // at least in sqlite3 when the value is 0 in db, the data is sent - // to us as an int64 instead of a float64 ... 
- *d = New(v, 0) - return nil - - default: - // default is trying to interpret value stored as string - str, err := unquoteIfQuoted(v) - if err != nil { - return err - } - *d, err = NewFromString(str) - return err - } -} - -// Value implements the driver.Valuer interface for database serialization. -func (d Decimal) Value() (driver.Value, error) { - return d.String(), nil -} - -// UnmarshalText implements the encoding.TextUnmarshaler interface for XML -// deserialization. -func (d *Decimal) UnmarshalText(text []byte) error { - str := string(text) - - dec, err := NewFromString(str) - *d = dec - if err != nil { - return fmt.Errorf("error decoding string '%s': %s", str, err) - } - - return nil -} - -// MarshalText implements the encoding.TextMarshaler interface for XML -// serialization. -func (d Decimal) MarshalText() (text []byte, err error) { - return []byte(d.String()), nil -} - -// GobEncode implements the gob.GobEncoder interface for gob serialization. -func (d Decimal) GobEncode() ([]byte, error) { - return d.MarshalBinary() -} - -// GobDecode implements the gob.GobDecoder interface for gob serialization. -func (d *Decimal) GobDecode(data []byte) error { - return d.UnmarshalBinary(data) -} - -// StringScaled first scales the decimal then calls .String() on it. -// NOTE: buggy, unintuitive, and DEPRECATED! Use StringFixed instead. -func (d Decimal) StringScaled(exp int32) string { - return d.rescale(exp).String() -} - -func (d Decimal) string(trimTrailingZeros bool) string { - if d.exp >= 0 { - return d.rescale(0).value.String() - } - - abs := new(big.Int).Abs(d.value) - str := abs.String() - - var intPart, fractionalPart string - - // NOTE(vadim): this cast to int will cause bugs if d.exp == INT_MIN - // and you are on a 32-bit machine. Won't fix this super-edge case. 
- dExpInt := int(d.exp) - if len(str) > -dExpInt { - intPart = str[:len(str)+dExpInt] - fractionalPart = str[len(str)+dExpInt:] - } else { - intPart = "0" - - num0s := -dExpInt - len(str) - fractionalPart = strings.Repeat("0", num0s) + str - } - - if trimTrailingZeros { - i := len(fractionalPart) - 1 - for ; i >= 0; i-- { - if fractionalPart[i] != '0' { - break - } - } - fractionalPart = fractionalPart[:i+1] - } - - number := intPart - if len(fractionalPart) > 0 { - number += "." + fractionalPart - } - - if d.value.Sign() < 0 { - return "-" + number - } - - return number -} - -func (d *Decimal) ensureInitialized() { - if d.value == nil { - d.value = new(big.Int) - } -} - -// Min returns the smallest Decimal that was passed in the arguments. -// -// To call this function with an array, you must do: -// -// Min(arr[0], arr[1:]...) -// -// This makes it harder to accidentally call Min with 0 arguments. -func Min(first Decimal, rest ...Decimal) Decimal { - ans := first - for _, item := range rest { - if item.Cmp(ans) < 0 { - ans = item - } - } - return ans -} - -// Max returns the largest Decimal that was passed in the arguments. -// -// To call this function with an array, you must do: -// -// Max(arr[0], arr[1:]...) -// -// This makes it harder to accidentally call Max with 0 arguments. -func Max(first Decimal, rest ...Decimal) Decimal { - ans := first - for _, item := range rest { - if item.Cmp(ans) > 0 { - ans = item - } - } - return ans -} - -// Sum returns the combined total of the provided first and rest Decimals -func Sum(first Decimal, rest ...Decimal) Decimal { - total := first - for _, item := range rest { - total = total.Add(item) - } - - return total -} - -// Avg returns the average value of the provided first and rest Decimals -func Avg(first Decimal, rest ...Decimal) Decimal { - count := New(int64(len(rest)+1), 0) - sum := Sum(first, rest...) 
- return sum.Div(count) -} - -// RescalePair rescales two decimals to common exponential value (minimal exp of both decimals) -func RescalePair(d1 Decimal, d2 Decimal) (Decimal, Decimal) { - d1.ensureInitialized() - d2.ensureInitialized() - - if d1.exp == d2.exp { - return d1, d2 - } - - baseScale := min(d1.exp, d2.exp) - if baseScale != d1.exp { - return d1.rescale(baseScale), d2 - } - return d1, d2.rescale(baseScale) -} - -func min(x, y int32) int32 { - if x >= y { - return y - } - return x -} - -func unquoteIfQuoted(value interface{}) (string, error) { - var bytes []byte - - switch v := value.(type) { - case string: - bytes = []byte(v) - case []byte: - bytes = v - default: - return "", fmt.Errorf("could not convert value '%+v' to byte array of type '%T'", - value, value) - } - - // If the amount is quoted, strip the quotes - if len(bytes) > 2 && bytes[0] == '"' && bytes[len(bytes)-1] == '"' { - bytes = bytes[1 : len(bytes)-1] - } - return string(bytes), nil -} - -// NullDecimal represents a nullable decimal with compatibility for -// scanning null values from the database. -type NullDecimal struct { - Decimal Decimal - Valid bool -} - -func NewNullDecimal(d Decimal) NullDecimal { - return NullDecimal{ - Decimal: d, - Valid: true, - } -} - -// Scan implements the sql.Scanner interface for database deserialization. -func (d *NullDecimal) Scan(value interface{}) error { - if value == nil { - d.Valid = false - return nil - } - d.Valid = true - return d.Decimal.Scan(value) -} - -// Value implements the driver.Valuer interface for database serialization. -func (d NullDecimal) Value() (driver.Value, error) { - if !d.Valid { - return nil, nil - } - return d.Decimal.Value() -} - -// UnmarshalJSON implements the json.Unmarshaler interface. 
-func (d *NullDecimal) UnmarshalJSON(decimalBytes []byte) error {
-	if string(decimalBytes) == "null" {
-		d.Valid = false
-		return nil
-	}
-	d.Valid = true
-	return d.Decimal.UnmarshalJSON(decimalBytes)
-}
-
-// MarshalJSON implements the json.Marshaler interface.
-func (d NullDecimal) MarshalJSON() ([]byte, error) {
-	if !d.Valid {
-		return []byte("null"), nil
-	}
-	return d.Decimal.MarshalJSON()
-}
-
-// UnmarshalText implements the encoding.TextUnmarshaler interface for XML
-// deserialization
-func (d *NullDecimal) UnmarshalText(text []byte) error {
-	str := string(text)
-
-	// check for empty XML or XML without body e.g., <tag></tag>
-	if str == "" {
-		d.Valid = false
-		return nil
-	}
-	if err := d.Decimal.UnmarshalText(text); err != nil {
-		d.Valid = false
-		return err
-	}
-	d.Valid = true
-	return nil
-}
-
-// MarshalText implements the encoding.TextMarshaler interface for XML
-// serialization.
-func (d NullDecimal) MarshalText() (text []byte, err error) {
-	if !d.Valid {
-		return []byte{}, nil
-	}
-	return d.Decimal.MarshalText()
-}
-
-// Trig functions
-
-// Atan returns the arctangent, in radians, of x.
-func (d Decimal) Atan() Decimal { - if d.Equal(NewFromFloat(0.0)) { - return d - } - if d.GreaterThan(NewFromFloat(0.0)) { - return d.satan() - } - return d.Neg().satan().Neg() -} - -func (d Decimal) xatan() Decimal { - P0 := NewFromFloat(-8.750608600031904122785e-01) - P1 := NewFromFloat(-1.615753718733365076637e+01) - P2 := NewFromFloat(-7.500855792314704667340e+01) - P3 := NewFromFloat(-1.228866684490136173410e+02) - P4 := NewFromFloat(-6.485021904942025371773e+01) - Q0 := NewFromFloat(2.485846490142306297962e+01) - Q1 := NewFromFloat(1.650270098316988542046e+02) - Q2 := NewFromFloat(4.328810604912902668951e+02) - Q3 := NewFromFloat(4.853903996359136964868e+02) - Q4 := NewFromFloat(1.945506571482613964425e+02) - z := d.Mul(d) - b1 := P0.Mul(z).Add(P1).Mul(z).Add(P2).Mul(z).Add(P3).Mul(z).Add(P4).Mul(z) - b2 := z.Add(Q0).Mul(z).Add(Q1).Mul(z).Add(Q2).Mul(z).Add(Q3).Mul(z).Add(Q4) - z = b1.Div(b2) - z = d.Mul(z).Add(d) - return z -} - -// satan reduces its argument (known to be positive) -// to the range [0, 0.66] and calls xatan. 
-func (d Decimal) satan() Decimal { - Morebits := NewFromFloat(6.123233995736765886130e-17) // pi/2 = PIO2 + Morebits - Tan3pio8 := NewFromFloat(2.41421356237309504880) // tan(3*pi/8) - pi := NewFromFloat(3.14159265358979323846264338327950288419716939937510582097494459) - - if d.LessThanOrEqual(NewFromFloat(0.66)) { - return d.xatan() - } - if d.GreaterThan(Tan3pio8) { - return pi.Div(NewFromFloat(2.0)).Sub(NewFromFloat(1.0).Div(d).xatan()).Add(Morebits) - } - return pi.Div(NewFromFloat(4.0)).Add((d.Sub(NewFromFloat(1.0)).Div(d.Add(NewFromFloat(1.0)))).xatan()).Add(NewFromFloat(0.5).Mul(Morebits)) -} - -// sin coefficients -var _sin = [...]Decimal{ - NewFromFloat(1.58962301576546568060e-10), // 0x3de5d8fd1fd19ccd - NewFromFloat(-2.50507477628578072866e-8), // 0xbe5ae5e5a9291f5d - NewFromFloat(2.75573136213857245213e-6), // 0x3ec71de3567d48a1 - NewFromFloat(-1.98412698295895385996e-4), // 0xbf2a01a019bfdf03 - NewFromFloat(8.33333333332211858878e-3), // 0x3f8111111110f7d0 - NewFromFloat(-1.66666666666666307295e-1), // 0xbfc5555555555548 -} - -// Sin returns the sine of the radian argument x. 
-func (d Decimal) Sin() Decimal { - PI4A := NewFromFloat(7.85398125648498535156e-1) // 0x3fe921fb40000000, Pi/4 split into three parts - PI4B := NewFromFloat(3.77489470793079817668e-8) // 0x3e64442d00000000, - PI4C := NewFromFloat(2.69515142907905952645e-15) // 0x3ce8469898cc5170, - M4PI := NewFromFloat(1.273239544735162542821171882678754627704620361328125) // 4/pi - - if d.Equal(NewFromFloat(0.0)) { - return d - } - // make argument positive but save the sign - sign := false - if d.LessThan(NewFromFloat(0.0)) { - d = d.Neg() - sign = true - } - - j := d.Mul(M4PI).IntPart() // integer part of x/(Pi/4), as integer for tests on the phase angle - y := NewFromFloat(float64(j)) // integer part of x/(Pi/4), as float - - // map zeros to origin - if j&1 == 1 { - j++ - y = y.Add(NewFromFloat(1.0)) - } - j &= 7 // octant modulo 2Pi radians (360 degrees) - // reflect in x axis - if j > 3 { - sign = !sign - j -= 4 - } - z := d.Sub(y.Mul(PI4A)).Sub(y.Mul(PI4B)).Sub(y.Mul(PI4C)) // Extended precision modular arithmetic - zz := z.Mul(z) - - if j == 1 || j == 2 { - w := zz.Mul(zz).Mul(_cos[0].Mul(zz).Add(_cos[1]).Mul(zz).Add(_cos[2]).Mul(zz).Add(_cos[3]).Mul(zz).Add(_cos[4]).Mul(zz).Add(_cos[5])) - y = NewFromFloat(1.0).Sub(NewFromFloat(0.5).Mul(zz)).Add(w) - } else { - y = z.Add(z.Mul(zz).Mul(_sin[0].Mul(zz).Add(_sin[1]).Mul(zz).Add(_sin[2]).Mul(zz).Add(_sin[3]).Mul(zz).Add(_sin[4]).Mul(zz).Add(_sin[5]))) - } - if sign { - y = y.Neg() - } - return y -} - -// cos coefficients -var _cos = [...]Decimal{ - NewFromFloat(-1.13585365213876817300e-11), // 0xbda8fa49a0861a9b - NewFromFloat(2.08757008419747316778e-9), // 0x3e21ee9d7b4e3f05 - NewFromFloat(-2.75573141792967388112e-7), // 0xbe927e4f7eac4bc6 - NewFromFloat(2.48015872888517045348e-5), // 0x3efa01a019c844f5 - NewFromFloat(-1.38888888888730564116e-3), // 0xbf56c16c16c14f91 - NewFromFloat(4.16666666666665929218e-2), // 0x3fa555555555554b -} - -// Cos returns the cosine of the radian argument x. 
-func (d Decimal) Cos() Decimal { - - PI4A := NewFromFloat(7.85398125648498535156e-1) // 0x3fe921fb40000000, Pi/4 split into three parts - PI4B := NewFromFloat(3.77489470793079817668e-8) // 0x3e64442d00000000, - PI4C := NewFromFloat(2.69515142907905952645e-15) // 0x3ce8469898cc5170, - M4PI := NewFromFloat(1.273239544735162542821171882678754627704620361328125) // 4/pi - - // make argument positive - sign := false - if d.LessThan(NewFromFloat(0.0)) { - d = d.Neg() - } - - j := d.Mul(M4PI).IntPart() // integer part of x/(Pi/4), as integer for tests on the phase angle - y := NewFromFloat(float64(j)) // integer part of x/(Pi/4), as float - - // map zeros to origin - if j&1 == 1 { - j++ - y = y.Add(NewFromFloat(1.0)) - } - j &= 7 // octant modulo 2Pi radians (360 degrees) - // reflect in x axis - if j > 3 { - sign = !sign - j -= 4 - } - if j > 1 { - sign = !sign - } - - z := d.Sub(y.Mul(PI4A)).Sub(y.Mul(PI4B)).Sub(y.Mul(PI4C)) // Extended precision modular arithmetic - zz := z.Mul(z) - - if j == 1 || j == 2 { - y = z.Add(z.Mul(zz).Mul(_sin[0].Mul(zz).Add(_sin[1]).Mul(zz).Add(_sin[2]).Mul(zz).Add(_sin[3]).Mul(zz).Add(_sin[4]).Mul(zz).Add(_sin[5]))) - } else { - w := zz.Mul(zz).Mul(_cos[0].Mul(zz).Add(_cos[1]).Mul(zz).Add(_cos[2]).Mul(zz).Add(_cos[3]).Mul(zz).Add(_cos[4]).Mul(zz).Add(_cos[5])) - y = NewFromFloat(1.0).Sub(NewFromFloat(0.5).Mul(zz)).Add(w) - } - if sign { - y = y.Neg() - } - return y -} - -var _tanP = [...]Decimal{ - NewFromFloat(-1.30936939181383777646e+4), // 0xc0c992d8d24f3f38 - NewFromFloat(1.15351664838587416140e+6), // 0x413199eca5fc9ddd - NewFromFloat(-1.79565251976484877988e+7), // 0xc1711fead3299176 -} -var _tanQ = [...]Decimal{ - NewFromFloat(1.00000000000000000000e+0), - NewFromFloat(1.36812963470692954678e+4), //0x40cab8a5eeb36572 - NewFromFloat(-1.32089234440210967447e+6), //0xc13427bc582abc96 - NewFromFloat(2.50083801823357915839e+7), //0x4177d98fc2ead8ef - NewFromFloat(-5.38695755929454629881e+7), //0xc189afe03cbe5a31 -} - -// Tan returns the 
tangent of the radian argument x. -func (d Decimal) Tan() Decimal { - - PI4A := NewFromFloat(7.85398125648498535156e-1) // 0x3fe921fb40000000, Pi/4 split into three parts - PI4B := NewFromFloat(3.77489470793079817668e-8) // 0x3e64442d00000000, - PI4C := NewFromFloat(2.69515142907905952645e-15) // 0x3ce8469898cc5170, - M4PI := NewFromFloat(1.273239544735162542821171882678754627704620361328125) // 4/pi - - if d.Equal(NewFromFloat(0.0)) { - return d - } - - // make argument positive but save the sign - sign := false - if d.LessThan(NewFromFloat(0.0)) { - d = d.Neg() - sign = true - } - - j := d.Mul(M4PI).IntPart() // integer part of x/(Pi/4), as integer for tests on the phase angle - y := NewFromFloat(float64(j)) // integer part of x/(Pi/4), as float - - // map zeros to origin - if j&1 == 1 { - j++ - y = y.Add(NewFromFloat(1.0)) - } - - z := d.Sub(y.Mul(PI4A)).Sub(y.Mul(PI4B)).Sub(y.Mul(PI4C)) // Extended precision modular arithmetic - zz := z.Mul(z) - - if zz.GreaterThan(NewFromFloat(1e-14)) { - w := zz.Mul(_tanP[0].Mul(zz).Add(_tanP[1]).Mul(zz).Add(_tanP[2])) - x := zz.Add(_tanQ[1]).Mul(zz).Add(_tanQ[2]).Mul(zz).Add(_tanQ[3]).Mul(zz).Add(_tanQ[4]) - y = z.Add(z.Mul(w.Div(x))) - } else { - y = z - } - if j&2 == 2 { - y = NewFromFloat(-1.0).Div(y) - } - if sign { - y = y.Neg() - } - return y -} diff --git a/vendor/gitee.com/chunanyong/zorm/decimal/rounding.go b/vendor/gitee.com/chunanyong/zorm/decimal/rounding.go deleted file mode 100644 index d4b0cd00..00000000 --- a/vendor/gitee.com/chunanyong/zorm/decimal/rounding.go +++ /dev/null @@ -1,160 +0,0 @@ -// Copyright 2009 The Go Authors. All rights reserved. -// Use of this source code is governed by a BSD-style -// license that can be found in the LICENSE file. - -// Multiprecision decimal numbers. -// For floating-point formatting only; not general purpose. -// Only operations are assign and (binary) left/right shift. 
-// Can do binary floating point in multiprecision decimal precisely -// because 2 divides 10; cannot do decimal floating point -// in multiprecision binary precisely. - -package decimal - -type floatInfo struct { - mantbits uint - expbits uint - bias int -} - -var float32info = floatInfo{23, 8, -127} -var float64info = floatInfo{52, 11, -1023} - -// roundShortest rounds d (= mant * 2^exp) to the shortest number of digits -// that will let the original floating point value be precisely reconstructed. -func roundShortest(d *decimal, mant uint64, exp int, flt *floatInfo) { - // If mantissa is zero, the number is zero; stop now. - if mant == 0 { - d.nd = 0 - return - } - - // Compute upper and lower such that any decimal number - // between upper and lower (possibly inclusive) - // will round to the original floating point number. - - // We may see at once that the number is already shortest. - // - // Suppose d is not denormal, so that 2^exp <= d < 10^dp. - // The closest shorter number is at least 10^(dp-nd) away. - // The lower/upper bounds computed below are at distance - // at most 2^(exp-mantbits). - // - // So the number is already shortest if 10^(dp-nd) > 2^(exp-mantbits), - // or equivalently log2(10)*(dp-nd) > exp-mantbits. - // It is true if 332/100*(dp-nd) >= exp-mantbits (log2(10) > 3.32). - minexp := flt.bias + 1 // minimum possible exponent - if exp > minexp && 332*(d.dp-d.nd) >= 100*(exp-int(flt.mantbits)) { - // The number is already shortest. - return - } - - // d = mant << (exp - mantbits) - // Next highest floating point number is mant+1 << exp-mantbits. - // Our upper bound is halfway between, mant*2+1 << exp-mantbits-1. 
-	upper := new(decimal)
-	upper.Assign(mant*2 + 1)
-	upper.Shift(exp - int(flt.mantbits) - 1)
-
-	// d = mant << (exp - mantbits)
-	// Next lowest floating point number is mant-1 << exp-mantbits,
-	// unless mant-1 drops the significant bit and exp is not the minimum exp,
-	// in which case the next lowest is mant*2-1 << exp-mantbits-1.
-	// Either way, call it mantlo << explo-mantbits.
-	// Our lower bound is halfway between, mantlo*2+1 << explo-mantbits-1.
-	var mantlo uint64
-	var explo int
-	if mant > 1<<flt.mantbits {
-		mantlo = mant - 1
-		explo = exp
-	} else {
-		mantlo = mant*2 - 1
-		explo = exp - 1
-	}
-	lower := new(decimal)
-	lower.Assign(mantlo*2 + 1)
-	lower.Shift(explo - int(flt.mantbits) - 1)
-
-	// The upper and lower bounds are possible outputs only if
-	// the original mantissa is even, so that IEEE round-to-even
-	// would round to the original mantissa and not the neighbors.
-	inclusive := mant%2 == 0
-
-	// upperdelta tracks how far the digits of d have diverged from upper:
-	// 0 means identical so far, 1 means a difference of one followed only
-	// by 9s in d and 0s in upper, and 2 means a larger difference.
-	var upperdelta uint8
-
-	// Now we can figure out the minimum number of digits required.
-	// Walk along until d has distinguished itself from upper and lower.
-	for ui := 0; ; ui++ {
-		// lower, d, and upper may have the decimal points at different
-		// places. In this case upper is the longest, so we iterate from
-		// ui==0 and start li and mi at (possibly) -1.
-		mi := ui - upper.dp + d.dp
-		if mi >= d.nd {
-			break
-		}
-		li := ui - upper.dp + lower.dp
-		l := byte('0') // lower digit
-		if li >= 0 && li < lower.nd {
-			l = lower.d[li]
-		}
-		m := byte('0') // middle digit
-		if mi >= 0 {
-			m = d.d[mi]
-		}
-		u := byte('0') // upper digit
-		if ui < upper.nd {
-			u = upper.d[ui]
-		}
-
-		// Okay to round down (truncate) if lower has a different digit
-		// or if lower is inclusive and is exactly the result of rounding
-		// down (i.e., and we have reached the final digit of lower).
-		okdown := l != m || inclusive && li+1 == lower.nd
-
-		switch {
-		case upperdelta == 0 && m+1 < u:
-			// Example:
-			// m = 12345xxx
-			// u = 12347xxx
-			upperdelta = 2
-		case upperdelta == 0 && m != u:
-			// Example:
-			// m = 12345xxx
-			// u = 12346xxx
-			upperdelta = 1
-		case upperdelta == 1 && (m != '9' || u != '0'):
-			// Example:
-			// m = 1234598x
-			// u = 1234600x
-			upperdelta = 2
-		}
-		// Okay to round up if upper has a different digit and either upper
-		// is inclusive or upper is bigger than the result of rounding up.
-		okup := upperdelta > 0 && (inclusive || upperdelta > 1 || ui+1 < upper.nd)
-
-		// If it's okay to do either, then round to the nearest one.
-		// If it's okay to do only one, do it.
- switch { - case okdown && okup: - d.Round(mi + 1) - return - case okdown: - d.RoundDown(mi + 1) - return - case okup: - d.RoundUp(mi + 1) - return - } - } -} diff --git a/vendor/gitee.com/chunanyong/zorm/dialect.go b/vendor/gitee.com/chunanyong/zorm/dialect.go deleted file mode 100644 index e6277514..00000000 --- a/vendor/gitee.com/chunanyong/zorm/dialect.go +++ /dev/null @@ -1,1084 +0,0 @@ -/* - * Licensed to the Apache Software Foundation (ASF) under one or more - * contributor license agreements. See the NOTICE file distributed with - * this work for additional information regarding copyright ownership. - * The ASF licenses this file to You under the Apache License, Version 2.0 - * (the "License"); you may not use this file except in compliance with - * the License. You may obtain a copy of the License at - * - * http://www.apache.org/licenses/LICENSE-2.0 - * - * Unless required by applicable law or agreed to in writing, software - * distributed under the License is distributed on an "AS IS" BASIS, - * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. - * See the License for the specific language governing permissions and - * limitations under the License. 
- * - */ - -package zorm - -import ( - "context" - "crypto/rand" - "database/sql" - "errors" - "fmt" - "math/big" - "reflect" - "regexp" - "strconv" - "strings" - "time" -) - -// wrapPageSQL 包装分页的SQL语句 -// wrapPageSQL SQL statement for wrapping paging -func wrapPageSQL(dialect string, sqlstr *string, page *Page) error { - if page.PageNo < 1 { // 默认第一页 - page.PageNo = 1 - } - var sqlbuilder strings.Builder - sqlbuilder.Grow(stringBuilderGrowLen) - sqlbuilder.WriteString(*sqlstr) - switch dialect { - case "mysql", "sqlite", "dm", "gbase", "clickhouse", "tdengine", "db2": // MySQL,sqlite3,dm,南通,clickhouse,TDengine,db2 7.2+ - sqlbuilder.WriteString(" LIMIT ") - sqlbuilder.WriteString(strconv.Itoa(page.PageSize * (page.PageNo - 1))) - sqlbuilder.WriteByte(',') - sqlbuilder.WriteString(strconv.Itoa(page.PageSize)) - - case "postgresql", "kingbase", "shentong": // postgresql,kingbase,神通数据库 - sqlbuilder.WriteString(" LIMIT ") - sqlbuilder.WriteString(strconv.Itoa(page.PageSize)) - sqlbuilder.WriteString(" OFFSET ") - sqlbuilder.WriteString(strconv.Itoa(page.PageSize * (page.PageNo - 1))) - case "mssql": // sqlserver 2012+ - locOrderBy := findOrderByIndex(sqlstr) - if len(locOrderBy) < 1 { // 如果没有 order by,增加默认的排序 - sqlbuilder.WriteString(" ORDER BY (SELECT NULL) ") - } - sqlbuilder.WriteString(" OFFSET ") - sqlbuilder.WriteString(strconv.Itoa(page.PageSize * (page.PageNo - 1))) - sqlbuilder.WriteString(" ROWS FETCH NEXT ") - sqlbuilder.WriteString(strconv.Itoa(page.PageSize)) - sqlbuilder.WriteString(" ROWS ONLY ") - case "oracle": // oracle 12c+ - locOrderBy := findOrderByIndex(sqlstr) - if len(locOrderBy) < 1 { // 如果没有 order by,增加默认的排序 - sqlbuilder.WriteString(" ORDER BY NULL ") - } - sqlbuilder.WriteString(" OFFSET ") - sqlbuilder.WriteString(strconv.Itoa(page.PageSize * (page.PageNo - 1))) - sqlbuilder.WriteString(" ROWS FETCH NEXT ") - sqlbuilder.WriteString(strconv.Itoa(page.PageSize)) - sqlbuilder.WriteString(" ROWS ONLY ") - default: - return 
errors.New("->wrapPageSQL-->不支持的数据库类型:" + dialect) - - } - *sqlstr = sqlbuilder.String() - // return reBindSQL(dialect, sqlstr) - return nil -} - -// wrapInsertSQL 包装保存Struct语句.返回语句,是否自增,错误信息 -// 数组传递,如果外部方法有调用append的逻辑,append会破坏指针引用,所以传递指针 -// wrapInsertSQL Pack and save 'Struct' statement. Return SQL statement, whether it is incremented, error message -// Array transfer, if the external method has logic to call append, append will destroy the pointer reference, so the pointer is passed -func wrapInsertSQL(ctx context.Context, typeOf *reflect.Type, entity IEntityStruct, columns *[]reflect.StructField, values *[]interface{}) (string, int, string, error) { - sqlstr := "" - inserColumnName, valuesql, autoIncrement, pktype, err := wrapInsertValueSQL(ctx, typeOf, entity, columns, values) - if err != nil { - return sqlstr, autoIncrement, pktype, err - } - - var sqlBuilder strings.Builder - // sqlBuilder.Grow(len(entity.GetTableName()) + len(inserColumnName) + len(entity.GetTableName()) + len(valuesql) + 19) - sqlBuilder.Grow(stringBuilderGrowLen) - // sqlstr := "INSERT INTO " + insersql + " VALUES" + valuesql - sqlBuilder.WriteString("INSERT INTO ") - sqlBuilder.WriteString(entity.GetTableName()) - sqlBuilder.WriteString(inserColumnName) - sqlBuilder.WriteString(" VALUES") - sqlBuilder.WriteString(valuesql) - sqlstr = sqlBuilder.String() - return sqlstr, autoIncrement, pktype, err -} - -// wrapInsertValueSQL 包装保存Struct语句.返回语句,没有rebuild,返回原始的InsertSQL,ValueSQL,是否自增,主键类型,错误信息 -// 数组传递,如果外部方法有调用append的逻辑,传递指针,因为append会破坏指针引用 -// Pack and save Struct statement. 
Return SQL statement, no rebuild, return original SQL, whether it is self-increment, error message -// Array transfer, if the external method has logic to call append, append will destroy the pointer reference, so the pointer is passed -func wrapInsertValueSQL(ctx context.Context, typeOf *reflect.Type, entity IEntityStruct, columns *[]reflect.StructField, values *[]interface{}) (string, string, int, string, error) { - var inserColumnName, valuesql string - // 自增类型 0(不自增),1(普通自增),2(序列自增) - // Self-increment type: 0(Not increase),1(Ordinary increment),2(Sequence increment) - autoIncrement := 0 - // 主键类型 - // Primary key type - pktype := "" - // SQL语句的构造器 - // SQL statement constructor - var sqlBuilder strings.Builder - sqlBuilder.Grow(stringBuilderGrowLen) - // sqlBuilder.WriteString(entity.GetTableName()) - sqlBuilder.WriteByte('(') - - // SQL语句中,VALUES(?,?,...)语句的构造器 - // In the SQL statement, the constructor of the VALUES(?,?,...) statement - var valueSQLBuilder strings.Builder - valueSQLBuilder.Grow(stringBuilderGrowLen) - valueSQLBuilder.WriteString(" (") - // 主键的名称 - // The name of the primary key. - pkFieldName, e := entityPKFieldName(entity, typeOf) - if e != nil { - return inserColumnName, valuesql, autoIncrement, pktype, e - } - - sequence := entity.GetPkSequence() - if sequence != "" { - // 序列自增 Sequence increment - autoIncrement = 2 - } - - for i := 0; i < len(*columns); i++ { - field := (*columns)[i] - - if field.Name == pkFieldName { // 如果是主键 | If it is the primary key - // 获取主键类型 | Get the primary key type. 
- pkKind := field.Type.Kind() - switch pkKind { - case reflect.String: - pktype = "string" - case reflect.Int, reflect.Int32, reflect.Int16, reflect.Int8: - pktype = "int" - case reflect.Int64: - pktype = "int64" - default: - return inserColumnName, valuesql, autoIncrement, pktype, errors.New("->wrapInsertValueSQL不支持的主键类型") - } - - // 主键的值 - // The value of the primary key - pkValue := (*values)[i] - valueIsZero := reflect.ValueOf(pkValue).IsZero() - if autoIncrement == 2 { // 如果是序列自增 | If it is a sequence increment - // 去掉这一列,后续不再处理 - // Remove this column and will not process it later. - *columns = append((*columns)[:i], (*columns)[i+1:]...) - *values = append((*values)[:i], (*values)[i+1:]...) - i = i - 1 - if i > 0 { // i+1 0 { // i+1wrapInsertSliceSQL对象数组不能为空") - } - - // 第一个对象,获取第一个Struct对象,用于获取数据库字段,也获取了值 - // The first object, get the first Struct object, used to get the database field, and also get the value - entity := entityStructSlice[0] - - // 先生成一条语句 - // Generate a statement first - inserColumnName, valuesql, autoIncrement, _, firstErr := wrapInsertValueSQL(ctx, typeOf, entity, columns, values) - if firstErr != nil { - return sqlstr, autoIncrement, firstErr - } - var sqlBuilder strings.Builder - // sqlBuilder.Grow(len(entity.GetTableName()) + len(inserColumnName) + len(entity.GetTableName()) + len(valuesql) + 19) - sqlBuilder.Grow(stringBuilderGrowLen) - sqlBuilder.WriteString("INSERT INTO ") - sqlBuilder.WriteString(entity.GetTableName()) - // sqlstr := "INSERT INTO " - if config.Dialect == "tdengine" && !config.TDengineInsertsColumnName { // 如果是tdengine,拼接类似 INSERT INTO table1 values('2','3') table2 values('4','5'),目前要求字段和类型必须一致,如果不一致,改动略多 - } else { - // sqlstr = sqlstr + insertsql + " VALUES" + valuesql - sqlBuilder.WriteString(inserColumnName) - } - sqlBuilder.WriteString(" VALUES") - sqlBuilder.WriteString(valuesql) - // 如果只有一个Struct对象 - // If there is only one Struct object - if sliceLen == 1 { - return sqlBuilder.String(), autoIncrement, 
firstErr - } - // 主键的名称 - // The name of the primary key - pkFieldName, e := entityPKFieldName(entity, typeOf) - if e != nil { - return sqlBuilder.String(), autoIncrement, e - } - - for i := 1; i < sliceLen; i++ { - // 拼接字符串 - // Splicing string - if config.Dialect == "tdengine" { // 如果是tdengine,拼接类似 INSERT INTO table1 values('2','3') table2 values('4','5'),目前要求字段和类型必须一致,如果不一致,改动略多 - sqlBuilder.WriteByte(' ') - sqlBuilder.WriteString(entityStructSlice[i].GetTableName()) - if config.TDengineInsertsColumnName { - sqlBuilder.WriteString(inserColumnName) - } - sqlBuilder.WriteString(" VALUES") - sqlBuilder.WriteString(valuesql) - } else { // 标准语法 类似 INSERT INTO table1(id,name) values('2','3'),('4','5') - sqlBuilder.WriteByte(',') - sqlBuilder.WriteString(valuesql) - } - - entityStruct := entityStructSlice[i] - for j := 0; j < len(*columns); j++ { - // 获取实体类的反射,指针下的struct - // Get the reflection of the entity class, the struct under the pointer - valueOf := reflect.ValueOf(entityStruct).Elem() - field := (*columns)[j] - // 字段的值 - // The value of the primary key - fieldValue := valueOf.FieldByName(field.Name) - if field.Name == pkFieldName { // 如果是主键 | If it is the primary key - pkKind := field.Type.Kind() - // pkValue := valueOf.FieldByName(field.Name).Interface() - // 只处理字符串类型的主键,其他类型,columns中并不包含 - // Only handle primary keys of string type, other types, not included in columns - if (pkKind == reflect.String) && fieldValue.IsZero() { - // 主键是字符串类型,并且值为"",赋值'id' - // 生成主键字符串 - // The primary key is a string type, and the value is "", assigned the value'id' - // Generate primary key string - id := FuncGenerateStringID(ctx) - *values = append(*values, id) - // 给对象主键赋值 - // Assign a value to the primary key of the object - fieldValue.Set(reflect.ValueOf(id)) - continue - } - } - - // 给字段赋值 - // Assign a value to the field. 
- *values = append(*values, fieldValue.Interface()) - - } - } - - sqlstr = sqlBuilder.String() - return sqlstr, autoIncrement, nil -} - -// wrapInsertEntityMapSliceSQL 包装批量保存EntityMapSlice语句.返回语句,值,错误信息 -func wrapInsertEntityMapSliceSQL(ctx context.Context, config *DataSourceConfig, entityMapSlice []IEntityMap) (string, []interface{}, error) { - sliceLen := len(entityMapSlice) - sqlstr := "" - if entityMapSlice == nil || sliceLen < 1 { - return sqlstr, nil, errors.New("->wrapInsertSliceSQL对象数组不能为空") - } - // 第一个对象,获取第一个Struct对象,用于获取数据库字段,也获取了值 - entity := entityMapSlice[0] - // 检查是否是指针对象 - _, err := checkEntityKind(entity) - if err != nil { - return sqlstr, nil, err - } - dbFieldMapKey := entity.GetDBFieldMapKey() - // SQL语句 - inserColumnName, valuesql, values, _, err := wrapInsertValueEntityMapSQL(entity) - if err != nil { - return sqlstr, values, err - } - - var sqlBuilder strings.Builder - // sqlBuilder.Grow(len(entity.GetTableName()) + len(inserColumnName) + len(valuesql) + 19) - sqlBuilder.Grow(stringBuilderGrowLen) - sqlBuilder.WriteString("INSERT INTO ") - sqlBuilder.WriteString(entity.GetTableName()) - // sqlstr = sqlstr + insertsql + " VALUES" + valuesql - sqlBuilder.WriteString(inserColumnName) - sqlBuilder.WriteString(" VALUES") - sqlBuilder.WriteString(valuesql) - for i := 1; i < sliceLen; i++ { - // 拼接字符串 - // Splicing string - if config.Dialect == "tdengine" { // 如果是tdengine,拼接类似 INSERT INTO table1 values('2','3') table2 values('4','5'),目前要求字段和类型必须一致,如果不一致,改动略多 - sqlBuilder.WriteByte(' ') - sqlBuilder.WriteString(entityMapSlice[i].GetTableName()) - if config.TDengineInsertsColumnName { - sqlBuilder.WriteString(inserColumnName) - } - sqlBuilder.WriteString(" VALUES") - sqlBuilder.WriteString(valuesql) - } else { // 标准语法 类似 INSERT INTO table1(id,name) values('2','3'), values('4','5') - sqlBuilder.WriteByte(',') - sqlBuilder.WriteString(valuesql) - } - - entityMap := entityMapSlice[i] - for j := 0; j < len(dbFieldMapKey); j++ { - key := dbFieldMapKey[j] 
- value := entityMap.GetDBFieldMap()[key] - values = append(values, value) - } - } - - sqlstr = sqlBuilder.String() - return sqlstr, values, err -} - -// wrapUpdateSQL 包装更新Struct语句 -// 数组传递,如果外部方法有调用append的逻辑,append会破坏指针引用,所以传递指针 -// wrapUpdateSQL Package update Struct statement -// Array transfer, if the external method has logic to call append, append will destroy the pointer reference, so the pointer is passed -func wrapUpdateSQL(typeOf *reflect.Type, entity IEntityStruct, columns *[]reflect.StructField, values *[]interface{}, onlyUpdateNotZero bool) (string, error) { - sqlstr := "" - // SQL语句的构造器 - // SQL statement constructor - var sqlBuilder strings.Builder - sqlBuilder.Grow(stringBuilderGrowLen) - sqlBuilder.WriteString("UPDATE ") - sqlBuilder.WriteString(entity.GetTableName()) - sqlBuilder.WriteString(" SET ") - - // 主键的值 - // The value of the primary key - var pkValue interface{} - // 主键的名称 - // The name of the primary key - pkFieldName, e := entityPKFieldName(entity, typeOf) - if e != nil { - return sqlstr, e - } - - for i := 0; i < len(*columns); i++ { - field := (*columns)[i] - if field.Name == pkFieldName { - // 如果是主键 - // If it is the primary key. - pkValue = (*values)[i] - // 去掉这一列,最后处理主键 - // Remove this column, and finally process the primary key - *columns = append((*columns)[:i], (*columns)[i+1:]...) - *values = append((*values)[:i], (*values)[i+1:]...) - i = i - 1 - continue - } - - // 如果是默认值字段,删除掉,不更新 - // If it is the default value field, delete it and do not update - if onlyUpdateNotZero && (reflect.ValueOf((*values)[i]).IsZero()) { - // 去掉这一列,不再处理 - // Remove this column and no longer process - *columns = append((*columns)[:i], (*columns)[i+1:]...) - *values = append((*values)[:i], (*values)[i+1:]...) 
- i = i - 1 - continue - - } - if i > 0 { - sqlBuilder.WriteByte(',') - } - colName := getFieldTagName(&field) - sqlBuilder.WriteString(colName) - sqlBuilder.WriteString("=?") - - } - // 主键的值是最后一个 - // The value of the primary key is the last - *values = append(*values, pkValue) - - // sqlstr = sqlstr + " WHERE " + entity.GetPKColumnName() + "=?" - sqlBuilder.WriteString(" WHERE ") - sqlBuilder.WriteString(entity.GetPKColumnName()) - sqlBuilder.WriteString("=?") - sqlstr = sqlBuilder.String() - - return sqlstr, nil -} - -// wrapDeleteSQL 包装删除Struct语句 -// wrapDeleteSQL Package delete Struct statement -func wrapDeleteSQL(entity IEntityStruct) (string, error) { - // SQL语句的构造器 - // SQL statement constructor - var sqlBuilder strings.Builder - sqlBuilder.Grow(stringBuilderGrowLen) - sqlBuilder.WriteString("DELETE FROM ") - sqlBuilder.WriteString(entity.GetTableName()) - sqlBuilder.WriteString(" WHERE ") - sqlBuilder.WriteString(entity.GetPKColumnName()) - sqlBuilder.WriteString("=?") - sqlstr := sqlBuilder.String() - - return sqlstr, nil -} - -// wrapInsertEntityMapSQL 包装保存Map语句,Map因为没有字段属性,无法完成Id的类型判断和赋值,需要确保Map的值是完整的 -// wrapInsertEntityMapSQL Pack and save the Map statement. Because Map does not have field attributes, -// it cannot complete the type judgment and assignment of Id. 
It is necessary to ensure that the value of Map is complete -func wrapInsertEntityMapSQL(entity IEntityMap) (string, []interface{}, bool, error) { - sqlstr := "" - inserColumnName, valuesql, values, autoIncrement, err := wrapInsertValueEntityMapSQL(entity) - if err != nil { - return sqlstr, nil, autoIncrement, err - } - // 拼接SQL语句,带上列名,因为Map取值是无序的 - // sqlstr := "INSERT INTO " + insertsql + " VALUES" + valuesql - - var sqlBuilder strings.Builder - // sqlBuilder.Grow(len(inserColumnName) + len(entity.GetTableName()) + len(valuesql) + 19) - sqlBuilder.Grow(stringBuilderGrowLen) - sqlBuilder.WriteString("INSERT INTO ") - sqlBuilder.WriteString(entity.GetTableName()) - sqlBuilder.WriteString(inserColumnName) - sqlBuilder.WriteString(" VALUES") - sqlBuilder.WriteString(valuesql) - sqlstr = sqlBuilder.String() - - return sqlstr, values, autoIncrement, nil -} - -// wrapInsertValueEntityMapSQL 包装保存Map语句,Map因为没有字段属性,无法完成Id的类型判断和赋值,需要确保Map的值是完整的 -// wrapInsertValueEntityMapSQL Pack and save the Map statement. Because Map does not have field attributes, -// it cannot complete the type judgment and assignment of Id. It is necessary to ensure that the value of Map is complete -func wrapInsertValueEntityMapSQL(entity IEntityMap) (string, string, []interface{}, bool, error) { - var inserColumnName, valuesql string - // 是否自增,默认false - autoIncrement := false - dbFieldMap := entity.GetDBFieldMap() - if len(dbFieldMap) < 1 { - return inserColumnName, inserColumnName, nil, autoIncrement, errors.New("->wrapInsertEntityMapSQL-->GetDBFieldMap返回值不能为空") - } - // SQL对应的参数 - // SQL corresponding parameters - values := []interface{}{} - - // SQL语句的构造器 - // SQL statement constructor - var sqlBuilder strings.Builder - sqlBuilder.Grow(stringBuilderGrowLen) - // sqlBuilder.WriteString("INSERT INTO ") - // sqlBuilder.WriteString(entity.GetTableName()) - sqlBuilder.WriteByte('(') - - // SQL语句中,VALUES(?,?,...)语句的构造器 - // In the SQL statement, the constructor of the VALUES(?,?,...) statement. 
- var valueSQLBuilder strings.Builder - valueSQLBuilder.Grow(stringBuilderGrowLen) - valueSQLBuilder.WriteString(" (") - // 是否Set了主键 - // Whether the primary key is set. - _, hasPK := dbFieldMap[entity.GetPKColumnName()] - if entity.GetPKColumnName() != "" && !hasPK { // 如果有主键字段,却没值,认为是自增或者序列 | If the primary key is not set, it is considered to be auto-increment or sequence - autoIncrement = true - if entity.GetEntityMapPkSequence() != "" { // 如果是序列 | If it is a sequence. - sqlBuilder.WriteString(entity.GetPKColumnName()) - valueSQLBuilder.WriteString(entity.GetEntityMapPkSequence()) - if len(dbFieldMap) > 1 { // 如果不只有序列 - sqlBuilder.WriteByte(',') - valueSQLBuilder.WriteByte(',') - } - - } - } - - dbFieldMapKey := entity.GetDBFieldMapKey() - for dbFieldMapIndex := 0; dbFieldMapIndex < len(dbFieldMapKey); dbFieldMapIndex++ { - if dbFieldMapIndex > 0 { - sqlBuilder.WriteByte(',') - valueSQLBuilder.WriteByte(',') - } - k := dbFieldMapKey[dbFieldMapIndex] - v := dbFieldMap[k] - // 拼接字符串 - // Concatenated string - sqlBuilder.WriteString(k) - valueSQLBuilder.WriteByte('?') - values = append(values, v) - } - - sqlBuilder.WriteByte(')') - valueSQLBuilder.WriteByte(')') - inserColumnName = sqlBuilder.String() - valuesql = valueSQLBuilder.String() - - return inserColumnName, valuesql, values, autoIncrement, nil -} - -// wrapUpdateEntityMapSQL 包装Map更新语句,Map因为没有字段属性,无法完成Id的类型判断和赋值,需要确保Map的值是完整的 -// wrapUpdateEntityMapSQL Wrap the Map update statement. Because Map does not have field attributes, -// it cannot complete the type judgment and assignment of Id. 
It is necessary to ensure that the value of Map is complete -func wrapUpdateEntityMapSQL(entity IEntityMap) (string, []interface{}, error) { - dbFieldMap := entity.GetDBFieldMap() - sqlstr := "" - if len(dbFieldMap) < 1 { - return sqlstr, nil, errors.New("->wrapUpdateEntityMapSQL-->GetDBFieldMap返回值不能为空") - } - // SQL语句的构造器 - // SQL statement constructor - var sqlBuilder strings.Builder - sqlBuilder.Grow(stringBuilderGrowLen) - sqlBuilder.WriteString("UPDATE ") - sqlBuilder.WriteString(entity.GetTableName()) - sqlBuilder.WriteString(" SET ") - - // SQL对应的参数 - // SQL corresponding parameters - values := []interface{}{} - // 主键名称 - // Primary key name - var pkValue interface{} - dbFieldMapIndex := 0 - for k, v := range dbFieldMap { - - if k == entity.GetPKColumnName() { // 如果是主键 | If it is the primary key - pkValue = v - continue - } - if dbFieldMapIndex > 0 { - sqlBuilder.WriteByte(',') - } - - // 拼接字符串 | Splicing string. - sqlBuilder.WriteString(k) - sqlBuilder.WriteString("=?") - values = append(values, v) - dbFieldMapIndex++ - } - // 主键的值是最后一个 - // The value of the primary key is the last - values = append(values, pkValue) - - sqlBuilder.WriteString(" WHERE ") - sqlBuilder.WriteString(entity.GetPKColumnName()) - sqlBuilder.WriteString("=?") - sqlstr = sqlBuilder.String() - - return sqlstr, values, nil -} - -// wrapQuerySQL 封装查询语句 -// wrapQuerySQL Encapsulated query statement -func wrapQuerySQL(dialect string, finder *Finder, page *Page) (string, error) { - // 获取到没有page的sql的语句 - // Get the SQL statement without page. 
- sqlstr, err := finder.GetSQL() - if err != nil { - return "", err - } - if page != nil { - err = wrapPageSQL(dialect, &sqlstr, page) - } - if err != nil { - return "", err - } - return sqlstr, err -} - -// 查询'order by'在sql中出现的开始位置和结束位置 -// Query the start position and end position of'order by' in SQL -var ( - orderByExpr = "(?i)\\s(order)\\s+by\\s" - orderByRegexp, _ = regexp.Compile(orderByExpr) -) - -// findOrderByIndex 查询order by在sql中出现的开始位置和结束位置 -// findOrderByIndex Query the start position and end position of'order by' in SQL -func findOrderByIndex(strsql *string) []int { - loc := orderByRegexp.FindStringIndex(*strsql) - return loc -} - -// 查询'group by'在sql中出现的开始位置和结束位置 -// Query the start position and end position of'group by' in sql。 -var ( - groupByExpr = "(?i)\\s(group)\\s+by\\s" - groupByRegexp, _ = regexp.Compile(groupByExpr) -) - -// findGroupByIndex 查询group by在sql中出现的开始位置和结束位置 -// findGroupByIndex Query the start position and end position of'group by' in sql -func findGroupByIndex(strsql *string) []int { - loc := groupByRegexp.FindStringIndex(*strsql) - return loc -} - -// 查询 from 在sql中出现的开始位置和结束位置 -// Query the start position and end position of 'from' in sql -// var fromExpr = "(?i)(^\\s*select)(.+?\\(.+?\\))*.*?(from)" -// 感谢奔跑(@zeqjone)提供的正则,排除不在括号内的from,已经满足绝大部分场景, -// select id1,(select (id2) from t1 where id=2) _s FROM table select的子查询 _s中的 id2还有括号,才会出现问题,建议使用CountFinder处理分页语句 -// countFinder := zorm.NewFinder().Append("select count(*) from (") -// countFinder.AppendFinder(finder) -// countFinder.Append(") tempcountfinder") -// finder.CountFinder = countFinder -var ( - fromExpr = "(?i)(^\\s*select)(\\(.*?\\)|[^()]+)*?(from)" - fromRegexp, _ = regexp.Compile(fromExpr) -) - -// findFromIndexa 查询from在sql中出现的开始位置和结束位置 -// findSelectFromIndex Query the start position and end position of 'from' in sql -func findSelectFromIndex(strsql *string) []int { - // 匹配出来的是完整的字符串,用最后的FROM即可 - loc := fromRegexp.FindStringIndex(*strsql) - if len(loc) < 2 { - 
return loc - } - // 最后的FROM前推4位字符串 - loc[0] = loc[1] - 4 - return loc -} - -/* -var fromExpr = `\(([\s\S]+?)\)` -var fromRegexp, _ = regexp.Compile(fromExpr) - -//查询 from 在sql中出现的开始位置 -//Query the start position of 'from' in sql -func findSelectFromIndex(strsql string) int { - sql := strings.ToLower(strsql) - m := fromRegexp.FindAllString(sql, -1) - for i := 0; i < len(m); i++ { - str := m[i] - strnofrom := strings.ReplaceAll(str, " from ", " zorm ") - sql = strings.ReplaceAll(sql, str, strnofrom) - } - fromIndex := strings.LastIndex(sql, " from ") - if fromIndex < 0 { - return fromIndex - } - //补上一个空格 - fromIndex = fromIndex + 1 - return fromIndex -} -*/ -/* -// 从更新语句中获取表名 -//update\\s(.+)set\\s.* -var ( - updateExper = "(?i)^\\s*update\\s+(\\w+)\\s+set\\s" - updateRegexp, _ = regexp.Compile(updateExper) -) - -// findUpdateTableName 获取语句中表名 -// 第一个是符合的整体数据,第二个是表名 -func findUpdateTableName(strsql *string) []string { - matchs := updateRegexp.FindStringSubmatch(*strsql) - return matchs -} - -// 从删除语句中获取表名 -// delete\\sfrom\\s(.+)where\\s(.*) -var ( - deleteExper = "(?i)^\\s*delete\\s+from\\s+(\\w+)\\s+where\\s" - deleteRegexp, _ = regexp.Compile(deleteExper) -) - -// findDeleteTableName 获取语句中表名 -// 第一个是符合的整体数据,第二个是表名 -func findDeleteTableName(strsql *string) []string { - matchs := deleteRegexp.FindStringSubmatch(*strsql) - return matchs -} -*/ - -// FuncGenerateStringID 默认生成字符串ID的函数.方便自定义扩展 -// FuncGenerateStringID Function to generate string ID by default. 
Convenient for custom extension -var FuncGenerateStringID = func(ctx context.Context) string { - // 使用 crypto/rand 真随机9位数 - randNum, randErr := rand.Int(rand.Reader, big.NewInt(1000000000)) - if randErr != nil { - return "" - } - // 获取9位数,前置补0,确保9位数 - rand9 := fmt.Sprintf("%09d", randNum) - - // 获取纳秒 按照 年月日时分秒毫秒微秒纳秒 拼接为长度23位的字符串 - pk := time.Now().Format("2006.01.02.15.04.05.000000000") - pk = strings.ReplaceAll(pk, ".", "") - - // 23位字符串+9位随机数=32位字符串,这样的好处就是可以使用ID进行排序 - pk = pk + rand9 - return pk -} - -// FuncWrapFieldTagName 用于包裹字段名, eg. `describe` -var FuncWrapFieldTagName = func(colName string) string { - // custom: return fmt.Sprintf("`%s`", colName) - return colName -} - -// getFieldTagName 获取模型中定义的数据库的 column tag -func getFieldTagName(field *reflect.StructField) string { - colName := field.Tag.Get(tagColumnName) - colName = FuncWrapFieldTagName(colName) - /* - if dialect == "kingbase" { - // kingbase R3 驱动大小写敏感,通常是大写。数据库全的列名部换成双引号括住的大写字符,避免与数据库内置关键词冲突时报错 - colName = strings.ReplaceAll(colName, "\"", "") - colName = fmt.Sprintf(`"%s"`, strings.ToUpper(colName)) - } - */ - return colName -} - -// wrapSQLHint 在sql语句中增加hint -func wrapSQLHint(ctx context.Context, sqlstr *string) error { - // 获取hint - contextValue := ctx.Value(contextSQLHintValueKey) - if contextValue == nil { // 如果没有设置hint - return nil - } - hint, ok := contextValue.(string) - if !ok { - return errors.New("->wrapSQLHint-->contextValue转换string失败") - } - if hint == "" { - return nil - } - sqlByte := []byte(*sqlstr) - // 获取第一个单词 - _, start, end, err := firstOneWord(0, &sqlByte) - if err != nil { - return err - } - if start == -1 || end == -1 { // 未取到字符串 - return nil - } - var sqlBuilder strings.Builder - sqlBuilder.Grow(stringBuilderGrowLen) - sqlBuilder.WriteString((*sqlstr)[:end]) - sqlBuilder.WriteByte(' ') - sqlBuilder.WriteString(hint) - sqlBuilder.WriteString((*sqlstr)[end:]) - *sqlstr = sqlBuilder.String() - return nil -} - -// reBindSQL 包装基础的SQL语句,根据数据库类型,调整SQL变量符号,例如?,? 
$1,$2这样的 -// reBindSQL Pack basic SQL statements, adjust the SQL variable symbols according to the database type, such as?,? $1,$2 -func reBindSQL(dialect string, sqlstr *string, args *[]interface{}) (*string, *[]interface{}, error) { - argsNum := len(*args) - if argsNum < 1 { // 没有参数,不需要处理,也不判断参数数量了,数据库会报错提示 - return sqlstr, args, nil - } - // 重新记录参数值 - // Re-record the parameter value - newValues := make([]interface{}, 0) - // 记录sql参数值的下标,例如 $1 @p1 ,从1开始 - sqlParamIndex := 1 - - // 新的sql - // new sql - var newSQLStr strings.Builder - // newSQLStr.Grow(len(*sqlstr)) - newSQLStr.Grow(stringBuilderGrowLen) - i := -1 - for _, v := range []byte(*sqlstr) { - if v != '?' { // 如果不是?问号 - newSQLStr.WriteByte(v) - continue - } - i = i + 1 - if i >= argsNum { // 占位符数量比参数值多,不使用 strings.Count函数,避免多次操作字符串 - return nil, nil, fmt.Errorf("sql语句中参数和值数量不一致,-->zormErrorExecSQL:%s,-->zormErrorSQLValues:%v", *sqlstr, *args) - } - v := (*args)[i] - // 反射获取参数的值 - valueOf := reflect.ValueOf(v) - // 获取类型 - kind := valueOf.Kind() - // 如果参数是个指针类型 - // If the parameter is a pointer type - if kind == reflect.Ptr { // 如果是指针 | If it is a pointer - valueOf = valueOf.Elem() - kind = valueOf.Kind() - } - typeOf := valueOf.Type() - // 参数值长度,默认是1,其他取值数组长度 - valueLen := 1 - - // 如果不是数组或者slice - // If it is not an array or slice - if !(kind == reflect.Array || kind == reflect.Slice) { - // 记录新值 - // Record new value. 
- newValues = append(newValues, v) - } else if typeOf == reflect.TypeOf([]byte{}) { - // 记录新值 - // Record new value - newValues = append(newValues, v) - } else { - // 如果不是字符串类型的值,无法取长度,这个是个bug,先注释了 - // 获取数组类型参数值的长度 - // If it is not a string type value, the length cannot be taken, this is a bug, first comment - // Get the length of the array type parameter value - valueLen = valueOf.Len() - // 数组类型的参数长度小于1,认为是有异常的参数 - // The parameter length of the array type is less than 1, which is considered to be an abnormal parameter - if valueLen < 1 { - return nil, nil, errors.New("->reBindSQL()语句:" + *sqlstr + ",第" + strconv.Itoa(i+1) + "个参数,类型是Array或者Slice,值的长度为0,请检查sql参数有效性") - } else if valueLen == 1 { // 如果数组里只有一个参数,认为是单个参数 - v = valueOf.Index(0).Interface() - newValues = append(newValues, v) - } - - } - - switch dialect { - case "mysql", "sqlite", "dm", "gbase", "clickhouse", "db2": - wrapParamSQL("?", valueLen, &sqlParamIndex, &newSQLStr, &valueOf, &newValues, false, false) - case "postgresql", "kingbase": // postgresql,kingbase - wrapParamSQL("$", valueLen, &sqlParamIndex, &newSQLStr, &valueOf, &newValues, true, false) - case "mssql": // mssql - wrapParamSQL("@p", valueLen, &sqlParamIndex, &newSQLStr, &valueOf, &newValues, true, false) - case "oracle", "shentong": // oracle,神通 - wrapParamSQL(":", valueLen, &sqlParamIndex, &newSQLStr, &valueOf, &newValues, true, false) - case "tdengine": // tdengine,重新处理 字符类型的参数 '?' - wrapParamSQL("?", valueLen, &sqlParamIndex, &newSQLStr, &valueOf, &newValues, false, true) - default: // 其他情况,还是使用 ? | In other cases, or use ? 
- newSQLStr.WriteByte('?') - } - - } - - //?号占位符的数量和参数不一致,不使用 strings.Count函数,避免多次操作字符串 - if (i + 1) != argsNum { - return nil, nil, fmt.Errorf("sql语句中参数和值数量不一致,-->zormErrorExecSQL:%s,-->zormErrorSQLValues:%v", *sqlstr, *args) - } - sqlstring := newSQLStr.String() - return &sqlstring, &newValues, nil -} - -// reUpdateFinderSQL 根据数据类型更新 手动编写的 UpdateFinder的语句,用于处理数据库兼容,例如 clickhouse的 UPDATE 和 DELETE -func reUpdateSQL(dialect string, sqlstr *string) error { - if dialect != "clickhouse" { // 目前只处理clickhouse - return nil - } - // 处理clickhouse的特殊更新语法 - sqlByte := []byte(*sqlstr) - // 获取第一个单词 - firstWord, start, end, err := firstOneWord(0, &sqlByte) - if err != nil { - return err - } - if start == -1 || end == -1 { // 未取到字符串 - return nil - } - // SQL语句的构造器 - // SQL statement constructor - var sqlBuilder strings.Builder - sqlBuilder.Grow(stringBuilderGrowLen) - sqlBuilder.WriteString((*sqlstr)[:start]) - sqlBuilder.WriteString("ALTER TABLE ") - firstWord = strings.ToUpper(firstWord) - tableName := "" - if firstWord == "UPDATE" { // 更新 update tableName set - tableName, _, end, err = firstOneWord(end, &sqlByte) - if err != nil { - return err - } - // 拿到 set - _, start, end, err = firstOneWord(end, &sqlByte) - - } else if firstWord == "DELETE" { // 删除 delete from tableName - // 拿到from - _, _, end, err = firstOneWord(end, &sqlByte) - if err != nil { - return err - } - // 拿到 tableName - tableName, start, end, err = firstOneWord(end, &sqlByte) - } else { // 只处理UPDATE 和 DELETE 语法 - return nil - } - if err != nil { - return err - } - if start == -1 || end == -1 { // 获取的位置异常 - return errors.New("->reUpdateSQL中clickhouse语法异常,请检查sql语句是否标准,-->zormErrorExecSQL:" + *sqlstr) - } - sqlBuilder.WriteString(tableName) - sqlBuilder.WriteByte(' ') - sqlBuilder.WriteString(firstWord) - // sqlBuilder.WriteByte(' ') - sqlBuilder.WriteString((*sqlstr)[end:]) - *sqlstr = sqlBuilder.String() - return nil -} - -// wrapAutoIncrementInsertSQL 包装自增的自增主键的插入sql -func 
wrapAutoIncrementInsertSQL(pkColumnName string, sqlstr *string, dialect string, values *[]interface{}) (*int64, *int64) { - // oracle 12c+ 支持IDENTITY属性的自增列,因为分页也要求12c+的语法,所以数据库就IDENTITY创建自增吧 - // 处理序列产生的自增主键,例如oracle,postgresql等 - var lastInsertID, zormSQLOutReturningID *int64 - var sqlBuilder strings.Builder - // sqlBuilder.Grow(len(*sqlstr) + len(pkColumnName) + 40) - sqlBuilder.Grow(stringBuilderGrowLen) - sqlBuilder.WriteString(*sqlstr) - switch dialect { - case "postgresql", "kingbase": - var p int64 = 0 - lastInsertID = &p - // sqlstr = sqlstr + " RETURNING " + pkColumnName - sqlBuilder.WriteString(" RETURNING ") - sqlBuilder.WriteString(pkColumnName) - case "oracle", "shentong": - var p int64 = 0 - zormSQLOutReturningID = &p - // sqlstr = sqlstr + " RETURNING " + pkColumnName + " INTO :zormSQLOutReturningID " - sqlBuilder.WriteString(" RETURNING ") - sqlBuilder.WriteString(pkColumnName) - sqlBuilder.WriteString(" INTO :zormSQLOutReturningID ") - v := sql.Named("zormSQLOutReturningID", sql.Out{Dest: zormSQLOutReturningID}) - *values = append(*values, v) - } - - *sqlstr = sqlBuilder.String() - return lastInsertID, zormSQLOutReturningID -} - -// getConfigFromConnection 从dbConnection中获取数据库方言,如果没有,从FuncReadWriteStrategy获取dbDao,获取dbdao.config.Dialect -func getConfigFromConnection(ctx context.Context, dbConnection *dataBaseConnection, rwType int) (*DataSourceConfig, error) { - var config *DataSourceConfig - // dbConnection为nil,使用defaultDao - // dbConnection is nil, use default Dao - if dbConnection == nil { - dbdao, err := FuncReadWriteStrategy(ctx, rwType) - if err != nil { - return nil, err - } - config = dbdao.config - } else { - config = dbConnection.config - } - return config, nil -} - -// wrapParamSQL 包装SQL语句 -// symbols(占位符) valueLen(参数长度) sqlParamIndexPtr(参数的下标指针,数组会改变值) newSQLStr(SQL字符串Builder) valueOf(参数值的反射对象) hasParamIndex(是否拼接参数下标 $1 $2) isTDengine(TDengine数据库需要单独处理字符串类型) -func wrapParamSQL(symbols string, valueLen int, sqlParamIndexPtr *int, newSQLStr 
*strings.Builder, valueOf *reflect.Value, newValues *[]interface{}, hasParamIndex bool, isTDengine bool) { - sqlParamIndex := *sqlParamIndexPtr - if valueLen == 1 { - if isTDengine && valueOf.Kind() == reflect.String { // 处理tdengine的字符串类型 - symbols = "'?'" - } - newSQLStr.WriteString(symbols) - - if hasParamIndex { - newSQLStr.WriteString(strconv.Itoa(sqlParamIndex)) - } - - } else if valueLen > 1 { // 如果值是数组 - for j := 0; j < valueLen; j++ { - valuej := (*valueOf).Index(j) - if isTDengine && valuej.Kind() == reflect.String { // 处理tdengine的字符串类型 - symbols = "'?'" - } - if j == 0 { // 第一个 - newSQLStr.WriteString(symbols) - } else { - newSQLStr.WriteByte(',') - newSQLStr.WriteString(symbols) - } - if hasParamIndex { - newSQLStr.WriteString(strconv.Itoa(sqlParamIndex + j)) - } - sliceValue := valuej.Interface() - *newValues = append(*newValues, sliceValue) - } - } - *sqlParamIndexPtr = *sqlParamIndexPtr + valueLen -} - -// firstOneWord 从指定下标,获取第一个单词,不包含前后空格,并返回开始下标和结束下标,如果找不到合法的字符串,返回-1 -func firstOneWord(index int, strByte *[]byte) (string, int, int, error) { - start := -1 - end := -1 - byteLen := len(*strByte) - if index < 0 { - return "", start, end, errors.New("->firstOneWord索引小于0") - } - if index > byteLen { // 如果索引大于长度 - return "", start, end, errors.New("->firstOneWord索引大于字符串长度") - } - var newStr strings.Builder - newStr.Grow(10) - for ; index < byteLen; index++ { - v := (*strByte)[index] - if v == '(' || v == ')' { // 不处理括号 - continue - } - if start == -1 && v != ' ' { // 不是空格 - start = index - } - if start == -1 && v == ' ' { // 空格 - continue - } - if start >= 0 && v != ' ' { // 需要的字符 - newStr.WriteByte(v) - } else { // 遇到空格结束记录 - end = index - break - } - } - if start >= 0 && end == -1 { // 记录到结尾,不是空格结束 - end = byteLen - } - - return newStr.String(), start, end, nil -} diff --git a/vendor/gitee.com/chunanyong/zorm/structFieldInfo.go b/vendor/gitee.com/chunanyong/zorm/structFieldInfo.go deleted file mode 100644 index 9ef3f1d7..00000000 --- 
a/vendor/gitee.com/chunanyong/zorm/structFieldInfo.go +++ /dev/null @@ -1,564 +0,0 @@ -/* - * Licensed to the Apache Software Foundation (ASF) under one or more - * contributor license agreements. See the NOTICE file distributed with - * this work for additional information regarding copyright ownership. - * The ASF licenses this file to You under the Apache License, Version 2.0 - * (the "License"); you may not use this file except in compliance with - * the License. You may obtain a copy of the License at - * - * http://www.apache.org/licenses/LICENSE-2.0 - * - * Unless required by applicable law or agreed to in writing, software - * distributed under the License is distributed on an "AS IS" BASIS, - * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. - * See the License for the specific language governing permissions and - * limitations under the License. - * - */ - -package zorm - -import ( - "context" - "database/sql" - "errors" - "fmt" - "go/ast" - "reflect" - "strings" - "sync" -) - -const ( - // tag标签的名称 - tagColumnName = "column" - - // 输出字段 缓存的前缀 - exportPrefix = "_exportStructFields_" - // 私有字段 缓存的前缀 - privatePrefix = "_privateStructFields_" - // 数据库列名 缓存的前缀 - dbColumnNamePrefix = "_dbColumnName_" - - // 数据库所有列名,经过排序 缓存的前缀 - dbColumnNameSlicePrefix = "_dbColumnNameSlice_" - - // field对应的column的tag值 缓存的前缀 - // structFieldTagPrefix = "_structFieldTag_" - // 数据库主键 缓存的前缀 - // dbPKNamePrefix = "_dbPKName_" -) - -// cacheStructFieldInfoMap 用于缓存反射的信息,sync.Map内部处理了并发锁 -var cacheStructFieldInfoMap *sync.Map = &sync.Map{} - -// var cacheStructFieldInfoMap = make(map[string]map[string]reflect.StructField) - -// 用于缓存field对应的column的tag值 -// var cacheStructFieldTagInfoMap = make(map[string]map[string]string) - -// structFieldInfo 获取StructField的信息.只对struct或者*struct判断,如果是指针,返回指针下实际的struct类型 -// 第一个返回值是可以输出的字段(首字母大写),第二个是不能输出的字段(首字母小写) -func structFieldInfo(typeOf *reflect.Type) error { - if typeOf == nil { - return 
errors.New("->structFieldInfo数据为空") - } - - entityName := (*typeOf).String() - - // 缓存的key - // 所有输出的属性,包含数据库字段,key是struct属性的名称,不区分大小写 - exportCacheKey := exportPrefix + entityName - // 所有私有变量的属性,key是struct属性的名称,不区分大小写 - privateCacheKey := privatePrefix + entityName - // 所有数据库的属性,key是数据库的字段名称,不区分大小写 - dbColumnCacheKey := dbColumnNamePrefix + entityName - // 所有数据库字段名称的slice,经过排序,不区分大小写 - dbColumnNameSliceCacheKey := dbColumnNameSlicePrefix + entityName - - // structFieldTagCacheKey := structFieldTagPrefix + entityName - // dbPKNameCacheKey := dbPKNamePrefix + entityName - // 缓存的数据库主键值 - _, exportOk := cacheStructFieldInfoMap.Load(exportCacheKey) - //_, exportOk := cacheStructFieldInfoMap[exportCacheKey] - //如果存在值,认为缓存中有所有的信息,不再处理 - if exportOk { - return nil - } - // 获取字段长度 - fieldNum := (*typeOf).NumField() - // 如果没有字段 - if fieldNum < 1 { - return errors.New("->structFieldInfo-->NumField entity没有属性") - } - - // 声明所有字段的载体 - var allFieldMap *sync.Map = &sync.Map{} - // anonymous := make([]reflect.StructField, 0) - - // 缓存的数据 - exportStructFieldMap := make(map[string]reflect.StructField) - privateStructFieldMap := make(map[string]reflect.StructField) - dbColumnFieldMap := make(map[string]reflect.StructField) - - // structFieldTagMap := make(map[string]string) - dbColumnFieldNameSlice := make([]string, 0) - - // 遍历sync.Map,要求输入一个func作为参数 - // 这个函数的入参、出参的类型都已经固定,不能修改 - // 可以在函数体内编写自己的代码,调用map中的k,v - // var funcMapKV func(k, v interface{}) bool - funcMapKV := func(k, v interface{}) bool { - field := v.(reflect.StructField) - fieldName := field.Name - if ast.IsExported(fieldName) { // 如果是可以输出的,不区分大小写 - exportStructFieldMap[strings.ToLower(fieldName)] = field - // 如果是数据库字段 - tagColumnValue := field.Tag.Get(tagColumnName) - if len(tagColumnValue) > 0 { - // dbColumnFieldMap[tagColumnValue] = field - // 使用数据库字段的小写,处理oracle和达梦数据库的sql返回值大写 - tagColumnValueLower := strings.ToLower(tagColumnValue) - dbColumnFieldMap[tagColumnValueLower] = field - dbColumnFieldNameSlice = 
append(dbColumnFieldNameSlice, tagColumnValueLower) - // structFieldTagMap[fieldName] = tagColumnValue - } - - } else { // 私有属性 - privateStructFieldMap[strings.ToLower(fieldName)] = field - } - - return true - } - // 并发锁,用于处理slice并发append - var lock sync.Mutex - // funcRecursiveAnonymous 递归调用struct的匿名属性,就近覆盖属性 - var funcRecursiveAnonymous func(allFieldMap *sync.Map, anonymous *reflect.StructField) - funcRecursiveAnonymous = func(allFieldMap *sync.Map, anonymous *reflect.StructField) { - // 字段类型 - anonymousTypeOf := anonymous.Type - if anonymousTypeOf.Kind() == reflect.Ptr { - // 获取指针下的Struct类型 - anonymousTypeOf = anonymousTypeOf.Elem() - } - - // 只处理Struct类型 - if anonymousTypeOf.Kind() != reflect.Struct { - return - } - - // 获取字段长度 - fieldNum := anonymousTypeOf.NumField() - // 如果没有字段 - if fieldNum < 1 { - return - } - // 遍历所有字段 - for i := 0; i < fieldNum; i++ { - anonymousField := anonymousTypeOf.Field(i) - if anonymousField.Anonymous { // 匿名struct里自身又有匿名struct - funcRecursiveAnonymous(allFieldMap, &anonymousField) - } else if _, ok := allFieldMap.Load(anonymousField.Name); !ok { // 普通命名字段,而且没有记录过 - allFieldMap.Store(anonymousField.Name, anonymousField) - lock.Lock() - funcMapKV(anonymousField.Name, anonymousField) - lock.Unlock() - } - } - } - - // 遍历所有字段,记录匿名属性 - for i := 0; i < fieldNum; i++ { - field := (*typeOf).Field(i) - if field.Anonymous { // 如果是匿名的 - funcRecursiveAnonymous(allFieldMap, &field) - } else if _, ok := allFieldMap.Load(field.Name); !ok { // 普通命名字段,而且没有记录过 - allFieldMap.Store(field.Name, field) - lock.Lock() - funcMapKV(field.Name, field) - lock.Unlock() - } - } - - // allFieldMap.Range(f) - - // 加入缓存 - cacheStructFieldInfoMap.Store(exportCacheKey, exportStructFieldMap) - cacheStructFieldInfoMap.Store(privateCacheKey, privateStructFieldMap) - cacheStructFieldInfoMap.Store(dbColumnCacheKey, dbColumnFieldMap) - // cacheStructFieldInfoMap[exportCacheKey] = exportStructFieldMap - // cacheStructFieldInfoMap[privateCacheKey] = privateStructFieldMap - 
// cacheStructFieldInfoMap[dbColumnCacheKey] = dbColumnFieldMap - - // cacheStructFieldTagInfoMap[structFieldTagCacheKey] = structFieldTagMap - - // 不按照字母顺序,按照反射获取的Struct属性顺序,生成insert语句和update语句 - // sort.Strings(dbColumnFieldNameSlice) - cacheStructFieldInfoMap.Store(dbColumnNameSliceCacheKey, dbColumnFieldNameSlice) - - return nil -} - -// setFieldValueByColumnName 根据数据库的字段名,找到struct映射的字段,并赋值 -func setFieldValueByColumnName(entity interface{}, columnName string, value interface{}) error { - // 先从本地缓存中查找 - typeOf := reflect.TypeOf(entity) - valueOf := reflect.ValueOf(entity) - if typeOf.Kind() == reflect.Ptr { // 如果是指针 - typeOf = typeOf.Elem() - valueOf = valueOf.Elem() - } - - dbMap, err := getDBColumnFieldMap(&typeOf) - if err != nil { - return err - } - f, ok := dbMap[strings.ToLower(columnName)] - if ok { // 给主键赋值 - valueOf.FieldByName(f.Name).Set(reflect.ValueOf(value)) - } - return nil -} - -// structFieldValue 获取指定字段的值 -func structFieldValue(s interface{}, fieldName string) (interface{}, error) { - if s == nil || len(fieldName) < 1 { - return nil, errors.New("->structFieldValue数据为空") - } - // entity的s类型 - valueOf := reflect.ValueOf(s) - - kind := valueOf.Kind() - if !(kind == reflect.Ptr || kind == reflect.Struct) { - return nil, errors.New("->structFieldValue必须是Struct或者*Struct类型") - } - - if kind == reflect.Ptr { - // 获取指针下的Struct类型 - valueOf = valueOf.Elem() - if valueOf.Kind() != reflect.Struct { - return nil, errors.New("->structFieldValue必须是Struct或者*Struct类型") - } - } - - // FieldByName方法返回的是reflect.Value类型,调用Interface()方法,返回原始类型的数据值 - value := valueOf.FieldByName(fieldName).Interface() - - return value, nil -} - -// getDBColumnExportFieldMap 获取实体类的数据库字段,key是数据库的字段名称.同时返回所有的字段属性的map,key是实体类的属性.不区分大小写 -func getDBColumnExportFieldMap(typeOf *reflect.Type) (map[string]reflect.StructField, map[string]reflect.StructField, error) { - dbColumnFieldMap, err := getCacheStructFieldInfoMap(typeOf, dbColumnNamePrefix) - if err != nil { - return nil, nil, err - } - 
exportFieldMap, err := getCacheStructFieldInfoMap(typeOf, exportPrefix) - return dbColumnFieldMap, exportFieldMap, err -} - -// getDBColumnFieldMap 获取实体类的数据库字段,key是数据库的字段名称.不区分大小写 -func getDBColumnFieldMap(typeOf *reflect.Type) (map[string]reflect.StructField, error) { - return getCacheStructFieldInfoMap(typeOf, dbColumnNamePrefix) -} - -// getDBColumnFieldNameSlice 获取实体类的数据库字段,经过排序,key是数据库的字段名称.不区分大小写, -func getDBColumnFieldNameSlice(typeOf *reflect.Type) ([]string, error) { - dbColumnFieldSlice, dbmapErr := getCacheStructFieldInfo(typeOf, dbColumnNameSlicePrefix) - if dbmapErr != nil { - return nil, fmt.Errorf("->getDBColumnFieldNameSlice-->getCacheStructFieldInfo()取值错误:%w", dbmapErr) - } - dbcfSlice, efOK := dbColumnFieldSlice.([]string) - if !efOK { - return dbcfSlice, errors.New("->getDBColumnFieldNameSlice-->dbColumnFieldSlice取值转[]string类型异常") - } - return dbcfSlice, nil -} - -// getCacheStructFieldInfo 根据类型和key,获取缓存的数据字段信息slice,已经排序 -func getCacheStructFieldInfo(typeOf *reflect.Type, keyPrefix string) (interface{}, error) { - if typeOf == nil { - return nil, errors.New("->getCacheStructFieldInfo-->typeOf不能为空") - } - key := keyPrefix + (*typeOf).String() - dbColumnFieldMap, dbOk := cacheStructFieldInfoMap.Load(key) - // dbColumnFieldMap, dbOk := cacheStructFieldInfoMap[key] - if !dbOk { // 缓存不存在 - // 获取实体类的输出字段和私有 字段 - err := structFieldInfo(typeOf) - if err != nil { - return nil, err - } - dbColumnFieldMap, dbOk = cacheStructFieldInfoMap.Load(key) - // dbColumnFieldMap, dbOk = cacheStructFieldInfoMap[key] - if !dbOk { - return nil, errors.New("->getCacheStructFieldInfo-->cacheStructFieldInfoMap.Load()获取数据库字段dbColumnFieldMap异常") - } - } - - return dbColumnFieldMap, nil - - // return dbColumnFieldMap, nil -} - -// getCacheStructFieldInfoMap 根据类型和key,获取缓存的字段信息 -func getCacheStructFieldInfoMap(typeOf *reflect.Type, keyPrefix string) (map[string]reflect.StructField, error) { - dbColumnFieldMap, dbmapErr := getCacheStructFieldInfo(typeOf, keyPrefix) - if dbmapErr 
!= nil { - return nil, fmt.Errorf("->getCacheStructFieldInfoMap-->getCacheStructFieldInfo()取值错误:%w", dbmapErr) - } - dbcfMap, efOK := dbColumnFieldMap.(map[string]reflect.StructField) - if !efOK { - return dbcfMap, errors.New("->getCacheStructFieldInfoMap-->dbColumnFieldMap取值转map[string]reflect.StructField类型异常") - } - return dbcfMap, nil - - // return dbColumnFieldMap, nil -} - -// columnAndValue 根据保存的对象,返回插入的语句,需要插入的字段,字段的值 -func columnAndValue(entity interface{}) (reflect.Type, []reflect.StructField, []interface{}, error) { - typeOf, checkerr := checkEntityKind(entity) - if checkerr != nil { - return typeOf, nil, nil, checkerr - } - // 获取实体类的反射,指针下的struct - valueOf := reflect.ValueOf(entity).Elem() - // reflect.Indirect - - // 先从本地缓存中查找 - // typeOf := reflect.TypeOf(entity).Elem() - - dbMap, err := getDBColumnFieldMap(&typeOf) - if err != nil { - return typeOf, nil, nil, err - } - dbSlice, err := getDBColumnFieldNameSlice(&typeOf) - if err != nil { - return typeOf, nil, nil, err - } - // 实体类公开字段的长度 - fLen := len(dbMap) - // 长度不一致 - if fLen-len(dbSlice) != 0 { - return typeOf, nil, nil, errors.New("->columnAndValue-->缓存的数据库字段和实体类字段不对应") - } - // 接收列的数组,这里是做一个副本,避免外部更改掉原始的列信息 - columns := make([]reflect.StructField, 0, fLen) - // 接收值的数组 - values := make([]interface{}, 0, fLen) - - // 遍历所有数据库属性 - for _, fieldName := range dbSlice { - //获取字段类型的Kind - // fieldKind := field.Type.Kind() - //if !allowTypeMap[fieldKind] { //不允许的类型 - // continue - //} - field := dbMap[fieldName] - columns = append(columns, field) - // FieldByName方法返回的是reflect.Value类型,调用Interface()方法,返回原始类型的数据值.字段不会重名,不使用FieldByIndex()函数 - value := valueOf.FieldByName(field.Name).Interface() - // 添加到记录值的数组 - values = append(values, value) - - } - - // 缓存数据库的列 - return typeOf, columns, values, nil -} - -// entityPKFieldName 获取实体类主键属性名称 -func entityPKFieldName(entity IEntityStruct, typeOf *reflect.Type) (string, error) { - //检查是否是指针对象 - //typeOf, checkerr := checkEntityKind(entity) - //if checkerr != nil { - 
// return "", checkerr - //} - - // 缓存的key,TypeOf和ValueOf的String()方法,返回值不一样 - // typeOf := reflect.TypeOf(entity).Elem() - - dbMap, err := getDBColumnFieldMap(typeOf) - if err != nil { - return "", err - } - field := dbMap[strings.ToLower(entity.GetPKColumnName())] - return field.Name, nil -} - -// checkEntityKind 检查entity类型必须是*struct类型或者基础类型的指针 -func checkEntityKind(entity interface{}) (reflect.Type, error) { - if entity == nil { - return nil, errors.New("->checkEntityKind参数不能为空,必须是*struct类型或者基础类型的指针") - } - typeOf := reflect.TypeOf(entity) - if typeOf.Kind() != reflect.Ptr { // 如果不是指针 - return nil, errors.New("->checkEntityKind必须是*struct类型或者基础类型的指针") - } - typeOf = typeOf.Elem() - //if !(typeOf.Kind() == reflect.Struct || allowBaseTypeMap[typeOf.Kind()]) { //如果不是指针 - // return nil, errors.New("checkEntityKind必须是*struct类型或者基础类型的指针") - //} - return typeOf, nil -} - -// sqlRowsValues 包装接收sqlRows的Values数组,反射rows屏蔽数据库null值,兼容单个字段查询和Struct映射 -// fix:converting NULL to int is unsupported -// 当读取数据库的值为NULL时,由于基本类型不支持为NULL,通过反射将未知driver.Value改为interface{},不再映射到struct实体类 -// 感谢@fastabler提交的pr -// oneColumnScanner 只有一个字段,而且可以直接Scan,例如string或者[]string,不需要反射StructType进行处理 -func sqlRowsValues(ctx context.Context, dialect string, valueOf *reflect.Value, typeOf *reflect.Type, rows *sql.Rows, driverValue *reflect.Value, columnTypes []*sql.ColumnType, entity interface{}, dbColumnFieldMap, exportFieldMap *map[string]reflect.StructField) error { - if entity == nil && valueOf == nil { - return errors.New("->sqlRowsValues-->valueOfElem为nil") - } - - var valueOfElem reflect.Value - if entity == nil && valueOf != nil { - valueOfElem = valueOf.Elem() - } - - ctLen := len(columnTypes) - // 声明载体数组,用于存放struct的属性指针 - // Declare a carrier array to store the attribute pointer of the struct - values := make([]interface{}, ctLen) - // 记录需要类型转换的字段信息 - var fieldTempDriverValueMap map[*sql.ColumnType]*driverValueInfo - if iscdvm { - fieldTempDriverValueMap = 
make(map[*sql.ColumnType]*driverValueInfo) - } - var err error - var customDriverValueConver ICustomDriverValueConver - var converOK bool - - for i, columnType := range columnTypes { - if iscdvm { - databaseTypeName := strings.ToUpper(columnType.DatabaseTypeName()) - // 根据接收的类型,获取到类型转换的接口实现,优先匹配指定的数据库类型 - customDriverValueConver, converOK = customDriverValueMap[dialect+"."+databaseTypeName] - if !converOK { - customDriverValueConver, converOK = customDriverValueMap[databaseTypeName] - } - } - dv := driverValue.Index(i) - if dv.IsValid() && dv.InterfaceData()[0] == 0 { // 该字段的数据库值是null,取默认值 - values[i] = new(interface{}) - continue - } else if converOK { // 如果是需要转换的字段 - // 获取字段类型 - var structFieldType *reflect.Type - if entity != nil { // 查询一个字段,并且可以直接接收 - structFieldType = typeOf - } else { // 如果是struct类型 - field, err := getStructFieldByColumnType(columnType, dbColumnFieldMap, exportFieldMap) - if err != nil { - return err - } - if field != nil { // 存在这个字段 - vtype := field.Type - structFieldType = &vtype - } - } - tempDriverValue, err := customDriverValueConver.GetDriverValue(ctx, columnType, structFieldType) - if err != nil { - return err - } - if tempDriverValue == nil { - return errors.New("->sqlRowsValues-->customDriverValueConver.GetDriverValue返回的driver.Value不能为nil") - } - values[i] = tempDriverValue - - // 如果需要类型转换 - dvinfo := driverValueInfo{} - dvinfo.customDriverValueConver = customDriverValueConver - // dvinfo.columnType = columnType - dvinfo.structFieldType = structFieldType - dvinfo.tempDriverValue = tempDriverValue - fieldTempDriverValueMap[columnType] = &dvinfo - continue - - } else if entity != nil { // 查询一个字段,并且可以直接接收 - values[i] = entity - continue - } else { - field, err := getStructFieldByColumnType(columnType, dbColumnFieldMap, exportFieldMap) - if err != nil { - return err - } - if field == nil { // 如果不存在这个字段 - values[i] = new(interface{}) - } else { - // fieldType := refPV.FieldByName(field.Name).Type() - // v := 
reflect.New(field.Type).Interface() - // 字段的反射值 - fieldValue := valueOfElem.FieldByName(field.Name) - v := fieldValue.Addr().Interface() - // v := new(interface{}) - values[i] = v - } - } - - } - err = rows.Scan(values...) - if err != nil { - return err - } - if len(fieldTempDriverValueMap) < 1 { - return err - } - - // 循环需要替换的值 - for columnType, driverValueInfo := range fieldTempDriverValueMap { - // 根据列名,字段类型,新值 返回符合接收类型值的指针,返回值是个指针,指针,指针!!!! - // typeOf := fieldValue.Type() - rightValue, errConverDriverValue := driverValueInfo.customDriverValueConver.ConverDriverValue(ctx, columnType, driverValueInfo.tempDriverValue, driverValueInfo.structFieldType) - if errConverDriverValue != nil { - errConverDriverValue = fmt.Errorf("->sqlRowsValues-->customDriverValueConver.ConverDriverValue错误:%w", errConverDriverValue) - FuncLogError(ctx, errConverDriverValue) - return errConverDriverValue - } - if entity != nil { // 查询一个字段,并且可以直接接收 - // entity = rightValue - // valueOfElem.Set(reflect.ValueOf(rightValue).Elem()) - reflect.ValueOf(entity).Elem().Set(reflect.ValueOf(rightValue).Elem()) - continue - } else { // 如果是Struct类型接收 - field, err := getStructFieldByColumnType(columnType, dbColumnFieldMap, exportFieldMap) - if err != nil { - return err - } - if field != nil { // 如果存在这个字段 - // 字段的反射值 - fieldValue := valueOfElem.FieldByName(field.Name) - // 给字段赋值 - fieldValue.Set(reflect.ValueOf(rightValue).Elem()) - } - } - - } - - return err -} - -// getStructFieldByColumnType 根据ColumnType获取StructField对象,兼容驼峰 -func getStructFieldByColumnType(columnType *sql.ColumnType, dbColumnFieldMap *map[string]reflect.StructField, exportFieldMap *map[string]reflect.StructField) (*reflect.StructField, error) { - columnName := strings.ToLower(columnType.Name()) - // columnName := "test" - // 从缓存中获取列名的field字段 - // Get the field field of the column name from the cache - field, fok := (*dbColumnFieldMap)[columnName] - if !fok { - field, fok = (*exportFieldMap)[columnName] - if !fok { - // 尝试驼峰 - cname 
:= strings.ReplaceAll(columnName, "_", "") - field, fok = (*exportFieldMap)[cname] - - } - - } - if fok { - return &field, nil - } - return nil, nil -} diff --git a/vendor/gitee.com/chunanyong/zorm/typeConvert.go b/vendor/gitee.com/chunanyong/zorm/typeConvert.go deleted file mode 100644 index 2d0aedcb..00000000 --- a/vendor/gitee.com/chunanyong/zorm/typeConvert.go +++ /dev/null @@ -1,611 +0,0 @@ -/* - * Licensed to the Apache Software Foundation (ASF) under one or more - * contributor license agreements. See the NOTICE file distributed with - * this work for additional information regarding copyright ownership. - * The ASF licenses this file to You under the Apache License, Version 2.0 - * (the "License"); you may not use this file except in compliance with - * the License. You may obtain a copy of the License at - * - * http://www.apache.org/licenses/LICENSE-2.0 - * - * Unless required by applicable law or agreed to in writing, software - * distributed under the License is distributed on an "AS IS" BASIS, - * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. - * See the License for the specific language governing permissions and - * limitations under the License. - * - */ - -package zorm - -import ( - "context" - "errors" - "strconv" - - "gitee.com/chunanyong/zorm/decimal" -) - -// FuncDecimalValue sets the receiving value for the decimal type; override this function to supply a custom decimal implementation such as github.com/shopspring/decimal; the return value is a pointer -var FuncDecimalValue = func(ctx context.Context, dialect string) interface{} { - return &decimal.Decimal{} -} - -// OverrideFunc overrides zorm's functions, which is useful for risk monitoring: checking the callers of this function shows every place a function was overridden, avoiding project chaos. When you use this function, you must know what you are doing -// funcName is the name of the function to override, funcObject is the corresponding function.
The bool return value reports whether the override succeeded, and the interface{} is the function before the override - Overrides are usually applied in init -func OverrideFunc(funcName string, funcObject interface{}) (bool, interface{}, error) { - if funcName == "" { - return false, nil, errors.New("->OverrideFunc-->funcName must not be empty") - } - - // oldFunc is the previous function - var oldFunc interface{} = nil - switch funcName { - case "Transaction": - newFunc, ok := funcObject.(func(ctx context.Context, doTransaction func(ctx context.Context) (interface{}, error)) (interface{}, error)) - if ok { - oldFunc = transaction - transaction = newFunc - } - case "QueryRow": - newFunc, ok := funcObject.(func(ctx context.Context, finder *Finder, entity interface{}) (bool, error)) - if ok { - oldFunc = queryRow - queryRow = newFunc - } - case "Query": - newFunc, ok := funcObject.(func(ctx context.Context, finder *Finder, rowsSlicePtr interface{}, page *Page) error) - if ok { - oldFunc = query - query = newFunc - } - - case "QueryRowMap": - newFunc, ok := funcObject.(func(ctx context.Context, finder *Finder) (map[string]interface{}, error)) - if ok { - oldFunc = queryRowMap - queryRowMap = newFunc - } - case "QueryMap": - newFunc, ok := funcObject.(func(ctx context.Context, finder *Finder, page *Page) ([]map[string]interface{}, error)) - if ok { - oldFunc = queryMap - queryMap = newFunc - } - case "UpdateFinder": - newFunc, ok := funcObject.(func(ctx context.Context, finder *Finder) (int, error)) - if ok { - oldFunc = updateFinder - updateFinder = newFunc - } - case "Insert": - newFunc, ok := funcObject.(func(ctx context.Context, entity IEntityStruct) (int, error)) - if ok { - oldFunc = insert - insert = newFunc - } - case "InsertSlice": - newFunc, ok := funcObject.(func(ctx context.Context, entityStructSlice []IEntityStruct) (int, error)) - if ok { - oldFunc = insertSlice - insertSlice = newFunc - } - case "Update": - newFunc, ok := funcObject.(func(ctx context.Context, entity IEntityStruct) (int, error)) - if ok { - oldFunc = update - update = newFunc - } - case "UpdateNotZeroValue": - newFunc, ok :=
funcObject.(func(ctx context.Context, entity IEntityStruct) (int, error)) - if ok { - oldFunc = updateNotZeroValue - updateNotZeroValue = newFunc - } - case "Delete": - newFunc, ok := funcObject.(func(ctx context.Context, entity IEntityStruct) (int, error)) - if ok { - oldFunc = delete - delete = newFunc - } - - case "InsertEntityMap": - newFunc, ok := funcObject.(func(ctx context.Context, entity IEntityMap) (int, error)) - if ok { - oldFunc = insertEntityMap - insertEntityMap = newFunc - } - case "InsertEntityMapSlice": - newFunc, ok := funcObject.(func(ctx context.Context, entity []IEntityMap) (int, error)) - if ok { - oldFunc = insertEntityMapSlice - insertEntityMapSlice = newFunc - } - case "UpdateEntityMap": - newFunc, ok := funcObject.(func(ctx context.Context, entity IEntityMap) (int, error)) - if ok { - oldFunc = updateEntityMap - updateEntityMap = newFunc - } - default: - return false, oldFunc, errors.New("->OverrideFunc-->function " + funcName + " does not yet support overriding or does not exist") - } - if oldFunc == nil { - return false, oldFunc, errors.New("->OverrideFunc-->please check the " + funcName + " function implementation passed in; the type assertion failed.") - } - return true, oldFunc, nil -} - -// typeConvertInt64toInt converts int64 to int -func typeConvertInt64toInt(from int64) (int, error) { - strInt64 := strconv.FormatInt(from, 10) - return strconv.Atoi(strInt64) -} - -/* -func typeConvertFloat32(i interface{}) (float32, error) { - if i == nil { - return 0, nil - } - if v, ok := i.(float32); ok { - return v, nil - } - v, err := typeConvertString(i) - if err != nil { - return 0, err - } - vf, err := strconv.ParseFloat(v, 32) - return float32(vf), err -} - -func typeConvertFloat64(i interface{}) (float64, error) { - if i == nil { - return 0, nil - } - if v, ok := i.(float64); ok { - return v, nil - } - v, err := typeConvertString(i) - if err != nil { - return 0, err - } - return strconv.ParseFloat(v, 64) -} - -func typeConvertDecimal(i interface{}) (decimal.Decimal, error) { - if i == nil { - return decimal.Zero, nil - } - if v, ok :=
i.(decimal.Decimal); ok { - return v, nil - } - v, err := typeConvertString(i) - if err != nil { - return decimal.Zero, err - } - return decimal.NewFromString(v) -} - -func typeConvertInt64(i interface{}) (int64, error) { - if i == nil { - return 0, nil - } - if v, ok := i.(int64); ok { - return v, nil - } - v, err := typeConvertInt(i) - if err != nil { - return 0, err - } - return int64(v), err -} - -func typeConvertString(i interface{}) (string, error) { - if i == nil { - return "", nil - } - switch value := i.(type) { - case int: - return strconv.Itoa(value), nil - case int8: - return strconv.Itoa(int(value)), nil - case int16: - return strconv.Itoa(int(value)), nil - case int32: - return strconv.Itoa(int(value)), nil - case int64: - return strconv.Itoa(int(value)), nil - case uint: - return strconv.FormatUint(uint64(value), 10), nil - case uint8: - return strconv.FormatUint(uint64(value), 10), nil - case uint16: - return strconv.FormatUint(uint64(value), 10), nil - case uint32: - return strconv.FormatUint(uint64(value), 10), nil - case uint64: - return strconv.FormatUint(uint64(value), 10), nil - case float32: - return strconv.FormatFloat(float64(value), 'f', -1, 32), nil - case float64: - return strconv.FormatFloat(value, 'f', -1, 64), nil - case bool: - return strconv.FormatBool(value), nil - case string: - return value, nil - case []byte: - return string(value), nil - default: - return fmt.Sprintf("%v", value), nil - } -} - -//false: "", 0, false, off -func typeConvertBool(i interface{}) (bool, error) { - if i == nil { - return false, nil - } - if v, ok := i.(bool); ok { - return v, nil - } - s, err := typeConvertString(i) - if err != nil { - return false, err - } - if s != "" && s != "0" && s != "false" && s != "off" { - return true, err - } - return false, err -} - -func typeConvertInt(i interface{}) (int, error) { - if i == nil { - return 0, nil - } - switch value := i.(type) { - case int: - return value, nil - case int8: - return int(value), nil - case 
int16: - return int(value), nil - case int32: - return int(value), nil - case int64: - return int(value), nil - case uint: - return int(value), nil - case uint8: - return int(value), nil - case uint16: - return int(value), nil - case uint32: - return int(value), nil - case uint64: - return int(value), nil - case float32: - return int(value), nil - case float64: - return int(value), nil - case bool: - if value { - return 1, nil - } - return 0, nil - default: - v, err := typeConvertString(value) - if err != nil { - return 0, err - } - return strconv.Atoi(v) - } -} - - - -func typeConvertTime(i interface{}, format string, TZLocation ...*time.Location) (time.Time, error) { - s, err := typeConvertString(i) - if err != nil { - return time.Time{}, err - } - return typeConvertStrToTime(s, format, TZLocation...) -} - -func typeConvertStrToTime(str string, format string, TZLocation ...*time.Location) (time.Time, error) { - if len(TZLocation) > 0 { - return time.ParseInLocation(format, str, TZLocation[0]) - } - return time.ParseInLocation(format, str, time.Local) -} - -func encodeString(s string) []byte { - return []byte(s) -} - -func decodeToString(b []byte) string { - return string(b) -} - -func encodeBool(b bool) []byte { - if b { - return []byte{1} - } - return []byte{0} - -} - -func encodeInt(i int) []byte { - if i <= math.MaxInt8 { - return encodeInt8(int8(i)) - } else if i <= math.MaxInt16 { - return encodeInt16(int16(i)) - } else if i <= math.MaxInt32 { - return encodeInt32(int32(i)) - } else { - return encodeInt64(int64(i)) - } -} - -func encodeUint(i uint) []byte { - if i <= math.MaxUint8 { - return encodeUint8(uint8(i)) - } else if i <= math.MaxUint16 { - return encodeUint16(uint16(i)) - } else if i <= math.MaxUint32 { - return encodeUint32(uint32(i)) - } else { - return encodeUint64(uint64(i)) - } -} - -func encodeInt8(i int8) []byte { - return []byte{byte(i)} -} - -func encodeUint8(i uint8) []byte { - return []byte{byte(i)} -} - -func encodeInt16(i int16) []byte 
{ - bytes := make([]byte, 2) - binary.LittleEndian.PutUint16(bytes, uint16(i)) - return bytes -} - -func encodeUint16(i uint16) []byte { - bytes := make([]byte, 2) - binary.LittleEndian.PutUint16(bytes, i) - return bytes -} - -func encodeInt32(i int32) []byte { - bytes := make([]byte, 4) - binary.LittleEndian.PutUint32(bytes, uint32(i)) - return bytes -} - -func encodeUint32(i uint32) []byte { - bytes := make([]byte, 4) - binary.LittleEndian.PutUint32(bytes, i) - return bytes -} - -func encodeInt64(i int64) []byte { - bytes := make([]byte, 8) - binary.LittleEndian.PutUint64(bytes, uint64(i)) - return bytes -} - -func encodeUint64(i uint64) []byte { - bytes := make([]byte, 8) - binary.LittleEndian.PutUint64(bytes, i) - return bytes -} - -func encodeFloat32(f float32) []byte { - bits := math.Float32bits(f) - bytes := make([]byte, 4) - binary.LittleEndian.PutUint32(bytes, bits) - return bytes -} - -func encodeFloat64(f float64) []byte { - bits := math.Float64bits(f) - bytes := make([]byte, 8) - binary.LittleEndian.PutUint64(bytes, bits) - return bytes -} - -func encode(vs ...interface{}) []byte { - buf := new(bytes.Buffer) - for i := 0; i < len(vs); i++ { - switch value := vs[i].(type) { - case int: - buf.Write(encodeInt(value)) - case int8: - buf.Write(encodeInt8(value)) - case int16: - buf.Write(encodeInt16(value)) - case int32: - buf.Write(encodeInt32(value)) - case int64: - buf.Write(encodeInt64(value)) - case uint: - buf.Write(encodeUint(value)) - case uint8: - buf.Write(encodeUint8(value)) - case uint16: - buf.Write(encodeUint16(value)) - case uint32: - buf.Write(encodeUint32(value)) - case uint64: - buf.Write(encodeUint64(value)) - case bool: - buf.Write(encodeBool(value)) - case string: - buf.Write(encodeString(value)) - case []byte: - buf.Write(value) - case float32: - buf.Write(encodeFloat32(value)) - case float64: - buf.Write(encodeFloat64(value)) - default: - if err := binary.Write(buf, binary.LittleEndian, value); err != nil { - 
buf.Write(encodeString(fmt.Sprintf("%v", value))) - } - } - } - return buf.Bytes() -} - -func isNumeric(s string) bool { - for i := 0; i < len(s); i++ { - if s[i] < byte('0') || s[i] > byte('9') { - return false - } - } - return true -} -func typeConvertTimeDuration(i interface{}) time.Duration { - return time.Duration(typeConvertInt64(i)) -} - -func typeConvertBytes(i interface{}) []byte { - if i == nil { - return nil - } - if r, ok := i.([]byte); ok { - return r - } - return encode(i) - -} - -func typeConvertStrings(i interface{}) []string { - if i == nil { - return nil - } - if r, ok := i.([]string); ok { - return r - } else if r, ok := i.([]interface{}); ok { - strs := make([]string, len(r)) - for k, v := range r { - strs[k] = typeConvertString(v) - } - return strs - } - return []string{fmt.Sprintf("%v", i)} -} - -func typeConvertInt8(i interface{}) int8 { - if i == nil { - return 0 - } - if v, ok := i.(int8); ok { - return v - } - return int8(typeConvertInt(i)) -} - -func typeConvertInt16(i interface{}) int16 { - if i == nil { - return 0 - } - if v, ok := i.(int16); ok { - return v - } - return int16(typeConvertInt(i)) -} - -func typeConvertInt32(i interface{}) int32 { - if i == nil { - return 0 - } - if v, ok := i.(int32); ok { - return v - } - return int32(typeConvertInt(i)) -} - -func typeConvertUint(i interface{}) uint { - if i == nil { - return 0 - } - switch value := i.(type) { - case int: - return uint(value) - case int8: - return uint(value) - case int16: - return uint(value) - case int32: - return uint(value) - case int64: - return uint(value) - case uint: - return value - case uint8: - return uint(value) - case uint16: - return uint(value) - case uint32: - return uint(value) - case uint64: - return uint(value) - case float32: - return uint(value) - case float64: - return uint(value) - case bool: - if value { - return 1 - } - return 0 - default: - v, _ := strconv.ParseUint(typeConvertString(value), 10, 64) - return uint(v) - } -} - -func 
typeConvertUint8(i interface{}) uint8 { - if i == nil { - return 0 - } - if v, ok := i.(uint8); ok { - return v - } - return uint8(typeConvertUint(i)) -} - -func typeConvertUint16(i interface{}) uint16 { - if i == nil { - return 0 - } - if v, ok := i.(uint16); ok { - return v - } - return uint16(typeConvertUint(i)) -} - -func typeConvertUint32(i interface{}) uint32 { - if i == nil { - return 0 - } - if v, ok := i.(uint32); ok { - return v - } - return uint32(typeConvertUint(i)) -} - -func typeConvertUint64(i interface{}) uint64 { - if i == nil { - return 0 - } - if v, ok := i.(uint64); ok { - return v - } - return uint64(typeConvertUint(i)) -} -*/ diff --git a/vendor/gitee.com/chunanyong/zorm/zorm-logo.png b/vendor/gitee.com/chunanyong/zorm/zorm-logo.png deleted file mode 100644 index 00eb5278..00000000 Binary files a/vendor/gitee.com/chunanyong/zorm/zorm-logo.png and /dev/null differ diff --git a/vendor/github.com/bmizerany/pq/.gitignore b/vendor/github.com/bmizerany/pq/.gitignore deleted file mode 100644 index 0f1d00e1..00000000 --- a/vendor/github.com/bmizerany/pq/.gitignore +++ /dev/null @@ -1,4 +0,0 @@ -.db -*.test -*~ -*.swp diff --git a/vendor/github.com/bmizerany/pq/LICENSE.md b/vendor/github.com/bmizerany/pq/LICENSE.md deleted file mode 100644 index 258bdff0..00000000 --- a/vendor/github.com/bmizerany/pq/LICENSE.md +++ /dev/null @@ -1,7 +0,0 @@ -Copyright (C) 2011 Blake Mizerany - -Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the "Software"), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the following conditions: - -The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software. 
- -THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE. diff --git a/vendor/github.com/bmizerany/pq/README.md b/vendor/github.com/bmizerany/pq/README.md deleted file mode 100644 index dd7a6f59..00000000 --- a/vendor/github.com/bmizerany/pq/README.md +++ /dev/null @@ -1,99 +0,0 @@ -# pq - A pure Go postgres driver for Go's database/sql package - -**This package is now deprecated. The up to date version is at -[github.com/lib/pq](https://github.com/lib/pq).** - -## Install - - go get github.com/bmizerany/pq - -## Docs - - - -## Use - - package main - - import ( - _ "github.com/bmizerany/pq" - "database/sql" - ) - - func main() { - db, err := sql.Open("postgres", "user=pqgotest dbname=pqgotest sslmode=verify-full") - // ... - } - -**Connection String Parameters** - -These are a subset of the libpq connection parameters. In addition, a -number of the [environment -variables](http://www.postgresql.org/docs/9.1/static/libpq-envars.html) -supported by libpq are also supported. Just like libpq, these have -lower precedence than explicitly provided connection parameters. - -See http://www.postgresql.org/docs/9.1/static/libpq-connect.html. - -* `dbname` - The name of the database to connect to -* `user` - The user to sign in as -* `password` - The user's password -* `host` - The host to connect to. Values that start with `/` are for unix domain sockets. (default is `localhost`) -* `port` - The port to bind to. 
(default is `5432`) -* `sslmode` - Whether or not to use SSL (default is `require`, this is not the default for libpq) - Valid values are: - * `disable` - No SSL - * `require` - Always SSL (skip verification) - * `verify-full` - Always SSL (require verification) - -See http://golang.org/pkg/database/sql to learn how to use with `pq` through the `database/sql` package. - -## Tests - -`go test` is used for testing. A running PostgreSQL server is -required, with the ability to log in. The default database to connect -to test with is "pqgotest," but it can be overridden using environment -variables. - -Example: - - PGHOST=/var/run/postgresql go test pq - -## Features - -* SSL -* Handles bad connections for `database/sql` -* Scan `time.Time` correctly (i.e. `timestamp[tz]`, `time[tz]`, `date`) -* Scan binary blobs correctly (i.e. `bytea`) -* pq.ParseURL for converting urls to connection strings for sql.Open. -* Many libpq compatible environment variables -* Unix socket support - -## Future / Things you can help with - -* Notifications: `LISTEN`/`NOTIFY` -* `hstore` sugar (i.e. handling hstore in `rows.Scan`) - -## Thank you (alphabetical) - -Some of these contributors are from the original library `bmizerany/pq.go` whose -code still exists in here. 
- -* Andy Balholm (andybalholm) -* Ben Berkert (benburkert) -* Bjørn Madsen (aeons) -* Blake Gentry (bgentry) -* Brad Fitzpatrick (bradfitz) -* Daniel Farina (fdr) -* Everyone at The Go Team -* Federico Romero (federomero) -* Heroku (heroku) -* John Gallagher (jgallagher) -* Kamil Kisiel (kisielk) -* Keith Rarick (kr) -* Marc Brinkmann (mbr) -* Martin Olsen (martinolsen) -* Mike Lewis (mikelikespie) -* Ryan Smith (ryandotsmith) -* Samuel Stauffer (samuel) -* notedit (notedit) diff --git a/vendor/github.com/bmizerany/pq/buf.go b/vendor/github.com/bmizerany/pq/buf.go deleted file mode 100644 index cb4f5493..00000000 --- a/vendor/github.com/bmizerany/pq/buf.go +++ /dev/null @@ -1,80 +0,0 @@ -package pq - -import ( - "bytes" - "encoding/binary" -) - -type readBuf []byte - -func (b *readBuf) int32() (n int) { - n = int(int32(binary.BigEndian.Uint32(*b))) - *b = (*b)[4:] - return -} - -func (b *readBuf) oid() (n oid) { - n = oid(binary.BigEndian.Uint32(*b)) - *b = (*b)[4:] - return -} - -func (b *readBuf) int16() (n int) { - n = int(binary.BigEndian.Uint16(*b)) - *b = (*b)[2:] - return -} - -var stringTerm = []byte{0} - -func (b *readBuf) string() string { - i := bytes.Index(*b, stringTerm) - if i < 0 { - errorf("invalid message format; expected string terminator") - } - s := (*b)[:i] - *b = (*b)[i+1:] - return string(s) -} - -func (b *readBuf) next(n int) (v []byte) { - v = (*b)[:n] - *b = (*b)[n:] - return -} - -func (b *readBuf) byte() byte { - return b.next(1)[0] -} - -type writeBuf []byte - -func newWriteBuf(c byte) *writeBuf { - b := make(writeBuf, 5) - b[0] = c - return &b -} - -func (b *writeBuf) int32(n int) { - x := make([]byte, 4) - binary.BigEndian.PutUint32(x, uint32(n)) - *b = append(*b, x...) -} - -func (b *writeBuf) int16(n int) { - x := make([]byte, 2) - binary.BigEndian.PutUint16(x, uint16(n)) - *b = append(*b, x...) -} - -func (b *writeBuf) string(s string) { - *b = append(*b, (s + "\000")...) 
-} - -func (b *writeBuf) byte(c byte) { - *b = append(*b, c) -} - -func (b *writeBuf) bytes(v []byte) { - *b = append(*b, v...) -} diff --git a/vendor/github.com/bmizerany/pq/conn.go b/vendor/github.com/bmizerany/pq/conn.go deleted file mode 100644 index 79ce2c1b..00000000 --- a/vendor/github.com/bmizerany/pq/conn.go +++ /dev/null @@ -1,678 +0,0 @@ -package pq - -import ( - "bufio" - "crypto/md5" - "crypto/tls" - "database/sql" - "database/sql/driver" - "encoding/binary" - "errors" - "fmt" - "io" - "net" - "os" - "os/user" - "path" - "strconv" - "strings" -) - -var ( - ErrSSLNotSupported = errors.New("pq: SSL is not enabled on the server") - ErrNotSupported = errors.New("pq: invalid command") -) - -type drv struct{} - -func (d *drv) Open(name string) (driver.Conn, error) { - return Open(name) -} - -func init() { - sql.Register("postgres", &drv{}) -} - -type conn struct { - c net.Conn - buf *bufio.Reader - namei int -} - -func Open(name string) (_ driver.Conn, err error) { - defer errRecover(&err) - defer errRecoverWithPGReason(&err) - - o := make(Values) - - // A number of defaults are applied here, in this order: - // - // * Very low precedence defaults applied in every situation - // * Environment variables - // * Explicitly passed connection information - o.Set("host", "localhost") - o.Set("port", "5432") - - // Default the username, but ignore errors, because a user - // passed in via environment variable or connection string - // would be okay. This can result in connections failing - // *sometimes* if the client relies on being able to determine - // the current username and there are intermittent problems. 
- u, err := user.Current() - if err == nil { - o.Set("user", u.Username) - } - - for k, v := range parseEnviron(os.Environ()) { - o.Set(k, v) - } - - parseOpts(name, o) - - c, err := net.Dial(network(o)) - if err != nil { - return nil, err - } - - cn := &conn{c: c} - cn.ssl(o) - cn.buf = bufio.NewReader(cn.c) - cn.startup(o) - return cn, nil -} - -func network(o Values) (string, string) { - host := o.Get("host") - - if strings.HasPrefix(host, "/") { - sockPath := path.Join(host, ".s.PGSQL."+o.Get("port")) - return "unix", sockPath - } - - return "tcp", host + ":" + o.Get("port") -} - -type Values map[string]string - -func (vs Values) Set(k, v string) { - vs[k] = v -} - -func (vs Values) Get(k string) (v string) { - v, _ = vs[k] - return -} - -func parseOpts(name string, o Values) { - if len(name) == 0 { - return - } - - ps := strings.Split(name, " ") - for _, p := range ps { - kv := strings.Split(p, "=") - if len(kv) < 2 { - errorf("invalid option: %q", p) - } - o.Set(kv[0], kv[1]) - } -} - -func (cn *conn) Begin() (driver.Tx, error) { - _, err := cn.Exec("BEGIN", nil) - if err != nil { - return nil, err - } - return cn, err -} - -func (cn *conn) Commit() error { - _, err := cn.Exec("COMMIT", nil) - return err -} - -func (cn *conn) Rollback() error { - _, err := cn.Exec("ROLLBACK", nil) - return err -} - -func (cn *conn) gname() string { - cn.namei++ - return strconv.FormatInt(int64(cn.namei), 10) -} - -func (cn *conn) simpleQuery(q string) (res driver.Result, err error) { - defer errRecover(&err) - - b := newWriteBuf('Q') - b.string(q) - cn.send(b) - - for { - t, r := cn.recv1() - switch t { - case 'C': - res = parseComplete(r.string()) - case 'Z': - // done - return - case 'E': - err = parseError(r) - case 'T', 'N', 'S': - // ignore - default: - errorf("unknown response for simple query: %q", t) - } - } - panic("not reached") -} - -func (cn *conn) prepareTo(q, stmtName string) (_ driver.Stmt, err error) { - defer errRecover(&err) - - st := &stmt{cn: cn, name: 
stmtName, query: q} - - b := newWriteBuf('P') - b.string(st.name) - b.string(q) - b.int16(0) - cn.send(b) - - b = newWriteBuf('D') - b.byte('S') - b.string(st.name) - cn.send(b) - - cn.send(newWriteBuf('S')) - - for { - t, r := cn.recv1() - switch t { - case '1', '2', 'N': - case 't': - st.nparams = int(r.int16()) - st.paramTyps = make([]oid, st.nparams, st.nparams) - - for i := 0; i < st.nparams; i += 1 { - st.paramTyps[i] = r.oid() - } - case 'T': - n := r.int16() - st.cols = make([]string, n) - st.rowTyps = make([]oid, n) - for i := range st.cols { - st.cols[i] = r.string() - r.next(6) - st.rowTyps[i] = r.oid() - r.next(8) - } - case 'n': - // no data - case 'Z': - return st, err - case 'E': - err = parseError(r) - default: - errorf("unexpected describe rows response: %q", t) - } - } - - panic("not reached") -} - -func (cn *conn) Prepare(q string) (driver.Stmt, error) { - return cn.prepareTo(q, cn.gname()) -} - -func (cn *conn) Close() (err error) { - defer errRecover(&err) - cn.send(newWriteBuf('X')) - - return cn.c.Close() -} - -// Implement the optional "Execer" interface for one-shot queries -func (cn *conn) Exec(query string, args []driver.Value) (_ driver.Result, err error) { - defer errRecover(&err) - - // Check to see if we can use the "simpleQuery" interface, which is - // *much* faster than going through prepare/exec - if len(args) == 0 { - return cn.simpleQuery(query) - } - - // Use the unnamed statement to defer planning until bind - // time, or else value-based selectivity estimates cannot be - // used. 
- st, err := cn.prepareTo(query, "") - if err != nil { - panic(err) - } - - r, err := st.Exec(args) - if err != nil { - panic(err) - } - - return r, err -} - -// Assumes len(*m) is > 5 -func (cn *conn) send(m *writeBuf) { - b := (*m)[1:] - binary.BigEndian.PutUint32(b, uint32(len(b))) - - if (*m)[0] == 0 { - *m = b - } - - _, err := cn.c.Write(*m) - if err != nil { - panic(err) - } -} - -func (cn *conn) recv() (t byte, r *readBuf) { - for { - t, r = cn.recv1() - switch t { - case 'E': - panic(parseError(r)) - case 'N': - // ignore - default: - return - } - } - - panic("not reached") -} - -func (cn *conn) recv1() (byte, *readBuf) { - x := make([]byte, 5) - _, err := io.ReadFull(cn.buf, x) - if err != nil { - panic(err) - } - - b := readBuf(x[1:]) - y := make([]byte, b.int32()-4) - _, err = io.ReadFull(cn.buf, y) - if err != nil { - panic(err) - } - - return x[0], (*readBuf)(&y) -} - -func (cn *conn) ssl(o Values) { - tlsConf := tls.Config{} - switch mode := o.Get("sslmode"); mode { - case "require", "": - tlsConf.InsecureSkipVerify = true - case "verify-full": - // fall out - case "disable": - return - default: - errorf(`unsupported sslmode %q; only "require" (default), "verify-full", and "disable" supported`, mode) - } - - w := newWriteBuf(0) - w.int32(80877103) - cn.send(w) - - b := make([]byte, 1) - _, err := io.ReadFull(cn.c, b) - if err != nil { - panic(err) - } - - if b[0] != 'S' { - panic(ErrSSLNotSupported) - } - - cn.c = tls.Client(cn.c, &tlsConf) -} - -func (cn *conn) startup(o Values) { - w := newWriteBuf(0) - w.int32(196608) - w.string("user") - w.string(o.Get("user")) - w.string("database") - w.string(o.Get("dbname")) - w.string("") - cn.send(w) - - for { - t, r := cn.recv() - switch t { - case 'K', 'S': - case 'R': - cn.auth(r, o) - case 'Z': - return - default: - errorf("unknown response for startup: %q", t) - } - } -} - -func (cn *conn) auth(r *readBuf, o Values) { - switch code := r.int32(); code { - case 0: - // OK - case 3: - w := newWriteBuf('p') 
- w.string(o.Get("password")) - cn.send(w) - - t, r := cn.recv() - if t != 'R' { - errorf("unexpected password response: %q", t) - } - - if r.int32() != 0 { - errorf("unexpected authentication response: %q", t) - } - case 5: - s := string(r.next(4)) - w := newWriteBuf('p') - w.string("md5" + md5s(md5s(o.Get("password")+o.Get("user"))+s)) - cn.send(w) - - t, r := cn.recv() - if t != 'R' { - errorf("unexpected password response: %q", t) - } - - if r.int32() != 0 { - errorf("unexpected authentication response: %q", t) - } - default: - errorf("unknown authentication response: %d", code) - } -} - -type stmt struct { - cn *conn - name string - query string - cols []string - nparams int - rowTyps []oid - paramTyps []oid - closed bool -} - -func (st *stmt) Close() (err error) { - if st.closed { - return nil - } - - defer errRecover(&err) - - w := newWriteBuf('C') - w.byte('S') - w.string(st.name) - st.cn.send(w) - - st.cn.send(newWriteBuf('S')) - - t, _ := st.cn.recv() - if t != '3' { - errorf("unexpected close response: %q", t) - } - st.closed = true - - t, _ = st.cn.recv() - if t != 'Z' { - errorf("expected ready for query, but got: %q", t) - } - - return nil -} - -func (st *stmt) Query(v []driver.Value) (_ driver.Rows, err error) { - defer errRecover(&err) - st.exec(v) - return &rows{st: st}, nil -} - -func (st *stmt) Exec(v []driver.Value) (res driver.Result, err error) { - defer errRecover(&err) - - if len(v) == 0 { - return st.cn.simpleQuery(st.query) - } - st.exec(v) - - for { - t, r := st.cn.recv1() - switch t { - case 'E': - err = parseError(r) - case 'C': - res = parseComplete(r.string()) - case 'Z': - // done - return - case 'D': - errorf("unexpected data row returned in Exec; check your query") - case 'S', 'N': - // Ignore - default: - errorf("unknown exec response: %q", t) - } - } - - panic("not reached") -} - -func (st *stmt) exec(v []driver.Value) { - w := newWriteBuf('B') - w.string("") - w.string(st.name) - w.int16(0) - w.int16(len(v)) - for i, x := range
v { - if x == nil { - w.int32(-1) - } else { - b := encode(x, st.paramTyps[i]) - w.int32(len(b)) - w.bytes(b) - } - } - w.int16(0) - st.cn.send(w) - - w = newWriteBuf('E') - w.string("") - w.int32(0) - st.cn.send(w) - - st.cn.send(newWriteBuf('S')) - - var err error - for { - t, r := st.cn.recv1() - switch t { - case 'E': - err = parseError(r) - case '2': - if err != nil { - panic(err) - } - return - case 'Z': - if err != nil { - panic(err) - } - return - case 'N': - // ignore - default: - errorf("unexpected bind response: %q", t) - } - } -} - -func (st *stmt) NumInput() int { - return st.nparams -} - -type result int64 - -func (i result) RowsAffected() (int64, error) { - return int64(i), nil -} - -func (i result) LastInsertId() (int64, error) { - return 0, ErrNotSupported -} - -func parseComplete(s string) driver.Result { - parts := strings.Split(s, " ") - n, _ := strconv.ParseInt(parts[len(parts)-1], 10, 64) - return result(n) -} - -type rows struct { - st *stmt - done bool -} - -func (rs *rows) Close() error { - for { - err := rs.Next(nil) - switch err { - case nil: - case io.EOF: - return nil - default: - return err - } - } - panic("not reached") -} - -func (rs *rows) Columns() []string { - return rs.st.cols -} - -func (rs *rows) Next(dest []driver.Value) (err error) { - if rs.done { - return io.EOF - } - - defer errRecover(&err) - - for { - t, r := rs.st.cn.recv1() - switch t { - case 'E': - err = parseError(r) - case 'C', 'S', 'N': - continue - case 'Z': - rs.done = true - if err != nil { - return err - } - return io.EOF - case 'D': - n := r.int16() - for i := 0; i < len(dest) && i < n; i++ { - l := r.int32() - if l == -1 { - dest[i] = nil - continue - } - dest[i] = decode(r.next(l), rs.st.rowTyps[i]) - } - return - default: - errorf("unexpected message after execute: %q", t) - } - } - - panic("not reached") -} - -func md5s(s string) string { - h := md5.New() - h.Write([]byte(s)) - return fmt.Sprintf("%x", h.Sum(nil)) -} - -// parseEnviron tries to mimic some 
of libpq's environment handling -// -// To ease testing, it does not directly reference os.Environ, but is -// designed to accept its output. -// -// Environment-set connection information is intended to have a higher -// precedence than a library default but lower than any explicitly -// passed information (such as in the URL or connection string). -func parseEnviron(env []string) (out map[string]string) { - out = make(map[string]string) - - for _, v := range env { - parts := strings.SplitN(v, "=", 2) - - accrue := func(keyname string) { - out[keyname] = parts[1] - } - - // The order of these is the same as is seen in the - // PostgreSQL 9.1 manual, with omissions briefly - // noted. - switch parts[0] { - case "PGHOST": - accrue("host") - case "PGHOSTADDR": - accrue("hostaddr") - case "PGPORT": - accrue("port") - case "PGDATABASE": - accrue("dbname") - case "PGUSER": - accrue("user") - case "PGPASSWORD": - accrue("password") - // skip PGPASSFILE, PGSERVICE, PGSERVICEFILE, - // PGREALM - case "PGOPTIONS": - accrue("options") - case "PGAPPNAME": - accrue("application_name") - case "PGSSLMODE": - accrue("sslmode") - case "PGREQUIRESSL": - accrue("requiressl") - case "PGSSLCERT": - accrue("sslcert") - case "PGSSLKEY": - accrue("sslkey") - case "PGSSLROOTCERT": - accrue("sslrootcert") - case "PGSSLCRL": - accrue("sslcrl") - case "PGREQUIREPEER": - accrue("requirepeer") - case "PGKRBSRVNAME": - accrue("krbsrvname") - case "PGGSSLIB": - accrue("gsslib") - case "PGCONNECT_TIMEOUT": - accrue("connect_timeout") - case "PGCLIENTENCODING": - accrue("client_encoding") - // skip PGDATESTYLE, PGTZ, PGGEQO, PGSYSCONFDIR, - // PGLOCALEDIR - } - } - - return out -} diff --git a/vendor/github.com/bmizerany/pq/encode.go b/vendor/github.com/bmizerany/pq/encode.go deleted file mode 100644 index 819e9457..00000000 --- a/vendor/github.com/bmizerany/pq/encode.go +++ /dev/null @@ -1,121 +0,0 @@ -package pq - -import ( - "database/sql/driver" - "encoding/hex" - "fmt" - "strconv" - "time" -) 
- -func encode(x interface{}, pgtypoid oid) []byte { - switch v := x.(type) { - case int64: - return []byte(fmt.Sprintf("%d", v)) - case float32, float64: - return []byte(fmt.Sprintf("%f", v)) - case []byte: - if pgtypoid == t_bytea { - return []byte(fmt.Sprintf("\\x%x", v)) - } - - return v - case string: - if pgtypoid == t_bytea { - return []byte(fmt.Sprintf("\\x%x", v)) - } - - return []byte(v) - case bool: - return []byte(fmt.Sprintf("%t", v)) - case time.Time: - return []byte(v.Format(time.RFC3339Nano)) - default: - errorf("encode: unknown type for %T", v) - } - - panic("not reached") -} - -func decode(s []byte, typ oid) interface{} { - switch typ { - case t_bytea: - s = s[2:] // trim off "\\x" - d := make([]byte, hex.DecodedLen(len(s))) - _, err := hex.Decode(d, s) - if err != nil { - errorf("%s", err) - } - return d - case t_timestamptz: - return mustParse("2006-01-02 15:04:05-07", typ, s) - case t_timestamp: - return mustParse("2006-01-02 15:04:05", typ, s) - case t_time: - return mustParse("15:04:05", typ, s) - case t_timetz: - return mustParse("15:04:05-07", typ, s) - case t_date: - return mustParse("2006-01-02", typ, s) - case t_bool: - return s[0] == 't' - case t_int8, t_int2, t_int4: - i, err := strconv.ParseInt(string(s), 10, 64) - if err != nil { - errorf("%s", err) - } - return i - case t_float4, t_float8: - bits := 64 - if typ == t_float4 { - bits = 32 - } - f, err := strconv.ParseFloat(string(s), bits) - if err != nil { - errorf("%s", err) - } - return f - } - - return s -} - -func mustParse(f string, typ oid, s []byte) time.Time { - str := string(s) - - // Special case until time.Parse bug is fixed: - // http://code.google.com/p/go/issues/detail?id=3487 - if str[len(str)-2] == '.' 
{ - str += "0" - } - - // check for a 30-minute-offset timezone - if (typ == t_timestamptz || typ == t_timetz) && - str[len(str)-3] == ':' { - f += ":00" - } - t, err := time.Parse(f, str) - if err != nil { - errorf("decode: %s", err) - } - return t -} - -type NullTime struct { - Time time.Time - Valid bool // Valid is true if Time is not NULL -} - -// Scan implements the Scanner interface. -func (nt *NullTime) Scan(value interface{}) error { - nt.Time, nt.Valid = value.(time.Time) - return nil -} - -// Value implements the driver Valuer interface. -func (nt NullTime) Value() (driver.Value, error) { - if !nt.Valid { - return nil, nil - } - return nt.Time, nil -} diff --git a/vendor/github.com/bmizerany/pq/error.go b/vendor/github.com/bmizerany/pq/error.go deleted file mode 100644 index 9384ab3e..00000000 --- a/vendor/github.com/bmizerany/pq/error.go +++ /dev/null @@ -1,108 +0,0 @@ -package pq - -import ( - "database/sql/driver" - "fmt" - "io" - "net" - "runtime" -) - -const ( - Efatal = "FATAL" - Epanic = "PANIC" - Ewarning = "WARNING" - Enotice = "NOTICE" - Edebug = "DEBUG" - Einfo = "INFO" - Elog = "LOG" -) - -type Error error - -type PGError interface { - Error() string - Fatal() bool - Get(k byte) (v string) -} -type pgError struct { - c map[byte]string -} - -func parseError(r *readBuf) *pgError { - err := &pgError{make(map[byte]string)} - for t := r.byte(); t != 0; t = r.byte() { - err.c[t] = r.string() - } - return err -} - -func (err *pgError) Get(k byte) (v string) { - v, _ = err.c[k] - return -} - -func (err *pgError) Fatal() bool { - return err.Get('S') == Efatal -} - -func (err *pgError) Error() string { - var s string - for k, v := range err.c { - s += fmt.Sprintf(" %c:%q", k, v) - } - return "pq: " + s[1:] -} - -func errorf(s string, args ...interface{}) { - panic(Error(fmt.Errorf("pq: %s", fmt.Sprintf(s, args...)))) -} - -type SimplePGError struct { - pgError -} - -func (err *SimplePGError) Error() string { - return "pq: " + err.Get('M') -} - -func 
errRecoverWithPGReason(err *error) { - e := recover() - switch v := e.(type) { - case nil: - // Do nothing - case *pgError: - // Return a SimplePGError in place - *err = &SimplePGError{*v} - default: - // Otherwise re-panic - panic(e) - } -} - -func errRecover(err *error) { - e := recover() - switch v := e.(type) { - case nil: - // Do nothing - case runtime.Error: - panic(v) - case *pgError: - if v.Fatal() { - *err = driver.ErrBadConn - } else { - *err = v - } - case *net.OpError: - *err = driver.ErrBadConn - case error: - if v == io.EOF || v.(error).Error() == "remote error: handshake failure" { - *err = driver.ErrBadConn - } else { - *err = v - } - - default: - panic(fmt.Sprintf("unknown error: %#v", e)) - } -} diff --git a/vendor/github.com/bmizerany/pq/types.go b/vendor/github.com/bmizerany/pq/types.go deleted file mode 100644 index 7d069644..00000000 --- a/vendor/github.com/bmizerany/pq/types.go +++ /dev/null @@ -1,319 +0,0 @@ -package pq - -type oid uint32 - -const ( - t_bool oid = 16 - t_bytea = 17 - t_char = 18 - t_name = 19 - t_int8 = 20 - t_int2 = 21 - t_int2vector = 22 - t_int4 = 23 - t_regproc = 24 - t_text = 25 - t_oid = 26 - t_tid = 27 - t_xid = 28 - t_cid = 29 - t_oidvector = 30 - t_pg_type = 71 - t_pg_attribute = 75 - t_pg_proc = 81 - t_pg_class = 83 - t_xml = 142 - t__xml = 143 - t_pg_node_tree = 194 - t_smgr = 210 - t_point = 600 - t_lseg = 601 - t_path = 602 - t_box = 603 - t_polygon = 604 - t_line = 628 - t__line = 629 - t_float4 = 700 - t_float8 = 701 - t_abstime = 702 - t_reltime = 703 - t_tinterval = 704 - t_unknown = 705 - t_circle = 718 - t__circle = 719 - t_money = 790 - t__money = 791 - t_macaddr = 829 - t_inet = 869 - t_cidr = 650 - t__bool = 1000 - t__bytea = 1001 - t__char = 1002 - t__name = 1003 - t__int2 = 1005 - t__int2vector = 1006 - t__int4 = 1007 - t__regproc = 1008 - t__text = 1009 - t__oid = 1028 - t__tid = 1010 - t__xid = 1011 - t__cid = 1012 - t__oidvector = 1013 - t__bpchar = 1014 - t__varchar = 1015 - t__int8 = 1016 - 
t__point = 1017 - t__lseg = 1018 - t__path = 1019 - t__box = 1020 - t__float4 = 1021 - t__float8 = 1022 - t__abstime = 1023 - t__reltime = 1024 - t__tinterval = 1025 - t__polygon = 1027 - t_aclitem = 1033 - t__aclitem = 1034 - t__macaddr = 1040 - t__inet = 1041 - t__cidr = 651 - t__cstring = 1263 - t_bpchar = 1042 - t_varchar = 1043 - t_date = 1082 - t_time = 1083 - t_timestamp = 1114 - t__timestamp = 1115 - t__date = 1182 - t__time = 1183 - t_timestamptz = 1184 - t__timestamptz = 1185 - t_interval = 1186 - t__interval = 1187 - t__numeric = 1231 - t_timetz = 1266 - t__timetz = 1270 - t_bit = 1560 - t__bit = 1561 - t_varbit = 1562 - t__varbit = 1563 - t_numeric = 1700 - t_refcursor = 1790 - t__refcursor = 2201 - t_regprocedure = 2202 - t_regoper = 2203 - t_regoperator = 2204 - t_regclass = 2205 - t_regtype = 2206 - t__regprocedure = 2207 - t__regoper = 2208 - t__regoperator = 2209 - t__regclass = 2210 - t__regtype = 2211 - t_uuid = 2950 - t__uuid = 2951 - t_tsvector = 3614 - t_gtsvector = 3642 - t_tsquery = 3615 - t_regconfig = 3734 - t_regdictionary = 3769 - t__tsvector = 3643 - t__gtsvector = 3644 - t__tsquery = 3645 - t__regconfig = 3735 - t__regdictionary = 3770 - t_txid_snapshot = 2970 - t__txid_snapshot = 2949 - t_record = 2249 - t__record = 2287 - t_cstring = 2275 - t_any = 2276 - t_anyarray = 2277 - t_void = 2278 - t_trigger = 2279 - t_language_handler = 2280 - t_internal = 2281 - t_opaque = 2282 - t_anyelement = 2283 - t_anynonarray = 2776 - t_anyenum = 3500 - t_fdw_handler = 3115 - t_pg_attrdef = 10000 - t_pg_constraint = 10001 - t_pg_inherits = 10002 - t_pg_index = 10003 - t_pg_operator = 10004 - t_pg_opfamily = 10005 - t_pg_opclass = 10006 - t_pg_am = 10117 - t_pg_amop = 10118 - t_pg_amproc = 10478 - t_pg_language = 10731 - t_pg_largeobject_metadata = 10732 - t_pg_largeobject = 10733 - t_pg_aggregate = 10734 - t_pg_statistic = 10735 - t_pg_rewrite = 10736 - t_pg_trigger = 10737 - t_pg_description = 10738 - t_pg_cast = 10739 - t_pg_enum = 10936 - 
t_pg_namespace = 10937 - t_pg_conversion = 10938 - t_pg_depend = 10939 - t_pg_database = 1248 - t_pg_db_role_setting = 10940 - t_pg_tablespace = 10941 - t_pg_pltemplate = 10942 - t_pg_authid = 2842 - t_pg_auth_members = 2843 - t_pg_shdepend = 10943 - t_pg_shdescription = 10944 - t_pg_ts_config = 10945 - t_pg_ts_config_map = 10946 - t_pg_ts_dict = 10947 - t_pg_ts_parser = 10948 - t_pg_ts_template = 10949 - t_pg_extension = 10950 - t_pg_foreign_data_wrapper = 10951 - t_pg_foreign_server = 10952 - t_pg_user_mapping = 10953 - t_pg_foreign_table = 10954 - t_pg_default_acl = 10955 - t_pg_seclabel = 10956 - t_pg_collation = 10957 - t_pg_toast_2604 = 10958 - t_pg_toast_2606 = 10959 - t_pg_toast_2609 = 10960 - t_pg_toast_1255 = 10961 - t_pg_toast_2618 = 10962 - t_pg_toast_3596 = 10963 - t_pg_toast_2619 = 10964 - t_pg_toast_2620 = 10965 - t_pg_toast_1262 = 10966 - t_pg_toast_2396 = 10967 - t_pg_toast_2964 = 10968 - t_pg_roles = 10970 - t_pg_shadow = 10973 - t_pg_group = 10976 - t_pg_user = 10979 - t_pg_rules = 10982 - t_pg_views = 10986 - t_pg_tables = 10989 - t_pg_indexes = 10993 - t_pg_stats = 10997 - t_pg_locks = 11001 - t_pg_cursors = 11004 - t_pg_available_extensions = 11007 - t_pg_available_extension_versions = 11010 - t_pg_prepared_xacts = 11013 - t_pg_prepared_statements = 11017 - t_pg_seclabels = 11020 - t_pg_settings = 11024 - t_pg_timezone_abbrevs = 11029 - t_pg_timezone_names = 11032 - t_pg_stat_all_tables = 11035 - t_pg_stat_xact_all_tables = 11039 - t_pg_stat_sys_tables = 11043 - t_pg_stat_xact_sys_tables = 11047 - t_pg_stat_user_tables = 11050 - t_pg_stat_xact_user_tables = 11054 - t_pg_statio_all_tables = 11057 - t_pg_statio_sys_tables = 11061 - t_pg_statio_user_tables = 11064 - t_pg_stat_all_indexes = 11067 - t_pg_stat_sys_indexes = 11071 - t_pg_stat_user_indexes = 11074 - t_pg_statio_all_indexes = 11077 - t_pg_statio_sys_indexes = 11081 - t_pg_statio_user_indexes = 11084 - t_pg_statio_all_sequences = 11087 - t_pg_statio_sys_sequences = 11090 - 
t_pg_statio_user_sequences = 11093 - t_pg_stat_activity = 11096 - t_pg_stat_replication = 11099 - t_pg_stat_database = 11102 - t_pg_stat_database_conflicts = 11105 - t_pg_stat_user_functions = 11108 - t_pg_stat_xact_user_functions = 11112 - t_pg_stat_bgwriter = 11116 - t_pg_user_mappings = 11119 - t_cardinal_number = 11669 - t_character_data = 11671 - t_sql_identifier = 11672 - t_information_schema_catalog_name = 11674 - t_time_stamp = 11676 - t_yes_or_no = 11677 - t_applicable_roles = 11680 - t_administrable_role_authorizations = 11684 - t_attributes = 11687 - t_character_sets = 11691 - t_check_constraint_routine_usage = 11695 - t_check_constraints = 11699 - t_collations = 11703 - t_collation_character_set_applicability = 11706 - t_column_domain_usage = 11709 - t_column_privileges = 11713 - t_column_udt_usage = 11717 - t_columns = 11721 - t_constraint_column_usage = 11725 - t_constraint_table_usage = 11729 - t_domain_constraints = 11733 - t_domain_udt_usage = 11737 - t_domains = 11740 - t_enabled_roles = 11744 - t_key_column_usage = 11747 - t_parameters = 11751 - t_referential_constraints = 11755 - t_role_column_grants = 11759 - t_routine_privileges = 11762 - t_role_routine_grants = 11766 - t_routines = 11769 - t_schemata = 11773 - t_sequences = 11776 - t_sql_features = 11780 - t_pg_toast_11779 = 11782 - t_sql_implementation_info = 11785 - t_pg_toast_11784 = 11787 - t_sql_languages = 11790 - t_pg_toast_11789 = 11792 - t_sql_packages = 11795 - t_pg_toast_11794 = 11797 - t_sql_parts = 11800 - t_pg_toast_11799 = 11802 - t_sql_sizing = 11805 - t_pg_toast_11804 = 11807 - t_sql_sizing_profiles = 11810 - t_pg_toast_11809 = 11812 - t_table_constraints = 11815 - t_table_privileges = 11819 - t_role_table_grants = 11823 - t_tables = 11826 - t_triggered_update_columns = 11830 - t_triggers = 11834 - t_usage_privileges = 11838 - t_role_usage_grants = 11842 - t_view_column_usage = 11845 - t_view_routine_usage = 11849 - t_view_table_usage = 11853 - t_views = 11857 - 
t_data_type_privileges = 11861 - t_element_types = 11865 - t__pg_foreign_data_wrappers = 11869 - t_foreign_data_wrapper_options = 11872 - t_foreign_data_wrappers = 11875 - t__pg_foreign_servers = 11878 - t_foreign_server_options = 11882 - t_foreign_servers = 11885 - t__pg_foreign_tables = 11888 - t_foreign_table_options = 11892 - t_foreign_tables = 11895 - t__pg_user_mappings = 11898 - t_user_mapping_options = 11901 - t_user_mappings = 11905 - t_t = 16806 - t__t = 16805 - t_temp = 16810 - t__temp = 16809 -) diff --git a/vendor/github.com/bmizerany/pq/url.go b/vendor/github.com/bmizerany/pq/url.go deleted file mode 100644 index 4e32cea8..00000000 --- a/vendor/github.com/bmizerany/pq/url.go +++ /dev/null @@ -1,68 +0,0 @@ -package pq - -import ( - "fmt" - nurl "net/url" - "sort" - "strings" -) - -// ParseURL converts url to a connection string for driver.Open. -// Example: -// -// "postgres://bob:secret@1.2.3.4:5432/mydb?sslmode=verify-full" -// -// converts to: -// -// "user=bob password=secret host=1.2.3.4 port=5432 dbname=mydb sslmode=verify-full" -// -// A minimal example: -// -// "postgres://" -// -// This will be blank, causing driver.Open to use all of the defaults -func ParseURL(url string) (string, error) { - u, err := nurl.Parse(url) - if err != nil { - return "", err - } - - if u.Scheme != "postgres" { - return "", fmt.Errorf("invalid connection protocol: %s", u.Scheme) - } - - var kvs []string - accrue := func(k, v string) { - if v != "" { - kvs = append(kvs, k+"="+v) - } - } - - if u.User != nil { - v := u.User.Username() - accrue("user", v) - - v, _ = u.User.Password() - accrue("password", v) - } - - i := strings.Index(u.Host, ":") - if i < 0 { - accrue("host", u.Host) - } else { - accrue("host", u.Host[:i]) - accrue("port", u.Host[i+1:]) - } - - if u.Path != "" { - accrue("dbname", u.Path[1:]) - } - - q := u.Query() - for k, _ := range q { - accrue(k, q.Get(k)) - } - - sort.Strings(kvs) // Makes testing easier (not a performance concern) - return 
strings.Join(kvs, " "), nil -} diff --git a/vendor/github.com/jackc/chunkreader/v2/.travis.yml b/vendor/github.com/jackc/chunkreader/v2/.travis.yml deleted file mode 100644 index e176228e..00000000 --- a/vendor/github.com/jackc/chunkreader/v2/.travis.yml +++ /dev/null @@ -1,9 +0,0 @@ -language: go - -go: - - 1.x - - tip - -matrix: - allow_failures: - - go: tip diff --git a/vendor/github.com/jackc/chunkreader/v2/LICENSE b/vendor/github.com/jackc/chunkreader/v2/LICENSE deleted file mode 100644 index c1c4f50f..00000000 --- a/vendor/github.com/jackc/chunkreader/v2/LICENSE +++ /dev/null @@ -1,22 +0,0 @@ -Copyright (c) 2019 Jack Christensen - -MIT License - -Permission is hereby granted, free of charge, to any person obtaining -a copy of this software and associated documentation files (the -"Software"), to deal in the Software without restriction, including -without limitation the rights to use, copy, modify, merge, publish, -distribute, sublicense, and/or sell copies of the Software, and to -permit persons to whom the Software is furnished to do so, subject to -the following conditions: - -The above copyright notice and this permission notice shall be -included in all copies or substantial portions of the Software. - -THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, -EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF -MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND -NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE -LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION -OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION -WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE. 
diff --git a/vendor/github.com/jackc/chunkreader/v2/README.md b/vendor/github.com/jackc/chunkreader/v2/README.md deleted file mode 100644 index 01209bfa..00000000 --- a/vendor/github.com/jackc/chunkreader/v2/README.md +++ /dev/null @@ -1,8 +0,0 @@ -[![](https://godoc.org/github.com/jackc/chunkreader?status.svg)](https://godoc.org/github.com/jackc/chunkreader) -[![Build Status](https://travis-ci.org/jackc/chunkreader.svg)](https://travis-ci.org/jackc/chunkreader) - -# chunkreader - -Package chunkreader provides an io.Reader wrapper that minimizes IO reads and memory allocations. - -Extracted from original implementation in https://github.com/jackc/pgx. diff --git a/vendor/github.com/jackc/chunkreader/v2/chunkreader.go b/vendor/github.com/jackc/chunkreader/v2/chunkreader.go deleted file mode 100644 index afea1c52..00000000 --- a/vendor/github.com/jackc/chunkreader/v2/chunkreader.go +++ /dev/null @@ -1,104 +0,0 @@ -// Package chunkreader provides an io.Reader wrapper that minimizes IO reads and memory allocations. -package chunkreader - -import ( - "io" -) - -// ChunkReader is a io.Reader wrapper that minimizes IO reads and memory allocations. It allocates memory in chunks and -// will read as much as will fit in the current buffer in a single call regardless of how large a read is actually -// requested. The memory returned via Next is owned by the caller. This avoids the need for an additional copy. -// -// The downside of this approach is that a large buffer can be pinned in memory even if only a small slice is -// referenced. For example, an entire 4096 byte block could be pinned in memory by even a 1 byte slice. In these rare -// cases it would be advantageous to copy the bytes to another slice. -type ChunkReader struct { - r io.Reader - - buf []byte - rp, wp int // buf read position and write position - - config Config -} - -// Config contains configuration parameters for ChunkReader. 
-type Config struct { - MinBufLen int // Minimum buffer length -} - -// New creates and returns a new ChunkReader for r with default configuration. -func New(r io.Reader) *ChunkReader { - cr, err := NewConfig(r, Config{}) - if err != nil { - panic("default config can't be bad") - } - - return cr -} - -// NewConfig creates and a new ChunkReader for r configured by config. -func NewConfig(r io.Reader, config Config) (*ChunkReader, error) { - if config.MinBufLen == 0 { - // By historical reasons Postgres currently has 8KB send buffer inside, - // so here we want to have at least the same size buffer. - // @see https://github.com/postgres/postgres/blob/249d64999615802752940e017ee5166e726bc7cd/src/backend/libpq/pqcomm.c#L134 - // @see https://www.postgresql.org/message-id/0cdc5485-cb3c-5e16-4a46-e3b2f7a41322%40ya.ru - config.MinBufLen = 8192 - } - - return &ChunkReader{ - r: r, - buf: make([]byte, config.MinBufLen), - config: config, - }, nil -} - -// Next returns buf filled with the next n bytes. The caller gains ownership of buf. It is not necessary to make a copy -// of buf. If an error occurs, buf will be nil. 
-func (r *ChunkReader) Next(n int) (buf []byte, err error) { - // n bytes already in buf - if (r.wp - r.rp) >= n { - buf = r.buf[r.rp : r.rp+n] - r.rp += n - return buf, err - } - - // available space in buf is less than n - if len(r.buf) < n { - r.copyBufContents(r.newBuf(n)) - } - - // buf is large enough, but need to shift filled area to start to make enough contiguous space - minReadCount := n - (r.wp - r.rp) - if (len(r.buf) - r.wp) < minReadCount { - newBuf := r.newBuf(n) - r.copyBufContents(newBuf) - } - - if err := r.appendAtLeast(minReadCount); err != nil { - return nil, err - } - - buf = r.buf[r.rp : r.rp+n] - r.rp += n - return buf, nil -} - -func (r *ChunkReader) appendAtLeast(fillLen int) error { - n, err := io.ReadAtLeast(r.r, r.buf[r.wp:], fillLen) - r.wp += n - return err -} - -func (r *ChunkReader) newBuf(size int) []byte { - if size < r.config.MinBufLen { - size = r.config.MinBufLen - } - return make([]byte, size) -} - -func (r *ChunkReader) copyBufContents(dest []byte) { - r.wp = copy(dest, r.buf[r.rp:r.wp]) - r.rp = 0 - r.buf = dest -} diff --git a/vendor/github.com/jackc/pgconn/.gitignore b/vendor/github.com/jackc/pgconn/.gitignore deleted file mode 100644 index e980f555..00000000 --- a/vendor/github.com/jackc/pgconn/.gitignore +++ /dev/null @@ -1,3 +0,0 @@ -.envrc -vendor/ -.vscode diff --git a/vendor/github.com/jackc/pgconn/CHANGELOG.md b/vendor/github.com/jackc/pgconn/CHANGELOG.md deleted file mode 100644 index 3550b437..00000000 --- a/vendor/github.com/jackc/pgconn/CHANGELOG.md +++ /dev/null @@ -1,161 +0,0 @@ -# 1.14.0 (February 11, 2023) - -* Fix: each connection attempt to new node gets own timeout (Nathan Giardina) -* Set SNI for SSL connections (Stas Kelvich) -* Fix: CopyFrom I/O race (Tommy Reilly) -* Minor dependency upgrades - -# 1.13.0 (August 6, 2022) - -* Add sslpassword support (Eric McCormack and yun.xu) -* Add prefer-standby target_session_attrs support (sergey.bashilov) -* Fix GSS ErrorResponse handling (Oliver Tan) - -# 
1.12.1 (May 7, 2022) - -* Fix: setting krbspn and krbsrvname in connection string (sireax) -* Add support for Unix sockets on Windows (Eno Compton) -* Stop ignoring ErrorResponse during SCRAM auth (Rafi Shamim) - -# 1.12.0 (April 21, 2022) - -* Add pluggable GSSAPI support (Oliver Tan) -* Fix: Consider any "0A000" error a possible cached plan changed error due to locale -* Better match psql fallback behavior with multiple hosts - -# 1.11.0 (February 7, 2022) - -* Support port in ip from LookupFunc to override config (James Hartig) -* Fix TLS connection timeout (Blake Embrey) -* Add support for read-only, primary, standby, prefer-standby target_session_attributes (Oscar) -* Fix connect when receiving NoticeResponse - -# 1.10.1 (November 20, 2021) - -* Close without waiting for response (Kei Kamikawa) -* Save waiting for network round-trip in CopyFrom (Rueian) -* Fix concurrency issue with ContextWatcher -* LRU.Get always checks context for cancellation / expiration (Georges Varouchas) - -# 1.10.0 (July 24, 2021) - -* net.Timeout errors are no longer returned when a query is canceled via context. A wrapped context error is returned. - -# 1.9.0 (July 10, 2021) - -* pgconn.Timeout only is true for errors originating in pgconn (Michael Darr) -* Add defaults for sslcert, sslkey, and sslrootcert (Joshua Brindle) -* Solve issue with 'sslmode=verify-full' when there are multiple hosts (mgoddard) -* Fix default host when parsing URL without host but with port -* Allow dbname query parameter in URL conn string -* Update underlying dependencies - -# 1.8.1 (March 25, 2021) - -* Better connection string sanitization (ip.novikov) -* Use proper pgpass location on Windows (Moshe Katz) -* Use errors instead of golang.org/x/xerrors -* Resume fallback on server error in Connect (Andrey Borodin) - -# 1.8.0 (December 3, 2020) - -* Add StatementErrored method to stmtcache.Cache. This allows the cache to purge invalidated prepared statements. 
(Ethan Pailes) - -# 1.7.2 (November 3, 2020) - -* Fix data value slices into work buffer with capacities larger than length. - -# 1.7.1 (October 31, 2020) - -* Do not asyncClose after receiving FATAL error from PostgreSQL server - -# 1.7.0 (September 26, 2020) - -* Exec(Params|Prepared) return ResultReader with FieldDescriptions loaded -* Add ReceiveResults (Sebastiaan Mannem) -* Fix parsing DSN connection with bad backslash -* Add PgConn.CleanupDone so connection pools can determine when async close is complete - -# 1.6.4 (July 29, 2020) - -* Fix deadlock on error after CommandComplete but before ReadyForQuery -* Fix panic on parsing DSN with trailing '=' - -# 1.6.3 (July 22, 2020) - -* Fix error message after AppendCertsFromPEM failure (vahid-sohrabloo) - -# 1.6.2 (July 14, 2020) - -* Update pgservicefile library - -# 1.6.1 (June 27, 2020) - -* Update golang.org/x/crypto to latest -* Update golang.org/x/text to 0.3.3 -* Fix error handling for bad PGSERVICE definition -* Redact passwords in ParseConfig errors (Lukas Vogel) - -# 1.6.0 (June 6, 2020) - -* Fix panic when closing conn during cancellable query -* Fix behavior of sslmode=require with sslrootcert present (Petr Jediný) -* Fix field descriptions available after command concluded (Tobias Salzmann) -* Support connect_timeout (georgysavva) -* Handle IPv6 in connection URLs (Lukas Vogel) -* Fix ValidateConnect with cancelable context -* Improve CopyFrom performance -* Add Config.Copy (georgysavva) - -# 1.5.0 (March 30, 2020) - -* Update golang.org/x/crypto for security fix -* Implement "verify-ca" SSL mode (Greg Curtis) - -# 1.4.0 (March 7, 2020) - -* Fix ExecParams and ExecPrepared handling of empty query. -* Support reading config from PostgreSQL service files. - -# 1.3.2 (February 14, 2020) - -* Update chunkreader to v2.0.1 for optimized default buffer size. 
- -# 1.3.1 (February 5, 2020) - -* Fix CopyFrom deadlock when multiple NoticeResponse received during copy - -# 1.3.0 (January 23, 2020) - -* Add Hijack and Construct. -* Update pgproto3 to v2.0.1. - -# 1.2.1 (January 13, 2020) - -* Fix data race in context cancellation introduced in v1.2.0. - -# 1.2.0 (January 11, 2020) - -## Features - -* Add Insert(), Update(), Delete(), and Select() statement type query methods to CommandTag. -* Add PgError.SQLState method. This could be used for compatibility with other drivers and databases. - -## Performance - -* Improve performance when context.Background() is used. (bakape) -* CommandTag.RowsAffected is faster and does not allocate. - -## Fixes - -* Try to cancel any in-progress query when a conn is closed by ctx cancel. -* Handle NoticeResponse during CopyFrom. -* Ignore errors sending Terminate message while closing connection. This mimics the behavior of libpq PGfinish. - -# 1.1.0 (October 12, 2019) - -* Add PgConn.IsBusy() method. - -# 1.0.1 (September 19, 2019) - -* Fix statement cache not properly cleaning discarded statements. diff --git a/vendor/github.com/jackc/pgconn/LICENSE b/vendor/github.com/jackc/pgconn/LICENSE deleted file mode 100644 index aebadd6c..00000000 --- a/vendor/github.com/jackc/pgconn/LICENSE +++ /dev/null @@ -1,22 +0,0 @@ -Copyright (c) 2019-2021 Jack Christensen - -MIT License - -Permission is hereby granted, free of charge, to any person obtaining -a copy of this software and associated documentation files (the -"Software"), to deal in the Software without restriction, including -without limitation the rights to use, copy, modify, merge, publish, -distribute, sublicense, and/or sell copies of the Software, and to -permit persons to whom the Software is furnished to do so, subject to -the following conditions: - -The above copyright notice and this permission notice shall be -included in all copies or substantial portions of the Software. 
- -THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, -EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF -MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND -NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE -LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION -OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION -WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE. diff --git a/vendor/github.com/jackc/pgconn/README.md b/vendor/github.com/jackc/pgconn/README.md deleted file mode 100644 index 9af04fe7..00000000 --- a/vendor/github.com/jackc/pgconn/README.md +++ /dev/null @@ -1,62 +0,0 @@ -[![](https://godoc.org/github.com/jackc/pgconn?status.svg)](https://godoc.org/github.com/jackc/pgconn) -![CI](https://github.com/jackc/pgconn/workflows/CI/badge.svg) - ---- - -This version is used with pgx `v4`. In pgx `v5` it is part of the https://github.com/jackc/pgx repository. - ---- - -# pgconn - -Package pgconn is a low-level PostgreSQL database driver. It operates at nearly the same level as the C library libpq. -It is primarily intended to serve as the foundation for higher level libraries such as https://github.com/jackc/pgx. -Applications should handle normal queries with a higher level library and only use pgconn directly when required for -low-level access to PostgreSQL functionality. 
- -## Example Usage - -```go -pgConn, err := pgconn.Connect(context.Background(), os.Getenv("DATABASE_URL")) -if err != nil { - log.Fatalln("pgconn failed to connect:", err) -} -defer pgConn.Close(context.Background()) - -result := pgConn.ExecParams(context.Background(), "SELECT email FROM users WHERE id=$1", [][]byte{[]byte("123")}, nil, nil, nil) -for result.NextRow() { - fmt.Println("User 123 has email:", string(result.Values()[0])) -} -_, err = result.Close() -if err != nil { - log.Fatalln("failed reading result:", err) -} -``` - -## Testing - -The pgconn tests require a PostgreSQL database. It will connect to the database specified in the `PGX_TEST_CONN_STRING` -environment variable. The `PGX_TEST_CONN_STRING` environment variable can be a URL or DSN. In addition, the standard `PG*` -environment variables will be respected. Consider using [direnv](https://github.com/direnv/direnv) to simplify -environment variable handling. - -### Example Test Environment - -Connect to your PostgreSQL server and run: - -``` -create database pgx_test; -``` - -Now you can run the tests: - -```bash -PGX_TEST_CONN_STRING="host=/var/run/postgresql dbname=pgx_test" go test ./... -``` - -### Connection and Authentication Tests - -Pgconn supports multiple connection types and means of authentication. These tests are optional. They -will only run if the appropriate environment variable is set. Run `go test -v | grep SKIP` to see if any tests are being -skipped. Most developers will not need to enable these tests. See `ci/setup_test.bash` for an example set up if you need change -authentication code. 
diff --git a/vendor/github.com/jackc/pgconn/auth_scram.go b/vendor/github.com/jackc/pgconn/auth_scram.go deleted file mode 100644 index d8d71116..00000000 --- a/vendor/github.com/jackc/pgconn/auth_scram.go +++ /dev/null @@ -1,270 +0,0 @@ -// SCRAM-SHA-256 authentication -// -// Resources: -// https://tools.ietf.org/html/rfc5802 -// https://tools.ietf.org/html/rfc8265 -// https://www.postgresql.org/docs/current/sasl-authentication.html -// -// Inspiration drawn from other implementations: -// https://github.com/lib/pq/pull/608 -// https://github.com/lib/pq/pull/788 -// https://github.com/lib/pq/pull/833 - -package pgconn - -import ( - "bytes" - "crypto/hmac" - "crypto/rand" - "crypto/sha256" - "encoding/base64" - "errors" - "fmt" - "strconv" - - "github.com/jackc/pgproto3/v2" - "golang.org/x/crypto/pbkdf2" - "golang.org/x/text/secure/precis" -) - -const clientNonceLen = 18 - -// Perform SCRAM authentication. -func (c *PgConn) scramAuth(serverAuthMechanisms []string) error { - sc, err := newScramClient(serverAuthMechanisms, c.config.Password) - if err != nil { - return err - } - - // Send client-first-message in a SASLInitialResponse - saslInitialResponse := &pgproto3.SASLInitialResponse{ - AuthMechanism: "SCRAM-SHA-256", - Data: sc.clientFirstMessage(), - } - _, err = c.conn.Write(saslInitialResponse.Encode(nil)) - if err != nil { - return err - } - - // Receive server-first-message payload in a AuthenticationSASLContinue. - saslContinue, err := c.rxSASLContinue() - if err != nil { - return err - } - err = sc.recvServerFirstMessage(saslContinue.Data) - if err != nil { - return err - } - - // Send client-final-message in a SASLResponse - saslResponse := &pgproto3.SASLResponse{ - Data: []byte(sc.clientFinalMessage()), - } - _, err = c.conn.Write(saslResponse.Encode(nil)) - if err != nil { - return err - } - - // Receive server-final-message payload in a AuthenticationSASLFinal. 
- saslFinal, err := c.rxSASLFinal() - if err != nil { - return err - } - return sc.recvServerFinalMessage(saslFinal.Data) -} - -func (c *PgConn) rxSASLContinue() (*pgproto3.AuthenticationSASLContinue, error) { - msg, err := c.receiveMessage() - if err != nil { - return nil, err - } - switch m := msg.(type) { - case *pgproto3.AuthenticationSASLContinue: - return m, nil - case *pgproto3.ErrorResponse: - return nil, ErrorResponseToPgError(m) - } - - return nil, fmt.Errorf("expected AuthenticationSASLContinue message but received unexpected message %T", msg) -} - -func (c *PgConn) rxSASLFinal() (*pgproto3.AuthenticationSASLFinal, error) { - msg, err := c.receiveMessage() - if err != nil { - return nil, err - } - switch m := msg.(type) { - case *pgproto3.AuthenticationSASLFinal: - return m, nil - case *pgproto3.ErrorResponse: - return nil, ErrorResponseToPgError(m) - } - - return nil, fmt.Errorf("expected AuthenticationSASLFinal message but received unexpected message %T", msg) -} - -type scramClient struct { - serverAuthMechanisms []string - password []byte - clientNonce []byte - - clientFirstMessageBare []byte - - serverFirstMessage []byte - clientAndServerNonce []byte - salt []byte - iterations int - - saltedPassword []byte - authMessage []byte -} - -func newScramClient(serverAuthMechanisms []string, password string) (*scramClient, error) { - sc := &scramClient{ - serverAuthMechanisms: serverAuthMechanisms, - } - - // Ensure server supports SCRAM-SHA-256 - hasScramSHA256 := false - for _, mech := range sc.serverAuthMechanisms { - if mech == "SCRAM-SHA-256" { - hasScramSHA256 = true - break - } - } - if !hasScramSHA256 { - return nil, errors.New("server does not support SCRAM-SHA-256") - } - - // precis.OpaqueString is equivalent to SASLprep for password. - var err error - sc.password, err = precis.OpaqueString.Bytes([]byte(password)) - if err != nil { - // PostgreSQL allows passwords invalid according to SCRAM / SASLprep. 
- sc.password = []byte(password) - } - - buf := make([]byte, clientNonceLen) - _, err = rand.Read(buf) - if err != nil { - return nil, err - } - sc.clientNonce = make([]byte, base64.RawStdEncoding.EncodedLen(len(buf))) - base64.RawStdEncoding.Encode(sc.clientNonce, buf) - - return sc, nil -} - -func (sc *scramClient) clientFirstMessage() []byte { - sc.clientFirstMessageBare = []byte(fmt.Sprintf("n=,r=%s", sc.clientNonce)) - return []byte(fmt.Sprintf("n,,%s", sc.clientFirstMessageBare)) -} - -func (sc *scramClient) recvServerFirstMessage(serverFirstMessage []byte) error { - sc.serverFirstMessage = serverFirstMessage - buf := serverFirstMessage - if !bytes.HasPrefix(buf, []byte("r=")) { - return errors.New("invalid SCRAM server-first-message received from server: did not include r=") - } - buf = buf[2:] - - idx := bytes.IndexByte(buf, ',') - if idx == -1 { - return errors.New("invalid SCRAM server-first-message received from server: did not include s=") - } - sc.clientAndServerNonce = buf[:idx] - buf = buf[idx+1:] - - if !bytes.HasPrefix(buf, []byte("s=")) { - return errors.New("invalid SCRAM server-first-message received from server: did not include s=") - } - buf = buf[2:] - - idx = bytes.IndexByte(buf, ',') - if idx == -1 { - return errors.New("invalid SCRAM server-first-message received from server: did not include i=") - } - saltStr := buf[:idx] - buf = buf[idx+1:] - - if !bytes.HasPrefix(buf, []byte("i=")) { - return errors.New("invalid SCRAM server-first-message received from server: did not include i=") - } - buf = buf[2:] - iterationsStr := buf - - var err error - sc.salt, err = base64.StdEncoding.DecodeString(string(saltStr)) - if err != nil { - return fmt.Errorf("invalid SCRAM salt received from server: %w", err) - } - - sc.iterations, err = strconv.Atoi(string(iterationsStr)) - if err != nil || sc.iterations <= 0 { - return fmt.Errorf("invalid SCRAM iteration count received from server: %w", err) - } - - if !bytes.HasPrefix(sc.clientAndServerNonce, 
sc.clientNonce) { - return errors.New("invalid SCRAM nonce: did not start with client nonce") - } - - if len(sc.clientAndServerNonce) <= len(sc.clientNonce) { - return errors.New("invalid SCRAM nonce: did not include server nonce") - } - - return nil -} - -func (sc *scramClient) clientFinalMessage() string { - clientFinalMessageWithoutProof := []byte(fmt.Sprintf("c=biws,r=%s", sc.clientAndServerNonce)) - - sc.saltedPassword = pbkdf2.Key([]byte(sc.password), sc.salt, sc.iterations, 32, sha256.New) - sc.authMessage = bytes.Join([][]byte{sc.clientFirstMessageBare, sc.serverFirstMessage, clientFinalMessageWithoutProof}, []byte(",")) - - clientProof := computeClientProof(sc.saltedPassword, sc.authMessage) - - return fmt.Sprintf("%s,p=%s", clientFinalMessageWithoutProof, clientProof) -} - -func (sc *scramClient) recvServerFinalMessage(serverFinalMessage []byte) error { - if !bytes.HasPrefix(serverFinalMessage, []byte("v=")) { - return errors.New("invalid SCRAM server-final-message received from server") - } - - serverSignature := serverFinalMessage[2:] - - if !hmac.Equal(serverSignature, computeServerSignature(sc.saltedPassword, sc.authMessage)) { - return errors.New("invalid SCRAM ServerSignature received from server") - } - - return nil -} - -func computeHMAC(key, msg []byte) []byte { - mac := hmac.New(sha256.New, key) - mac.Write(msg) - return mac.Sum(nil) -} - -func computeClientProof(saltedPassword, authMessage []byte) []byte { - clientKey := computeHMAC(saltedPassword, []byte("Client Key")) - storedKey := sha256.Sum256(clientKey) - clientSignature := computeHMAC(storedKey[:], authMessage) - - clientProof := make([]byte, len(clientSignature)) - for i := 0; i < len(clientSignature); i++ { - clientProof[i] = clientKey[i] ^ clientSignature[i] - } - - buf := make([]byte, base64.StdEncoding.EncodedLen(len(clientProof))) - base64.StdEncoding.Encode(buf, clientProof) - return buf -} - -func computeServerSignature(saltedPassword []byte, authMessage []byte) []byte { - 
serverKey := computeHMAC(saltedPassword, []byte("Server Key")) - serverSignature := computeHMAC(serverKey, authMessage) - buf := make([]byte, base64.StdEncoding.EncodedLen(len(serverSignature))) - base64.StdEncoding.Encode(buf, serverSignature) - return buf -} diff --git a/vendor/github.com/jackc/pgconn/config.go b/vendor/github.com/jackc/pgconn/config.go deleted file mode 100644 index 4080f2c6..00000000 --- a/vendor/github.com/jackc/pgconn/config.go +++ /dev/null @@ -1,905 +0,0 @@ -package pgconn - -import ( - "context" - "crypto/tls" - "crypto/x509" - "encoding/pem" - "errors" - "fmt" - "io" - "io/ioutil" - "math" - "net" - "net/url" - "os" - "path/filepath" - "strconv" - "strings" - "time" - - "github.com/jackc/chunkreader/v2" - "github.com/jackc/pgpassfile" - "github.com/jackc/pgproto3/v2" - "github.com/jackc/pgservicefile" -) - -type AfterConnectFunc func(ctx context.Context, pgconn *PgConn) error -type ValidateConnectFunc func(ctx context.Context, pgconn *PgConn) error -type GetSSLPasswordFunc func(ctx context.Context) string - -// Config is the settings used to establish a connection to a PostgreSQL server. It must be created by ParseConfig. A -// manually initialized Config will cause ConnectConfig to panic. -type Config struct { - Host string // host (e.g. localhost) or absolute path to unix domain socket directory (e.g. /private/tmp) - Port uint16 - Database string - User string - Password string - TLSConfig *tls.Config // nil disables TLS - ConnectTimeout time.Duration - DialFunc DialFunc // e.g. net.Dialer.DialContext - LookupFunc LookupFunc // e.g. net.Resolver.LookupHost - BuildFrontend BuildFrontendFunc - RuntimeParams map[string]string // Run-time parameters to set on connection as session default values (e.g. search_path or application_name) - - KerberosSrvName string - KerberosSpn string - Fallbacks []*FallbackConfig - - // ValidateConnect is called during a connection attempt after a successful authentication with the PostgreSQL server. 
- // It can be used to validate that the server is acceptable. If this returns an error the connection is closed and the next - // fallback config is tried. This allows implementing high availability behavior such as libpq does with target_session_attrs. - ValidateConnect ValidateConnectFunc - - // AfterConnect is called after ValidateConnect. It can be used to set up the connection (e.g. Set session variables - // or prepare statements). If this returns an error the connection attempt fails. - AfterConnect AfterConnectFunc - - // OnNotice is a callback function called when a notice response is received. - OnNotice NoticeHandler - - // OnNotification is a callback function called when a notification from the LISTEN/NOTIFY system is received. - OnNotification NotificationHandler - - createdByParseConfig bool // Used to enforce created by ParseConfig rule. -} - -// ParseConfigOptions contains options that control how a config is built such as getsslpassword. -type ParseConfigOptions struct { - // GetSSLPassword gets the password to decrypt a SSL client certificate. This is analogous to the the libpq function - // PQsetSSLKeyPassHook_OpenSSL. - GetSSLPassword GetSSLPasswordFunc -} - -// Copy returns a deep copy of the config that is safe to use and modify. -// The only exception is the TLSConfig field: -// according to the tls.Config docs it must not be modified after creation. 
-func (c *Config) Copy() *Config { - newConf := new(Config) - *newConf = *c - if newConf.TLSConfig != nil { - newConf.TLSConfig = c.TLSConfig.Clone() - } - if newConf.RuntimeParams != nil { - newConf.RuntimeParams = make(map[string]string, len(c.RuntimeParams)) - for k, v := range c.RuntimeParams { - newConf.RuntimeParams[k] = v - } - } - if newConf.Fallbacks != nil { - newConf.Fallbacks = make([]*FallbackConfig, len(c.Fallbacks)) - for i, fallback := range c.Fallbacks { - newFallback := new(FallbackConfig) - *newFallback = *fallback - if newFallback.TLSConfig != nil { - newFallback.TLSConfig = fallback.TLSConfig.Clone() - } - newConf.Fallbacks[i] = newFallback - } - } - return newConf -} - -// FallbackConfig is additional settings to attempt a connection with when the primary Config fails to establish a -// network connection. It is used for TLS fallback such as sslmode=prefer and high availability (HA) connections. -type FallbackConfig struct { - Host string // host (e.g. localhost) or path to unix domain socket directory (e.g. /private/tmp) - Port uint16 - TLSConfig *tls.Config // nil disables TLS -} - -// isAbsolutePath checks if the provided value is an absolute path either -// beginning with a forward slash (as on Linux-based systems) or with a capital -// letter A-Z followed by a colon and a backslash, e.g., "C:\", (as on Windows). -func isAbsolutePath(path string) bool { - isWindowsPath := func(p string) bool { - if len(p) < 3 { - return false - } - drive := p[0] - colon := p[1] - backslash := p[2] - if drive >= 'A' && drive <= 'Z' && colon == ':' && backslash == '\\' { - return true - } - return false - } - return strings.HasPrefix(path, "/") || isWindowsPath(path) -} - -// NetworkAddress converts a PostgreSQL host and port into network and address suitable for use with -// net.Dial. 
-func NetworkAddress(host string, port uint16) (network, address string) { - if isAbsolutePath(host) { - network = "unix" - address = filepath.Join(host, ".s.PGSQL.") + strconv.FormatInt(int64(port), 10) - } else { - network = "tcp" - address = net.JoinHostPort(host, strconv.Itoa(int(port))) - } - return network, address -} - -// ParseConfig builds a *Config from connString with similar behavior to the PostgreSQL standard C library libpq. It -// uses the same defaults as libpq (e.g. port=5432) and understands most PG* environment variables. ParseConfig closely -// matches the parsing behavior of libpq. connString may either be in URL format or keyword = value format (DSN style). -// See https://www.postgresql.org/docs/current/libpq-connect.html#LIBPQ-CONNSTRING for details. connString also may be -// empty to only read from the environment. If a password is not supplied it will attempt to read the .pgpass file. -// -// # Example DSN -// user=jack password=secret host=pg.example.com port=5432 dbname=mydb sslmode=verify-ca -// -// # Example URL -// postgres://jack:secret@pg.example.com:5432/mydb?sslmode=verify-ca -// -// The returned *Config may be modified. However, it is strongly recommended that any configuration that can be done -// through the connection string be done there. In particular the fields Host, Port, TLSConfig, and Fallbacks can be -// interdependent (e.g. TLSConfig needs knowledge of the host to validate the server certificate). These fields should -// not be modified individually. They should all be modified or all left unchanged. -// -// ParseConfig supports specifying multiple hosts in similar manner to libpq. Host and port may include comma separated -// values that will be tried in order. This can be used as part of a high availability system. See -// https://www.postgresql.org/docs/11/libpq-connect.html#LIBPQ-MULTIPLE-HOSTS for more information. 
-// -// # Example URL -// postgres://jack:secret@foo.example.com:5432,bar.example.com:5432/mydb -// -// ParseConfig currently recognizes the following environment variable and their parameter key word equivalents passed -// via database URL or DSN: -// -// PGHOST -// PGPORT -// PGDATABASE -// PGUSER -// PGPASSWORD -// PGPASSFILE -// PGSERVICE -// PGSERVICEFILE -// PGSSLMODE -// PGSSLCERT -// PGSSLKEY -// PGSSLROOTCERT -// PGSSLPASSWORD -// PGAPPNAME -// PGCONNECT_TIMEOUT -// PGTARGETSESSIONATTRS -// -// See http://www.postgresql.org/docs/11/static/libpq-envars.html for details on the meaning of environment variables. -// -// See https://www.postgresql.org/docs/11/libpq-connect.html#LIBPQ-PARAMKEYWORDS for parameter key word names. They are -// usually but not always the environment variable name downcased and without the "PG" prefix. -// -// Important Security Notes: -// -// ParseConfig tries to match libpq behavior with regard to PGSSLMODE. This includes defaulting to "prefer" behavior if -// not set. -// -// See http://www.postgresql.org/docs/11/static/libpq-ssl.html#LIBPQ-SSL-PROTECTION for details on what level of -// security each sslmode provides. -// -// The sslmode "prefer" (the default), sslmode "allow", and multiple hosts are implemented via the Fallbacks field of -// the Config struct. If TLSConfig is manually changed it will not affect the fallbacks. For example, in the case of -// sslmode "prefer" this means it will first try the main Config settings which use TLS, then it will try the fallback -// which does not use TLS. This can lead to an unexpected unencrypted connection if the main TLS config is manually -// changed later but the unencrypted fallback is present. Ensure there are no stale fallbacks when manually setting -// TLSConfig. -// -// Other known differences with libpq: -// -// When multiple hosts are specified, libpq allows them to have different passwords set via the .pgpass file. pgconn -// does not. 
-// -// In addition, ParseConfig accepts the following options: -// -// min_read_buffer_size -// The minimum size of the internal read buffer. Default 8192. -// servicefile -// libpq only reads servicefile from the PGSERVICEFILE environment variable. ParseConfig accepts servicefile as a -// part of the connection string. -func ParseConfig(connString string) (*Config, error) { - var parseConfigOptions ParseConfigOptions - return ParseConfigWithOptions(connString, parseConfigOptions) -} - -// ParseConfigWithOptions builds a *Config from connString and options with similar behavior to the PostgreSQL standard -// C library libpq. options contains settings that cannot be specified in a connString such as providing a function to -// get the SSL password. -func ParseConfigWithOptions(connString string, options ParseConfigOptions) (*Config, error) { - defaultSettings := defaultSettings() - envSettings := parseEnvSettings() - - connStringSettings := make(map[string]string) - if connString != "" { - var err error - // connString may be a database URL or a DSN - if strings.HasPrefix(connString, "postgres://") || strings.HasPrefix(connString, "postgresql://") { - connStringSettings, err = parseURLSettings(connString) - if err != nil { - return nil, &parseConfigError{connString: connString, msg: "failed to parse as URL", err: err} - } - } else { - connStringSettings, err = parseDSNSettings(connString) - if err != nil { - return nil, &parseConfigError{connString: connString, msg: "failed to parse as DSN", err: err} - } - } - } - - settings := mergeSettings(defaultSettings, envSettings, connStringSettings) - if service, present := settings["service"]; present { - serviceSettings, err := parseServiceSettings(settings["servicefile"], service) - if err != nil { - return nil, &parseConfigError{connString: connString, msg: "failed to read service", err: err} - } - - settings = mergeSettings(defaultSettings, envSettings, serviceSettings, connStringSettings) - } - - minReadBufferSize, 
err := strconv.ParseInt(settings["min_read_buffer_size"], 10, 32) - if err != nil { - return nil, &parseConfigError{connString: connString, msg: "cannot parse min_read_buffer_size", err: err} - } - - config := &Config{ - createdByParseConfig: true, - Database: settings["database"], - User: settings["user"], - Password: settings["password"], - RuntimeParams: make(map[string]string), - BuildFrontend: makeDefaultBuildFrontendFunc(int(minReadBufferSize)), - } - - if connectTimeoutSetting, present := settings["connect_timeout"]; present { - connectTimeout, err := parseConnectTimeoutSetting(connectTimeoutSetting) - if err != nil { - return nil, &parseConfigError{connString: connString, msg: "invalid connect_timeout", err: err} - } - config.ConnectTimeout = connectTimeout - config.DialFunc = makeConnectTimeoutDialFunc(connectTimeout) - } else { - defaultDialer := makeDefaultDialer() - config.DialFunc = defaultDialer.DialContext - } - - config.LookupFunc = makeDefaultResolver().LookupHost - - notRuntimeParams := map[string]struct{}{ - "host": {}, - "port": {}, - "database": {}, - "user": {}, - "password": {}, - "passfile": {}, - "connect_timeout": {}, - "sslmode": {}, - "sslkey": {}, - "sslcert": {}, - "sslrootcert": {}, - "sslpassword": {}, - "sslsni": {}, - "krbspn": {}, - "krbsrvname": {}, - "target_session_attrs": {}, - "min_read_buffer_size": {}, - "service": {}, - "servicefile": {}, - } - - // Adding kerberos configuration - if _, present := settings["krbsrvname"]; present { - config.KerberosSrvName = settings["krbsrvname"] - } - if _, present := settings["krbspn"]; present { - config.KerberosSpn = settings["krbspn"] - } - - for k, v := range settings { - if _, present := notRuntimeParams[k]; present { - continue - } - config.RuntimeParams[k] = v - } - - fallbacks := []*FallbackConfig{} - - hosts := strings.Split(settings["host"], ",") - ports := strings.Split(settings["port"], ",") - - for i, host := range hosts { - var portStr string - if i < len(ports) { - portStr 
= ports[i] - } else { - portStr = ports[0] - } - - port, err := parsePort(portStr) - if err != nil { - return nil, &parseConfigError{connString: connString, msg: "invalid port", err: err} - } - - var tlsConfigs []*tls.Config - - // Ignore TLS settings if Unix domain socket like libpq - if network, _ := NetworkAddress(host, port); network == "unix" { - tlsConfigs = append(tlsConfigs, nil) - } else { - var err error - tlsConfigs, err = configTLS(settings, host, options) - if err != nil { - return nil, &parseConfigError{connString: connString, msg: "failed to configure TLS", err: err} - } - } - - for _, tlsConfig := range tlsConfigs { - fallbacks = append(fallbacks, &FallbackConfig{ - Host: host, - Port: port, - TLSConfig: tlsConfig, - }) - } - } - - config.Host = fallbacks[0].Host - config.Port = fallbacks[0].Port - config.TLSConfig = fallbacks[0].TLSConfig - config.Fallbacks = fallbacks[1:] - - passfile, err := pgpassfile.ReadPassfile(settings["passfile"]) - if err == nil { - if config.Password == "" { - host := config.Host - if network, _ := NetworkAddress(config.Host, config.Port); network == "unix" { - host = "localhost" - } - - config.Password = passfile.FindPassword(host, strconv.Itoa(int(config.Port)), config.Database, config.User) - } - } - - switch tsa := settings["target_session_attrs"]; tsa { - case "read-write": - config.ValidateConnect = ValidateConnectTargetSessionAttrsReadWrite - case "read-only": - config.ValidateConnect = ValidateConnectTargetSessionAttrsReadOnly - case "primary": - config.ValidateConnect = ValidateConnectTargetSessionAttrsPrimary - case "standby": - config.ValidateConnect = ValidateConnectTargetSessionAttrsStandby - case "prefer-standby": - config.ValidateConnect = ValidateConnectTargetSessionAttrsPreferStandby - case "any": - // do nothing - default: - return nil, &parseConfigError{connString: connString, msg: fmt.Sprintf("unknown target_session_attrs value: %v", tsa)} - } - - return config, nil -} - -func mergeSettings(settingSets 
...map[string]string) map[string]string { - settings := make(map[string]string) - - for _, s2 := range settingSets { - for k, v := range s2 { - settings[k] = v - } - } - - return settings -} - -func parseEnvSettings() map[string]string { - settings := make(map[string]string) - - nameMap := map[string]string{ - "PGHOST": "host", - "PGPORT": "port", - "PGDATABASE": "database", - "PGUSER": "user", - "PGPASSWORD": "password", - "PGPASSFILE": "passfile", - "PGAPPNAME": "application_name", - "PGCONNECT_TIMEOUT": "connect_timeout", - "PGSSLMODE": "sslmode", - "PGSSLKEY": "sslkey", - "PGSSLCERT": "sslcert", - "PGSSLSNI": "sslsni", - "PGSSLROOTCERT": "sslrootcert", - "PGSSLPASSWORD": "sslpassword", - "PGTARGETSESSIONATTRS": "target_session_attrs", - "PGSERVICE": "service", - "PGSERVICEFILE": "servicefile", - } - - for envname, realname := range nameMap { - value := os.Getenv(envname) - if value != "" { - settings[realname] = value - } - } - - return settings -} - -func parseURLSettings(connString string) (map[string]string, error) { - settings := make(map[string]string) - - url, err := url.Parse(connString) - if err != nil { - return nil, err - } - - if url.User != nil { - settings["user"] = url.User.Username() - if password, present := url.User.Password(); present { - settings["password"] = password - } - } - - // Handle multiple host:port's in url.Host by splitting them into host,host,host and port,port,port. 
- var hosts []string - var ports []string - for _, host := range strings.Split(url.Host, ",") { - if host == "" { - continue - } - if isIPOnly(host) { - hosts = append(hosts, strings.Trim(host, "[]")) - continue - } - h, p, err := net.SplitHostPort(host) - if err != nil { - return nil, fmt.Errorf("failed to split host:port in '%s', err: %w", host, err) - } - if h != "" { - hosts = append(hosts, h) - } - if p != "" { - ports = append(ports, p) - } - } - if len(hosts) > 0 { - settings["host"] = strings.Join(hosts, ",") - } - if len(ports) > 0 { - settings["port"] = strings.Join(ports, ",") - } - - database := strings.TrimLeft(url.Path, "/") - if database != "" { - settings["database"] = database - } - - nameMap := map[string]string{ - "dbname": "database", - } - - for k, v := range url.Query() { - if k2, present := nameMap[k]; present { - k = k2 - } - - settings[k] = v[0] - } - - return settings, nil -} - -func isIPOnly(host string) bool { - return net.ParseIP(strings.Trim(host, "[]")) != nil || !strings.Contains(host, ":") -} - -var asciiSpace = [256]uint8{'\t': 1, '\n': 1, '\v': 1, '\f': 1, '\r': 1, ' ': 1} - -func parseDSNSettings(s string) (map[string]string, error) { - settings := make(map[string]string) - - nameMap := map[string]string{ - "dbname": "database", - } - - for len(s) > 0 { - var key, val string - eqIdx := strings.IndexRune(s, '=') - if eqIdx < 0 { - return nil, errors.New("invalid dsn") - } - - key = strings.Trim(s[:eqIdx], " \t\n\r\v\f") - s = strings.TrimLeft(s[eqIdx+1:], " \t\n\r\v\f") - if len(s) == 0 { - } else if s[0] != '\'' { - end := 0 - for ; end < len(s); end++ { - if asciiSpace[s[end]] == 1 { - break - } - if s[end] == '\\' { - end++ - if end == len(s) { - return nil, errors.New("invalid backslash") - } - } - } - val = strings.Replace(strings.Replace(s[:end], "\\\\", "\\", -1), "\\'", "'", -1) - if end == len(s) { - s = "" - } else { - s = s[end+1:] - } - } else { // quoted string - s = s[1:] - end := 0 - for ; end < len(s); end++ { - if 
s[end] == '\'' { - break - } - if s[end] == '\\' { - end++ - } - } - if end == len(s) { - return nil, errors.New("unterminated quoted string in connection info string") - } - val = strings.Replace(strings.Replace(s[:end], "\\\\", "\\", -1), "\\'", "'", -1) - if end == len(s) { - s = "" - } else { - s = s[end+1:] - } - } - - if k, ok := nameMap[key]; ok { - key = k - } - - if key == "" { - return nil, errors.New("invalid dsn") - } - - settings[key] = val - } - - return settings, nil -} - -func parseServiceSettings(servicefilePath, serviceName string) (map[string]string, error) { - servicefile, err := pgservicefile.ReadServicefile(servicefilePath) - if err != nil { - return nil, fmt.Errorf("failed to read service file: %v", servicefilePath) - } - - service, err := servicefile.GetService(serviceName) - if err != nil { - return nil, fmt.Errorf("unable to find service: %v", serviceName) - } - - nameMap := map[string]string{ - "dbname": "database", - } - - settings := make(map[string]string, len(service.Settings)) - for k, v := range service.Settings { - if k2, present := nameMap[k]; present { - k = k2 - } - settings[k] = v - } - - return settings, nil -} - -// configTLS uses libpq's TLS parameters to construct []*tls.Config. It is -// necessary to allow returning multiple TLS configs as sslmode "allow" and -// "prefer" allow fallback. 
-func configTLS(settings map[string]string, thisHost string, parseConfigOptions ParseConfigOptions) ([]*tls.Config, error) { - host := thisHost - sslmode := settings["sslmode"] - sslrootcert := settings["sslrootcert"] - sslcert := settings["sslcert"] - sslkey := settings["sslkey"] - sslpassword := settings["sslpassword"] - sslsni := settings["sslsni"] - - // Match libpq default behavior - if sslmode == "" { - sslmode = "prefer" - } - if sslsni == "" { - sslsni = "1" - } - - tlsConfig := &tls.Config{} - - switch sslmode { - case "disable": - return []*tls.Config{nil}, nil - case "allow", "prefer": - tlsConfig.InsecureSkipVerify = true - case "require": - // According to PostgreSQL documentation, if a root CA file exists, - // the behavior of sslmode=require should be the same as that of verify-ca - // - // See https://www.postgresql.org/docs/12/libpq-ssl.html - if sslrootcert != "" { - goto nextCase - } - tlsConfig.InsecureSkipVerify = true - break - nextCase: - fallthrough - case "verify-ca": - // Don't perform the default certificate verification because it - // will verify the hostname. Instead, verify the server's - // certificate chain ourselves in VerifyPeerCertificate and - // ignore the server name. This emulates libpq's verify-ca - // behavior. - // - // See https://github.com/golang/go/issues/21971#issuecomment-332693931 - // and https://pkg.go.dev/crypto/tls?tab=doc#example-Config-VerifyPeerCertificate - // for more info. - tlsConfig.InsecureSkipVerify = true - tlsConfig.VerifyPeerCertificate = func(certificates [][]byte, _ [][]*x509.Certificate) error { - certs := make([]*x509.Certificate, len(certificates)) - for i, asn1Data := range certificates { - cert, err := x509.ParseCertificate(asn1Data) - if err != nil { - return errors.New("failed to parse certificate from server: " + err.Error()) - } - certs[i] = cert - } - - // Leave DNSName empty to skip hostname verification. 
- opts := x509.VerifyOptions{ - Roots: tlsConfig.RootCAs, - Intermediates: x509.NewCertPool(), - } - // Skip the first cert because it's the leaf. All others - // are intermediates. - for _, cert := range certs[1:] { - opts.Intermediates.AddCert(cert) - } - _, err := certs[0].Verify(opts) - return err - } - case "verify-full": - tlsConfig.ServerName = host - default: - return nil, errors.New("sslmode is invalid") - } - - if sslrootcert != "" { - caCertPool := x509.NewCertPool() - - caPath := sslrootcert - caCert, err := ioutil.ReadFile(caPath) - if err != nil { - return nil, fmt.Errorf("unable to read CA file: %w", err) - } - - if !caCertPool.AppendCertsFromPEM(caCert) { - return nil, errors.New("unable to add CA to cert pool") - } - - tlsConfig.RootCAs = caCertPool - tlsConfig.ClientCAs = caCertPool - } - - if (sslcert != "" && sslkey == "") || (sslcert == "" && sslkey != "") { - return nil, errors.New(`both "sslcert" and "sslkey" are required`) - } - - if sslcert != "" && sslkey != "" { - buf, err := ioutil.ReadFile(sslkey) - if err != nil { - return nil, fmt.Errorf("unable to read sslkey: %w", err) - } - block, _ := pem.Decode(buf) - var pemKey []byte - var decryptedKey []byte - var decryptedError error - // If PEM is encrypted, attempt to decrypt using pass phrase - if x509.IsEncryptedPEMBlock(block) { - // Attempt decryption with pass phrase - // NOTE: only supports RSA (PKCS#1) - if sslpassword != "" { - decryptedKey, decryptedError = x509.DecryptPEMBlock(block, []byte(sslpassword)) - } - //if sslpassword not provided or has decryption error when use it - //try to find sslpassword with callback function - if sslpassword == "" || decryptedError != nil { - if parseConfigOptions.GetSSLPassword != nil { - sslpassword = parseConfigOptions.GetSSLPassword(context.Background()) - } - if sslpassword == "" { - return nil, fmt.Errorf("unable to find sslpassword") - } - } - decryptedKey, decryptedError = x509.DecryptPEMBlock(block, []byte(sslpassword)) - // Should we 
also provide warning for PKCS#1 needed? - if decryptedError != nil { - return nil, fmt.Errorf("unable to decrypt key: %w", err) - } - - pemBytes := pem.Block{ - Type: "RSA PRIVATE KEY", - Bytes: decryptedKey, - } - pemKey = pem.EncodeToMemory(&pemBytes) - } else { - pemKey = pem.EncodeToMemory(block) - } - certfile, err := ioutil.ReadFile(sslcert) - if err != nil { - return nil, fmt.Errorf("unable to read cert: %w", err) - } - cert, err := tls.X509KeyPair(certfile, pemKey) - if err != nil { - return nil, fmt.Errorf("unable to load cert: %w", err) - } - tlsConfig.Certificates = []tls.Certificate{cert} - } - - // Set Server Name Indication (SNI), if enabled by connection parameters. - // Per RFC 6066, do not set it if the host is a literal IP address (IPv4 - // or IPv6). - if sslsni == "1" && net.ParseIP(host) == nil { - tlsConfig.ServerName = host - } - - switch sslmode { - case "allow": - return []*tls.Config{nil, tlsConfig}, nil - case "prefer": - return []*tls.Config{tlsConfig, nil}, nil - case "require", "verify-ca", "verify-full": - return []*tls.Config{tlsConfig}, nil - default: - panic("BUG: bad sslmode should already have been caught") - } -} - -func parsePort(s string) (uint16, error) { - port, err := strconv.ParseUint(s, 10, 16) - if err != nil { - return 0, err - } - if port < 1 || port > math.MaxUint16 { - return 0, errors.New("outside range") - } - return uint16(port), nil -} - -func makeDefaultDialer() *net.Dialer { - return &net.Dialer{KeepAlive: 5 * time.Minute} -} - -func makeDefaultResolver() *net.Resolver { - return net.DefaultResolver -} - -func makeDefaultBuildFrontendFunc(minBufferLen int) BuildFrontendFunc { - return func(r io.Reader, w io.Writer) Frontend { - cr, err := chunkreader.NewConfig(r, chunkreader.Config{MinBufLen: minBufferLen}) - if err != nil { - panic(fmt.Sprintf("BUG: chunkreader.NewConfig failed: %v", err)) - } - frontend := pgproto3.NewFrontend(cr, w) - - return frontend - } -} - -func parseConnectTimeoutSetting(s string) 
(time.Duration, error) { - timeout, err := strconv.ParseInt(s, 10, 64) - if err != nil { - return 0, err - } - if timeout < 0 { - return 0, errors.New("negative timeout") - } - return time.Duration(timeout) * time.Second, nil -} - -func makeConnectTimeoutDialFunc(timeout time.Duration) DialFunc { - d := makeDefaultDialer() - d.Timeout = timeout - return d.DialContext -} - -// ValidateConnectTargetSessionAttrsReadWrite is an ValidateConnectFunc that implements libpq compatible -// target_session_attrs=read-write. -func ValidateConnectTargetSessionAttrsReadWrite(ctx context.Context, pgConn *PgConn) error { - result := pgConn.ExecParams(ctx, "show transaction_read_only", nil, nil, nil, nil).Read() - if result.Err != nil { - return result.Err - } - - if string(result.Rows[0][0]) == "on" { - return errors.New("read only connection") - } - - return nil -} - -// ValidateConnectTargetSessionAttrsReadOnly is an ValidateConnectFunc that implements libpq compatible -// target_session_attrs=read-only. -func ValidateConnectTargetSessionAttrsReadOnly(ctx context.Context, pgConn *PgConn) error { - result := pgConn.ExecParams(ctx, "show transaction_read_only", nil, nil, nil, nil).Read() - if result.Err != nil { - return result.Err - } - - if string(result.Rows[0][0]) != "on" { - return errors.New("connection is not read only") - } - - return nil -} - -// ValidateConnectTargetSessionAttrsStandby is an ValidateConnectFunc that implements libpq compatible -// target_session_attrs=standby. -func ValidateConnectTargetSessionAttrsStandby(ctx context.Context, pgConn *PgConn) error { - result := pgConn.ExecParams(ctx, "select pg_is_in_recovery()", nil, nil, nil, nil).Read() - if result.Err != nil { - return result.Err - } - - if string(result.Rows[0][0]) != "t" { - return errors.New("server is not in hot standby mode") - } - - return nil -} - -// ValidateConnectTargetSessionAttrsPrimary is an ValidateConnectFunc that implements libpq compatible -// target_session_attrs=primary. 
-func ValidateConnectTargetSessionAttrsPrimary(ctx context.Context, pgConn *PgConn) error { - result := pgConn.ExecParams(ctx, "select pg_is_in_recovery()", nil, nil, nil, nil).Read() - if result.Err != nil { - return result.Err - } - - if string(result.Rows[0][0]) == "t" { - return errors.New("server is in standby mode") - } - - return nil -} - -// ValidateConnectTargetSessionAttrsPreferStandby is an ValidateConnectFunc that implements libpq compatible -// target_session_attrs=prefer-standby. -func ValidateConnectTargetSessionAttrsPreferStandby(ctx context.Context, pgConn *PgConn) error { - result := pgConn.ExecParams(ctx, "select pg_is_in_recovery()", nil, nil, nil, nil).Read() - if result.Err != nil { - return result.Err - } - - if string(result.Rows[0][0]) != "t" { - return &NotPreferredError{err: errors.New("server is not in hot standby mode")} - } - - return nil -} diff --git a/vendor/github.com/jackc/pgconn/defaults.go b/vendor/github.com/jackc/pgconn/defaults.go deleted file mode 100644 index c7209fdd..00000000 --- a/vendor/github.com/jackc/pgconn/defaults.go +++ /dev/null @@ -1,65 +0,0 @@ -//go:build !windows -// +build !windows - -package pgconn - -import ( - "os" - "os/user" - "path/filepath" -) - -func defaultSettings() map[string]string { - settings := make(map[string]string) - - settings["host"] = defaultHost() - settings["port"] = "5432" - - // Default to the OS user name. Purposely ignoring err getting user name from - // OS. The client application will simply have to specify the user in that - // case (which they typically will be doing anyway). 
- user, err := user.Current() - if err == nil { - settings["user"] = user.Username - settings["passfile"] = filepath.Join(user.HomeDir, ".pgpass") - settings["servicefile"] = filepath.Join(user.HomeDir, ".pg_service.conf") - sslcert := filepath.Join(user.HomeDir, ".postgresql", "postgresql.crt") - sslkey := filepath.Join(user.HomeDir, ".postgresql", "postgresql.key") - if _, err := os.Stat(sslcert); err == nil { - if _, err := os.Stat(sslkey); err == nil { - // Both the cert and key must be present to use them, or do not use either - settings["sslcert"] = sslcert - settings["sslkey"] = sslkey - } - } - sslrootcert := filepath.Join(user.HomeDir, ".postgresql", "root.crt") - if _, err := os.Stat(sslrootcert); err == nil { - settings["sslrootcert"] = sslrootcert - } - } - - settings["target_session_attrs"] = "any" - - settings["min_read_buffer_size"] = "8192" - - return settings -} - -// defaultHost attempts to mimic libpq's default host. libpq uses the default unix socket location on *nix and localhost -// on Windows. The default socket location is compiled into libpq. Since pgx does not have access to that default it -// checks the existence of common locations. -func defaultHost() string { - candidatePaths := []string{ - "/var/run/postgresql", // Debian - "/private/tmp", // OSX - homebrew - "/tmp", // standard PostgreSQL - } - - for _, path := range candidatePaths { - if _, err := os.Stat(path); err == nil { - return path - } - } - - return "localhost" -} diff --git a/vendor/github.com/jackc/pgconn/defaults_windows.go b/vendor/github.com/jackc/pgconn/defaults_windows.go deleted file mode 100644 index 71eb77db..00000000 --- a/vendor/github.com/jackc/pgconn/defaults_windows.go +++ /dev/null @@ -1,59 +0,0 @@ -package pgconn - -import ( - "os" - "os/user" - "path/filepath" - "strings" -) - -func defaultSettings() map[string]string { - settings := make(map[string]string) - - settings["host"] = defaultHost() - settings["port"] = "5432" - - // Default to the OS user name. 
Purposely ignoring err getting user name from - // OS. The client application will simply have to specify the user in that - // case (which they typically will be doing anyway). - user, err := user.Current() - appData := os.Getenv("APPDATA") - if err == nil { - // Windows gives us the username here as `DOMAIN\user` or `LOCALPCNAME\user`, - // but the libpq default is just the `user` portion, so we strip off the first part. - username := user.Username - if strings.Contains(username, "\\") { - username = username[strings.LastIndex(username, "\\")+1:] - } - - settings["user"] = username - settings["passfile"] = filepath.Join(appData, "postgresql", "pgpass.conf") - settings["servicefile"] = filepath.Join(user.HomeDir, ".pg_service.conf") - sslcert := filepath.Join(appData, "postgresql", "postgresql.crt") - sslkey := filepath.Join(appData, "postgresql", "postgresql.key") - if _, err := os.Stat(sslcert); err == nil { - if _, err := os.Stat(sslkey); err == nil { - // Both the cert and key must be present to use them, or do not use either - settings["sslcert"] = sslcert - settings["sslkey"] = sslkey - } - } - sslrootcert := filepath.Join(appData, "postgresql", "root.crt") - if _, err := os.Stat(sslrootcert); err == nil { - settings["sslrootcert"] = sslrootcert - } - } - - settings["target_session_attrs"] = "any" - - settings["min_read_buffer_size"] = "8192" - - return settings -} - -// defaultHost attempts to mimic libpq's default host. libpq uses the default unix socket location on *nix and localhost -// on Windows. The default socket location is compiled into libpq. Since pgx does not have access to that default it -// checks the existence of common locations. 
-func defaultHost() string { - return "localhost" -} diff --git a/vendor/github.com/jackc/pgconn/doc.go b/vendor/github.com/jackc/pgconn/doc.go deleted file mode 100644 index cde58cd8..00000000 --- a/vendor/github.com/jackc/pgconn/doc.go +++ /dev/null @@ -1,29 +0,0 @@ -// Package pgconn is a low-level PostgreSQL database driver. -/* -pgconn provides lower level access to a PostgreSQL connection than a database/sql or pgx connection. It operates at -nearly the same level as the C library libpq. - -Establishing a Connection - -Use Connect to establish a connection. It accepts a connection string in URL or DSN and will read the environment for -libpq style environment variables. - -Executing a Query - -ExecParams and ExecPrepared execute a single query. They return readers that iterate over each row. The Read method -reads all rows into memory. - -Executing Multiple Queries in a Single Round Trip - -Exec and ExecBatch can execute multiple queries in a single round trip. They return readers that iterate over each query -result. The ReadAll method reads all query results into memory. - -Context Support - -All potentially blocking operations take a context.Context. If a context is canceled while the method is in progress the -method immediately returns. In most circumstances, this will close the underlying connection. - -The CancelRequest method may be used to request the PostgreSQL server cancel an in-progress query without forcing the -client to abort. -*/ -package pgconn diff --git a/vendor/github.com/jackc/pgconn/errors.go b/vendor/github.com/jackc/pgconn/errors.go deleted file mode 100644 index 66d35584..00000000 --- a/vendor/github.com/jackc/pgconn/errors.go +++ /dev/null @@ -1,238 +0,0 @@ -package pgconn - -import ( - "context" - "errors" - "fmt" - "net" - "net/url" - "regexp" - "strings" - ) - -// SafeToRetry checks if the err is guaranteed to have occurred before sending any data to the server.
-func SafeToRetry(err error) bool { - if e, ok := err.(interface{ SafeToRetry() bool }); ok { - return e.SafeToRetry() - } - return false -} - -// Timeout checks if err was caused by a timeout. To be specific, it is true if err was caused within pgconn by a -// context.Canceled, context.DeadlineExceeded or an implementer of net.Error where Timeout() is true. -func Timeout(err error) bool { - var timeoutErr *errTimeout - return errors.As(err, &timeoutErr) -} - -// PgError represents an error reported by the PostgreSQL server. See -// http://www.postgresql.org/docs/11/static/protocol-error-fields.html for -// detailed field description. -type PgError struct { - Severity string - Code string - Message string - Detail string - Hint string - Position int32 - InternalPosition int32 - InternalQuery string - Where string - SchemaName string - TableName string - ColumnName string - DataTypeName string - ConstraintName string - File string - Line int32 - Routine string -} - -func (pe *PgError) Error() string { - return pe.Severity + ": " + pe.Message + " (SQLSTATE " + pe.Code + ")" -} - -// SQLState returns the SQLState of the error. -func (pe *PgError) SQLState() string { - return pe.Code -} - -type connectError struct { - config *Config - msg string - err error -} - -func (e *connectError) Error() string { - sb := &strings.Builder{} - fmt.Fprintf(sb, "failed to connect to `host=%s user=%s database=%s`: %s", e.config.Host, e.config.User, e.config.Database, e.msg) - if e.err != nil { - fmt.Fprintf(sb, " (%s)", e.err.Error()) - } - return sb.String() -} - -func (e *connectError) Unwrap() error { - return e.err -} - -type connLockError struct { - status string -} - -func (e *connLockError) SafeToRetry() bool { - return true // a lock failure by definition happens before the connection is used.
-} - -func (e *connLockError) Error() string { - return e.status -} - -type parseConfigError struct { - connString string - msg string - err error -} - -func (e *parseConfigError) Error() string { - connString := redactPW(e.connString) - if e.err == nil { - return fmt.Sprintf("cannot parse `%s`: %s", connString, e.msg) - } - return fmt.Sprintf("cannot parse `%s`: %s (%s)", connString, e.msg, e.err.Error()) -} - -func (e *parseConfigError) Unwrap() error { - return e.err -} - -// preferContextOverNetTimeoutError returns ctx.Err() if ctx.Err() is present and err is a net.Error with Timeout() == -// true. Otherwise returns err. -func preferContextOverNetTimeoutError(ctx context.Context, err error) error { - if err, ok := err.(net.Error); ok && err.Timeout() && ctx.Err() != nil { - return &errTimeout{err: ctx.Err()} - } - return err -} - -type pgconnError struct { - msg string - err error - safeToRetry bool -} - -func (e *pgconnError) Error() string { - if e.msg == "" { - return e.err.Error() - } - if e.err == nil { - return e.msg - } - return fmt.Sprintf("%s: %s", e.msg, e.err.Error()) -} - -func (e *pgconnError) SafeToRetry() bool { - return e.safeToRetry -} - -func (e *pgconnError) Unwrap() error { - return e.err -} - -// errTimeout occurs when an error was caused by a timeout. Specifically, it wraps an error which is -// context.Canceled, context.DeadlineExceeded, or an implementer of net.Error where Timeout() is true. 
-type errTimeout struct { - err error -} - -func (e *errTimeout) Error() string { - return fmt.Sprintf("timeout: %s", e.err.Error()) -} - -func (e *errTimeout) SafeToRetry() bool { - return SafeToRetry(e.err) -} - -func (e *errTimeout) Unwrap() error { - return e.err -} - -type contextAlreadyDoneError struct { - err error -} - -func (e *contextAlreadyDoneError) Error() string { - return fmt.Sprintf("context already done: %s", e.err.Error()) -} - -func (e *contextAlreadyDoneError) SafeToRetry() bool { - return true -} - -func (e *contextAlreadyDoneError) Unwrap() error { - return e.err -} - -// newContextAlreadyDoneError double-wraps a context error in `contextAlreadyDoneError` and `errTimeout`. -func newContextAlreadyDoneError(ctx context.Context) (err error) { - return &errTimeout{&contextAlreadyDoneError{err: ctx.Err()}} -} - -type writeError struct { - err error - safeToRetry bool -} - -func (e *writeError) Error() string { - return fmt.Sprintf("write failed: %s", e.err.Error()) -} - -func (e *writeError) SafeToRetry() bool { - return e.safeToRetry -} - -func (e *writeError) Unwrap() error { - return e.err -} - -func redactPW(connString string) string { - if strings.HasPrefix(connString, "postgres://") || strings.HasPrefix(connString, "postgresql://") { - if u, err := url.Parse(connString); err == nil { - return redactURL(u) - } - } - quotedDSN := regexp.MustCompile(`password='[^']*'`) - connString = quotedDSN.ReplaceAllLiteralString(connString, "password=xxxxx") - plainDSN := regexp.MustCompile(`password=[^ ]*`) - connString = plainDSN.ReplaceAllLiteralString(connString, "password=xxxxx") - brokenURL := regexp.MustCompile(`:[^:@]+?@`) - connString = brokenURL.ReplaceAllLiteralString(connString, ":xxxxxx@") - return connString -} - -func redactURL(u *url.URL) string { - if u == nil { - return "" - } - if _, pwSet := u.User.Password(); pwSet { - u.User = url.UserPassword(u.User.Username(), "xxxxx") - } - return u.String() -} - -type NotPreferredError struct { - 
err error - safeToRetry bool -} - -func (e *NotPreferredError) Error() string { - return fmt.Sprintf("standby server not found: %s", e.err.Error()) -} - -func (e *NotPreferredError) SafeToRetry() bool { - return e.safeToRetry -} - -func (e *NotPreferredError) Unwrap() error { - return e.err -} diff --git a/vendor/github.com/jackc/pgconn/internal/ctxwatch/context_watcher.go b/vendor/github.com/jackc/pgconn/internal/ctxwatch/context_watcher.go deleted file mode 100644 index b39cb3ee..00000000 --- a/vendor/github.com/jackc/pgconn/internal/ctxwatch/context_watcher.go +++ /dev/null @@ -1,73 +0,0 @@ -package ctxwatch - -import ( - "context" - "sync" -) - -// ContextWatcher watches a context and performs an action when the context is canceled. It can watch one context at a -// time. -type ContextWatcher struct { - onCancel func() - onUnwatchAfterCancel func() - unwatchChan chan struct{} - - lock sync.Mutex - watchInProgress bool - onCancelWasCalled bool -} - -// NewContextWatcher returns a ContextWatcher. onCancel will be called when a watched context is canceled. -// OnUnwatchAfterCancel will be called when Unwatch is called and the watched context had already been canceled and -// onCancel called. -func NewContextWatcher(onCancel func(), onUnwatchAfterCancel func()) *ContextWatcher { - cw := &ContextWatcher{ - onCancel: onCancel, - onUnwatchAfterCancel: onUnwatchAfterCancel, - unwatchChan: make(chan struct{}), - } - - return cw -} - -// Watch starts watching ctx. If ctx is canceled then the onCancel function passed to NewContextWatcher will be called. 
-func (cw *ContextWatcher) Watch(ctx context.Context) { - cw.lock.Lock() - defer cw.lock.Unlock() - - if cw.watchInProgress { - panic("Watch already in progress") - } - - cw.onCancelWasCalled = false - - if ctx.Done() != nil { - cw.watchInProgress = true - go func() { - select { - case <-ctx.Done(): - cw.onCancel() - cw.onCancelWasCalled = true - <-cw.unwatchChan - case <-cw.unwatchChan: - } - }() - } else { - cw.watchInProgress = false - } -} - -// Unwatch stops watching the previously watched context. If the onCancel function passed to NewContextWatcher was -// called then onUnwatchAfterCancel will also be called. -func (cw *ContextWatcher) Unwatch() { - cw.lock.Lock() - defer cw.lock.Unlock() - - if cw.watchInProgress { - cw.unwatchChan <- struct{}{} - if cw.onCancelWasCalled { - cw.onUnwatchAfterCancel() - } - cw.watchInProgress = false - } -} diff --git a/vendor/github.com/jackc/pgconn/krb5.go b/vendor/github.com/jackc/pgconn/krb5.go deleted file mode 100644 index 08427b8e..00000000 --- a/vendor/github.com/jackc/pgconn/krb5.go +++ /dev/null @@ -1,99 +0,0 @@ -package pgconn - -import ( - "errors" - "fmt" - - "github.com/jackc/pgproto3/v2" -) - -// NewGSSFunc creates a GSS authentication provider, for use with -// RegisterGSSProvider. -type NewGSSFunc func() (GSS, error) - -var newGSS NewGSSFunc - -// RegisterGSSProvider registers a GSS authentication provider. For example, if -// you need to use Kerberos to authenticate with your server, add this to your -// main package: -// -// import "github.com/otan/gopgkrb5" -// -// func init() { -// pgconn.RegisterGSSProvider(func() (pgconn.GSS, error) { return gopgkrb5.NewGSS() }) -// } -func RegisterGSSProvider(newGSSArg NewGSSFunc) { - newGSS = newGSSArg -} - -// GSS provides GSSAPI authentication (e.g., Kerberos). 
-type GSS interface { - GetInitToken(host string, service string) ([]byte, error) - GetInitTokenFromSPN(spn string) ([]byte, error) - Continue(inToken []byte) (done bool, outToken []byte, err error) -} - -func (c *PgConn) gssAuth() error { - if newGSS == nil { - return errors.New("kerberos error: no GSSAPI provider registered, see https://github.com/otan/gopgkrb5") - } - cli, err := newGSS() - if err != nil { - return err - } - - var nextData []byte - if c.config.KerberosSpn != "" { - // Use the supplied SPN if provided. - nextData, err = cli.GetInitTokenFromSPN(c.config.KerberosSpn) - } else { - // Allow the kerberos service name to be overridden - service := "postgres" - if c.config.KerberosSrvName != "" { - service = c.config.KerberosSrvName - } - nextData, err = cli.GetInitToken(c.config.Host, service) - } - if err != nil { - return err - } - - for { - gssResponse := &pgproto3.GSSResponse{ - Data: nextData, - } - _, err = c.conn.Write(gssResponse.Encode(nil)) - if err != nil { - return err - } - resp, err := c.rxGSSContinue() - if err != nil { - return err - } - var done bool - done, nextData, err = cli.Continue(resp.Data) - if err != nil { - return err - } - if done { - break - } - } - return nil -} - -func (c *PgConn) rxGSSContinue() (*pgproto3.AuthenticationGSSContinue, error) { - msg, err := c.receiveMessage() - if err != nil { - return nil, err - } - - switch m := msg.(type) { - case *pgproto3.AuthenticationGSSContinue: - return m, nil - case *pgproto3.ErrorResponse: - return nil, ErrorResponseToPgError(m) - } - - return nil, fmt.Errorf("expected AuthenticationGSSContinue message but received unexpected message %T", msg) -} diff --git a/vendor/github.com/jackc/pgconn/pgconn.go b/vendor/github.com/jackc/pgconn/pgconn.go deleted file mode 100644 index 6601194c..00000000 --- a/vendor/github.com/jackc/pgconn/pgconn.go +++ /dev/null @@ -1,1770 +0,0 @@ -package pgconn - -import ( - "context" - "crypto/md5" - "crypto/tls" - "encoding/binary" - "encoding/hex" - 
"errors" - "fmt" - "io" - "math" - "net" - "strconv" - "strings" - "sync" - "time" - - "github.com/jackc/pgconn/internal/ctxwatch" - "github.com/jackc/pgio" - "github.com/jackc/pgproto3/v2" -) - -const ( - connStatusUninitialized = iota - connStatusConnecting - connStatusClosed - connStatusIdle - connStatusBusy -) - -const wbufLen = 1024 - -// Notice represents a notice response message reported by the PostgreSQL server. Be aware that this is distinct from -// LISTEN/NOTIFY notification. -type Notice PgError - -// Notification is a message received from the PostgreSQL LISTEN/NOTIFY system -type Notification struct { - PID uint32 // backend pid that sent the notification - Channel string // channel from which notification was received - Payload string -} - -// DialFunc is a function that can be used to connect to a PostgreSQL server. -type DialFunc func(ctx context.Context, network, addr string) (net.Conn, error) - -// LookupFunc is a function that can be used to lookup IPs addrs from host. Optionally an ip:port combination can be -// returned in order to override the connection string's port. -type LookupFunc func(ctx context.Context, host string) (addrs []string, err error) - -// BuildFrontendFunc is a function that can be used to create Frontend implementation for connection. -type BuildFrontendFunc func(r io.Reader, w io.Writer) Frontend - -// NoticeHandler is a function that can handle notices received from the PostgreSQL server. Notices can be received at -// any time, usually during handling of a query response. The *PgConn is provided so the handler is aware of the origin -// of the notice, but it must not invoke any query method. Be aware that this is distinct from LISTEN/NOTIFY -// notification. -type NoticeHandler func(*PgConn, *Notice) - -// NotificationHandler is a function that can handle notifications received from the PostgreSQL server. Notifications -// can be received at any time, usually during handling of a query response. 
The *PgConn is provided so the handler is -// aware of the origin of the notice, but it must not invoke any query method. Be aware that this is distinct from a -// notice event. -type NotificationHandler func(*PgConn, *Notification) - -// Frontend used to receive messages from backend. -type Frontend interface { - Receive() (pgproto3.BackendMessage, error) -} - -// PgConn is a low-level PostgreSQL connection handle. It is not safe for concurrent usage. -type PgConn struct { - conn net.Conn // the underlying TCP or unix domain socket connection - pid uint32 // backend pid - secretKey uint32 // key to use to send a cancel query message to the server - parameterStatuses map[string]string // parameters that have been reported by the server - txStatus byte - frontend Frontend - - config *Config - - status byte // One of connStatus* constants - - bufferingReceive bool - bufferingReceiveMux sync.Mutex - bufferingReceiveMsg pgproto3.BackendMessage - bufferingReceiveErr error - - peekedMsg pgproto3.BackendMessage - - // Reusable / preallocated resources - wbuf []byte // write buffer - resultReader ResultReader - multiResultReader MultiResultReader - contextWatcher *ctxwatch.ContextWatcher - - cleanupDone chan struct{} -} - -// Connect establishes a connection to a PostgreSQL server using the environment and connString (in URL or DSN format) -// to provide configuration. See documentation for ParseConfig for details. ctx can be used to cancel a connect attempt. -func Connect(ctx context.Context, connString string) (*PgConn, error) { - config, err := ParseConfig(connString) - if err != nil { - return nil, err - } - - return ConnectConfig(ctx, config) -} - -// Connect establishes a connection to a PostgreSQL server using the environment and connString (in URL or DSN format) -// and ParseConfigOptions to provide additional configuration. See documentation for ParseConfig for details. ctx can be -// used to cancel a connect attempt. 
-func ConnectWithOptions(ctx context.Context, connString string, parseConfigOptions ParseConfigOptions) (*PgConn, error) { - config, err := ParseConfigWithOptions(connString, parseConfigOptions) - if err != nil { - return nil, err - } - - return ConnectConfig(ctx, config) -} - -// Connect establishes a connection to a PostgreSQL server using config. config must have been constructed with -// ParseConfig. ctx can be used to cancel a connect attempt. -// -// If config.Fallbacks are present they will sequentially be tried in case of error establishing network connection. An -// authentication error will terminate the chain of attempts (like libpq: -// https://www.postgresql.org/docs/11/libpq-connect.html#LIBPQ-MULTIPLE-HOSTS) and be returned as the error. Otherwise, -// if all attempts fail the last error is returned. -func ConnectConfig(octx context.Context, config *Config) (pgConn *PgConn, err error) { - // Default values are set in ParseConfig. Enforce initial creation by ParseConfig rather than setting defaults from - // zero values. - if !config.createdByParseConfig { - panic("config must be created by ParseConfig") - } - - // Simplify usage by treating primary config and fallbacks the same. - fallbackConfigs := []*FallbackConfig{ - { - Host: config.Host, - Port: config.Port, - TLSConfig: config.TLSConfig, - }, - } - fallbackConfigs = append(fallbackConfigs, config.Fallbacks...) - ctx := octx - fallbackConfigs, err = expandWithIPs(ctx, config.LookupFunc, fallbackConfigs) - if err != nil { - return nil, &connectError{config: config, msg: "hostname resolving error", err: err} - } - - if len(fallbackConfigs) == 0 { - return nil, &connectError{config: config, msg: "hostname resolving error", err: errors.New("ip addr wasn't found")} - } - - foundBestServer := false - var fallbackConfig *FallbackConfig - for _, fc := range fallbackConfigs { - // ConnectTimeout restricts the whole connection process. 
- if config.ConnectTimeout != 0 { - var cancel context.CancelFunc - ctx, cancel = context.WithTimeout(octx, config.ConnectTimeout) - defer cancel() - } else { - ctx = octx - } - pgConn, err = connect(ctx, config, fc, false) - if err == nil { - foundBestServer = true - break - } else if pgerr, ok := err.(*PgError); ok { - err = &connectError{config: config, msg: "server error", err: pgerr} - const ERRCODE_INVALID_PASSWORD = "28P01" // wrong password - const ERRCODE_INVALID_AUTHORIZATION_SPECIFICATION = "28000" // wrong password or bad pg_hba.conf settings - const ERRCODE_INVALID_CATALOG_NAME = "3D000" // db does not exist - const ERRCODE_INSUFFICIENT_PRIVILEGE = "42501" // missing connect privilege - if pgerr.Code == ERRCODE_INVALID_PASSWORD || - pgerr.Code == ERRCODE_INVALID_AUTHORIZATION_SPECIFICATION || - pgerr.Code == ERRCODE_INVALID_CATALOG_NAME || - pgerr.Code == ERRCODE_INSUFFICIENT_PRIVILEGE { - break - } - } else if cerr, ok := err.(*connectError); ok { - if _, ok := cerr.err.(*NotPreferredError); ok { - fallbackConfig = fc - } - } - } - - if !foundBestServer && fallbackConfig != nil { - pgConn, err = connect(ctx, config, fallbackConfig, true) - if pgerr, ok := err.(*PgError); ok { - err = &connectError{config: config, msg: "server error", err: pgerr} - } - } - - if err != nil { - return nil, err // no need to wrap in connectError because it will already be wrapped in all cases except PgError - } - - if config.AfterConnect != nil { - err := config.AfterConnect(ctx, pgConn) - if err != nil { - pgConn.conn.Close() - return nil, &connectError{config: config, msg: "AfterConnect error", err: err} - } - } - - return pgConn, nil -} - -func expandWithIPs(ctx context.Context, lookupFn LookupFunc, fallbacks []*FallbackConfig) ([]*FallbackConfig, error) { - var configs []*FallbackConfig - - for _, fb := range fallbacks { - // skip resolve for unix sockets - if isAbsolutePath(fb.Host) { - configs = append(configs, &FallbackConfig{ - Host: fb.Host, - Port: fb.Port, - 
TLSConfig: fb.TLSConfig, - }) - - continue - } - - ips, err := lookupFn(ctx, fb.Host) - if err != nil { - return nil, err - } - - for _, ip := range ips { - splitIP, splitPort, err := net.SplitHostPort(ip) - if err == nil { - port, err := strconv.ParseUint(splitPort, 10, 16) - if err != nil { - return nil, fmt.Errorf("error parsing port (%s) from lookup: %w", splitPort, err) - } - configs = append(configs, &FallbackConfig{ - Host: splitIP, - Port: uint16(port), - TLSConfig: fb.TLSConfig, - }) - } else { - configs = append(configs, &FallbackConfig{ - Host: ip, - Port: fb.Port, - TLSConfig: fb.TLSConfig, - }) - } - } - } - - return configs, nil -} - -func connect(ctx context.Context, config *Config, fallbackConfig *FallbackConfig, - ignoreNotPreferredErr bool) (*PgConn, error) { - pgConn := new(PgConn) - pgConn.config = config - pgConn.wbuf = make([]byte, 0, wbufLen) - pgConn.cleanupDone = make(chan struct{}) - - var err error - network, address := NetworkAddress(fallbackConfig.Host, fallbackConfig.Port) - netConn, err := config.DialFunc(ctx, network, address) - if err != nil { - var netErr net.Error - if errors.As(err, &netErr) && netErr.Timeout() { - err = &errTimeout{err: err} - } - return nil, &connectError{config: config, msg: "dial error", err: err} - } - - pgConn.conn = netConn - pgConn.contextWatcher = newContextWatcher(netConn) - pgConn.contextWatcher.Watch(ctx) - - if fallbackConfig.TLSConfig != nil { - tlsConn, err := startTLS(netConn, fallbackConfig.TLSConfig) - pgConn.contextWatcher.Unwatch() // Always unwatch `netConn` after TLS. 
- if err != nil { - netConn.Close() - return nil, &connectError{config: config, msg: "tls error", err: err} - } - - pgConn.conn = tlsConn - pgConn.contextWatcher = newContextWatcher(tlsConn) - pgConn.contextWatcher.Watch(ctx) - } - - defer pgConn.contextWatcher.Unwatch() - - pgConn.parameterStatuses = make(map[string]string) - pgConn.status = connStatusConnecting - pgConn.frontend = config.BuildFrontend(pgConn.conn, pgConn.conn) - - startupMsg := pgproto3.StartupMessage{ - ProtocolVersion: pgproto3.ProtocolVersionNumber, - Parameters: make(map[string]string), - } - - // Copy default run-time params - for k, v := range config.RuntimeParams { - startupMsg.Parameters[k] = v - } - - startupMsg.Parameters["user"] = config.User - if config.Database != "" { - startupMsg.Parameters["database"] = config.Database - } - - if _, err := pgConn.conn.Write(startupMsg.Encode(pgConn.wbuf)); err != nil { - pgConn.conn.Close() - return nil, &connectError{config: config, msg: "failed to write startup message", err: err} - } - - for { - msg, err := pgConn.receiveMessage() - if err != nil { - pgConn.conn.Close() - if err, ok := err.(*PgError); ok { - return nil, err - } - return nil, &connectError{config: config, msg: "failed to receive message", err: preferContextOverNetTimeoutError(ctx, err)} - } - - switch msg := msg.(type) { - case *pgproto3.BackendKeyData: - pgConn.pid = msg.ProcessID - pgConn.secretKey = msg.SecretKey - - case *pgproto3.AuthenticationOk: - case *pgproto3.AuthenticationCleartextPassword: - err = pgConn.txPasswordMessage(pgConn.config.Password) - if err != nil { - pgConn.conn.Close() - return nil, &connectError{config: config, msg: "failed to write password message", err: err} - } - case *pgproto3.AuthenticationMD5Password: - digestedPassword := "md5" + hexMD5(hexMD5(pgConn.config.Password+pgConn.config.User)+string(msg.Salt[:])) - err = pgConn.txPasswordMessage(digestedPassword) - if err != nil { - pgConn.conn.Close() - return nil, &connectError{config: config, 
msg: "failed to write password message", err: err} - } - case *pgproto3.AuthenticationSASL: - err = pgConn.scramAuth(msg.AuthMechanisms) - if err != nil { - pgConn.conn.Close() - return nil, &connectError{config: config, msg: "failed SASL auth", err: err} - } - case *pgproto3.AuthenticationGSS: - err = pgConn.gssAuth() - if err != nil { - pgConn.conn.Close() - return nil, &connectError{config: config, msg: "failed GSS auth", err: err} - } - case *pgproto3.ReadyForQuery: - pgConn.status = connStatusIdle - if config.ValidateConnect != nil { - // ValidateConnect may execute commands that cause the context to be watched again. Unwatch first to avoid - // the watch already in progress panic. This is that last thing done by this method so there is no need to - // restart the watch after ValidateConnect returns. - // - // See https://github.com/jackc/pgconn/issues/40. - pgConn.contextWatcher.Unwatch() - - err := config.ValidateConnect(ctx, pgConn) - if err != nil { - if _, ok := err.(*NotPreferredError); ignoreNotPreferredErr && ok { - return pgConn, nil - } - pgConn.conn.Close() - return nil, &connectError{config: config, msg: "ValidateConnect failed", err: err} - } - } - return pgConn, nil - case *pgproto3.ParameterStatus, *pgproto3.NoticeResponse: - // handled by ReceiveMessage - case *pgproto3.ErrorResponse: - pgConn.conn.Close() - return nil, ErrorResponseToPgError(msg) - default: - pgConn.conn.Close() - return nil, &connectError{config: config, msg: "received unexpected message", err: err} - } - } -} - -func newContextWatcher(conn net.Conn) *ctxwatch.ContextWatcher { - return ctxwatch.NewContextWatcher( - func() { conn.SetDeadline(time.Date(1, 1, 1, 1, 1, 1, 1, time.UTC)) }, - func() { conn.SetDeadline(time.Time{}) }, - ) -} - -func startTLS(conn net.Conn, tlsConfig *tls.Config) (net.Conn, error) { - err := binary.Write(conn, binary.BigEndian, []int32{8, 80877103}) - if err != nil { - return nil, err - } - - response := make([]byte, 1) - if _, err = 
io.ReadFull(conn, response); err != nil { - return nil, err - } - - if response[0] != 'S' { - return nil, errors.New("server refused TLS connection") - } - - return tls.Client(conn, tlsConfig), nil -} - -func (pgConn *PgConn) txPasswordMessage(password string) (err error) { - msg := &pgproto3.PasswordMessage{Password: password} - _, err = pgConn.conn.Write(msg.Encode(pgConn.wbuf)) - return err -} - -func hexMD5(s string) string { - hash := md5.New() - io.WriteString(hash, s) - return hex.EncodeToString(hash.Sum(nil)) -} - -func (pgConn *PgConn) signalMessage() chan struct{} { - if pgConn.bufferingReceive { - panic("BUG: signalMessage when already in progress") - } - - pgConn.bufferingReceive = true - pgConn.bufferingReceiveMux.Lock() - - ch := make(chan struct{}) - go func() { - pgConn.bufferingReceiveMsg, pgConn.bufferingReceiveErr = pgConn.frontend.Receive() - pgConn.bufferingReceiveMux.Unlock() - close(ch) - }() - - return ch -} - -// SendBytes sends buf to the PostgreSQL server. It must only be used when the connection is not busy. e.g. It is an -// error to call SendBytes while reading the result of a query. - -// This is a very low level method that requires deep understanding of the PostgreSQL wire protocol to use correctly. -// See https://www.postgresql.org/docs/current/protocol.html. -func (pgConn *PgConn) SendBytes(ctx context.Context, buf []byte) error { - if err := pgConn.lock(); err != nil { - return err - } - defer pgConn.unlock() - - if ctx != context.Background() { - select { - case <-ctx.Done(): - return newContextAlreadyDoneError(ctx) - default: - } - pgConn.contextWatcher.Watch(ctx) - defer pgConn.contextWatcher.Unwatch() - } - - n, err := pgConn.conn.Write(buf) - if err != nil { - pgConn.asyncClose() - return &writeError{err: err, safeToRetry: n == 0} - } - - return nil -} - -// ReceiveMessage receives one wire protocol message from the PostgreSQL server. It must only be used when the -// connection is not busy. e.g.
It is an error to call ReceiveMessage while reading the result of a query. The messages -// are still handled by the core pgconn message handling system so receiving a NotificationResponse will still trigger -// the OnNotification callback. -// -// This is a very low level method that requires deep understanding of the PostgreSQL wire protocol to use correctly. -// See https://www.postgresql.org/docs/current/protocol.html. -func (pgConn *PgConn) ReceiveMessage(ctx context.Context) (pgproto3.BackendMessage, error) { - if err := pgConn.lock(); err != nil { - return nil, err - } - defer pgConn.unlock() - - if ctx != context.Background() { - select { - case <-ctx.Done(): - return nil, newContextAlreadyDoneError(ctx) - default: - } - pgConn.contextWatcher.Watch(ctx) - defer pgConn.contextWatcher.Unwatch() - } - - msg, err := pgConn.receiveMessage() - if err != nil { - err = &pgconnError{ - msg: "receive message failed", - err: preferContextOverNetTimeoutError(ctx, err), - safeToRetry: true} - } - return msg, err -} - -// peekMessage peeks at the next message without setting up context cancellation. -func (pgConn *PgConn) peekMessage() (pgproto3.BackendMessage, error) { - if pgConn.peekedMsg != nil { - return pgConn.peekedMsg, nil - } - - var msg pgproto3.BackendMessage - var err error - if pgConn.bufferingReceive { - pgConn.bufferingReceiveMux.Lock() - msg = pgConn.bufferingReceiveMsg - err = pgConn.bufferingReceiveErr - pgConn.bufferingReceiveMux.Unlock() - pgConn.bufferingReceive = false - - // If a timeout error happened in the background try the read again. 
- var netErr net.Error - if errors.As(err, &netErr) && netErr.Timeout() { - msg, err = pgConn.frontend.Receive() - } - } else { - msg, err = pgConn.frontend.Receive() - } - - if err != nil { - // Close on anything other than timeout error - everything else is fatal - var netErr net.Error - isNetErr := errors.As(err, &netErr) - if !(isNetErr && netErr.Timeout()) { - pgConn.asyncClose() - } - - return nil, err - } - - pgConn.peekedMsg = msg - return msg, nil -} - -// receiveMessage receives a message without setting up context cancellation -func (pgConn *PgConn) receiveMessage() (pgproto3.BackendMessage, error) { - msg, err := pgConn.peekMessage() - if err != nil { - // Close on anything other than timeout error - everything else is fatal - var netErr net.Error - isNetErr := errors.As(err, &netErr) - if !(isNetErr && netErr.Timeout()) { - pgConn.asyncClose() - } - - return nil, err - } - pgConn.peekedMsg = nil - - switch msg := msg.(type) { - case *pgproto3.ReadyForQuery: - pgConn.txStatus = msg.TxStatus - case *pgproto3.ParameterStatus: - pgConn.parameterStatuses[msg.Name] = msg.Value - case *pgproto3.ErrorResponse: - if msg.Severity == "FATAL" { - pgConn.status = connStatusClosed - pgConn.conn.Close() // Ignore error as the connection is already broken and there is already an error to return. - close(pgConn.cleanupDone) - return nil, ErrorResponseToPgError(msg) - } - case *pgproto3.NoticeResponse: - if pgConn.config.OnNotice != nil { - pgConn.config.OnNotice(pgConn, noticeResponseToNotice(msg)) - } - case *pgproto3.NotificationResponse: - if pgConn.config.OnNotification != nil { - pgConn.config.OnNotification(pgConn, &Notification{PID: msg.PID, Channel: msg.Channel, Payload: msg.Payload}) - } - } - - return msg, nil -} - -// Conn returns the underlying net.Conn. -func (pgConn *PgConn) Conn() net.Conn { - return pgConn.conn -} - -// PID returns the backend PID. 
-func (pgConn *PgConn) PID() uint32 { - return pgConn.pid -} - -// TxStatus returns the current TxStatus as reported by the server in the ReadyForQuery message. -// -// Possible return values: -// 'I' - idle / not in transaction -// 'T' - in a transaction -// 'E' - in a failed transaction -// -// See https://www.postgresql.org/docs/current/protocol-message-formats.html. -func (pgConn *PgConn) TxStatus() byte { - return pgConn.txStatus -} - -// SecretKey returns the backend secret key used to send a cancel query message to the server. -func (pgConn *PgConn) SecretKey() uint32 { - return pgConn.secretKey -} - -// Close closes a connection. It is safe to call Close on a already closed connection. Close attempts a clean close by -// sending the exit message to PostgreSQL. However, this could block so ctx is available to limit the time to wait. The -// underlying net.Conn.Close() will always be called regardless of any other errors. -func (pgConn *PgConn) Close(ctx context.Context) error { - if pgConn.status == connStatusClosed { - return nil - } - pgConn.status = connStatusClosed - - defer close(pgConn.cleanupDone) - defer pgConn.conn.Close() - - if ctx != context.Background() { - // Close may be called while a cancellable query is in progress. This will most often be triggered by panic when - // a defer closes the connection (possibly indirectly via a transaction or a connection pool). Unwatch to end any - // previous watch. It is safe to Unwatch regardless of whether a watch is already is progress. - // - // See https://github.com/jackc/pgconn/issues/29 - pgConn.contextWatcher.Unwatch() - - pgConn.contextWatcher.Watch(ctx) - defer pgConn.contextWatcher.Unwatch() - } - - // Ignore any errors sending Terminate message and waiting for server to close connection. - // This mimics the behavior of libpq PQfinish. It calls closePGconn which calls sendTerminateConn which purposefully - // ignores errors. 
- // - // See https://github.com/jackc/pgx/issues/637 - pgConn.conn.Write([]byte{'X', 0, 0, 0, 4}) - - return pgConn.conn.Close() -} - -// asyncClose marks the connection as closed and asynchronously sends a cancel query message and closes the underlying -// connection. -func (pgConn *PgConn) asyncClose() { - if pgConn.status == connStatusClosed { - return - } - pgConn.status = connStatusClosed - - go func() { - defer close(pgConn.cleanupDone) - defer pgConn.conn.Close() - - deadline := time.Now().Add(time.Second * 15) - - ctx, cancel := context.WithDeadline(context.Background(), deadline) - defer cancel() - - pgConn.CancelRequest(ctx) - - pgConn.conn.SetDeadline(deadline) - - pgConn.conn.Write([]byte{'X', 0, 0, 0, 4}) - }() -} - -// CleanupDone returns a channel that will be closed after all underlying resources have been cleaned up. A closed -// connection is no longer usable, but underlying resources, in particular the net.Conn, may not have finished closing -// yet. This is because certain errors such as a context cancellation require that the interrupted function call return -// immediately, but the error may also cause the connection to be closed. In these cases the underlying resources are -// closed asynchronously. -// -// This is only likely to be useful to connection pools. It gives them a way avoid establishing a new connection while -// an old connection is still being cleaned up and thereby exceeding the maximum pool size. -func (pgConn *PgConn) CleanupDone() chan (struct{}) { - return pgConn.cleanupDone -} - -// IsClosed reports if the connection has been closed. -// -// CleanupDone() can be used to determine if all cleanup has been completed. -func (pgConn *PgConn) IsClosed() bool { - return pgConn.status < connStatusIdle -} - -// IsBusy reports if the connection is busy. -func (pgConn *PgConn) IsBusy() bool { - return pgConn.status == connStatusBusy -} - -// lock locks the connection. 
-func (pgConn *PgConn) lock() error { - switch pgConn.status { - case connStatusBusy: - return &connLockError{status: "conn busy"} // This only should be possible in case of an application bug. - case connStatusClosed: - return &connLockError{status: "conn closed"} - case connStatusUninitialized: - return &connLockError{status: "conn uninitialized"} - } - pgConn.status = connStatusBusy - return nil -} - -func (pgConn *PgConn) unlock() { - switch pgConn.status { - case connStatusBusy: - pgConn.status = connStatusIdle - case connStatusClosed: - default: - panic("BUG: cannot unlock unlocked connection") // This should only be possible if there is a bug in this package. - } -} - -// ParameterStatus returns the value of a parameter reported by the server (e.g. -// server_version). Returns an empty string for unknown parameters. -func (pgConn *PgConn) ParameterStatus(key string) string { - return pgConn.parameterStatuses[key] -} - -// CommandTag is the result of an Exec function -type CommandTag []byte - -// RowsAffected returns the number of rows affected. If the CommandTag was not -// for a row affecting command (e.g. "CREATE TABLE") then it returns 0. -func (ct CommandTag) RowsAffected() int64 { - // Find last non-digit - idx := -1 - for i := len(ct) - 1; i >= 0; i-- { - if ct[i] >= '0' && ct[i] <= '9' { - idx = i - } else { - break - } - } - - if idx == -1 { - return 0 - } - - var n int64 - for _, b := range ct[idx:] { - n = n*10 + int64(b-'0') - } - - return n -} - -func (ct CommandTag) String() string { - return string(ct) -} - -// Insert is true if the command tag starts with "INSERT". -func (ct CommandTag) Insert() bool { - return len(ct) >= 6 && - ct[0] == 'I' && - ct[1] == 'N' && - ct[2] == 'S' && - ct[3] == 'E' && - ct[4] == 'R' && - ct[5] == 'T' -} - -// Update is true if the command tag starts with "UPDATE". 
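The RowsAffected logic in the deleted code above only parses the trailing run of ASCII digits in the command tag, which is why a tag like "INSERT 0 5" reports 5 (the middle 0 is the inserted row's OID field, not a count) and "CREATE TABLE" reports 0. A minimal standalone sketch of that scan (the helper name `rowsAffected` is hypothetical, not part of pgconn's API):

```go
package main

import "fmt"

// rowsAffected mirrors CommandTag.RowsAffected: walk backwards from the end
// of the tag while the bytes are ASCII digits, then parse that trailing run.
// Tags with no trailing digits ("CREATE TABLE") yield 0.
func rowsAffected(ct []byte) int64 {
	idx := -1
	for i := len(ct) - 1; i >= 0; i-- {
		if ct[i] >= '0' && ct[i] <= '9' {
			idx = i
		} else {
			break
		}
	}
	if idx == -1 {
		return 0
	}
	var n int64
	for _, b := range ct[idx:] {
		n = n*10 + int64(b-'0')
	}
	return n
}

func main() {
	fmt.Println(rowsAffected([]byte("INSERT 0 5")))   // 5
	fmt.Println(rowsAffected([]byte("UPDATE 23")))    // 23
	fmt.Println(rowsAffected([]byte("CREATE TABLE"))) // 0
}
```

Scanning from the end rather than splitting on spaces is what makes the INSERT tag's OID field fall out naturally: the scan stops at the space before the count.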
-func (ct CommandTag) Update() bool {
-    return len(ct) >= 6 &&
-        ct[0] == 'U' &&
-        ct[1] == 'P' &&
-        ct[2] == 'D' &&
-        ct[3] == 'A' &&
-        ct[4] == 'T' &&
-        ct[5] == 'E'
-}
-
-// Delete is true if the command tag starts with "DELETE".
-func (ct CommandTag) Delete() bool {
-    return len(ct) >= 6 &&
-        ct[0] == 'D' &&
-        ct[1] == 'E' &&
-        ct[2] == 'L' &&
-        ct[3] == 'E' &&
-        ct[4] == 'T' &&
-        ct[5] == 'E'
-}
-
-// Select is true if the command tag starts with "SELECT".
-func (ct CommandTag) Select() bool {
-    return len(ct) >= 6 &&
-        ct[0] == 'S' &&
-        ct[1] == 'E' &&
-        ct[2] == 'L' &&
-        ct[3] == 'E' &&
-        ct[4] == 'C' &&
-        ct[5] == 'T'
-}
-
-type StatementDescription struct {
-    Name      string
-    SQL       string
-    ParamOIDs []uint32
-    Fields    []pgproto3.FieldDescription
-}
-
-// Prepare creates a prepared statement. If the name is empty, the anonymous prepared statement will be used. This
-// allows Prepare to also describe statements without creating a server-side prepared statement.
-func (pgConn *PgConn) Prepare(ctx context.Context, name, sql string, paramOIDs []uint32) (*StatementDescription, error) {
-    if err := pgConn.lock(); err != nil {
-        return nil, err
-    }
-    defer pgConn.unlock()
-
-    if ctx != context.Background() {
-        select {
-        case <-ctx.Done():
-            return nil, newContextAlreadyDoneError(ctx)
-        default:
-        }
-        pgConn.contextWatcher.Watch(ctx)
-        defer pgConn.contextWatcher.Unwatch()
-    }
-
-    buf := pgConn.wbuf
-    buf = (&pgproto3.Parse{Name: name, Query: sql, ParameterOIDs: paramOIDs}).Encode(buf)
-    buf = (&pgproto3.Describe{ObjectType: 'S', Name: name}).Encode(buf)
-    buf = (&pgproto3.Sync{}).Encode(buf)
-
-    n, err := pgConn.conn.Write(buf)
-    if err != nil {
-        pgConn.asyncClose()
-        return nil, &writeError{err: err, safeToRetry: n == 0}
-    }
-
-    psd := &StatementDescription{Name: name, SQL: sql}
-
-    var parseErr error
-
-readloop:
-    for {
-        msg, err := pgConn.receiveMessage()
-        if err != nil {
-            pgConn.asyncClose()
-            return nil, preferContextOverNetTimeoutError(ctx, err)
-        }
-
-        switch msg := msg.(type) {
-        case *pgproto3.ParameterDescription:
-            psd.ParamOIDs = make([]uint32, len(msg.ParameterOIDs))
-            copy(psd.ParamOIDs, msg.ParameterOIDs)
-        case *pgproto3.RowDescription:
-            psd.Fields = make([]pgproto3.FieldDescription, len(msg.Fields))
-            copy(psd.Fields, msg.Fields)
-        case *pgproto3.ErrorResponse:
-            parseErr = ErrorResponseToPgError(msg)
-        case *pgproto3.ReadyForQuery:
-            break readloop
-        }
-    }
-
-    if parseErr != nil {
-        return nil, parseErr
-    }
-    return psd, nil
-}
-
-// ErrorResponseToPgError converts a wire protocol error message to a *PgError.
-func ErrorResponseToPgError(msg *pgproto3.ErrorResponse) *PgError {
-    return &PgError{
-        Severity:         msg.Severity,
-        Code:             string(msg.Code),
-        Message:          string(msg.Message),
-        Detail:           string(msg.Detail),
-        Hint:             msg.Hint,
-        Position:         msg.Position,
-        InternalPosition: msg.InternalPosition,
-        InternalQuery:    string(msg.InternalQuery),
-        Where:            string(msg.Where),
-        SchemaName:       string(msg.SchemaName),
-        TableName:        string(msg.TableName),
-        ColumnName:       string(msg.ColumnName),
-        DataTypeName:     string(msg.DataTypeName),
-        ConstraintName:   msg.ConstraintName,
-        File:             string(msg.File),
-        Line:             msg.Line,
-        Routine:          string(msg.Routine),
-    }
-}
-
-func noticeResponseToNotice(msg *pgproto3.NoticeResponse) *Notice {
-    pgerr := ErrorResponseToPgError((*pgproto3.ErrorResponse)(msg))
-    return (*Notice)(pgerr)
-}
-
-// CancelRequest sends a cancel request to the PostgreSQL server. It returns an error if unable to deliver the cancel
-// request, but lack of an error does not ensure that the query was canceled. As specified in the documentation, there
-// is no way to be sure a query was canceled. See https://www.postgresql.org/docs/11/protocol-flow.html#id-1.10.5.7.9
-func (pgConn *PgConn) CancelRequest(ctx context.Context) error {
-    // Open a cancellation request to the same server. The address is taken from the net.Conn directly instead of reusing
-    // the connection config. This is important in high availability configurations where fallback connections may be
-    // specified or DNS may be used to load balance.
-    serverAddr := pgConn.conn.RemoteAddr()
-    cancelConn, err := pgConn.config.DialFunc(ctx, serverAddr.Network(), serverAddr.String())
-    if err != nil {
-        return err
-    }
-    defer cancelConn.Close()
-
-    if ctx != context.Background() {
-        contextWatcher := ctxwatch.NewContextWatcher(
-            func() { cancelConn.SetDeadline(time.Date(1, 1, 1, 1, 1, 1, 1, time.UTC)) },
-            func() { cancelConn.SetDeadline(time.Time{}) },
-        )
-        contextWatcher.Watch(ctx)
-        defer contextWatcher.Unwatch()
-    }
-
-    buf := make([]byte, 16)
-    binary.BigEndian.PutUint32(buf[0:4], 16)
-    binary.BigEndian.PutUint32(buf[4:8], 80877102)
-    binary.BigEndian.PutUint32(buf[8:12], uint32(pgConn.pid))
-    binary.BigEndian.PutUint32(buf[12:16], uint32(pgConn.secretKey))
-    _, err = cancelConn.Write(buf)
-    if err != nil {
-        return err
-    }
-
-    _, err = cancelConn.Read(buf)
-    if err != io.EOF {
-        return err
-    }
-
-    return nil
-}
-
-// WaitForNotification waits for a LISTEN/NOTIFY message to be received. It returns an error if a notification was not
-// received.
-func (pgConn *PgConn) WaitForNotification(ctx context.Context) error {
-    if err := pgConn.lock(); err != nil {
-        return err
-    }
-    defer pgConn.unlock()
-
-    if ctx != context.Background() {
-        select {
-        case <-ctx.Done():
-            return newContextAlreadyDoneError(ctx)
-        default:
-        }
-
-        pgConn.contextWatcher.Watch(ctx)
-        defer pgConn.contextWatcher.Unwatch()
-    }
-
-    for {
-        msg, err := pgConn.receiveMessage()
-        if err != nil {
-            return preferContextOverNetTimeoutError(ctx, err)
-        }
-
-        switch msg.(type) {
-        case *pgproto3.NotificationResponse:
-            return nil
-        }
-    }
-}
-
-// Exec executes SQL via the PostgreSQL simple query protocol. SQL may contain multiple queries. Execution is
-// implicitly wrapped in a transaction unless a transaction is already in progress or SQL contains transaction control
-// statements.
-//
-// Prefer ExecParams unless executing arbitrary SQL that may contain multiple queries.
-func (pgConn *PgConn) Exec(ctx context.Context, sql string) *MultiResultReader {
-    if err := pgConn.lock(); err != nil {
-        return &MultiResultReader{
-            closed: true,
-            err:    err,
-        }
-    }
-
-    pgConn.multiResultReader = MultiResultReader{
-        pgConn: pgConn,
-        ctx:    ctx,
-    }
-    multiResult := &pgConn.multiResultReader
-    if ctx != context.Background() {
-        select {
-        case <-ctx.Done():
-            multiResult.closed = true
-            multiResult.err = newContextAlreadyDoneError(ctx)
-            pgConn.unlock()
-            return multiResult
-        default:
-        }
-        pgConn.contextWatcher.Watch(ctx)
-    }
-
-    buf := pgConn.wbuf
-    buf = (&pgproto3.Query{String: sql}).Encode(buf)
-
-    n, err := pgConn.conn.Write(buf)
-    if err != nil {
-        pgConn.asyncClose()
-        pgConn.contextWatcher.Unwatch()
-        multiResult.closed = true
-        multiResult.err = &writeError{err: err, safeToRetry: n == 0}
-        pgConn.unlock()
-        return multiResult
-    }
-
-    return multiResult
-}
-
-// ReceiveResults reads the result that might be returned by Postgres after a SendBytes
-// (e.g. after sending a CopyDone in a copy-both situation).
-//
-// This is a very low level method that requires deep understanding of the PostgreSQL wire protocol to use correctly.
-// See https://www.postgresql.org/docs/current/protocol.html.
-func (pgConn *PgConn) ReceiveResults(ctx context.Context) *MultiResultReader {
-    if err := pgConn.lock(); err != nil {
-        return &MultiResultReader{
-            closed: true,
-            err:    err,
-        }
-    }
-
-    pgConn.multiResultReader = MultiResultReader{
-        pgConn: pgConn,
-        ctx:    ctx,
-    }
-    multiResult := &pgConn.multiResultReader
-    if ctx != context.Background() {
-        select {
-        case <-ctx.Done():
-            multiResult.closed = true
-            multiResult.err = newContextAlreadyDoneError(ctx)
-            pgConn.unlock()
-            return multiResult
-        default:
-        }
-        pgConn.contextWatcher.Watch(ctx)
-    }
-
-    return multiResult
-}
-
-// ExecParams executes a command via the PostgreSQL extended query protocol.
-//
-// sql is a SQL command string. It may only contain one query. Parameter substitution is positional using $1, $2, $3,
-// etc.
-//
-// paramValues are the parameter values. They must be encoded in the format given by paramFormats.
-//
-// paramOIDs is a slice of data type OIDs for paramValues. If paramOIDs is nil, the server will infer the data type for
-// all parameters. Any paramOID element that is 0 will cause the server to infer the data type for that parameter.
-// ExecParams will panic if len(paramOIDs) is not 0, 1, or len(paramValues).
-//
-// paramFormats is a slice of format codes determining for each paramValue column whether it is encoded in text or
-// binary format. If paramFormats is nil all params are text format. ExecParams will panic if
-// len(paramFormats) is not 0, 1, or len(paramValues).
-//
-// resultFormats is a slice of format codes determining for each result column whether it is encoded in text or
-// binary format. If resultFormats is nil all results will be in text format.
-//
-// ResultReader must be closed before PgConn can be used again.
-func (pgConn *PgConn) ExecParams(ctx context.Context, sql string, paramValues [][]byte, paramOIDs []uint32, paramFormats []int16, resultFormats []int16) *ResultReader {
-    result := pgConn.execExtendedPrefix(ctx, paramValues)
-    if result.closed {
-        return result
-    }
-
-    buf := pgConn.wbuf
-    buf = (&pgproto3.Parse{Query: sql, ParameterOIDs: paramOIDs}).Encode(buf)
-    buf = (&pgproto3.Bind{ParameterFormatCodes: paramFormats, Parameters: paramValues, ResultFormatCodes: resultFormats}).Encode(buf)
-
-    pgConn.execExtendedSuffix(buf, result)
-
-    return result
-}
-
-// ExecPrepared enqueues the execution of a prepared statement via the PostgreSQL extended query protocol.
-//
-// paramValues are the parameter values. They must be encoded in the format given by paramFormats.
-//
-// paramFormats is a slice of format codes determining for each paramValue column whether it is encoded in text or
-// binary format. If paramFormats is nil all params are text format. ExecPrepared will panic if
-// len(paramFormats) is not 0, 1, or len(paramValues).
-//
-// resultFormats is a slice of format codes determining for each result column whether it is encoded in text or
-// binary format. If resultFormats is nil all results will be in text format.
-//
-// ResultReader must be closed before PgConn can be used again.
-func (pgConn *PgConn) ExecPrepared(ctx context.Context, stmtName string, paramValues [][]byte, paramFormats []int16, resultFormats []int16) *ResultReader {
-    result := pgConn.execExtendedPrefix(ctx, paramValues)
-    if result.closed {
-        return result
-    }
-
-    buf := pgConn.wbuf
-    buf = (&pgproto3.Bind{PreparedStatement: stmtName, ParameterFormatCodes: paramFormats, Parameters: paramValues, ResultFormatCodes: resultFormats}).Encode(buf)
-
-    pgConn.execExtendedSuffix(buf, result)
-
-    return result
-}
-
-func (pgConn *PgConn) execExtendedPrefix(ctx context.Context, paramValues [][]byte) *ResultReader {
-    pgConn.resultReader = ResultReader{
-        pgConn: pgConn,
-        ctx:    ctx,
-    }
-    result := &pgConn.resultReader
-
-    if err := pgConn.lock(); err != nil {
-        result.concludeCommand(nil, err)
-        result.closed = true
-        return result
-    }
-
-    if len(paramValues) > math.MaxUint16 {
-        result.concludeCommand(nil, fmt.Errorf("extended protocol limited to %v parameters", math.MaxUint16))
-        result.closed = true
-        pgConn.unlock()
-        return result
-    }
-
-    if ctx != context.Background() {
-        select {
-        case <-ctx.Done():
-            result.concludeCommand(nil, newContextAlreadyDoneError(ctx))
-            result.closed = true
-            pgConn.unlock()
-            return result
-        default:
-        }
-        pgConn.contextWatcher.Watch(ctx)
-    }
-
-    return result
-}
-
-func (pgConn *PgConn) execExtendedSuffix(buf []byte, result *ResultReader) {
-    buf = (&pgproto3.Describe{ObjectType: 'P'}).Encode(buf)
-    buf = (&pgproto3.Execute{}).Encode(buf)
-    buf = (&pgproto3.Sync{}).Encode(buf)
-
-    n, err := pgConn.conn.Write(buf)
-    if err != nil {
-        pgConn.asyncClose()
-        result.concludeCommand(nil, &writeError{err: err, safeToRetry: n == 0})
-        pgConn.contextWatcher.Unwatch()
-        result.closed = true
-        pgConn.unlock()
-        return
-    }
-
-    result.readUntilRowDescription()
-}
-
-// CopyTo executes the copy command sql and copies the results to w.
-func (pgConn *PgConn) CopyTo(ctx context.Context, w io.Writer, sql string) (CommandTag, error) {
-    if err := pgConn.lock(); err != nil {
-        return nil, err
-    }
-
-    if ctx != context.Background() {
-        select {
-        case <-ctx.Done():
-            pgConn.unlock()
-            return nil, newContextAlreadyDoneError(ctx)
-        default:
-        }
-        pgConn.contextWatcher.Watch(ctx)
-        defer pgConn.contextWatcher.Unwatch()
-    }
-
-    // Send copy to command
-    buf := pgConn.wbuf
-    buf = (&pgproto3.Query{String: sql}).Encode(buf)
-
-    n, err := pgConn.conn.Write(buf)
-    if err != nil {
-        pgConn.asyncClose()
-        pgConn.unlock()
-        return nil, &writeError{err: err, safeToRetry: n == 0}
-    }
-
-    // Read results
-    var commandTag CommandTag
-    var pgErr error
-    for {
-        msg, err := pgConn.receiveMessage()
-        if err != nil {
-            pgConn.asyncClose()
-            return nil, preferContextOverNetTimeoutError(ctx, err)
-        }
-
-        switch msg := msg.(type) {
-        case *pgproto3.CopyDone:
-        case *pgproto3.CopyData:
-            _, err := w.Write(msg.Data)
-            if err != nil {
-                pgConn.asyncClose()
-                return nil, err
-            }
-        case *pgproto3.ReadyForQuery:
-            pgConn.unlock()
-            return commandTag, pgErr
-        case *pgproto3.CommandComplete:
-            commandTag = CommandTag(msg.CommandTag)
-        case *pgproto3.ErrorResponse:
-            pgErr = ErrorResponseToPgError(msg)
-        }
-    }
-}
-
-// CopyFrom executes the copy command sql and copies all of r to the PostgreSQL server.
-//
-// Note: context cancellation will only interrupt operations on the underlying PostgreSQL network connection. Reads on r
-// could still block.
-func (pgConn *PgConn) CopyFrom(ctx context.Context, r io.Reader, sql string) (CommandTag, error) {
-    if err := pgConn.lock(); err != nil {
-        return nil, err
-    }
-    defer pgConn.unlock()
-
-    if ctx != context.Background() {
-        select {
-        case <-ctx.Done():
-            return nil, newContextAlreadyDoneError(ctx)
-        default:
-        }
-        pgConn.contextWatcher.Watch(ctx)
-        defer pgConn.contextWatcher.Unwatch()
-    }
-
-    // Send copy from command
-    buf := pgConn.wbuf
-    buf = (&pgproto3.Query{String: sql}).Encode(buf)
-
-    n, err := pgConn.conn.Write(buf)
-    if err != nil {
-        pgConn.asyncClose()
-        return nil, &writeError{err: err, safeToRetry: n == 0}
-    }
-
-    // Send copy data
-    abortCopyChan := make(chan struct{})
-    copyErrChan := make(chan error, 1)
-    signalMessageChan := pgConn.signalMessage()
-    var wg sync.WaitGroup
-    wg.Add(1)
-
-    go func() {
-        defer wg.Done()
-        buf := make([]byte, 0, 65536)
-        buf = append(buf, 'd')
-        sp := len(buf)
-
-        for {
-            n, readErr := r.Read(buf[5:cap(buf)])
-            if n > 0 {
-                buf = buf[0 : n+5]
-                pgio.SetInt32(buf[sp:], int32(n+4))
-
-                _, writeErr := pgConn.conn.Write(buf)
-                if writeErr != nil {
-                    // Write errors are always fatal, but we can't use asyncClose because we are in a different goroutine.
-                    pgConn.conn.Close()
-
-                    copyErrChan <- writeErr
-                    return
-                }
-            }
-            if readErr != nil {
-                copyErrChan <- readErr
-                return
-            }
-
-            select {
-            case <-abortCopyChan:
-                return
-            default:
-            }
-        }
-    }()
-
-    var pgErr error
-    var copyErr error
-    for copyErr == nil && pgErr == nil {
-        select {
-        case copyErr = <-copyErrChan:
-        case <-signalMessageChan:
-            msg, err := pgConn.receiveMessage()
-            if err != nil {
-                pgConn.asyncClose()
-                return nil, preferContextOverNetTimeoutError(ctx, err)
-            }
-
-            switch msg := msg.(type) {
-            case *pgproto3.ErrorResponse:
-                pgErr = ErrorResponseToPgError(msg)
-            default:
-                signalMessageChan = pgConn.signalMessage()
-            }
-        }
-    }
-    close(abortCopyChan)
-    // Make sure io goroutine finishes before writing.
-    wg.Wait()
-
-    buf = buf[:0]
-    if copyErr == io.EOF || pgErr != nil {
-        copyDone := &pgproto3.CopyDone{}
-        buf = copyDone.Encode(buf)
-    } else {
-        copyFail := &pgproto3.CopyFail{Message: copyErr.Error()}
-        buf = copyFail.Encode(buf)
-    }
-    _, err = pgConn.conn.Write(buf)
-    if err != nil {
-        pgConn.asyncClose()
-        return nil, err
-    }
-
-    // Read results
-    var commandTag CommandTag
-    for {
-        msg, err := pgConn.receiveMessage()
-        if err != nil {
-            pgConn.asyncClose()
-            return nil, preferContextOverNetTimeoutError(ctx, err)
-        }
-
-        switch msg := msg.(type) {
-        case *pgproto3.ReadyForQuery:
-            return commandTag, pgErr
-        case *pgproto3.CommandComplete:
-            commandTag = CommandTag(msg.CommandTag)
-        case *pgproto3.ErrorResponse:
-            pgErr = ErrorResponseToPgError(msg)
-        }
-    }
-}
-
-// MultiResultReader is a reader for a command that could return multiple results such as Exec or ExecBatch.
-type MultiResultReader struct {
-    pgConn *PgConn
-    ctx    context.Context
-
-    rr *ResultReader
-
-    closed bool
-    err    error
-}
-
-// ReadAll reads all available results. Calling ReadAll is mutually exclusive with all other MultiResultReader methods.
-func (mrr *MultiResultReader) ReadAll() ([]*Result, error) {
-    var results []*Result
-
-    for mrr.NextResult() {
-        results = append(results, mrr.ResultReader().Read())
-    }
-    err := mrr.Close()
-
-    return results, err
-}
-
-func (mrr *MultiResultReader) receiveMessage() (pgproto3.BackendMessage, error) {
-    msg, err := mrr.pgConn.receiveMessage()
-
-    if err != nil {
-        mrr.pgConn.contextWatcher.Unwatch()
-        mrr.err = preferContextOverNetTimeoutError(mrr.ctx, err)
-        mrr.closed = true
-        mrr.pgConn.asyncClose()
-        return nil, mrr.err
-    }
-
-    switch msg := msg.(type) {
-    case *pgproto3.ReadyForQuery:
-        mrr.pgConn.contextWatcher.Unwatch()
-        mrr.closed = true
-        mrr.pgConn.unlock()
-    case *pgproto3.ErrorResponse:
-        mrr.err = ErrorResponseToPgError(msg)
-    }
-
-    return msg, nil
-}
-
-// NextResult advances the MultiResultReader to the next result and returns true if a result is available.
-func (mrr *MultiResultReader) NextResult() bool {
-    for !mrr.closed && mrr.err == nil {
-        msg, err := mrr.receiveMessage()
-        if err != nil {
-            return false
-        }
-
-        switch msg := msg.(type) {
-        case *pgproto3.RowDescription:
-            mrr.pgConn.resultReader = ResultReader{
-                pgConn:            mrr.pgConn,
-                multiResultReader: mrr,
-                ctx:               mrr.ctx,
-                fieldDescriptions: msg.Fields,
-            }
-            mrr.rr = &mrr.pgConn.resultReader
-            return true
-        case *pgproto3.CommandComplete:
-            mrr.pgConn.resultReader = ResultReader{
-                commandTag:       CommandTag(msg.CommandTag),
-                commandConcluded: true,
-                closed:           true,
-            }
-            mrr.rr = &mrr.pgConn.resultReader
-            return true
-        case *pgproto3.EmptyQueryResponse:
-            return false
-        }
-    }
-
-    return false
-}
-
-// ResultReader returns the current ResultReader.
-func (mrr *MultiResultReader) ResultReader() *ResultReader {
-    return mrr.rr
-}
-
-// Close closes the MultiResultReader and returns the first error that occurred during the MultiResultReader's use.
-func (mrr *MultiResultReader) Close() error {
-    for !mrr.closed {
-        _, err := mrr.receiveMessage()
-        if err != nil {
-            return mrr.err
-        }
-    }
-
-    return mrr.err
-}
-
-// ResultReader is a reader for the result of a single query.
-type ResultReader struct {
-    pgConn            *PgConn
-    multiResultReader *MultiResultReader
-    ctx               context.Context
-
-    fieldDescriptions []pgproto3.FieldDescription
-    rowValues         [][]byte
-    commandTag        CommandTag
-    commandConcluded  bool
-    closed            bool
-    err               error
-}
-
-// Result is the saved query response that is returned by calling Read on a ResultReader.
-type Result struct {
-    FieldDescriptions []pgproto3.FieldDescription
-    Rows              [][][]byte
-    CommandTag        CommandTag
-    Err               error
-}
-
-// Read saves the query response to a Result.
-func (rr *ResultReader) Read() *Result {
-    br := &Result{}
-
-    for rr.NextRow() {
-        if br.FieldDescriptions == nil {
-            br.FieldDescriptions = make([]pgproto3.FieldDescription, len(rr.FieldDescriptions()))
-            copy(br.FieldDescriptions, rr.FieldDescriptions())
-        }
-
-        row := make([][]byte, len(rr.Values()))
-        copy(row, rr.Values())
-        br.Rows = append(br.Rows, row)
-    }
-
-    br.CommandTag, br.Err = rr.Close()
-
-    return br
-}
-
-// NextRow advances the ResultReader to the next row and returns true if a row is available.
-func (rr *ResultReader) NextRow() bool {
-    for !rr.commandConcluded {
-        msg, err := rr.receiveMessage()
-        if err != nil {
-            return false
-        }
-
-        switch msg := msg.(type) {
-        case *pgproto3.DataRow:
-            rr.rowValues = msg.Values
-            return true
-        }
-    }
-
-    return false
-}
-
-// FieldDescriptions returns the field descriptions for the current result set. The returned slice is only valid until
-// the ResultReader is closed.
-func (rr *ResultReader) FieldDescriptions() []pgproto3.FieldDescription {
-    return rr.fieldDescriptions
-}
-
-// Values returns the current row data. NextRow must previously have been called. The returned [][]byte is only
-// valid until the next NextRow call or the ResultReader is closed. However, the underlying byte data is safe to
-// retain a reference to and mutate.
-func (rr *ResultReader) Values() [][]byte {
-    return rr.rowValues
-}
-
-// Close consumes any remaining result data and returns the command tag or
-// error.
-func (rr *ResultReader) Close() (CommandTag, error) {
-    if rr.closed {
-        return rr.commandTag, rr.err
-    }
-    rr.closed = true
-
-    for !rr.commandConcluded {
-        _, err := rr.receiveMessage()
-        if err != nil {
-            return nil, rr.err
-        }
-    }
-
-    if rr.multiResultReader == nil {
-        for {
-            msg, err := rr.receiveMessage()
-            if err != nil {
-                return nil, rr.err
-            }
-
-            switch msg := msg.(type) {
-            // Detect a deferred constraint violation where the ErrorResponse is sent after CommandComplete.
-            case *pgproto3.ErrorResponse:
-                rr.err = ErrorResponseToPgError(msg)
-            case *pgproto3.ReadyForQuery:
-                rr.pgConn.contextWatcher.Unwatch()
-                rr.pgConn.unlock()
-                return rr.commandTag, rr.err
-            }
-        }
-    }
-
-    return rr.commandTag, rr.err
-}
-
-// readUntilRowDescription ensures the ResultReader's fieldDescriptions are loaded. It does not return an error as any
-// error will be stored in the ResultReader.
-func (rr *ResultReader) readUntilRowDescription() {
-    for !rr.commandConcluded {
-        // Peek before receive to avoid consuming a DataRow if the result set does not include a RowDescription message.
-        // This should never happen under normal pgconn usage, but it is possible if SendBytes and ReceiveResults are
-        // manually used to construct a query that does not issue a describe statement.
-        msg, _ := rr.pgConn.peekMessage()
-        if _, ok := msg.(*pgproto3.DataRow); ok {
-            return
-        }
-
-        // Consume the message
-        msg, _ = rr.receiveMessage()
-        if _, ok := msg.(*pgproto3.RowDescription); ok {
-            return
-        }
-    }
-}
-
-func (rr *ResultReader) receiveMessage() (msg pgproto3.BackendMessage, err error) {
-    if rr.multiResultReader == nil {
-        msg, err = rr.pgConn.receiveMessage()
-    } else {
-        msg, err = rr.multiResultReader.receiveMessage()
-    }
-
-    if err != nil {
-        err = preferContextOverNetTimeoutError(rr.ctx, err)
-        rr.concludeCommand(nil, err)
-        rr.pgConn.contextWatcher.Unwatch()
-        rr.closed = true
-        if rr.multiResultReader == nil {
-            rr.pgConn.asyncClose()
-        }
-
-        return nil, rr.err
-    }
-
-    switch msg := msg.(type) {
-    case *pgproto3.RowDescription:
-        rr.fieldDescriptions = msg.Fields
-    case *pgproto3.CommandComplete:
-        rr.concludeCommand(CommandTag(msg.CommandTag), nil)
-    case *pgproto3.EmptyQueryResponse:
-        rr.concludeCommand(nil, nil)
-    case *pgproto3.ErrorResponse:
-        rr.concludeCommand(nil, ErrorResponseToPgError(msg))
-    }
-
-    return msg, nil
-}
-
-func (rr *ResultReader) concludeCommand(commandTag CommandTag, err error) {
-    // Keep the first error that is recorded. Store the error before checking if the command is already concluded to
-    // allow for receiving an error after CommandComplete but before ReadyForQuery.
-    if err != nil && rr.err == nil {
-        rr.err = err
-    }
-
-    if rr.commandConcluded {
-        return
-    }
-
-    rr.commandTag = commandTag
-    rr.rowValues = nil
-    rr.commandConcluded = true
-}
-
-// Batch is a collection of queries that can be sent to the PostgreSQL server in a single round-trip.
-type Batch struct {
-    buf []byte
-}
-
-// ExecParams appends an ExecParams command to the batch. See PgConn.ExecParams for parameter descriptions.
-func (batch *Batch) ExecParams(sql string, paramValues [][]byte, paramOIDs []uint32, paramFormats []int16, resultFormats []int16) {
-    batch.buf = (&pgproto3.Parse{Query: sql, ParameterOIDs: paramOIDs}).Encode(batch.buf)
-    batch.ExecPrepared("", paramValues, paramFormats, resultFormats)
-}
-
-// ExecPrepared appends an ExecPrepared command to the batch. See PgConn.ExecPrepared for parameter descriptions.
-func (batch *Batch) ExecPrepared(stmtName string, paramValues [][]byte, paramFormats []int16, resultFormats []int16) {
-    batch.buf = (&pgproto3.Bind{PreparedStatement: stmtName, ParameterFormatCodes: paramFormats, Parameters: paramValues, ResultFormatCodes: resultFormats}).Encode(batch.buf)
-    batch.buf = (&pgproto3.Describe{ObjectType: 'P'}).Encode(batch.buf)
-    batch.buf = (&pgproto3.Execute{}).Encode(batch.buf)
-}
-
-// ExecBatch executes all the queries in batch in a single round-trip. Execution is implicitly transactional unless a
-// transaction is already in progress or SQL contains transaction control statements.
-func (pgConn *PgConn) ExecBatch(ctx context.Context, batch *Batch) *MultiResultReader {
-    if err := pgConn.lock(); err != nil {
-        return &MultiResultReader{
-            closed: true,
-            err:    err,
-        }
-    }
-
-    pgConn.multiResultReader = MultiResultReader{
-        pgConn: pgConn,
-        ctx:    ctx,
-    }
-    multiResult := &pgConn.multiResultReader
-
-    if ctx != context.Background() {
-        select {
-        case <-ctx.Done():
-            multiResult.closed = true
-            multiResult.err = newContextAlreadyDoneError(ctx)
-            pgConn.unlock()
-            return multiResult
-        default:
-        }
-        pgConn.contextWatcher.Watch(ctx)
-    }
-
-    batch.buf = (&pgproto3.Sync{}).Encode(batch.buf)
-
-    // A large batch can deadlock without concurrent reading and writing. If the Write fails the underlying net.Conn is
-    // closed. This is all that can be done without introducing a race condition or adding a concurrent safe communication
-    // channel to relay the error back. The practical effect of this is that the underlying Write error is not reported.
-    // The error the code reading the batch results receives will be a closed connection error.
-    //
-    // See https://github.com/jackc/pgx/issues/374.
-    go func() {
-        _, err := pgConn.conn.Write(batch.buf)
-        if err != nil {
-            pgConn.conn.Close()
-        }
-    }()
-
-    return multiResult
-}
-
-// EscapeString escapes a string such that it can safely be interpolated into a SQL command string. It does not include
-// the surrounding single quotes.
-//
-// The current implementation requires that standard_conforming_strings=on and client_encoding="UTF8". If these
-// conditions are not met an error will be returned. It is possible these restrictions will be lifted in the future.
-func (pgConn *PgConn) EscapeString(s string) (string, error) {
-    if pgConn.ParameterStatus("standard_conforming_strings") != "on" {
-        return "", errors.New("EscapeString must be run with standard_conforming_strings=on")
-    }
-
-    if pgConn.ParameterStatus("client_encoding") != "UTF8" {
-        return "", errors.New("EscapeString must be run with client_encoding=UTF8")
-    }
-
-    return strings.Replace(s, "'", "''", -1), nil
-}
-
-// HijackedConn is the result of hijacking a connection.
-//
-// Due to the necessary exposure of internal implementation details, it is not covered by the semantic versioning
-// compatibility.
-type HijackedConn struct {
-    Conn              net.Conn          // the underlying TCP or unix domain socket connection
-    PID               uint32            // backend pid
-    SecretKey         uint32            // key to use to send a cancel query message to the server
-    ParameterStatuses map[string]string // parameters that have been reported by the server
-    TxStatus          byte
-    Frontend          Frontend
-    Config            *Config
-}
-
-// Hijack extracts the internal connection data. pgConn must be in an idle state. pgConn is unusable after hijacking.
-// Hijacking is typically only useful when using pgconn to establish a connection, but taking complete control of the
-// raw connection after that (e.g.
a load balancer or proxy). -// -// Due to the necessary exposure of internal implementation details, it is not covered by the semantic versioning -// compatibility. -func (pgConn *PgConn) Hijack() (*HijackedConn, error) { - if err := pgConn.lock(); err != nil { - return nil, err - } - pgConn.status = connStatusClosed - - return &HijackedConn{ - Conn: pgConn.conn, - PID: pgConn.pid, - SecretKey: pgConn.secretKey, - ParameterStatuses: pgConn.parameterStatuses, - TxStatus: pgConn.txStatus, - Frontend: pgConn.frontend, - Config: pgConn.config, - }, nil -} - -// Construct created a PgConn from an already established connection to a PostgreSQL server. This is the inverse of -// PgConn.Hijack. The connection must be in an idle state. -// -// Due to the necessary exposure of internal implementation details, it is not covered by the semantic versioning -// compatibility. -func Construct(hc *HijackedConn) (*PgConn, error) { - pgConn := &PgConn{ - conn: hc.Conn, - pid: hc.PID, - secretKey: hc.SecretKey, - parameterStatuses: hc.ParameterStatuses, - txStatus: hc.TxStatus, - frontend: hc.Frontend, - config: hc.Config, - - status: connStatusIdle, - - wbuf: make([]byte, 0, wbufLen), - cleanupDone: make(chan struct{}), - } - - pgConn.contextWatcher = newContextWatcher(pgConn.conn) - - return pgConn, nil -} diff --git a/vendor/github.com/jackc/pgconn/stmtcache/lru.go b/vendor/github.com/jackc/pgconn/stmtcache/lru.go deleted file mode 100644 index f0fb53b9..00000000 --- a/vendor/github.com/jackc/pgconn/stmtcache/lru.go +++ /dev/null @@ -1,169 +0,0 @@ -package stmtcache - -import ( - "container/list" - "context" - "fmt" - "sync/atomic" - - "github.com/jackc/pgconn" -) - -var lruCount uint64 - -// LRU implements Cache with a Least Recently Used (LRU) cache. -type LRU struct { - conn *pgconn.PgConn - mode int - cap int - prepareCount int - m map[string]*list.Element - l *list.List - psNamePrefix string - stmtsToClear []string -} - -// NewLRU creates a new LRU. 
mode is either ModePrepare or ModeDescribe. cap is the maximum size of the cache. -func NewLRU(conn *pgconn.PgConn, mode int, cap int) *LRU { - mustBeValidMode(mode) - mustBeValidCap(cap) - - n := atomic.AddUint64(&lruCount, 1) - - return &LRU{ - conn: conn, - mode: mode, - cap: cap, - m: make(map[string]*list.Element), - l: list.New(), - psNamePrefix: fmt.Sprintf("lrupsc_%d", n), - } -} - -// Get returns the prepared statement description for sql preparing or describing the sql on the server as needed. -func (c *LRU) Get(ctx context.Context, sql string) (*pgconn.StatementDescription, error) { - if ctx != context.Background() { - select { - case <-ctx.Done(): - return nil, ctx.Err() - default: - } - } - - // flush an outstanding bad statements - txStatus := c.conn.TxStatus() - if (txStatus == 'I' || txStatus == 'T') && len(c.stmtsToClear) > 0 { - for _, stmt := range c.stmtsToClear { - err := c.clearStmt(ctx, stmt) - if err != nil { - return nil, err - } - } - } - - if el, ok := c.m[sql]; ok { - c.l.MoveToFront(el) - return el.Value.(*pgconn.StatementDescription), nil - } - - if c.l.Len() == c.cap { - err := c.removeOldest(ctx) - if err != nil { - return nil, err - } - } - - psd, err := c.prepare(ctx, sql) - if err != nil { - return nil, err - } - - el := c.l.PushFront(psd) - c.m[sql] = el - - return psd, nil -} - -// Clear removes all entries in the cache. Any prepared statements will be deallocated from the PostgreSQL session. -func (c *LRU) Clear(ctx context.Context) error { - for c.l.Len() > 0 { - err := c.removeOldest(ctx) - if err != nil { - return err - } - } - - return nil -} - -func (c *LRU) StatementErrored(sql string, err error) { - pgErr, ok := err.(*pgconn.PgError) - if !ok { - return - } - - // https://github.com/jackc/pgx/issues/1162 - // - // We used to look for the message "cached plan must not change result type". However, that message can be localized. 
- // Unfortunately, error code "0A000" - "FEATURE NOT SUPPORTED" is used for many different errors and the only way to - // tell the difference is by the message. But all that happens is we clear a statement that we otherwise wouldn't - // have so it should be safe. - possibleInvalidCachedPlanError := pgErr.Code == "0A000" - if possibleInvalidCachedPlanError { - c.stmtsToClear = append(c.stmtsToClear, sql) - } -} - -func (c *LRU) clearStmt(ctx context.Context, sql string) error { - elem, inMap := c.m[sql] - if !inMap { - // The statement probably fell off the back of the list. In that case, we've - // ensured that it isn't in the cache, so we can declare victory. - return nil - } - - c.l.Remove(elem) - - psd := elem.Value.(*pgconn.StatementDescription) - delete(c.m, psd.SQL) - if c.mode == ModePrepare { - return c.conn.Exec(ctx, fmt.Sprintf("deallocate %s", psd.Name)).Close() - } - return nil -} - -// Len returns the number of cached prepared statement descriptions. -func (c *LRU) Len() int { - return c.l.Len() -} - -// Cap returns the maximum number of cached prepared statement descriptions. 
-func (c *LRU) Cap() int { - return c.cap -} - -// Mode returns the mode of the cache (ModePrepare or ModeDescribe) -func (c *LRU) Mode() int { - return c.mode -} - -func (c *LRU) prepare(ctx context.Context, sql string) (*pgconn.StatementDescription, error) { - var name string - if c.mode == ModePrepare { - name = fmt.Sprintf("%s_%d", c.psNamePrefix, c.prepareCount) - c.prepareCount += 1 - } - - return c.conn.Prepare(ctx, name, sql, nil) -} - -func (c *LRU) removeOldest(ctx context.Context) error { - oldest := c.l.Back() - c.l.Remove(oldest) - psd := oldest.Value.(*pgconn.StatementDescription) - delete(c.m, psd.SQL) - if c.mode == ModePrepare { - return c.conn.Exec(ctx, fmt.Sprintf("deallocate %s", psd.Name)).Close() - } - return nil -} diff --git a/vendor/github.com/jackc/pgconn/stmtcache/stmtcache.go b/vendor/github.com/jackc/pgconn/stmtcache/stmtcache.go deleted file mode 100644 index d083e1b4..00000000 --- a/vendor/github.com/jackc/pgconn/stmtcache/stmtcache.go +++ /dev/null @@ -1,58 +0,0 @@ -// Package stmtcache is a cache that can be used to implement lazy prepared statements. -package stmtcache - -import ( - "context" - - "github.com/jackc/pgconn" -) - -const ( - ModePrepare = iota // Cache should prepare named statements. - ModeDescribe // Cache should prepare the anonymous prepared statement to only fetch the description of the statement. -) - -// Cache prepares and caches prepared statement descriptions. -type Cache interface { - // Get returns the prepared statement description for sql preparing or describing the sql on the server as needed. - Get(ctx context.Context, sql string) (*pgconn.StatementDescription, error) - - // Clear removes all entries in the cache. Any prepared statements will be deallocated from the PostgreSQL session. - Clear(ctx context.Context) error - - // StatementErrored informs the cache that the given statement resulted in an error when it - // was last used against the database. 
In some cases, this will cause the cache to mark that - statement as bad. The bad statement will instead be flushed during the next call to Get - that occurs outside of a failed transaction. - StatementErrored(sql string, err error) - - // Len returns the number of cached prepared statement descriptions. - Len() int - - // Cap returns the maximum number of cached prepared statement descriptions. - Cap() int - - // Mode returns the mode of the cache (ModePrepare or ModeDescribe) - Mode() int -} - -// New returns the preferred cache implementation for mode and cap. mode is either ModePrepare or ModeDescribe. cap is -// the maximum size of the cache. -func New(conn *pgconn.PgConn, mode int, cap int) Cache { - mustBeValidMode(mode) - mustBeValidCap(cap) - - return NewLRU(conn, mode, cap) -} - -func mustBeValidMode(mode int) { - if mode != ModePrepare && mode != ModeDescribe { - panic("mode must be ModePrepare or ModeDescribe") - } -} - -func mustBeValidCap(cap int) { - if cap < 1 { - panic("cache must have cap of >= 1") - } -} diff --git a/vendor/github.com/jackc/pgio/.travis.yml b/vendor/github.com/jackc/pgio/.travis.yml deleted file mode 100644 index e176228e..00000000 --- a/vendor/github.com/jackc/pgio/.travis.yml +++ /dev/null @@ -1,9 +0,0 @@ -language: go - -go: - - 1.x - - tip - -matrix: - allow_failures: - - go: tip diff --git a/vendor/github.com/jackc/pgio/LICENSE b/vendor/github.com/jackc/pgio/LICENSE deleted file mode 100644 index c1c4f50f..00000000 --- a/vendor/github.com/jackc/pgio/LICENSE +++ /dev/null @@ -1,22 +0,0 @@ -Copyright (c) 2019 Jack Christensen - -MIT License - -Permission is hereby granted, free of charge, to any person obtaining -a copy of this software and associated documentation files (the -"Software"), to deal in the Software without restriction, including -without limitation the rights to use, copy, modify, merge, publish, -distribute, sublicense, and/or sell copies of the Software, and to -permit persons to whom the Software is
furnished to do so, subject to -the following conditions: - -The above copyright notice and this permission notice shall be -included in all copies or substantial portions of the Software. - -THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, -EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF -MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND -NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE -LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION -OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION -WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE. diff --git a/vendor/github.com/jackc/pgio/README.md b/vendor/github.com/jackc/pgio/README.md deleted file mode 100644 index 1952ed86..00000000 --- a/vendor/github.com/jackc/pgio/README.md +++ /dev/null @@ -1,11 +0,0 @@ -[![](https://godoc.org/github.com/jackc/pgio?status.svg)](https://godoc.org/github.com/jackc/pgio) -[![Build Status](https://travis-ci.org/jackc/pgio.svg)](https://travis-ci.org/jackc/pgio) - -# pgio - -Package pgio is a low-level toolkit for building messages in the PostgreSQL wire protocol. - -pgio provides functions for appending integers to a []byte while doing byte -order conversion. - -Extracted from original implementation in https://github.com/jackc/pgx. diff --git a/vendor/github.com/jackc/pgio/doc.go b/vendor/github.com/jackc/pgio/doc.go deleted file mode 100644 index ef2dcc7f..00000000 --- a/vendor/github.com/jackc/pgio/doc.go +++ /dev/null @@ -1,6 +0,0 @@ -// Package pgio is a low-level toolkit for building messages in the PostgreSQL wire protocol. -/* -pgio provides functions for appending integers to a []byte while doing byte -order conversion.
-*/ -package pgio diff --git a/vendor/github.com/jackc/pgio/write.go b/vendor/github.com/jackc/pgio/write.go deleted file mode 100644 index 96aedf9d..00000000 --- a/vendor/github.com/jackc/pgio/write.go +++ /dev/null @@ -1,40 +0,0 @@ -package pgio - -import "encoding/binary" - -func AppendUint16(buf []byte, n uint16) []byte { - wp := len(buf) - buf = append(buf, 0, 0) - binary.BigEndian.PutUint16(buf[wp:], n) - return buf -} - -func AppendUint32(buf []byte, n uint32) []byte { - wp := len(buf) - buf = append(buf, 0, 0, 0, 0) - binary.BigEndian.PutUint32(buf[wp:], n) - return buf -} - -func AppendUint64(buf []byte, n uint64) []byte { - wp := len(buf) - buf = append(buf, 0, 0, 0, 0, 0, 0, 0, 0) - binary.BigEndian.PutUint64(buf[wp:], n) - return buf -} - -func AppendInt16(buf []byte, n int16) []byte { - return AppendUint16(buf, uint16(n)) -} - -func AppendInt32(buf []byte, n int32) []byte { - return AppendUint32(buf, uint32(n)) -} - -func AppendInt64(buf []byte, n int64) []byte { - return AppendUint64(buf, uint64(n)) -} - -func SetInt32(buf []byte, n int32) { - binary.BigEndian.PutUint32(buf, uint32(n)) -} diff --git a/vendor/github.com/jackc/pgproto3/v2/.travis.yml b/vendor/github.com/jackc/pgproto3/v2/.travis.yml deleted file mode 100644 index e176228e..00000000 --- a/vendor/github.com/jackc/pgproto3/v2/.travis.yml +++ /dev/null @@ -1,9 +0,0 @@ -language: go - -go: - - 1.x - - tip - -matrix: - allow_failures: - - go: tip diff --git a/vendor/github.com/jackc/pgproto3/v2/LICENSE b/vendor/github.com/jackc/pgproto3/v2/LICENSE deleted file mode 100644 index c1c4f50f..00000000 --- a/vendor/github.com/jackc/pgproto3/v2/LICENSE +++ /dev/null @@ -1,22 +0,0 @@ -Copyright (c) 2019 Jack Christensen - -MIT License - -Permission is hereby granted, free of charge, to any person obtaining -a copy of this software and associated documentation files (the -"Software"), to deal in the Software without restriction, including -without limitation the rights to use, copy, modify, merge, 
publish, -distribute, sublicense, and/or sell copies of the Software, and to -permit persons to whom the Software is furnished to do so, subject to -the following conditions: - -The above copyright notice and this permission notice shall be -included in all copies or substantial portions of the Software. - -THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, -EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF -MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND -NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE -LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION -OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION -WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE. diff --git a/vendor/github.com/jackc/pgproto3/v2/README.md b/vendor/github.com/jackc/pgproto3/v2/README.md deleted file mode 100644 index 77a31700..00000000 --- a/vendor/github.com/jackc/pgproto3/v2/README.md +++ /dev/null @@ -1,18 +0,0 @@ -[![](https://godoc.org/github.com/jackc/pgproto3?status.svg)](https://godoc.org/github.com/jackc/pgproto3) -[![Build Status](https://travis-ci.org/jackc/pgproto3.svg)](https://travis-ci.org/jackc/pgproto3) - ---- - -This version is used with pgx `v4`. In pgx `v5` it is part of the https://github.com/jackc/pgx repository. - ---- - -# pgproto3 - -Package pgproto3 is an encoder and decoder of the PostgreSQL wire protocol version 3. - -pgproto3 can be used as a foundation for PostgreSQL drivers, proxies, mock servers, load balancers and more. - -See example/pgfortune for a playful example of a fake PostgreSQL server. - -Extracted from original implementation in https://github.com/jackc/pgx.
diff --git a/vendor/github.com/jackc/pgproto3/v2/authentication_cleartext_password.go b/vendor/github.com/jackc/pgproto3/v2/authentication_cleartext_password.go deleted file mode 100644 index 241fa600..00000000 --- a/vendor/github.com/jackc/pgproto3/v2/authentication_cleartext_password.go +++ /dev/null @@ -1,52 +0,0 @@ -package pgproto3 - -import ( - "encoding/binary" - "encoding/json" - "errors" - - "github.com/jackc/pgio" -) - -// AuthenticationCleartextPassword is a message sent from the backend indicating that a clear-text password is required. -type AuthenticationCleartextPassword struct { -} - -// Backend identifies this message as sendable by the PostgreSQL backend. -func (*AuthenticationCleartextPassword) Backend() {} - -// Backend identifies this message as an authentication response. -func (*AuthenticationCleartextPassword) AuthenticationResponse() {} - -// Decode decodes src into dst. src must contain the complete message with the exception of the initial 1 byte message -// type identifier and 4 byte message length. -func (dst *AuthenticationCleartextPassword) Decode(src []byte) error { - if len(src) != 4 { - return errors.New("bad authentication message size") - } - - authType := binary.BigEndian.Uint32(src) - - if authType != AuthTypeCleartextPassword { - return errors.New("bad auth type") - } - - return nil -} - -// Encode encodes src into dst. dst will include the 1 byte message type identifier and the 4 byte message length. -func (src *AuthenticationCleartextPassword) Encode(dst []byte) []byte { - dst = append(dst, 'R') - dst = pgio.AppendInt32(dst, 8) - dst = pgio.AppendUint32(dst, AuthTypeCleartextPassword) - return dst -} - -// MarshalJSON implements encoding/json.Marshaler. 
-func (src AuthenticationCleartextPassword) MarshalJSON() ([]byte, error) { - return json.Marshal(struct { - Type string - }{ - Type: "AuthenticationCleartextPassword", - }) -} diff --git a/vendor/github.com/jackc/pgproto3/v2/authentication_gss.go b/vendor/github.com/jackc/pgproto3/v2/authentication_gss.go deleted file mode 100644 index 5a3f3b1d..00000000 --- a/vendor/github.com/jackc/pgproto3/v2/authentication_gss.go +++ /dev/null @@ -1,58 +0,0 @@ -package pgproto3 - -import ( - "encoding/binary" - "encoding/json" - "errors" - "github.com/jackc/pgio" -) - -type AuthenticationGSS struct{} - -func (a *AuthenticationGSS) Backend() {} - -func (a *AuthenticationGSS) AuthenticationResponse() {} - -func (a *AuthenticationGSS) Decode(src []byte) error { - if len(src) < 4 { - return errors.New("authentication message too short") - } - - authType := binary.BigEndian.Uint32(src) - - if authType != AuthTypeGSS { - return errors.New("bad auth type") - } - return nil -} - -func (a *AuthenticationGSS) Encode(dst []byte) []byte { - dst = append(dst, 'R') - dst = pgio.AppendInt32(dst, 4) - dst = pgio.AppendUint32(dst, AuthTypeGSS) - return dst -} - -func (a *AuthenticationGSS) MarshalJSON() ([]byte, error) { - return json.Marshal(struct { - Type string - Data []byte - }{ - Type: "AuthenticationGSS", - }) -} - -func (a *AuthenticationGSS) UnmarshalJSON(data []byte) error { - // Ignore null, like in the main JSON package. 
- if string(data) == "null" { - return nil - } - - var msg struct { - Type string - } - if err := json.Unmarshal(data, &msg); err != nil { - return err - } - return nil -} diff --git a/vendor/github.com/jackc/pgproto3/v2/authentication_gss_continue.go b/vendor/github.com/jackc/pgproto3/v2/authentication_gss_continue.go deleted file mode 100644 index cf8b1834..00000000 --- a/vendor/github.com/jackc/pgproto3/v2/authentication_gss_continue.go +++ /dev/null @@ -1,67 +0,0 @@ -package pgproto3 - -import ( - "encoding/binary" - "encoding/json" - "errors" - "github.com/jackc/pgio" -) - -type AuthenticationGSSContinue struct { - Data []byte -} - -func (a *AuthenticationGSSContinue) Backend() {} - -func (a *AuthenticationGSSContinue) AuthenticationResponse() {} - -func (a *AuthenticationGSSContinue) Decode(src []byte) error { - if len(src) < 4 { - return errors.New("authentication message too short") - } - - authType := binary.BigEndian.Uint32(src) - - if authType != AuthTypeGSSCont { - return errors.New("bad auth type") - } - - a.Data = src[4:] - return nil -} - -func (a *AuthenticationGSSContinue) Encode(dst []byte) []byte { - dst = append(dst, 'R') - dst = pgio.AppendInt32(dst, int32(len(a.Data))+8) - dst = pgio.AppendUint32(dst, AuthTypeGSSCont) - dst = append(dst, a.Data...) - return dst -} - -func (a *AuthenticationGSSContinue) MarshalJSON() ([]byte, error) { - return json.Marshal(struct { - Type string - Data []byte - }{ - Type: "AuthenticationGSSContinue", - Data: a.Data, - }) -} - -func (a *AuthenticationGSSContinue) UnmarshalJSON(data []byte) error { - // Ignore null, like in the main JSON package. 
- if string(data) == "null" { - return nil - } - - var msg struct { - Type string - Data []byte - } - if err := json.Unmarshal(data, &msg); err != nil { - return err - } - - a.Data = msg.Data - return nil -} diff --git a/vendor/github.com/jackc/pgproto3/v2/authentication_md5_password.go b/vendor/github.com/jackc/pgproto3/v2/authentication_md5_password.go deleted file mode 100644 index 32ec0390..00000000 --- a/vendor/github.com/jackc/pgproto3/v2/authentication_md5_password.go +++ /dev/null @@ -1,77 +0,0 @@ -package pgproto3 - -import ( - "encoding/binary" - "encoding/json" - "errors" - - "github.com/jackc/pgio" -) - -// AuthenticationMD5Password is a message sent from the backend indicating that an MD5 hashed password is required. -type AuthenticationMD5Password struct { - Salt [4]byte -} - -// Backend identifies this message as sendable by the PostgreSQL backend. -func (*AuthenticationMD5Password) Backend() {} - -// Backend identifies this message as an authentication response. -func (*AuthenticationMD5Password) AuthenticationResponse() {} - -// Decode decodes src into dst. src must contain the complete message with the exception of the initial 1 byte message -// type identifier and 4 byte message length. -func (dst *AuthenticationMD5Password) Decode(src []byte) error { - if len(src) != 8 { - return errors.New("bad authentication message size") - } - - authType := binary.BigEndian.Uint32(src) - - if authType != AuthTypeMD5Password { - return errors.New("bad auth type") - } - - copy(dst.Salt[:], src[4:8]) - - return nil -} - -// Encode encodes src into dst. dst will include the 1 byte message type identifier and the 4 byte message length. -func (src *AuthenticationMD5Password) Encode(dst []byte) []byte { - dst = append(dst, 'R') - dst = pgio.AppendInt32(dst, 12) - dst = pgio.AppendUint32(dst, AuthTypeMD5Password) - dst = append(dst, src.Salt[:]...) - return dst -} - -// MarshalJSON implements encoding/json.Marshaler. 
-func (src AuthenticationMD5Password) MarshalJSON() ([]byte, error) { - return json.Marshal(struct { - Type string - Salt [4]byte - }{ - Type: "AuthenticationMD5Password", - Salt: src.Salt, - }) -} - -// UnmarshalJSON implements encoding/json.Unmarshaler. -func (dst *AuthenticationMD5Password) UnmarshalJSON(data []byte) error { - // Ignore null, like in the main JSON package. - if string(data) == "null" { - return nil - } - - var msg struct { - Type string - Salt [4]byte - } - if err := json.Unmarshal(data, &msg); err != nil { - return err - } - - dst.Salt = msg.Salt - return nil -} diff --git a/vendor/github.com/jackc/pgproto3/v2/authentication_ok.go b/vendor/github.com/jackc/pgproto3/v2/authentication_ok.go deleted file mode 100644 index 2b476fe5..00000000 --- a/vendor/github.com/jackc/pgproto3/v2/authentication_ok.go +++ /dev/null @@ -1,52 +0,0 @@ -package pgproto3 - -import ( - "encoding/binary" - "encoding/json" - "errors" - - "github.com/jackc/pgio" -) - -// AuthenticationOk is a message sent from the backend indicating that authentication was successful. -type AuthenticationOk struct { -} - -// Backend identifies this message as sendable by the PostgreSQL backend. -func (*AuthenticationOk) Backend() {} - -// Backend identifies this message as an authentication response. -func (*AuthenticationOk) AuthenticationResponse() {} - -// Decode decodes src into dst. src must contain the complete message with the exception of the initial 1 byte message -// type identifier and 4 byte message length. -func (dst *AuthenticationOk) Decode(src []byte) error { - if len(src) != 4 { - return errors.New("bad authentication message size") - } - - authType := binary.BigEndian.Uint32(src) - - if authType != AuthTypeOk { - return errors.New("bad auth type") - } - - return nil -} - -// Encode encodes src into dst. dst will include the 1 byte message type identifier and the 4 byte message length. 
-func (src *AuthenticationOk) Encode(dst []byte) []byte { - dst = append(dst, 'R') - dst = pgio.AppendInt32(dst, 8) - dst = pgio.AppendUint32(dst, AuthTypeOk) - return dst -} - -// MarshalJSON implements encoding/json.Marshaler. -func (src AuthenticationOk) MarshalJSON() ([]byte, error) { - return json.Marshal(struct { - Type string - }{ - Type: "AuthenticationOK", - }) -} diff --git a/vendor/github.com/jackc/pgproto3/v2/authentication_sasl.go b/vendor/github.com/jackc/pgproto3/v2/authentication_sasl.go deleted file mode 100644 index bdcb2c36..00000000 --- a/vendor/github.com/jackc/pgproto3/v2/authentication_sasl.go +++ /dev/null @@ -1,75 +0,0 @@ -package pgproto3 - -import ( - "bytes" - "encoding/binary" - "encoding/json" - "errors" - - "github.com/jackc/pgio" -) - -// AuthenticationSASL is a message sent from the backend indicating that SASL authentication is required. -type AuthenticationSASL struct { - AuthMechanisms []string -} - -// Backend identifies this message as sendable by the PostgreSQL backend. -func (*AuthenticationSASL) Backend() {} - -// Backend identifies this message as an authentication response. -func (*AuthenticationSASL) AuthenticationResponse() {} - -// Decode decodes src into dst. src must contain the complete message with the exception of the initial 1 byte message -// type identifier and 4 byte message length. -func (dst *AuthenticationSASL) Decode(src []byte) error { - if len(src) < 4 { - return errors.New("authentication message too short") - } - - authType := binary.BigEndian.Uint32(src) - - if authType != AuthTypeSASL { - return errors.New("bad auth type") - } - - authMechanisms := src[4:] - for len(authMechanisms) > 1 { - idx := bytes.IndexByte(authMechanisms, 0) - if idx > 0 { - dst.AuthMechanisms = append(dst.AuthMechanisms, string(authMechanisms[:idx])) - authMechanisms = authMechanisms[idx+1:] - } - } - - return nil -} - -// Encode encodes src into dst. 
dst will include the 1 byte message type identifier and the 4 byte message length. -func (src *AuthenticationSASL) Encode(dst []byte) []byte { - dst = append(dst, 'R') - sp := len(dst) - dst = pgio.AppendInt32(dst, -1) - dst = pgio.AppendUint32(dst, AuthTypeSASL) - - for _, s := range src.AuthMechanisms { - dst = append(dst, []byte(s)...) - dst = append(dst, 0) - } - dst = append(dst, 0) - - pgio.SetInt32(dst[sp:], int32(len(dst[sp:]))) - - return dst -} - -// MarshalJSON implements encoding/json.Marshaler. -func (src AuthenticationSASL) MarshalJSON() ([]byte, error) { - return json.Marshal(struct { - Type string - AuthMechanisms []string - }{ - Type: "AuthenticationSASL", - AuthMechanisms: src.AuthMechanisms, - }) -} diff --git a/vendor/github.com/jackc/pgproto3/v2/authentication_sasl_continue.go b/vendor/github.com/jackc/pgproto3/v2/authentication_sasl_continue.go deleted file mode 100644 index 7f4a9c23..00000000 --- a/vendor/github.com/jackc/pgproto3/v2/authentication_sasl_continue.go +++ /dev/null @@ -1,81 +0,0 @@ -package pgproto3 - -import ( - "encoding/binary" - "encoding/json" - "errors" - - "github.com/jackc/pgio" -) - -// AuthenticationSASLContinue is a message sent from the backend containing a SASL challenge. -type AuthenticationSASLContinue struct { - Data []byte -} - -// Backend identifies this message as sendable by the PostgreSQL backend. -func (*AuthenticationSASLContinue) Backend() {} - -// Backend identifies this message as an authentication response. -func (*AuthenticationSASLContinue) AuthenticationResponse() {} - -// Decode decodes src into dst. src must contain the complete message with the exception of the initial 1 byte message -// type identifier and 4 byte message length. 
-func (dst *AuthenticationSASLContinue) Decode(src []byte) error { - if len(src) < 4 { - return errors.New("authentication message too short") - } - - authType := binary.BigEndian.Uint32(src) - - if authType != AuthTypeSASLContinue { - return errors.New("bad auth type") - } - - dst.Data = src[4:] - - return nil -} - -// Encode encodes src into dst. dst will include the 1 byte message type identifier and the 4 byte message length. -func (src *AuthenticationSASLContinue) Encode(dst []byte) []byte { - dst = append(dst, 'R') - sp := len(dst) - dst = pgio.AppendInt32(dst, -1) - dst = pgio.AppendUint32(dst, AuthTypeSASLContinue) - - dst = append(dst, src.Data...) - - pgio.SetInt32(dst[sp:], int32(len(dst[sp:]))) - - return dst -} - -// MarshalJSON implements encoding/json.Marshaler. -func (src AuthenticationSASLContinue) MarshalJSON() ([]byte, error) { - return json.Marshal(struct { - Type string - Data string - }{ - Type: "AuthenticationSASLContinue", - Data: string(src.Data), - }) -} - -// UnmarshalJSON implements encoding/json.Unmarshaler. -func (dst *AuthenticationSASLContinue) UnmarshalJSON(data []byte) error { - // Ignore null, like in the main JSON package. - if string(data) == "null" { - return nil - } - - var msg struct { - Data string - } - if err := json.Unmarshal(data, &msg); err != nil { - return err - } - - dst.Data = []byte(msg.Data) - return nil -} diff --git a/vendor/github.com/jackc/pgproto3/v2/authentication_sasl_final.go b/vendor/github.com/jackc/pgproto3/v2/authentication_sasl_final.go deleted file mode 100644 index d82b9ee4..00000000 --- a/vendor/github.com/jackc/pgproto3/v2/authentication_sasl_final.go +++ /dev/null @@ -1,81 +0,0 @@ -package pgproto3 - -import ( - "encoding/binary" - "encoding/json" - "errors" - - "github.com/jackc/pgio" -) - -// AuthenticationSASLFinal is a message sent from the backend indicating a SASL authentication has completed. 
-type AuthenticationSASLFinal struct { - Data []byte -} - -// Backend identifies this message as sendable by the PostgreSQL backend. -func (*AuthenticationSASLFinal) Backend() {} - -// Backend identifies this message as an authentication response. -func (*AuthenticationSASLFinal) AuthenticationResponse() {} - -// Decode decodes src into dst. src must contain the complete message with the exception of the initial 1 byte message -// type identifier and 4 byte message length. -func (dst *AuthenticationSASLFinal) Decode(src []byte) error { - if len(src) < 4 { - return errors.New("authentication message too short") - } - - authType := binary.BigEndian.Uint32(src) - - if authType != AuthTypeSASLFinal { - return errors.New("bad auth type") - } - - dst.Data = src[4:] - - return nil -} - -// Encode encodes src into dst. dst will include the 1 byte message type identifier and the 4 byte message length. -func (src *AuthenticationSASLFinal) Encode(dst []byte) []byte { - dst = append(dst, 'R') - sp := len(dst) - dst = pgio.AppendInt32(dst, -1) - dst = pgio.AppendUint32(dst, AuthTypeSASLFinal) - - dst = append(dst, src.Data...) - - pgio.SetInt32(dst[sp:], int32(len(dst[sp:]))) - - return dst -} - -// MarshalJSON implements encoding/json.Unmarshaler. -func (src AuthenticationSASLFinal) MarshalJSON() ([]byte, error) { - return json.Marshal(struct { - Type string - Data string - }{ - Type: "AuthenticationSASLFinal", - Data: string(src.Data), - }) -} - -// UnmarshalJSON implements encoding/json.Unmarshaler. -func (dst *AuthenticationSASLFinal) UnmarshalJSON(data []byte) error { - // Ignore null, like in the main JSON package. 
- if string(data) == "null" { - return nil - } - - var msg struct { - Data string - } - if err := json.Unmarshal(data, &msg); err != nil { - return err - } - - dst.Data = []byte(msg.Data) - return nil -} diff --git a/vendor/github.com/jackc/pgproto3/v2/backend.go b/vendor/github.com/jackc/pgproto3/v2/backend.go deleted file mode 100644 index 1f143652..00000000 --- a/vendor/github.com/jackc/pgproto3/v2/backend.go +++ /dev/null @@ -1,213 +0,0 @@ -package pgproto3 - -import ( - "encoding/binary" - "errors" - "fmt" - "io" -) - -// Backend acts as a server for the PostgreSQL wire protocol version 3. -type Backend struct { - cr ChunkReader - w io.Writer - - // Frontend message flyweights - bind Bind - cancelRequest CancelRequest - _close Close - copyFail CopyFail - copyData CopyData - copyDone CopyDone - describe Describe - execute Execute - flush Flush - functionCall FunctionCall - gssEncRequest GSSEncRequest - parse Parse - query Query - sslRequest SSLRequest - startupMessage StartupMessage - sync Sync - terminate Terminate - - bodyLen int - msgType byte - partialMsg bool - authType uint32 -} - -const ( - minStartupPacketLen = 4 // minStartupPacketLen is a single 32-bit int version or code. - maxStartupPacketLen = 10000 // maxStartupPacketLen is MAX_STARTUP_PACKET_LENGTH from PG source. -) - -// NewBackend creates a new Backend. -func NewBackend(cr ChunkReader, w io.Writer) *Backend { - return &Backend{cr: cr, w: w} -} - -// Send sends a message to the frontend. -func (b *Backend) Send(msg BackendMessage) error { - _, err := b.w.Write(msg.Encode(nil)) - return err -} - -// ReceiveStartupMessage receives the initial connection message. This method is used instead of the normal Receive method -// because the initial connection message is "special" and does not include the message type as the first byte. This -// will return either a StartupMessage, SSLRequest, GSSEncRequest, or CancelRequest. 
-func (b *Backend) ReceiveStartupMessage() (FrontendMessage, error) { - buf, err := b.cr.Next(4) - if err != nil { - return nil, err - } - msgSize := int(binary.BigEndian.Uint32(buf) - 4) - - if msgSize < minStartupPacketLen || msgSize > maxStartupPacketLen { - return nil, fmt.Errorf("invalid length of startup packet: %d", msgSize) - } - - buf, err = b.cr.Next(msgSize) - if err != nil { - return nil, translateEOFtoErrUnexpectedEOF(err) - } - - code := binary.BigEndian.Uint32(buf) - - switch code { - case ProtocolVersionNumber: - err = b.startupMessage.Decode(buf) - if err != nil { - return nil, err - } - return &b.startupMessage, nil - case sslRequestNumber: - err = b.sslRequest.Decode(buf) - if err != nil { - return nil, err - } - return &b.sslRequest, nil - case cancelRequestCode: - err = b.cancelRequest.Decode(buf) - if err != nil { - return nil, err - } - return &b.cancelRequest, nil - case gssEncReqNumber: - err = b.gssEncRequest.Decode(buf) - if err != nil { - return nil, err - } - return &b.gssEncRequest, nil - default: - return nil, fmt.Errorf("unknown startup message code: %d", code) - } -} - -// Receive receives a message from the frontend. The returned message is only valid until the next call to Receive. 
-func (b *Backend) Receive() (FrontendMessage, error) { - if !b.partialMsg { - header, err := b.cr.Next(5) - if err != nil { - return nil, translateEOFtoErrUnexpectedEOF(err) - } - - b.msgType = header[0] - b.bodyLen = int(binary.BigEndian.Uint32(header[1:])) - 4 - b.partialMsg = true - if b.bodyLen < 0 { - return nil, errors.New("invalid message with negative body length received") - } - } - - var msg FrontendMessage - switch b.msgType { - case 'B': - msg = &b.bind - case 'C': - msg = &b._close - case 'D': - msg = &b.describe - case 'E': - msg = &b.execute - case 'F': - msg = &b.functionCall - case 'f': - msg = &b.copyFail - case 'd': - msg = &b.copyData - case 'c': - msg = &b.copyDone - case 'H': - msg = &b.flush - case 'P': - msg = &b.parse - case 'p': - switch b.authType { - case AuthTypeSASL: - msg = &SASLInitialResponse{} - case AuthTypeSASLContinue: - msg = &SASLResponse{} - case AuthTypeSASLFinal: - msg = &SASLResponse{} - case AuthTypeGSS, AuthTypeGSSCont: - msg = &GSSResponse{} - case AuthTypeCleartextPassword, AuthTypeMD5Password: - fallthrough - default: - // to maintain backwards compatibility - msg = &PasswordMessage{} - } - case 'Q': - msg = &b.query - case 'S': - msg = &b.sync - case 'X': - msg = &b.terminate - default: - return nil, fmt.Errorf("unknown message type: %c", b.msgType) - } - - msgBody, err := b.cr.Next(b.bodyLen) - if err != nil { - return nil, translateEOFtoErrUnexpectedEOF(err) - } - - b.partialMsg = false - - err = msg.Decode(msgBody) - return msg, err -} - -// SetAuthType sets the authentication type in the backend. -// Since multiple message types can start with 'p', SetAuthType allows -// contextual identification of FrontendMessages. For example, in the -// PG message flow documentation for PasswordMessage: -// -// Byte1('p') -// -// Identifies the message as a password response. Note that this is also used for -// GSSAPI, SSPI and SASL response messages. The exact message type can be deduced from -// the context. 
-// -// Since the Frontend does not know about the state of a backend, it is important -// to call SetAuthType() after an authentication request is received by the Frontend. -func (b *Backend) SetAuthType(authType uint32) error { - switch authType { - case AuthTypeOk, - AuthTypeCleartextPassword, - AuthTypeMD5Password, - AuthTypeSCMCreds, - AuthTypeGSS, - AuthTypeGSSCont, - AuthTypeSSPI, - AuthTypeSASL, - AuthTypeSASLContinue, - AuthTypeSASLFinal: - b.authType = authType - default: - return fmt.Errorf("authType not recognized: %d", authType) - } - - return nil -} diff --git a/vendor/github.com/jackc/pgproto3/v2/backend_key_data.go b/vendor/github.com/jackc/pgproto3/v2/backend_key_data.go deleted file mode 100644 index ca20dd25..00000000 --- a/vendor/github.com/jackc/pgproto3/v2/backend_key_data.go +++ /dev/null @@ -1,51 +0,0 @@ -package pgproto3 - -import ( - "encoding/binary" - "encoding/json" - - "github.com/jackc/pgio" -) - -type BackendKeyData struct { - ProcessID uint32 - SecretKey uint32 -} - -// Backend identifies this message as sendable by the PostgreSQL backend. -func (*BackendKeyData) Backend() {} - -// Decode decodes src into dst. src must contain the complete message with the exception of the initial 1 byte message -// type identifier and 4 byte message length. -func (dst *BackendKeyData) Decode(src []byte) error { - if len(src) != 8 { - return &invalidMessageLenErr{messageType: "BackendKeyData", expectedLen: 8, actualLen: len(src)} - } - - dst.ProcessID = binary.BigEndian.Uint32(src[:4]) - dst.SecretKey = binary.BigEndian.Uint32(src[4:]) - - return nil -} - -// Encode encodes src into dst. dst will include the 1 byte message type identifier and the 4 byte message length. -func (src *BackendKeyData) Encode(dst []byte) []byte { - dst = append(dst, 'K') - dst = pgio.AppendUint32(dst, 12) - dst = pgio.AppendUint32(dst, src.ProcessID) - dst = pgio.AppendUint32(dst, src.SecretKey) - return dst -} - -// MarshalJSON implements encoding/json.Marshaler. 
-func (src BackendKeyData) MarshalJSON() ([]byte, error) { - return json.Marshal(struct { - Type string - ProcessID uint32 - SecretKey uint32 - }{ - Type: "BackendKeyData", - ProcessID: src.ProcessID, - SecretKey: src.SecretKey, - }) -} diff --git a/vendor/github.com/jackc/pgproto3/v2/big_endian.go b/vendor/github.com/jackc/pgproto3/v2/big_endian.go deleted file mode 100644 index f7bdb97e..00000000 --- a/vendor/github.com/jackc/pgproto3/v2/big_endian.go +++ /dev/null @@ -1,37 +0,0 @@ -package pgproto3 - -import ( - "encoding/binary" -) - -type BigEndianBuf [8]byte - -func (b BigEndianBuf) Int16(n int16) []byte { - buf := b[0:2] - binary.BigEndian.PutUint16(buf, uint16(n)) - return buf -} - -func (b BigEndianBuf) Uint16(n uint16) []byte { - buf := b[0:2] - binary.BigEndian.PutUint16(buf, n) - return buf -} - -func (b BigEndianBuf) Int32(n int32) []byte { - buf := b[0:4] - binary.BigEndian.PutUint32(buf, uint32(n)) - return buf -} - -func (b BigEndianBuf) Uint32(n uint32) []byte { - buf := b[0:4] - binary.BigEndian.PutUint32(buf, n) - return buf -} - -func (b BigEndianBuf) Int64(n int64) []byte { - buf := b[0:8] - binary.BigEndian.PutUint64(buf, uint64(n)) - return buf -} diff --git a/vendor/github.com/jackc/pgproto3/v2/bind.go b/vendor/github.com/jackc/pgproto3/v2/bind.go deleted file mode 100644 index e9664f59..00000000 --- a/vendor/github.com/jackc/pgproto3/v2/bind.go +++ /dev/null @@ -1,216 +0,0 @@ -package pgproto3 - -import ( - "bytes" - "encoding/binary" - "encoding/hex" - "encoding/json" - "fmt" - - "github.com/jackc/pgio" -) - -type Bind struct { - DestinationPortal string - PreparedStatement string - ParameterFormatCodes []int16 - Parameters [][]byte - ResultFormatCodes []int16 -} - -// Frontend identifies this message as sendable by a PostgreSQL frontend. -func (*Bind) Frontend() {} - -// Decode decodes src into dst. src must contain the complete message with the exception of the initial 1 byte message -// type identifier and 4 byte message length. 
-func (dst *Bind) Decode(src []byte) error { - *dst = Bind{} - - idx := bytes.IndexByte(src, 0) - if idx < 0 { - return &invalidMessageFormatErr{messageType: "Bind"} - } - dst.DestinationPortal = string(src[:idx]) - rp := idx + 1 - - idx = bytes.IndexByte(src[rp:], 0) - if idx < 0 { - return &invalidMessageFormatErr{messageType: "Bind"} - } - dst.PreparedStatement = string(src[rp : rp+idx]) - rp += idx + 1 - - if len(src[rp:]) < 2 { - return &invalidMessageFormatErr{messageType: "Bind"} - } - parameterFormatCodeCount := int(binary.BigEndian.Uint16(src[rp:])) - rp += 2 - - if parameterFormatCodeCount > 0 { - dst.ParameterFormatCodes = make([]int16, parameterFormatCodeCount) - - if len(src[rp:]) < len(dst.ParameterFormatCodes)*2 { - return &invalidMessageFormatErr{messageType: "Bind"} - } - for i := 0; i < parameterFormatCodeCount; i++ { - dst.ParameterFormatCodes[i] = int16(binary.BigEndian.Uint16(src[rp:])) - rp += 2 - } - } - - if len(src[rp:]) < 2 { - return &invalidMessageFormatErr{messageType: "Bind"} - } - parameterCount := int(binary.BigEndian.Uint16(src[rp:])) - rp += 2 - - if parameterCount > 0 { - dst.Parameters = make([][]byte, parameterCount) - - for i := 0; i < parameterCount; i++ { - if len(src[rp:]) < 4 { - return &invalidMessageFormatErr{messageType: "Bind"} - } - - msgSize := int(int32(binary.BigEndian.Uint32(src[rp:]))) - rp += 4 - - // null - if msgSize == -1 { - continue - } - - if len(src[rp:]) < msgSize { - return &invalidMessageFormatErr{messageType: "Bind"} - } - - dst.Parameters[i] = src[rp : rp+msgSize] - rp += msgSize - } - } - - if len(src[rp:]) < 2 { - return &invalidMessageFormatErr{messageType: "Bind"} - } - resultFormatCodeCount := int(binary.BigEndian.Uint16(src[rp:])) - rp += 2 - - dst.ResultFormatCodes = make([]int16, resultFormatCodeCount) - if len(src[rp:]) < len(dst.ResultFormatCodes)*2 { - return &invalidMessageFormatErr{messageType: "Bind"} - } - for i := 0; i < resultFormatCodeCount; i++ { - dst.ResultFormatCodes[i] = 
int16(binary.BigEndian.Uint16(src[rp:])) - rp += 2 - } - - return nil -} - -// Encode encodes src into dst. dst will include the 1 byte message type identifier and the 4 byte message length. -func (src *Bind) Encode(dst []byte) []byte { - dst = append(dst, 'B') - sp := len(dst) - dst = pgio.AppendInt32(dst, -1) - - dst = append(dst, src.DestinationPortal...) - dst = append(dst, 0) - dst = append(dst, src.PreparedStatement...) - dst = append(dst, 0) - - dst = pgio.AppendUint16(dst, uint16(len(src.ParameterFormatCodes))) - for _, fc := range src.ParameterFormatCodes { - dst = pgio.AppendInt16(dst, fc) - } - - dst = pgio.AppendUint16(dst, uint16(len(src.Parameters))) - for _, p := range src.Parameters { - if p == nil { - dst = pgio.AppendInt32(dst, -1) - continue - } - - dst = pgio.AppendInt32(dst, int32(len(p))) - dst = append(dst, p...) - } - - dst = pgio.AppendUint16(dst, uint16(len(src.ResultFormatCodes))) - for _, fc := range src.ResultFormatCodes { - dst = pgio.AppendInt16(dst, fc) - } - - pgio.SetInt32(dst[sp:], int32(len(dst[sp:]))) - - return dst -} - -// MarshalJSON implements encoding/json.Marshaler. 
-func (src Bind) MarshalJSON() ([]byte, error) { - formattedParameters := make([]map[string]string, len(src.Parameters)) - for i, p := range src.Parameters { - if p == nil { - continue - } - - textFormat := true - if len(src.ParameterFormatCodes) == 1 { - textFormat = src.ParameterFormatCodes[0] == 0 - } else if len(src.ParameterFormatCodes) > 1 { - textFormat = src.ParameterFormatCodes[i] == 0 - } - - if textFormat { - formattedParameters[i] = map[string]string{"text": string(p)} - } else { - formattedParameters[i] = map[string]string{"binary": hex.EncodeToString(p)} - } - } - - return json.Marshal(struct { - Type string - DestinationPortal string - PreparedStatement string - ParameterFormatCodes []int16 - Parameters []map[string]string - ResultFormatCodes []int16 - }{ - Type: "Bind", - DestinationPortal: src.DestinationPortal, - PreparedStatement: src.PreparedStatement, - ParameterFormatCodes: src.ParameterFormatCodes, - Parameters: formattedParameters, - ResultFormatCodes: src.ResultFormatCodes, - }) -} - -// UnmarshalJSON implements encoding/json.Unmarshaler. -func (dst *Bind) UnmarshalJSON(data []byte) error { - // Ignore null, like in the main JSON package. 
- if string(data) == "null" { - return nil - } - - var msg struct { - DestinationPortal string - PreparedStatement string - ParameterFormatCodes []int16 - Parameters []map[string]string - ResultFormatCodes []int16 - } - err := json.Unmarshal(data, &msg) - if err != nil { - return err - } - dst.DestinationPortal = msg.DestinationPortal - dst.PreparedStatement = msg.PreparedStatement - dst.ParameterFormatCodes = msg.ParameterFormatCodes - dst.Parameters = make([][]byte, len(msg.Parameters)) - dst.ResultFormatCodes = msg.ResultFormatCodes - for n, parameter := range msg.Parameters { - dst.Parameters[n], err = getValueFromJSON(parameter) - if err != nil { - return fmt.Errorf("cannot get param %d: %w", n, err) - } - } - return nil -} diff --git a/vendor/github.com/jackc/pgproto3/v2/bind_complete.go b/vendor/github.com/jackc/pgproto3/v2/bind_complete.go deleted file mode 100644 index 3be256c8..00000000 --- a/vendor/github.com/jackc/pgproto3/v2/bind_complete.go +++ /dev/null @@ -1,34 +0,0 @@ -package pgproto3 - -import ( - "encoding/json" -) - -type BindComplete struct{} - -// Backend identifies this message as sendable by the PostgreSQL backend. -func (*BindComplete) Backend() {} - -// Decode decodes src into dst. src must contain the complete message with the exception of the initial 1 byte message -// type identifier and 4 byte message length. -func (dst *BindComplete) Decode(src []byte) error { - if len(src) != 0 { - return &invalidMessageLenErr{messageType: "BindComplete", expectedLen: 0, actualLen: len(src)} - } - - return nil -} - -// Encode encodes src into dst. dst will include the 1 byte message type identifier and the 4 byte message length. -func (src *BindComplete) Encode(dst []byte) []byte { - return append(dst, '2', 0, 0, 0, 4) -} - -// MarshalJSON implements encoding/json.Marshaler. 
-func (src BindComplete) MarshalJSON() ([]byte, error) { - return json.Marshal(struct { - Type string - }{ - Type: "BindComplete", - }) -} diff --git a/vendor/github.com/jackc/pgproto3/v2/cancel_request.go b/vendor/github.com/jackc/pgproto3/v2/cancel_request.go deleted file mode 100644 index 942e404b..00000000 --- a/vendor/github.com/jackc/pgproto3/v2/cancel_request.go +++ /dev/null @@ -1,58 +0,0 @@ -package pgproto3 - -import ( - "encoding/binary" - "encoding/json" - "errors" - - "github.com/jackc/pgio" -) - -const cancelRequestCode = 80877102 - -type CancelRequest struct { - ProcessID uint32 - SecretKey uint32 -} - -// Frontend identifies this message as sendable by a PostgreSQL frontend. -func (*CancelRequest) Frontend() {} - -func (dst *CancelRequest) Decode(src []byte) error { - if len(src) != 12 { - return errors.New("bad cancel request size") - } - - requestCode := binary.BigEndian.Uint32(src) - - if requestCode != cancelRequestCode { - return errors.New("bad cancel request code") - } - - dst.ProcessID = binary.BigEndian.Uint32(src[4:]) - dst.SecretKey = binary.BigEndian.Uint32(src[8:]) - - return nil -} - -// Encode encodes src into dst. dst will include the 4 byte message length. -func (src *CancelRequest) Encode(dst []byte) []byte { - dst = pgio.AppendInt32(dst, 16) - dst = pgio.AppendInt32(dst, cancelRequestCode) - dst = pgio.AppendUint32(dst, src.ProcessID) - dst = pgio.AppendUint32(dst, src.SecretKey) - return dst -} - -// MarshalJSON implements encoding/json.Marshaler. 
-func (src CancelRequest) MarshalJSON() ([]byte, error) { - return json.Marshal(struct { - Type string - ProcessID uint32 - SecretKey uint32 - }{ - Type: "CancelRequest", - ProcessID: src.ProcessID, - SecretKey: src.SecretKey, - }) -} diff --git a/vendor/github.com/jackc/pgproto3/v2/chunkreader.go b/vendor/github.com/jackc/pgproto3/v2/chunkreader.go deleted file mode 100644 index 92206f35..00000000 --- a/vendor/github.com/jackc/pgproto3/v2/chunkreader.go +++ /dev/null @@ -1,19 +0,0 @@ -package pgproto3 - -import ( - "io" - - "github.com/jackc/chunkreader/v2" -) - -// ChunkReader is an interface to decouple github.com/jackc/chunkreader from this package. -type ChunkReader interface { - // Next returns buf filled with the next n bytes. If an error (including a partial read) occurs, - // buf must be nil. Next must preserve any partially read data. Next must not reuse buf. - Next(n int) (buf []byte, err error) -} - -// NewChunkReader creates and returns a new default ChunkReader. -func NewChunkReader(r io.Reader) ChunkReader { - return chunkreader.New(r) -} diff --git a/vendor/github.com/jackc/pgproto3/v2/close.go b/vendor/github.com/jackc/pgproto3/v2/close.go deleted file mode 100644 index a45f2b93..00000000 --- a/vendor/github.com/jackc/pgproto3/v2/close.go +++ /dev/null @@ -1,89 +0,0 @@ -package pgproto3 - -import ( - "bytes" - "encoding/json" - "errors" - - "github.com/jackc/pgio" -) - -type Close struct { - ObjectType byte // 'S' = prepared statement, 'P' = portal - Name string -} - -// Frontend identifies this message as sendable by a PostgreSQL frontend. -func (*Close) Frontend() {} - -// Decode decodes src into dst. src must contain the complete message with the exception of the initial 1 byte message -// type identifier and 4 byte message length. 
-func (dst *Close) Decode(src []byte) error { - if len(src) < 2 { - return &invalidMessageFormatErr{messageType: "Close"} - } - - dst.ObjectType = src[0] - rp := 1 - - idx := bytes.IndexByte(src[rp:], 0) - if idx != len(src[rp:])-1 { - return &invalidMessageFormatErr{messageType: "Close"} - } - - dst.Name = string(src[rp : len(src)-1]) - - return nil -} - -// Encode encodes src into dst. dst will include the 1 byte message type identifier and the 4 byte message length. -func (src *Close) Encode(dst []byte) []byte { - dst = append(dst, 'C') - sp := len(dst) - dst = pgio.AppendInt32(dst, -1) - - dst = append(dst, src.ObjectType) - dst = append(dst, src.Name...) - dst = append(dst, 0) - - pgio.SetInt32(dst[sp:], int32(len(dst[sp:]))) - - return dst -} - -// MarshalJSON implements encoding/json.Marshaler. -func (src Close) MarshalJSON() ([]byte, error) { - return json.Marshal(struct { - Type string - ObjectType string - Name string - }{ - Type: "Close", - ObjectType: string(src.ObjectType), - Name: src.Name, - }) -} - -// UnmarshalJSON implements encoding/json.Unmarshaler. -func (dst *Close) UnmarshalJSON(data []byte) error { - // Ignore null, like in the main JSON package. - if string(data) == "null" { - return nil - } - - var msg struct { - ObjectType string - Name string - } - if err := json.Unmarshal(data, &msg); err != nil { - return err - } - - if len(msg.ObjectType) != 1 { - return errors.New("invalid length for Close.ObjectType") - } - - dst.ObjectType = byte(msg.ObjectType[0]) - dst.Name = msg.Name - return nil -} diff --git a/vendor/github.com/jackc/pgproto3/v2/close_complete.go b/vendor/github.com/jackc/pgproto3/v2/close_complete.go deleted file mode 100644 index 1d7b8f08..00000000 --- a/vendor/github.com/jackc/pgproto3/v2/close_complete.go +++ /dev/null @@ -1,34 +0,0 @@ -package pgproto3 - -import ( - "encoding/json" -) - -type CloseComplete struct{} - -// Backend identifies this message as sendable by the PostgreSQL backend. 
-func (*CloseComplete) Backend() {} - -// Decode decodes src into dst. src must contain the complete message with the exception of the initial 1 byte message -// type identifier and 4 byte message length. -func (dst *CloseComplete) Decode(src []byte) error { - if len(src) != 0 { - return &invalidMessageLenErr{messageType: "CloseComplete", expectedLen: 0, actualLen: len(src)} - } - - return nil -} - -// Encode encodes src into dst. dst will include the 1 byte message type identifier and the 4 byte message length. -func (src *CloseComplete) Encode(dst []byte) []byte { - return append(dst, '3', 0, 0, 0, 4) -} - -// MarshalJSON implements encoding/json.Marshaler. -func (src CloseComplete) MarshalJSON() ([]byte, error) { - return json.Marshal(struct { - Type string - }{ - Type: "CloseComplete", - }) -} diff --git a/vendor/github.com/jackc/pgproto3/v2/command_complete.go b/vendor/github.com/jackc/pgproto3/v2/command_complete.go deleted file mode 100644 index cdc49f39..00000000 --- a/vendor/github.com/jackc/pgproto3/v2/command_complete.go +++ /dev/null @@ -1,71 +0,0 @@ -package pgproto3 - -import ( - "bytes" - "encoding/json" - - "github.com/jackc/pgio" -) - -type CommandComplete struct { - CommandTag []byte -} - -// Backend identifies this message as sendable by the PostgreSQL backend. -func (*CommandComplete) Backend() {} - -// Decode decodes src into dst. src must contain the complete message with the exception of the initial 1 byte message -// type identifier and 4 byte message length. -func (dst *CommandComplete) Decode(src []byte) error { - idx := bytes.IndexByte(src, 0) - if idx != len(src)-1 { - return &invalidMessageFormatErr{messageType: "CommandComplete"} - } - - dst.CommandTag = src[:idx] - - return nil -} - -// Encode encodes src into dst. dst will include the 1 byte message type identifier and the 4 byte message length. 
-func (src *CommandComplete) Encode(dst []byte) []byte { - dst = append(dst, 'C') - sp := len(dst) - dst = pgio.AppendInt32(dst, -1) - - dst = append(dst, src.CommandTag...) - dst = append(dst, 0) - - pgio.SetInt32(dst[sp:], int32(len(dst[sp:]))) - - return dst -} - -// MarshalJSON implements encoding/json.Marshaler. -func (src CommandComplete) MarshalJSON() ([]byte, error) { - return json.Marshal(struct { - Type string - CommandTag string - }{ - Type: "CommandComplete", - CommandTag: string(src.CommandTag), - }) -} - -// UnmarshalJSON implements encoding/json.Unmarshaler. -func (dst *CommandComplete) UnmarshalJSON(data []byte) error { - // Ignore null, like in the main JSON package. - if string(data) == "null" { - return nil - } - - var msg struct { - CommandTag string - } - if err := json.Unmarshal(data, &msg); err != nil { - return err - } - - dst.CommandTag = []byte(msg.CommandTag) - return nil -} diff --git a/vendor/github.com/jackc/pgproto3/v2/copy_both_response.go b/vendor/github.com/jackc/pgproto3/v2/copy_both_response.go deleted file mode 100644 index 4a1c3a07..00000000 --- a/vendor/github.com/jackc/pgproto3/v2/copy_both_response.go +++ /dev/null @@ -1,95 +0,0 @@ -package pgproto3 - -import ( - "bytes" - "encoding/binary" - "encoding/json" - "errors" - - "github.com/jackc/pgio" -) - -type CopyBothResponse struct { - OverallFormat byte - ColumnFormatCodes []uint16 -} - -// Backend identifies this message as sendable by the PostgreSQL backend. -func (*CopyBothResponse) Backend() {} - -// Decode decodes src into dst. src must contain the complete message with the exception of the initial 1 byte message -// type identifier and 4 byte message length. 
-func (dst *CopyBothResponse) Decode(src []byte) error { - buf := bytes.NewBuffer(src) - - if buf.Len() < 3 { - return &invalidMessageFormatErr{messageType: "CopyBothResponse"} - } - - overallFormat := buf.Next(1)[0] - - columnCount := int(binary.BigEndian.Uint16(buf.Next(2))) - if buf.Len() != columnCount*2 { - return &invalidMessageFormatErr{messageType: "CopyBothResponse"} - } - - columnFormatCodes := make([]uint16, columnCount) - for i := 0; i < columnCount; i++ { - columnFormatCodes[i] = binary.BigEndian.Uint16(buf.Next(2)) - } - - *dst = CopyBothResponse{OverallFormat: overallFormat, ColumnFormatCodes: columnFormatCodes} - - return nil -} - -// Encode encodes src into dst. dst will include the 1 byte message type identifier and the 4 byte message length. -func (src *CopyBothResponse) Encode(dst []byte) []byte { - dst = append(dst, 'W') - sp := len(dst) - dst = pgio.AppendInt32(dst, -1) - dst = append(dst, src.OverallFormat) - dst = pgio.AppendUint16(dst, uint16(len(src.ColumnFormatCodes))) - for _, fc := range src.ColumnFormatCodes { - dst = pgio.AppendUint16(dst, fc) - } - - pgio.SetInt32(dst[sp:], int32(len(dst[sp:]))) - - return dst -} - -// MarshalJSON implements encoding/json.Marshaler. -func (src CopyBothResponse) MarshalJSON() ([]byte, error) { - return json.Marshal(struct { - Type string - ColumnFormatCodes []uint16 - }{ - Type: "CopyBothResponse", - ColumnFormatCodes: src.ColumnFormatCodes, - }) -} - -// UnmarshalJSON implements encoding/json.Unmarshaler. -func (dst *CopyBothResponse) UnmarshalJSON(data []byte) error { - // Ignore null, like in the main JSON package. 
- if string(data) == "null" { - return nil - } - - var msg struct { - OverallFormat string - ColumnFormatCodes []uint16 - } - if err := json.Unmarshal(data, &msg); err != nil { - return err - } - - if len(msg.OverallFormat) != 1 { - return errors.New("invalid length for CopyBothResponse.OverallFormat") - } - - dst.OverallFormat = msg.OverallFormat[0] - dst.ColumnFormatCodes = msg.ColumnFormatCodes - return nil -} diff --git a/vendor/github.com/jackc/pgproto3/v2/copy_data.go b/vendor/github.com/jackc/pgproto3/v2/copy_data.go deleted file mode 100644 index 128aa198..00000000 --- a/vendor/github.com/jackc/pgproto3/v2/copy_data.go +++ /dev/null @@ -1,62 +0,0 @@ -package pgproto3 - -import ( - "encoding/hex" - "encoding/json" - - "github.com/jackc/pgio" -) - -type CopyData struct { - Data []byte -} - -// Backend identifies this message as sendable by the PostgreSQL backend. -func (*CopyData) Backend() {} - -// Frontend identifies this message as sendable by a PostgreSQL frontend. -func (*CopyData) Frontend() {} - -// Decode decodes src into dst. src must contain the complete message with the exception of the initial 1 byte message -// type identifier and 4 byte message length. -func (dst *CopyData) Decode(src []byte) error { - dst.Data = src - return nil -} - -// Encode encodes src into dst. dst will include the 1 byte message type identifier and the 4 byte message length. -func (src *CopyData) Encode(dst []byte) []byte { - dst = append(dst, 'd') - dst = pgio.AppendInt32(dst, int32(4+len(src.Data))) - dst = append(dst, src.Data...) - return dst -} - -// MarshalJSON implements encoding/json.Marshaler. -func (src CopyData) MarshalJSON() ([]byte, error) { - return json.Marshal(struct { - Type string - Data string - }{ - Type: "CopyData", - Data: hex.EncodeToString(src.Data), - }) -} - -// UnmarshalJSON implements encoding/json.Unmarshaler. -func (dst *CopyData) UnmarshalJSON(data []byte) error { - // Ignore null, like in the main JSON package. 
- if string(data) == "null" { - return nil - } - - var msg struct { - Data string - } - if err := json.Unmarshal(data, &msg); err != nil { - return err - } - - dst.Data = []byte(msg.Data) - return nil -} diff --git a/vendor/github.com/jackc/pgproto3/v2/copy_done.go b/vendor/github.com/jackc/pgproto3/v2/copy_done.go deleted file mode 100644 index 0e13282b..00000000 --- a/vendor/github.com/jackc/pgproto3/v2/copy_done.go +++ /dev/null @@ -1,38 +0,0 @@ -package pgproto3 - -import ( - "encoding/json" -) - -type CopyDone struct { -} - -// Backend identifies this message as sendable by the PostgreSQL backend. -func (*CopyDone) Backend() {} - -// Frontend identifies this message as sendable by a PostgreSQL frontend. -func (*CopyDone) Frontend() {} - -// Decode decodes src into dst. src must contain the complete message with the exception of the initial 1 byte message -// type identifier and 4 byte message length. -func (dst *CopyDone) Decode(src []byte) error { - if len(src) != 0 { - return &invalidMessageLenErr{messageType: "CopyDone", expectedLen: 0, actualLen: len(src)} - } - - return nil -} - -// Encode encodes src into dst. dst will include the 1 byte message type identifier and the 4 byte message length. -func (src *CopyDone) Encode(dst []byte) []byte { - return append(dst, 'c', 0, 0, 0, 4) -} - -// MarshalJSON implements encoding/json.Marshaler. -func (src CopyDone) MarshalJSON() ([]byte, error) { - return json.Marshal(struct { - Type string - }{ - Type: "CopyDone", - }) -} diff --git a/vendor/github.com/jackc/pgproto3/v2/copy_fail.go b/vendor/github.com/jackc/pgproto3/v2/copy_fail.go deleted file mode 100644 index 78ff0b30..00000000 --- a/vendor/github.com/jackc/pgproto3/v2/copy_fail.go +++ /dev/null @@ -1,53 +0,0 @@ -package pgproto3 - -import ( - "bytes" - "encoding/json" - - "github.com/jackc/pgio" -) - -type CopyFail struct { - Message string -} - -// Frontend identifies this message as sendable by a PostgreSQL frontend. 
-func (*CopyFail) Frontend() {} - -// Decode decodes src into dst. src must contain the complete message with the exception of the initial 1 byte message -// type identifier and 4 byte message length. -func (dst *CopyFail) Decode(src []byte) error { - idx := bytes.IndexByte(src, 0) - if idx != len(src)-1 { - return &invalidMessageFormatErr{messageType: "CopyFail"} - } - - dst.Message = string(src[:idx]) - - return nil -} - -// Encode encodes src into dst. dst will include the 1 byte message type identifier and the 4 byte message length. -func (src *CopyFail) Encode(dst []byte) []byte { - dst = append(dst, 'f') - sp := len(dst) - dst = pgio.AppendInt32(dst, -1) - - dst = append(dst, src.Message...) - dst = append(dst, 0) - - pgio.SetInt32(dst[sp:], int32(len(dst[sp:]))) - - return dst -} - -// MarshalJSON implements encoding/json.Marshaler. -func (src CopyFail) MarshalJSON() ([]byte, error) { - return json.Marshal(struct { - Type string - Message string - }{ - Type: "CopyFail", - Message: src.Message, - }) -} diff --git a/vendor/github.com/jackc/pgproto3/v2/copy_in_response.go b/vendor/github.com/jackc/pgproto3/v2/copy_in_response.go deleted file mode 100644 index 80733adc..00000000 --- a/vendor/github.com/jackc/pgproto3/v2/copy_in_response.go +++ /dev/null @@ -1,96 +0,0 @@ -package pgproto3 - -import ( - "bytes" - "encoding/binary" - "encoding/json" - "errors" - - "github.com/jackc/pgio" -) - -type CopyInResponse struct { - OverallFormat byte - ColumnFormatCodes []uint16 -} - -// Backend identifies this message as sendable by the PostgreSQL backend. -func (*CopyInResponse) Backend() {} - -// Decode decodes src into dst. src must contain the complete message with the exception of the initial 1 byte message -// type identifier and 4 byte message length. 
-func (dst *CopyInResponse) Decode(src []byte) error { - buf := bytes.NewBuffer(src) - - if buf.Len() < 3 { - return &invalidMessageFormatErr{messageType: "CopyInResponse"} - } - - overallFormat := buf.Next(1)[0] - - columnCount := int(binary.BigEndian.Uint16(buf.Next(2))) - if buf.Len() != columnCount*2 { - return &invalidMessageFormatErr{messageType: "CopyInResponse"} - } - - columnFormatCodes := make([]uint16, columnCount) - for i := 0; i < columnCount; i++ { - columnFormatCodes[i] = binary.BigEndian.Uint16(buf.Next(2)) - } - - *dst = CopyInResponse{OverallFormat: overallFormat, ColumnFormatCodes: columnFormatCodes} - - return nil -} - -// Encode encodes src into dst. dst will include the 1 byte message type identifier and the 4 byte message length. -func (src *CopyInResponse) Encode(dst []byte) []byte { - dst = append(dst, 'G') - sp := len(dst) - dst = pgio.AppendInt32(dst, -1) - - dst = append(dst, src.OverallFormat) - dst = pgio.AppendUint16(dst, uint16(len(src.ColumnFormatCodes))) - for _, fc := range src.ColumnFormatCodes { - dst = pgio.AppendUint16(dst, fc) - } - - pgio.SetInt32(dst[sp:], int32(len(dst[sp:]))) - - return dst -} - -// MarshalJSON implements encoding/json.Marshaler. -func (src CopyInResponse) MarshalJSON() ([]byte, error) { - return json.Marshal(struct { - Type string - ColumnFormatCodes []uint16 - }{ - Type: "CopyInResponse", - ColumnFormatCodes: src.ColumnFormatCodes, - }) -} - -// UnmarshalJSON implements encoding/json.Unmarshaler. -func (dst *CopyInResponse) UnmarshalJSON(data []byte) error { - // Ignore null, like in the main JSON package. 
- if string(data) == "null" { - return nil - } - - var msg struct { - OverallFormat string - ColumnFormatCodes []uint16 - } - if err := json.Unmarshal(data, &msg); err != nil { - return err - } - - if len(msg.OverallFormat) != 1 { - return errors.New("invalid length for CopyInResponse.OverallFormat") - } - - dst.OverallFormat = msg.OverallFormat[0] - dst.ColumnFormatCodes = msg.ColumnFormatCodes - return nil -} diff --git a/vendor/github.com/jackc/pgproto3/v2/copy_out_response.go b/vendor/github.com/jackc/pgproto3/v2/copy_out_response.go deleted file mode 100644 index 5e607e3a..00000000 --- a/vendor/github.com/jackc/pgproto3/v2/copy_out_response.go +++ /dev/null @@ -1,96 +0,0 @@ -package pgproto3 - -import ( - "bytes" - "encoding/binary" - "encoding/json" - "errors" - - "github.com/jackc/pgio" -) - -type CopyOutResponse struct { - OverallFormat byte - ColumnFormatCodes []uint16 -} - -func (*CopyOutResponse) Backend() {} - -// Decode decodes src into dst. src must contain the complete message with the exception of the initial 1 byte message -// type identifier and 4 byte message length. -func (dst *CopyOutResponse) Decode(src []byte) error { - buf := bytes.NewBuffer(src) - - if buf.Len() < 3 { - return &invalidMessageFormatErr{messageType: "CopyOutResponse"} - } - - overallFormat := buf.Next(1)[0] - - columnCount := int(binary.BigEndian.Uint16(buf.Next(2))) - if buf.Len() != columnCount*2 { - return &invalidMessageFormatErr{messageType: "CopyOutResponse"} - } - - columnFormatCodes := make([]uint16, columnCount) - for i := 0; i < columnCount; i++ { - columnFormatCodes[i] = binary.BigEndian.Uint16(buf.Next(2)) - } - - *dst = CopyOutResponse{OverallFormat: overallFormat, ColumnFormatCodes: columnFormatCodes} - - return nil -} - -// Encode encodes src into dst. dst will include the 1 byte message type identifier and the 4 byte message length. 
-func (src *CopyOutResponse) Encode(dst []byte) []byte { - dst = append(dst, 'H') - sp := len(dst) - dst = pgio.AppendInt32(dst, -1) - - dst = append(dst, src.OverallFormat) - - dst = pgio.AppendUint16(dst, uint16(len(src.ColumnFormatCodes))) - for _, fc := range src.ColumnFormatCodes { - dst = pgio.AppendUint16(dst, fc) - } - - pgio.SetInt32(dst[sp:], int32(len(dst[sp:]))) - - return dst -} - -// MarshalJSON implements encoding/json.Marshaler. -func (src CopyOutResponse) MarshalJSON() ([]byte, error) { - return json.Marshal(struct { - Type string - ColumnFormatCodes []uint16 - }{ - Type: "CopyOutResponse", - ColumnFormatCodes: src.ColumnFormatCodes, - }) -} - -// UnmarshalJSON implements encoding/json.Unmarshaler. -func (dst *CopyOutResponse) UnmarshalJSON(data []byte) error { - // Ignore null, like in the main JSON package. - if string(data) == "null" { - return nil - } - - var msg struct { - OverallFormat string - ColumnFormatCodes []uint16 - } - if err := json.Unmarshal(data, &msg); err != nil { - return err - } - - if len(msg.OverallFormat) != 1 { - return errors.New("invalid length for CopyOutResponse.OverallFormat") - } - - dst.OverallFormat = msg.OverallFormat[0] - dst.ColumnFormatCodes = msg.ColumnFormatCodes - return nil -} diff --git a/vendor/github.com/jackc/pgproto3/v2/data_row.go b/vendor/github.com/jackc/pgproto3/v2/data_row.go deleted file mode 100644 index 63768761..00000000 --- a/vendor/github.com/jackc/pgproto3/v2/data_row.go +++ /dev/null @@ -1,142 +0,0 @@ -package pgproto3 - -import ( - "encoding/binary" - "encoding/hex" - "encoding/json" - - "github.com/jackc/pgio" -) - -type DataRow struct { - Values [][]byte -} - -// Backend identifies this message as sendable by the PostgreSQL backend. -func (*DataRow) Backend() {} - -// Decode decodes src into dst. src must contain the complete message with the exception of the initial 1 byte message -// type identifier and 4 byte message length. 
-func (dst *DataRow) Decode(src []byte) error { - if len(src) < 2 { - return &invalidMessageFormatErr{messageType: "DataRow"} - } - rp := 0 - fieldCount := int(binary.BigEndian.Uint16(src[rp:])) - rp += 2 - - // If the capacity of the values slice is too small OR substantially too - // large reallocate. This is too avoid one row with many columns from - // permanently allocating memory. - if cap(dst.Values) < fieldCount || cap(dst.Values)-fieldCount > 32 { - newCap := 32 - if newCap < fieldCount { - newCap = fieldCount - } - dst.Values = make([][]byte, fieldCount, newCap) - } else { - dst.Values = dst.Values[:fieldCount] - } - - for i := 0; i < fieldCount; i++ { - if len(src[rp:]) < 4 { - return &invalidMessageFormatErr{messageType: "DataRow"} - } - - msgSize := int(int32(binary.BigEndian.Uint32(src[rp:]))) - rp += 4 - - // null - if msgSize == -1 { - dst.Values[i] = nil - } else { - if len(src[rp:]) < msgSize { - return &invalidMessageFormatErr{messageType: "DataRow"} - } - - dst.Values[i] = src[rp : rp+msgSize : rp+msgSize] - rp += msgSize - } - } - - return nil -} - -// Encode encodes src into dst. dst will include the 1 byte message type identifier and the 4 byte message length. -func (src *DataRow) Encode(dst []byte) []byte { - dst = append(dst, 'D') - sp := len(dst) - dst = pgio.AppendInt32(dst, -1) - - dst = pgio.AppendUint16(dst, uint16(len(src.Values))) - for _, v := range src.Values { - if v == nil { - dst = pgio.AppendInt32(dst, -1) - continue - } - - dst = pgio.AppendInt32(dst, int32(len(v))) - dst = append(dst, v...) - } - - pgio.SetInt32(dst[sp:], int32(len(dst[sp:]))) - - return dst -} - -// MarshalJSON implements encoding/json.Marshaler. 
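For reference, the DataRow body layout that the deleted `Decode`/`Encode` pair above implements can be sketched standalone: a big-endian `uint16` field count, then for each field an `int32` length (`-1` marking NULL) followed by the raw value bytes. This is a minimal illustration, not part of pgproto3; the helper names are invented for this sketch.

```go
package main

import (
	"encoding/binary"
	"fmt"
)

// encodeDataRowBody builds a DataRow message body (without the 'D' type
// byte or the outer length prefix): a uint16 field count, then per field
// an int32 length (-1 for NULL) followed by the raw bytes.
func encodeDataRowBody(values [][]byte) []byte {
	buf := make([]byte, 2, 64)
	binary.BigEndian.PutUint16(buf, uint16(len(values)))
	for _, v := range values {
		var lenBuf [4]byte
		if v == nil {
			binary.BigEndian.PutUint32(lenBuf[:], 0xFFFFFFFF) // int32(-1) = NULL
			buf = append(buf, lenBuf[:]...)
			continue
		}
		binary.BigEndian.PutUint32(lenBuf[:], uint32(len(v)))
		buf = append(buf, lenBuf[:]...)
		buf = append(buf, v...)
	}
	return buf
}

// decodeDataRowBody is the mirror image; NULL fields come back as nil slices.
func decodeDataRowBody(src []byte) ([][]byte, error) {
	if len(src) < 2 {
		return nil, fmt.Errorf("short DataRow body")
	}
	n := int(binary.BigEndian.Uint16(src))
	rp := 2
	out := make([][]byte, n)
	for i := 0; i < n; i++ {
		if len(src[rp:]) < 4 {
			return nil, fmt.Errorf("short field header")
		}
		size := int(int32(binary.BigEndian.Uint32(src[rp:])))
		rp += 4
		if size == -1 {
			continue // NULL: no value bytes follow
		}
		if len(src[rp:]) < size {
			return nil, fmt.Errorf("short field value")
		}
		out[i] = src[rp : rp+size]
		rp += size
	}
	return out, nil
}

func main() {
	body := encodeDataRowBody([][]byte{[]byte("42"), nil})
	vals, err := decodeDataRowBody(body)
	fmt.Println(string(vals[0]), vals[1] == nil, err == nil) // 42 true true
}
```

Note how the real `Decode` additionally reuses `dst.Values` across rows to avoid per-row allocations; this sketch allocates fresh slices for clarity.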
-func (src DataRow) MarshalJSON() ([]byte, error) { - formattedValues := make([]map[string]string, len(src.Values)) - for i, v := range src.Values { - if v == nil { - continue - } - - var hasNonPrintable bool - for _, b := range v { - if b < 32 { - hasNonPrintable = true - break - } - } - - if hasNonPrintable { - formattedValues[i] = map[string]string{"binary": hex.EncodeToString(v)} - } else { - formattedValues[i] = map[string]string{"text": string(v)} - } - } - - return json.Marshal(struct { - Type string - Values []map[string]string - }{ - Type: "DataRow", - Values: formattedValues, - }) -} - -// UnmarshalJSON implements encoding/json.Unmarshaler. -func (dst *DataRow) UnmarshalJSON(data []byte) error { - // Ignore null, like in the main JSON package. - if string(data) == "null" { - return nil - } - - var msg struct { - Values []map[string]string - } - if err := json.Unmarshal(data, &msg); err != nil { - return err - } - - dst.Values = make([][]byte, len(msg.Values)) - for n, parameter := range msg.Values { - var err error - dst.Values[n], err = getValueFromJSON(parameter) - if err != nil { - return err - } - } - return nil -} diff --git a/vendor/github.com/jackc/pgproto3/v2/describe.go b/vendor/github.com/jackc/pgproto3/v2/describe.go deleted file mode 100644 index 0d825db1..00000000 --- a/vendor/github.com/jackc/pgproto3/v2/describe.go +++ /dev/null @@ -1,88 +0,0 @@ -package pgproto3 - -import ( - "bytes" - "encoding/json" - "errors" - - "github.com/jackc/pgio" -) - -type Describe struct { - ObjectType byte // 'S' = prepared statement, 'P' = portal - Name string -} - -// Frontend identifies this message as sendable by a PostgreSQL frontend. -func (*Describe) Frontend() {} - -// Decode decodes src into dst. src must contain the complete message with the exception of the initial 1 byte message -// type identifier and 4 byte message length. 
-func (dst *Describe) Decode(src []byte) error { - if len(src) < 2 { - return &invalidMessageFormatErr{messageType: "Describe"} - } - - dst.ObjectType = src[0] - rp := 1 - - idx := bytes.IndexByte(src[rp:], 0) - if idx != len(src[rp:])-1 { - return &invalidMessageFormatErr{messageType: "Describe"} - } - - dst.Name = string(src[rp : len(src)-1]) - - return nil -} - -// Encode encodes src into dst. dst will include the 1 byte message type identifier and the 4 byte message length. -func (src *Describe) Encode(dst []byte) []byte { - dst = append(dst, 'D') - sp := len(dst) - dst = pgio.AppendInt32(dst, -1) - - dst = append(dst, src.ObjectType) - dst = append(dst, src.Name...) - dst = append(dst, 0) - - pgio.SetInt32(dst[sp:], int32(len(dst[sp:]))) - - return dst -} - -// MarshalJSON implements encoding/json.Marshaler. -func (src Describe) MarshalJSON() ([]byte, error) { - return json.Marshal(struct { - Type string - ObjectType string - Name string - }{ - Type: "Describe", - ObjectType: string(src.ObjectType), - Name: src.Name, - }) -} - -// UnmarshalJSON implements encoding/json.Unmarshaler. -func (dst *Describe) UnmarshalJSON(data []byte) error { - // Ignore null, like in the main JSON package. - if string(data) == "null" { - return nil - } - - var msg struct { - ObjectType string - Name string - } - if err := json.Unmarshal(data, &msg); err != nil { - return err - } - if len(msg.ObjectType) != 1 { - return errors.New("invalid length for Describe.ObjectType") - } - - dst.ObjectType = byte(msg.ObjectType[0]) - dst.Name = msg.Name - return nil -} diff --git a/vendor/github.com/jackc/pgproto3/v2/doc.go b/vendor/github.com/jackc/pgproto3/v2/doc.go deleted file mode 100644 index 8226dc98..00000000 --- a/vendor/github.com/jackc/pgproto3/v2/doc.go +++ /dev/null @@ -1,4 +0,0 @@ -// Package pgproto3 is a encoder and decoder of the PostgreSQL wire protocol version 3. 
-// -// See https://www.postgresql.org/docs/current/protocol-message-formats.html for meanings of the different messages. -package pgproto3 diff --git a/vendor/github.com/jackc/pgproto3/v2/empty_query_response.go b/vendor/github.com/jackc/pgproto3/v2/empty_query_response.go deleted file mode 100644 index 2b85e744..00000000 --- a/vendor/github.com/jackc/pgproto3/v2/empty_query_response.go +++ /dev/null @@ -1,34 +0,0 @@ -package pgproto3 - -import ( - "encoding/json" -) - -type EmptyQueryResponse struct{} - -// Backend identifies this message as sendable by the PostgreSQL backend. -func (*EmptyQueryResponse) Backend() {} - -// Decode decodes src into dst. src must contain the complete message with the exception of the initial 1 byte message -// type identifier and 4 byte message length. -func (dst *EmptyQueryResponse) Decode(src []byte) error { - if len(src) != 0 { - return &invalidMessageLenErr{messageType: "EmptyQueryResponse", expectedLen: 0, actualLen: len(src)} - } - - return nil -} - -// Encode encodes src into dst. dst will include the 1 byte message type identifier and the 4 byte message length. -func (src *EmptyQueryResponse) Encode(dst []byte) []byte { - return append(dst, 'I', 0, 0, 0, 4) -} - -// MarshalJSON implements encoding/json.Marshaler. 
-func (src EmptyQueryResponse) MarshalJSON() ([]byte, error) { - return json.Marshal(struct { - Type string - }{ - Type: "EmptyQueryResponse", - }) -} diff --git a/vendor/github.com/jackc/pgproto3/v2/error_response.go b/vendor/github.com/jackc/pgproto3/v2/error_response.go deleted file mode 100644 index ec51e019..00000000 --- a/vendor/github.com/jackc/pgproto3/v2/error_response.go +++ /dev/null @@ -1,334 +0,0 @@ -package pgproto3 - -import ( - "bytes" - "encoding/binary" - "encoding/json" - "strconv" -) - -type ErrorResponse struct { - Severity string - SeverityUnlocalized string // only in 9.6 and greater - Code string - Message string - Detail string - Hint string - Position int32 - InternalPosition int32 - InternalQuery string - Where string - SchemaName string - TableName string - ColumnName string - DataTypeName string - ConstraintName string - File string - Line int32 - Routine string - - UnknownFields map[byte]string -} - -// Backend identifies this message as sendable by the PostgreSQL backend. -func (*ErrorResponse) Backend() {} - -// Decode decodes src into dst. src must contain the complete message with the exception of the initial 1 byte message -// type identifier and 4 byte message length. 
-func (dst *ErrorResponse) Decode(src []byte) error { - *dst = ErrorResponse{} - - buf := bytes.NewBuffer(src) - - for { - k, err := buf.ReadByte() - if err != nil { - return err - } - if k == 0 { - break - } - - vb, err := buf.ReadBytes(0) - if err != nil { - return err - } - v := string(vb[:len(vb)-1]) - - switch k { - case 'S': - dst.Severity = v - case 'V': - dst.SeverityUnlocalized = v - case 'C': - dst.Code = v - case 'M': - dst.Message = v - case 'D': - dst.Detail = v - case 'H': - dst.Hint = v - case 'P': - s := v - n, _ := strconv.ParseInt(s, 10, 32) - dst.Position = int32(n) - case 'p': - s := v - n, _ := strconv.ParseInt(s, 10, 32) - dst.InternalPosition = int32(n) - case 'q': - dst.InternalQuery = v - case 'W': - dst.Where = v - case 's': - dst.SchemaName = v - case 't': - dst.TableName = v - case 'c': - dst.ColumnName = v - case 'd': - dst.DataTypeName = v - case 'n': - dst.ConstraintName = v - case 'F': - dst.File = v - case 'L': - s := v - n, _ := strconv.ParseInt(s, 10, 32) - dst.Line = int32(n) - case 'R': - dst.Routine = v - - default: - if dst.UnknownFields == nil { - dst.UnknownFields = make(map[byte]string) - } - dst.UnknownFields[k] = v - } - } - - return nil -} - -// Encode encodes src into dst. dst will include the 1 byte message type identifier and the 4 byte message length. -func (src *ErrorResponse) Encode(dst []byte) []byte { - return append(dst, src.marshalBinary('E')...) 
-} - -func (src *ErrorResponse) marshalBinary(typeByte byte) []byte { - var bigEndian BigEndianBuf - buf := &bytes.Buffer{} - - buf.WriteByte(typeByte) - buf.Write(bigEndian.Uint32(0)) - - if src.Severity != "" { - buf.WriteByte('S') - buf.WriteString(src.Severity) - buf.WriteByte(0) - } - if src.SeverityUnlocalized != "" { - buf.WriteByte('V') - buf.WriteString(src.SeverityUnlocalized) - buf.WriteByte(0) - } - if src.Code != "" { - buf.WriteByte('C') - buf.WriteString(src.Code) - buf.WriteByte(0) - } - if src.Message != "" { - buf.WriteByte('M') - buf.WriteString(src.Message) - buf.WriteByte(0) - } - if src.Detail != "" { - buf.WriteByte('D') - buf.WriteString(src.Detail) - buf.WriteByte(0) - } - if src.Hint != "" { - buf.WriteByte('H') - buf.WriteString(src.Hint) - buf.WriteByte(0) - } - if src.Position != 0 { - buf.WriteByte('P') - buf.WriteString(strconv.Itoa(int(src.Position))) - buf.WriteByte(0) - } - if src.InternalPosition != 0 { - buf.WriteByte('p') - buf.WriteString(strconv.Itoa(int(src.InternalPosition))) - buf.WriteByte(0) - } - if src.InternalQuery != "" { - buf.WriteByte('q') - buf.WriteString(src.InternalQuery) - buf.WriteByte(0) - } - if src.Where != "" { - buf.WriteByte('W') - buf.WriteString(src.Where) - buf.WriteByte(0) - } - if src.SchemaName != "" { - buf.WriteByte('s') - buf.WriteString(src.SchemaName) - buf.WriteByte(0) - } - if src.TableName != "" { - buf.WriteByte('t') - buf.WriteString(src.TableName) - buf.WriteByte(0) - } - if src.ColumnName != "" { - buf.WriteByte('c') - buf.WriteString(src.ColumnName) - buf.WriteByte(0) - } - if src.DataTypeName != "" { - buf.WriteByte('d') - buf.WriteString(src.DataTypeName) - buf.WriteByte(0) - } - if src.ConstraintName != "" { - buf.WriteByte('n') - buf.WriteString(src.ConstraintName) - buf.WriteByte(0) - } - if src.File != "" { - buf.WriteByte('F') - buf.WriteString(src.File) - buf.WriteByte(0) - } - if src.Line != 0 { - buf.WriteByte('L') - buf.WriteString(strconv.Itoa(int(src.Line))) - 
buf.WriteByte(0) - } - if src.Routine != "" { - buf.WriteByte('R') - buf.WriteString(src.Routine) - buf.WriteByte(0) - } - - for k, v := range src.UnknownFields { - buf.WriteByte(k) - buf.WriteByte(0) - buf.WriteString(v) - buf.WriteByte(0) - } - - buf.WriteByte(0) - - binary.BigEndian.PutUint32(buf.Bytes()[1:5], uint32(buf.Len()-1)) - - return buf.Bytes() -} - -// MarshalJSON implements encoding/json.Marshaler. -func (src ErrorResponse) MarshalJSON() ([]byte, error) { - return json.Marshal(struct { - Type string - Severity string - SeverityUnlocalized string // only in 9.6 and greater - Code string - Message string - Detail string - Hint string - Position int32 - InternalPosition int32 - InternalQuery string - Where string - SchemaName string - TableName string - ColumnName string - DataTypeName string - ConstraintName string - File string - Line int32 - Routine string - - UnknownFields map[byte]string - }{ - Type: "ErrorResponse", - Severity: src.Severity, - SeverityUnlocalized: src.SeverityUnlocalized, - Code: src.Code, - Message: src.Message, - Detail: src.Detail, - Hint: src.Hint, - Position: src.Position, - InternalPosition: src.InternalPosition, - InternalQuery: src.InternalQuery, - Where: src.Where, - SchemaName: src.SchemaName, - TableName: src.TableName, - ColumnName: src.ColumnName, - DataTypeName: src.DataTypeName, - ConstraintName: src.ConstraintName, - File: src.File, - Line: src.Line, - Routine: src.Routine, - UnknownFields: src.UnknownFields, - }) -} - -// UnmarshalJSON implements encoding/json.Unmarshaler. -func (dst *ErrorResponse) UnmarshalJSON(data []byte) error { - // Ignore null, like in the main JSON package. 
- if string(data) == "null" { - return nil - } - - var msg struct { - Type string - Severity string - SeverityUnlocalized string // only in 9.6 and greater - Code string - Message string - Detail string - Hint string - Position int32 - InternalPosition int32 - InternalQuery string - Where string - SchemaName string - TableName string - ColumnName string - DataTypeName string - ConstraintName string - File string - Line int32 - Routine string - - UnknownFields map[byte]string - } - if err := json.Unmarshal(data, &msg); err != nil { - return err - } - - dst.Severity = msg.Severity - dst.SeverityUnlocalized = msg.SeverityUnlocalized - dst.Code = msg.Code - dst.Message = msg.Message - dst.Detail = msg.Detail - dst.Hint = msg.Hint - dst.Position = msg.Position - dst.InternalPosition = msg.InternalPosition - dst.InternalQuery = msg.InternalQuery - dst.Where = msg.Where - dst.SchemaName = msg.SchemaName - dst.TableName = msg.TableName - dst.ColumnName = msg.ColumnName - dst.DataTypeName = msg.DataTypeName - dst.ConstraintName = msg.ConstraintName - dst.File = msg.File - dst.Line = msg.Line - dst.Routine = msg.Routine - - dst.UnknownFields = msg.UnknownFields - - return nil -} diff --git a/vendor/github.com/jackc/pgproto3/v2/execute.go b/vendor/github.com/jackc/pgproto3/v2/execute.go deleted file mode 100644 index 8bae6133..00000000 --- a/vendor/github.com/jackc/pgproto3/v2/execute.go +++ /dev/null @@ -1,65 +0,0 @@ -package pgproto3 - -import ( - "bytes" - "encoding/binary" - "encoding/json" - - "github.com/jackc/pgio" -) - -type Execute struct { - Portal string - MaxRows uint32 -} - -// Frontend identifies this message as sendable by a PostgreSQL frontend. -func (*Execute) Frontend() {} - -// Decode decodes src into dst. src must contain the complete message with the exception of the initial 1 byte message -// type identifier and 4 byte message length. 
-func (dst *Execute) Decode(src []byte) error { - buf := bytes.NewBuffer(src) - - b, err := buf.ReadBytes(0) - if err != nil { - return err - } - dst.Portal = string(b[:len(b)-1]) - - if buf.Len() < 4 { - return &invalidMessageFormatErr{messageType: "Execute"} - } - dst.MaxRows = binary.BigEndian.Uint32(buf.Next(4)) - - return nil -} - -// Encode encodes src into dst. dst will include the 1 byte message type identifier and the 4 byte message length. -func (src *Execute) Encode(dst []byte) []byte { - dst = append(dst, 'E') - sp := len(dst) - dst = pgio.AppendInt32(dst, -1) - - dst = append(dst, src.Portal...) - dst = append(dst, 0) - - dst = pgio.AppendUint32(dst, src.MaxRows) - - pgio.SetInt32(dst[sp:], int32(len(dst[sp:]))) - - return dst -} - -// MarshalJSON implements encoding/json.Marshaler. -func (src Execute) MarshalJSON() ([]byte, error) { - return json.Marshal(struct { - Type string - Portal string - MaxRows uint32 - }{ - Type: "Execute", - Portal: src.Portal, - MaxRows: src.MaxRows, - }) -} diff --git a/vendor/github.com/jackc/pgproto3/v2/flush.go b/vendor/github.com/jackc/pgproto3/v2/flush.go deleted file mode 100644 index 2725f689..00000000 --- a/vendor/github.com/jackc/pgproto3/v2/flush.go +++ /dev/null @@ -1,34 +0,0 @@ -package pgproto3 - -import ( - "encoding/json" -) - -type Flush struct{} - -// Frontend identifies this message as sendable by a PostgreSQL frontend. -func (*Flush) Frontend() {} - -// Decode decodes src into dst. src must contain the complete message with the exception of the initial 1 byte message -// type identifier and 4 byte message length. -func (dst *Flush) Decode(src []byte) error { - if len(src) != 0 { - return &invalidMessageLenErr{messageType: "Flush", expectedLen: 0, actualLen: len(src)} - } - - return nil -} - -// Encode encodes src into dst. dst will include the 1 byte message type identifier and the 4 byte message length. 
-func (src *Flush) Encode(dst []byte) []byte { - return append(dst, 'H', 0, 0, 0, 4) -} - -// MarshalJSON implements encoding/json.Marshaler. -func (src Flush) MarshalJSON() ([]byte, error) { - return json.Marshal(struct { - Type string - }{ - Type: "Flush", - }) -} diff --git a/vendor/github.com/jackc/pgproto3/v2/frontend.go b/vendor/github.com/jackc/pgproto3/v2/frontend.go deleted file mode 100644 index 5be8de80..00000000 --- a/vendor/github.com/jackc/pgproto3/v2/frontend.go +++ /dev/null @@ -1,206 +0,0 @@ -package pgproto3 - -import ( - "encoding/binary" - "errors" - "fmt" - "io" -) - -// Frontend acts as a client for the PostgreSQL wire protocol version 3. -type Frontend struct { - cr ChunkReader - w io.Writer - - // Backend message flyweights - authenticationOk AuthenticationOk - authenticationCleartextPassword AuthenticationCleartextPassword - authenticationMD5Password AuthenticationMD5Password - authenticationGSS AuthenticationGSS - authenticationGSSContinue AuthenticationGSSContinue - authenticationSASL AuthenticationSASL - authenticationSASLContinue AuthenticationSASLContinue - authenticationSASLFinal AuthenticationSASLFinal - backendKeyData BackendKeyData - bindComplete BindComplete - closeComplete CloseComplete - commandComplete CommandComplete - copyBothResponse CopyBothResponse - copyData CopyData - copyInResponse CopyInResponse - copyOutResponse CopyOutResponse - copyDone CopyDone - dataRow DataRow - emptyQueryResponse EmptyQueryResponse - errorResponse ErrorResponse - functionCallResponse FunctionCallResponse - noData NoData - noticeResponse NoticeResponse - notificationResponse NotificationResponse - parameterDescription ParameterDescription - parameterStatus ParameterStatus - parseComplete ParseComplete - readyForQuery ReadyForQuery - rowDescription RowDescription - portalSuspended PortalSuspended - - bodyLen int - msgType byte - partialMsg bool - authType uint32 -} - -// NewFrontend creates a new Frontend. 
-func NewFrontend(cr ChunkReader, w io.Writer) *Frontend { - return &Frontend{cr: cr, w: w} -} - -// Send sends a message to the backend. -func (f *Frontend) Send(msg FrontendMessage) error { - _, err := f.w.Write(msg.Encode(nil)) - return err -} - -func translateEOFtoErrUnexpectedEOF(err error) error { - if err == io.EOF { - return io.ErrUnexpectedEOF - } - return err -} - -// Receive receives a message from the backend. The returned message is only valid until the next call to Receive. -func (f *Frontend) Receive() (BackendMessage, error) { - if !f.partialMsg { - header, err := f.cr.Next(5) - if err != nil { - return nil, translateEOFtoErrUnexpectedEOF(err) - } - - f.msgType = header[0] - f.bodyLen = int(binary.BigEndian.Uint32(header[1:])) - 4 - f.partialMsg = true - if f.bodyLen < 0 { - return nil, errors.New("invalid message with negative body length received") - } - } - - msgBody, err := f.cr.Next(f.bodyLen) - if err != nil { - return nil, translateEOFtoErrUnexpectedEOF(err) - } - - f.partialMsg = false - - var msg BackendMessage - switch f.msgType { - case '1': - msg = &f.parseComplete - case '2': - msg = &f.bindComplete - case '3': - msg = &f.closeComplete - case 'A': - msg = &f.notificationResponse - case 'c': - msg = &f.copyDone - case 'C': - msg = &f.commandComplete - case 'd': - msg = &f.copyData - case 'D': - msg = &f.dataRow - case 'E': - msg = &f.errorResponse - case 'G': - msg = &f.copyInResponse - case 'H': - msg = &f.copyOutResponse - case 'I': - msg = &f.emptyQueryResponse - case 'K': - msg = &f.backendKeyData - case 'n': - msg = &f.noData - case 'N': - msg = &f.noticeResponse - case 'R': - var err error - msg, err = f.findAuthenticationMessageType(msgBody) - if err != nil { - return nil, err - } - case 's': - msg = &f.portalSuspended - case 'S': - msg = &f.parameterStatus - case 't': - msg = &f.parameterDescription - case 'T': - msg = &f.rowDescription - case 'V': - msg = &f.functionCallResponse - case 'W': - msg = &f.copyBothResponse - case 
'Z': - msg = &f.readyForQuery - default: - return nil, fmt.Errorf("unknown message type: %c", f.msgType) - } - - err = msg.Decode(msgBody) - return msg, err -} - -// Authentication message type constants. -// See src/include/libpq/pqcomm.h for all -// constants. -const ( - AuthTypeOk = 0 - AuthTypeCleartextPassword = 3 - AuthTypeMD5Password = 5 - AuthTypeSCMCreds = 6 - AuthTypeGSS = 7 - AuthTypeGSSCont = 8 - AuthTypeSSPI = 9 - AuthTypeSASL = 10 - AuthTypeSASLContinue = 11 - AuthTypeSASLFinal = 12 -) - -func (f *Frontend) findAuthenticationMessageType(src []byte) (BackendMessage, error) { - if len(src) < 4 { - return nil, errors.New("authentication message too short") - } - f.authType = binary.BigEndian.Uint32(src[:4]) - - switch f.authType { - case AuthTypeOk: - return &f.authenticationOk, nil - case AuthTypeCleartextPassword: - return &f.authenticationCleartextPassword, nil - case AuthTypeMD5Password: - return &f.authenticationMD5Password, nil - case AuthTypeSCMCreds: - return nil, errors.New("AuthTypeSCMCreds is unimplemented") - case AuthTypeGSS: - return &f.authenticationGSS, nil - case AuthTypeGSSCont: - return &f.authenticationGSSContinue, nil - case AuthTypeSSPI: - return nil, errors.New("AuthTypeSSPI is unimplemented") - case AuthTypeSASL: - return &f.authenticationSASL, nil - case AuthTypeSASLContinue: - return &f.authenticationSASLContinue, nil - case AuthTypeSASLFinal: - return &f.authenticationSASLFinal, nil - default: - return nil, fmt.Errorf("unknown authentication type: %d", f.authType) - } -} - -// GetAuthType returns the authType used in the current state of the frontend. -// See SetAuthType for more information. 
-func (f *Frontend) GetAuthType() uint32 { - return f.authType -} diff --git a/vendor/github.com/jackc/pgproto3/v2/function_call.go b/vendor/github.com/jackc/pgproto3/v2/function_call.go deleted file mode 100644 index b3a22c4f..00000000 --- a/vendor/github.com/jackc/pgproto3/v2/function_call.go +++ /dev/null @@ -1,94 +0,0 @@ -package pgproto3 - -import ( - "encoding/binary" - "github.com/jackc/pgio" -) - -type FunctionCall struct { - Function uint32 - ArgFormatCodes []uint16 - Arguments [][]byte - ResultFormatCode uint16 -} - -// Frontend identifies this message as sendable by a PostgreSQL frontend. -func (*FunctionCall) Frontend() {} - -// Decode decodes src into dst. src must contain the complete message with the exception of the initial 1 byte message -// type identifier and 4 byte message length. -func (dst *FunctionCall) Decode(src []byte) error { - *dst = FunctionCall{} - rp := 0 - // Specifies the object ID of the function to call. - dst.Function = binary.BigEndian.Uint32(src[rp:]) - rp += 4 - // The number of argument format codes that follow (denoted C below). - // This can be zero to indicate that there are no arguments or that the arguments all use the default format (text); - // or one, in which case the specified format code is applied to all arguments; - // or it can equal the actual number of arguments. - nArgumentCodes := int(binary.BigEndian.Uint16(src[rp:])) - rp += 2 - argumentCodes := make([]uint16, nArgumentCodes) - for i := 0; i < nArgumentCodes; i++ { - // The argument format codes. Each must presently be zero (text) or one (binary). - ac := binary.BigEndian.Uint16(src[rp:]) - if ac != 0 && ac != 1 { - return &invalidMessageFormatErr{messageType: "FunctionCall"} - } - argumentCodes[i] = ac - rp += 2 - } - dst.ArgFormatCodes = argumentCodes - - // Specifies the number of arguments being supplied to the function. 
- nArguments := int(binary.BigEndian.Uint16(src[rp:])) - rp += 2 - arguments := make([][]byte, nArguments) - for i := 0; i < nArguments; i++ { - // The length of the argument value, in bytes (this count does not include itself). Can be zero. - // As a special case, -1 indicates a NULL argument value. No value bytes follow in the NULL case. - argumentLength := int(binary.BigEndian.Uint32(src[rp:])) - rp += 4 - if argumentLength == -1 { - arguments[i] = nil - } else { - // The value of the argument, in the format indicated by the associated format code. n is the above length. - argumentValue := src[rp : rp+argumentLength] - rp += argumentLength - arguments[i] = argumentValue - } - } - dst.Arguments = arguments - // The format code for the function result. Must presently be zero (text) or one (binary). - resultFormatCode := binary.BigEndian.Uint16(src[rp:]) - if resultFormatCode != 0 && resultFormatCode != 1 { - return &invalidMessageFormatErr{messageType: "FunctionCall"} - } - dst.ResultFormatCode = resultFormatCode - return nil -} - -// Encode encodes src into dst. dst will include the 1 byte message type identifier and the 4 byte message length. -func (src *FunctionCall) Encode(dst []byte) []byte { - dst = append(dst, 'F') - sp := len(dst) - dst = pgio.AppendUint32(dst, 0) // Unknown length, set it at the end - dst = pgio.AppendUint32(dst, src.Function) - dst = pgio.AppendUint16(dst, uint16(len(src.ArgFormatCodes))) - for _, argFormatCode := range src.ArgFormatCodes { - dst = pgio.AppendUint16(dst, argFormatCode) - } - dst = pgio.AppendUint16(dst, uint16(len(src.Arguments))) - for _, argument := range src.Arguments { - if argument == nil { - dst = pgio.AppendInt32(dst, -1) - } else { - dst = pgio.AppendInt32(dst, int32(len(argument))) - dst = append(dst, argument...) 
- } - } - dst = pgio.AppendUint16(dst, src.ResultFormatCode) - pgio.SetInt32(dst[sp:], int32(len(dst[sp:]))) - return dst -} diff --git a/vendor/github.com/jackc/pgproto3/v2/function_call_response.go b/vendor/github.com/jackc/pgproto3/v2/function_call_response.go deleted file mode 100644 index 53d64222..00000000 --- a/vendor/github.com/jackc/pgproto3/v2/function_call_response.go +++ /dev/null @@ -1,101 +0,0 @@ -package pgproto3 - -import ( - "encoding/binary" - "encoding/hex" - "encoding/json" - - "github.com/jackc/pgio" -) - -type FunctionCallResponse struct { - Result []byte -} - -// Backend identifies this message as sendable by the PostgreSQL backend. -func (*FunctionCallResponse) Backend() {} - -// Decode decodes src into dst. src must contain the complete message with the exception of the initial 1 byte message -// type identifier and 4 byte message length. -func (dst *FunctionCallResponse) Decode(src []byte) error { - if len(src) < 4 { - return &invalidMessageFormatErr{messageType: "FunctionCallResponse"} - } - rp := 0 - resultSize := int(binary.BigEndian.Uint32(src[rp:])) - rp += 4 - - if resultSize == -1 { - dst.Result = nil - return nil - } - - if len(src[rp:]) != resultSize { - return &invalidMessageFormatErr{messageType: "FunctionCallResponse"} - } - - dst.Result = src[rp:] - return nil -} - -// Encode encodes src into dst. dst will include the 1 byte message type identifier and the 4 byte message length. -func (src *FunctionCallResponse) Encode(dst []byte) []byte { - dst = append(dst, 'V') - sp := len(dst) - dst = pgio.AppendInt32(dst, -1) - - if src.Result == nil { - dst = pgio.AppendInt32(dst, -1) - } else { - dst = pgio.AppendInt32(dst, int32(len(src.Result))) - dst = append(dst, src.Result...) - } - - pgio.SetInt32(dst[sp:], int32(len(dst[sp:]))) - - return dst -} - -// MarshalJSON implements encoding/json.Marshaler. 
-func (src FunctionCallResponse) MarshalJSON() ([]byte, error) { - var formattedValue map[string]string - var hasNonPrintable bool - for _, b := range src.Result { - if b < 32 { - hasNonPrintable = true - break - } - } - - if hasNonPrintable { - formattedValue = map[string]string{"binary": hex.EncodeToString(src.Result)} - } else { - formattedValue = map[string]string{"text": string(src.Result)} - } - - return json.Marshal(struct { - Type string - Result map[string]string - }{ - Type: "FunctionCallResponse", - Result: formattedValue, - }) -} - -// UnmarshalJSON implements encoding/json.Unmarshaler. -func (dst *FunctionCallResponse) UnmarshalJSON(data []byte) error { - // Ignore null, like in the main JSON package. - if string(data) == "null" { - return nil - } - - var msg struct { - Result map[string]string - } - err := json.Unmarshal(data, &msg) - if err != nil { - return err - } - dst.Result, err = getValueFromJSON(msg.Result) - return err -} diff --git a/vendor/github.com/jackc/pgproto3/v2/gss_enc_request.go b/vendor/github.com/jackc/pgproto3/v2/gss_enc_request.go deleted file mode 100644 index cf405a3e..00000000 --- a/vendor/github.com/jackc/pgproto3/v2/gss_enc_request.go +++ /dev/null @@ -1,49 +0,0 @@ -package pgproto3 - -import ( - "encoding/binary" - "encoding/json" - "errors" - - "github.com/jackc/pgio" -) - -const gssEncReqNumber = 80877104 - -type GSSEncRequest struct { -} - -// Frontend identifies this message as sendable by a PostgreSQL frontend. -func (*GSSEncRequest) Frontend() {} - -func (dst *GSSEncRequest) Decode(src []byte) error { - if len(src) < 4 { - return errors.New("gss encoding request too short") - } - - requestCode := binary.BigEndian.Uint32(src) - - if requestCode != gssEncReqNumber { - return errors.New("bad gss encoding request code") - } - - return nil -} - -// Encode encodes src into dst. dst will include the 4 byte message length. 
-func (src *GSSEncRequest) Encode(dst []byte) []byte { - dst = pgio.AppendInt32(dst, 8) - dst = pgio.AppendInt32(dst, gssEncReqNumber) - return dst -} - -// MarshalJSON implements encoding/json.Marshaler. -func (src GSSEncRequest) MarshalJSON() ([]byte, error) { - return json.Marshal(struct { - Type string - ProtocolVersion uint32 - Parameters map[string]string - }{ - Type: "GSSEncRequest", - }) -} diff --git a/vendor/github.com/jackc/pgproto3/v2/gss_response.go b/vendor/github.com/jackc/pgproto3/v2/gss_response.go deleted file mode 100644 index 62da99c7..00000000 --- a/vendor/github.com/jackc/pgproto3/v2/gss_response.go +++ /dev/null @@ -1,48 +0,0 @@ -package pgproto3 - -import ( - "encoding/json" - "github.com/jackc/pgio" -) - -type GSSResponse struct { - Data []byte -} - -// Frontend identifies this message as sendable by a PostgreSQL frontend. -func (g *GSSResponse) Frontend() {} - -func (g *GSSResponse) Decode(data []byte) error { - g.Data = data - return nil -} - -func (g *GSSResponse) Encode(dst []byte) []byte { - dst = append(dst, 'p') - dst = pgio.AppendInt32(dst, int32(4+len(g.Data))) - dst = append(dst, g.Data...) - return dst -} - -// MarshalJSON implements encoding/json.Marshaler. -func (g *GSSResponse) MarshalJSON() ([]byte, error) { - return json.Marshal(struct { - Type string - Data []byte - }{ - Type: "GSSResponse", - Data: g.Data, - }) -} - -// UnmarshalJSON implements encoding/json.Unmarshaler. 
-func (g *GSSResponse) UnmarshalJSON(data []byte) error { - var msg struct { - Data []byte - } - if err := json.Unmarshal(data, &msg); err != nil { - return err - } - g.Data = msg.Data - return nil -} diff --git a/vendor/github.com/jackc/pgproto3/v2/no_data.go b/vendor/github.com/jackc/pgproto3/v2/no_data.go deleted file mode 100644 index d8f85d38..00000000 --- a/vendor/github.com/jackc/pgproto3/v2/no_data.go +++ /dev/null @@ -1,34 +0,0 @@ -package pgproto3 - -import ( - "encoding/json" -) - -type NoData struct{} - -// Backend identifies this message as sendable by the PostgreSQL backend. -func (*NoData) Backend() {} - -// Decode decodes src into dst. src must contain the complete message with the exception of the initial 1 byte message -// type identifier and 4 byte message length. -func (dst *NoData) Decode(src []byte) error { - if len(src) != 0 { - return &invalidMessageLenErr{messageType: "NoData", expectedLen: 0, actualLen: len(src)} - } - - return nil -} - -// Encode encodes src into dst. dst will include the 1 byte message type identifier and the 4 byte message length. -func (src *NoData) Encode(dst []byte) []byte { - return append(dst, 'n', 0, 0, 0, 4) -} - -// MarshalJSON implements encoding/json.Marshaler. -func (src NoData) MarshalJSON() ([]byte, error) { - return json.Marshal(struct { - Type string - }{ - Type: "NoData", - }) -} diff --git a/vendor/github.com/jackc/pgproto3/v2/notice_response.go b/vendor/github.com/jackc/pgproto3/v2/notice_response.go deleted file mode 100644 index 4ac28a79..00000000 --- a/vendor/github.com/jackc/pgproto3/v2/notice_response.go +++ /dev/null @@ -1,17 +0,0 @@ -package pgproto3 - -type NoticeResponse ErrorResponse - -// Backend identifies this message as sendable by the PostgreSQL backend. -func (*NoticeResponse) Backend() {} - -// Decode decodes src into dst. src must contain the complete message with the exception of the initial 1 byte message -// type identifier and 4 byte message length. 
-func (dst *NoticeResponse) Decode(src []byte) error { - return (*ErrorResponse)(dst).Decode(src) -} - -// Encode encodes src into dst. dst will include the 1 byte message type identifier and the 4 byte message length. -func (src *NoticeResponse) Encode(dst []byte) []byte { - return append(dst, (*ErrorResponse)(src).marshalBinary('N')...) -} diff --git a/vendor/github.com/jackc/pgproto3/v2/notification_response.go b/vendor/github.com/jackc/pgproto3/v2/notification_response.go deleted file mode 100644 index e762eb96..00000000 --- a/vendor/github.com/jackc/pgproto3/v2/notification_response.go +++ /dev/null @@ -1,73 +0,0 @@ -package pgproto3 - -import ( - "bytes" - "encoding/binary" - "encoding/json" - - "github.com/jackc/pgio" -) - -type NotificationResponse struct { - PID uint32 - Channel string - Payload string -} - -// Backend identifies this message as sendable by the PostgreSQL backend. -func (*NotificationResponse) Backend() {} - -// Decode decodes src into dst. src must contain the complete message with the exception of the initial 1 byte message -// type identifier and 4 byte message length. -func (dst *NotificationResponse) Decode(src []byte) error { - buf := bytes.NewBuffer(src) - - pid := binary.BigEndian.Uint32(buf.Next(4)) - - b, err := buf.ReadBytes(0) - if err != nil { - return err - } - channel := string(b[:len(b)-1]) - - b, err = buf.ReadBytes(0) - if err != nil { - return err - } - payload := string(b[:len(b)-1]) - - *dst = NotificationResponse{PID: pid, Channel: channel, Payload: payload} - return nil -} - -// Encode encodes src into dst. dst will include the 1 byte message type identifier and the 4 byte message length. -func (src *NotificationResponse) Encode(dst []byte) []byte { - dst = append(dst, 'A') - sp := len(dst) - dst = pgio.AppendInt32(dst, -1) - - dst = pgio.AppendUint32(dst, src.PID) - dst = append(dst, src.Channel...) - dst = append(dst, 0) - dst = append(dst, src.Payload...) 
- dst = append(dst, 0) - - pgio.SetInt32(dst[sp:], int32(len(dst[sp:]))) - - return dst -} - -// MarshalJSON implements encoding/json.Marshaler. -func (src NotificationResponse) MarshalJSON() ([]byte, error) { - return json.Marshal(struct { - Type string - PID uint32 - Channel string - Payload string - }{ - Type: "NotificationResponse", - PID: src.PID, - Channel: src.Channel, - Payload: src.Payload, - }) -} diff --git a/vendor/github.com/jackc/pgproto3/v2/parameter_description.go b/vendor/github.com/jackc/pgproto3/v2/parameter_description.go deleted file mode 100644 index e28965c8..00000000 --- a/vendor/github.com/jackc/pgproto3/v2/parameter_description.go +++ /dev/null @@ -1,66 +0,0 @@ -package pgproto3 - -import ( - "bytes" - "encoding/binary" - "encoding/json" - - "github.com/jackc/pgio" -) - -type ParameterDescription struct { - ParameterOIDs []uint32 -} - -// Backend identifies this message as sendable by the PostgreSQL backend. -func (*ParameterDescription) Backend() {} - -// Decode decodes src into dst. src must contain the complete message with the exception of the initial 1 byte message -// type identifier and 4 byte message length. -func (dst *ParameterDescription) Decode(src []byte) error { - buf := bytes.NewBuffer(src) - - if buf.Len() < 2 { - return &invalidMessageFormatErr{messageType: "ParameterDescription"} - } - - // Reported parameter count will be incorrect when number of args is greater than uint16 - buf.Next(2) - // Instead infer parameter count by remaining size of message - parameterCount := buf.Len() / 4 - - *dst = ParameterDescription{ParameterOIDs: make([]uint32, parameterCount)} - - for i := 0; i < parameterCount; i++ { - dst.ParameterOIDs[i] = binary.BigEndian.Uint32(buf.Next(4)) - } - - return nil -} - -// Encode encodes src into dst. dst will include the 1 byte message type identifier and the 4 byte message length. 
-func (src *ParameterDescription) Encode(dst []byte) []byte { - dst = append(dst, 't') - sp := len(dst) - dst = pgio.AppendInt32(dst, -1) - - dst = pgio.AppendUint16(dst, uint16(len(src.ParameterOIDs))) - for _, oid := range src.ParameterOIDs { - dst = pgio.AppendUint32(dst, oid) - } - - pgio.SetInt32(dst[sp:], int32(len(dst[sp:]))) - - return dst -} - -// MarshalJSON implements encoding/json.Marshaler. -func (src ParameterDescription) MarshalJSON() ([]byte, error) { - return json.Marshal(struct { - Type string - ParameterOIDs []uint32 - }{ - Type: "ParameterDescription", - ParameterOIDs: src.ParameterOIDs, - }) -} diff --git a/vendor/github.com/jackc/pgproto3/v2/parameter_status.go b/vendor/github.com/jackc/pgproto3/v2/parameter_status.go deleted file mode 100644 index c4021d92..00000000 --- a/vendor/github.com/jackc/pgproto3/v2/parameter_status.go +++ /dev/null @@ -1,66 +0,0 @@ -package pgproto3 - -import ( - "bytes" - "encoding/json" - - "github.com/jackc/pgio" -) - -type ParameterStatus struct { - Name string - Value string -} - -// Backend identifies this message as sendable by the PostgreSQL backend. -func (*ParameterStatus) Backend() {} - -// Decode decodes src into dst. src must contain the complete message with the exception of the initial 1 byte message -// type identifier and 4 byte message length. -func (dst *ParameterStatus) Decode(src []byte) error { - buf := bytes.NewBuffer(src) - - b, err := buf.ReadBytes(0) - if err != nil { - return err - } - name := string(b[:len(b)-1]) - - b, err = buf.ReadBytes(0) - if err != nil { - return err - } - value := string(b[:len(b)-1]) - - *dst = ParameterStatus{Name: name, Value: value} - return nil -} - -// Encode encodes src into dst. dst will include the 1 byte message type identifier and the 4 byte message length. -func (src *ParameterStatus) Encode(dst []byte) []byte { - dst = append(dst, 'S') - sp := len(dst) - dst = pgio.AppendInt32(dst, -1) - - dst = append(dst, src.Name...) 
- dst = append(dst, 0) - dst = append(dst, src.Value...) - dst = append(dst, 0) - - pgio.SetInt32(dst[sp:], int32(len(dst[sp:]))) - - return dst -} - -// MarshalJSON implements encoding/json.Marshaler. -func (ps ParameterStatus) MarshalJSON() ([]byte, error) { - return json.Marshal(struct { - Type string - Name string - Value string - }{ - Type: "ParameterStatus", - Name: ps.Name, - Value: ps.Value, - }) -} diff --git a/vendor/github.com/jackc/pgproto3/v2/parse.go b/vendor/github.com/jackc/pgproto3/v2/parse.go deleted file mode 100644 index 723885d4..00000000 --- a/vendor/github.com/jackc/pgproto3/v2/parse.go +++ /dev/null @@ -1,88 +0,0 @@ -package pgproto3 - -import ( - "bytes" - "encoding/binary" - "encoding/json" - - "github.com/jackc/pgio" -) - -type Parse struct { - Name string - Query string - ParameterOIDs []uint32 -} - -// Frontend identifies this message as sendable by a PostgreSQL frontend. -func (*Parse) Frontend() {} - -// Decode decodes src into dst. src must contain the complete message with the exception of the initial 1 byte message -// type identifier and 4 byte message length. -func (dst *Parse) Decode(src []byte) error { - *dst = Parse{} - - buf := bytes.NewBuffer(src) - - b, err := buf.ReadBytes(0) - if err != nil { - return err - } - dst.Name = string(b[:len(b)-1]) - - b, err = buf.ReadBytes(0) - if err != nil { - return err - } - dst.Query = string(b[:len(b)-1]) - - if buf.Len() < 2 { - return &invalidMessageFormatErr{messageType: "Parse"} - } - parameterOIDCount := int(binary.BigEndian.Uint16(buf.Next(2))) - - for i := 0; i < parameterOIDCount; i++ { - if buf.Len() < 4 { - return &invalidMessageFormatErr{messageType: "Parse"} - } - dst.ParameterOIDs = append(dst.ParameterOIDs, binary.BigEndian.Uint32(buf.Next(4))) - } - - return nil -} - -// Encode encodes src into dst. dst will include the 1 byte message type identifier and the 4 byte message length. 
-func (src *Parse) Encode(dst []byte) []byte { - dst = append(dst, 'P') - sp := len(dst) - dst = pgio.AppendInt32(dst, -1) - - dst = append(dst, src.Name...) - dst = append(dst, 0) - dst = append(dst, src.Query...) - dst = append(dst, 0) - - dst = pgio.AppendUint16(dst, uint16(len(src.ParameterOIDs))) - for _, oid := range src.ParameterOIDs { - dst = pgio.AppendUint32(dst, oid) - } - - pgio.SetInt32(dst[sp:], int32(len(dst[sp:]))) - - return dst -} - -// MarshalJSON implements encoding/json.Marshaler. -func (src Parse) MarshalJSON() ([]byte, error) { - return json.Marshal(struct { - Type string - Name string - Query string - ParameterOIDs []uint32 - }{ - Type: "Parse", - Name: src.Name, - Query: src.Query, - ParameterOIDs: src.ParameterOIDs, - }) -} diff --git a/vendor/github.com/jackc/pgproto3/v2/parse_complete.go b/vendor/github.com/jackc/pgproto3/v2/parse_complete.go deleted file mode 100644 index 92c9498b..00000000 --- a/vendor/github.com/jackc/pgproto3/v2/parse_complete.go +++ /dev/null @@ -1,34 +0,0 @@ -package pgproto3 - -import ( - "encoding/json" -) - -type ParseComplete struct{} - -// Backend identifies this message as sendable by the PostgreSQL backend. -func (*ParseComplete) Backend() {} - -// Decode decodes src into dst. src must contain the complete message with the exception of the initial 1 byte message -// type identifier and 4 byte message length. -func (dst *ParseComplete) Decode(src []byte) error { - if len(src) != 0 { - return &invalidMessageLenErr{messageType: "ParseComplete", expectedLen: 0, actualLen: len(src)} - } - - return nil -} - -// Encode encodes src into dst. dst will include the 1 byte message type identifier and the 4 byte message length. -func (src *ParseComplete) Encode(dst []byte) []byte { - return append(dst, '1', 0, 0, 0, 4) -} - -// MarshalJSON implements encoding/json.Marshaler. 
-func (src ParseComplete) MarshalJSON() ([]byte, error) { - return json.Marshal(struct { - Type string - }{ - Type: "ParseComplete", - }) -} diff --git a/vendor/github.com/jackc/pgproto3/v2/password_message.go b/vendor/github.com/jackc/pgproto3/v2/password_message.go deleted file mode 100644 index cae76c50..00000000 --- a/vendor/github.com/jackc/pgproto3/v2/password_message.go +++ /dev/null @@ -1,54 +0,0 @@ -package pgproto3 - -import ( - "bytes" - "encoding/json" - - "github.com/jackc/pgio" -) - -type PasswordMessage struct { - Password string -} - -// Frontend identifies this message as sendable by a PostgreSQL frontend. -func (*PasswordMessage) Frontend() {} - -// Frontend identifies this message as an authentication response. -func (*PasswordMessage) InitialResponse() {} - -// Decode decodes src into dst. src must contain the complete message with the exception of the initial 1 byte message -// type identifier and 4 byte message length. -func (dst *PasswordMessage) Decode(src []byte) error { - buf := bytes.NewBuffer(src) - - b, err := buf.ReadBytes(0) - if err != nil { - return err - } - dst.Password = string(b[:len(b)-1]) - - return nil -} - -// Encode encodes src into dst. dst will include the 1 byte message type identifier and the 4 byte message length. -func (src *PasswordMessage) Encode(dst []byte) []byte { - dst = append(dst, 'p') - dst = pgio.AppendInt32(dst, int32(4+len(src.Password)+1)) - - dst = append(dst, src.Password...) - dst = append(dst, 0) - - return dst -} - -// MarshalJSON implements encoding/json.Marshaler. 
-func (src PasswordMessage) MarshalJSON() ([]byte, error) { - return json.Marshal(struct { - Type string - Password string - }{ - Type: "PasswordMessage", - Password: src.Password, - }) -} diff --git a/vendor/github.com/jackc/pgproto3/v2/pgproto3.go b/vendor/github.com/jackc/pgproto3/v2/pgproto3.go deleted file mode 100644 index 70c825e3..00000000 --- a/vendor/github.com/jackc/pgproto3/v2/pgproto3.go +++ /dev/null @@ -1,65 +0,0 @@ -package pgproto3 - -import ( - "encoding/hex" - "errors" - "fmt" -) - -// Message is the interface implemented by an object that can decode and encode -// a particular PostgreSQL message. -type Message interface { - // Decode is allowed and expected to retain a reference to data after - // returning (unlike encoding.BinaryUnmarshaler). - Decode(data []byte) error - - // Encode appends itself to dst and returns the new buffer. - Encode(dst []byte) []byte -} - -type FrontendMessage interface { - Message - Frontend() // no-op method to distinguish frontend from backend methods -} - -type BackendMessage interface { - Message - Backend() // no-op method to distinguish frontend from backend methods -} - -type AuthenticationResponseMessage interface { - BackendMessage - AuthenticationResponse() // no-op method to distinguish authentication responses -} - -type invalidMessageLenErr struct { - messageType string - expectedLen int - actualLen int -} - -func (e *invalidMessageLenErr) Error() string { - return fmt.Sprintf("%s body must have length of %d, but it is %d", e.messageType, e.expectedLen, e.actualLen) -} - -type invalidMessageFormatErr struct { - messageType string -} - -func (e *invalidMessageFormatErr) Error() string { - return fmt.Sprintf("%s body is invalid", e.messageType) -} - -// getValueFromJSON gets the value from a protocol message representation in JSON. 
-func getValueFromJSON(v map[string]string) ([]byte, error) { - if v == nil { - return nil, nil - } - if text, ok := v["text"]; ok { - return []byte(text), nil - } - if binary, ok := v["binary"]; ok { - return hex.DecodeString(binary) - } - return nil, errors.New("unknown protocol representation") -} diff --git a/vendor/github.com/jackc/pgproto3/v2/portal_suspended.go b/vendor/github.com/jackc/pgproto3/v2/portal_suspended.go deleted file mode 100644 index 1a9e7bfb..00000000 --- a/vendor/github.com/jackc/pgproto3/v2/portal_suspended.go +++ /dev/null @@ -1,34 +0,0 @@ -package pgproto3 - -import ( - "encoding/json" -) - -type PortalSuspended struct{} - -// Backend identifies this message as sendable by the PostgreSQL backend. -func (*PortalSuspended) Backend() {} - -// Decode decodes src into dst. src must contain the complete message with the exception of the initial 1 byte message -// type identifier and 4 byte message length. -func (dst *PortalSuspended) Decode(src []byte) error { - if len(src) != 0 { - return &invalidMessageLenErr{messageType: "PortalSuspended", expectedLen: 0, actualLen: len(src)} - } - - return nil -} - -// Encode encodes src into dst. dst will include the 1 byte message type identifier and the 4 byte message length. -func (src *PortalSuspended) Encode(dst []byte) []byte { - return append(dst, 's', 0, 0, 0, 4) -} - -// MarshalJSON implements encoding/json.Marshaler. -func (src PortalSuspended) MarshalJSON() ([]byte, error) { - return json.Marshal(struct { - Type string - }{ - Type: "PortalSuspended", - }) -} diff --git a/vendor/github.com/jackc/pgproto3/v2/query.go b/vendor/github.com/jackc/pgproto3/v2/query.go deleted file mode 100644 index 41c93b4a..00000000 --- a/vendor/github.com/jackc/pgproto3/v2/query.go +++ /dev/null @@ -1,50 +0,0 @@ -package pgproto3 - -import ( - "bytes" - "encoding/json" - - "github.com/jackc/pgio" -) - -type Query struct { - String string -} - -// Frontend identifies this message as sendable by a PostgreSQL frontend. 
-func (*Query) Frontend() {} - -// Decode decodes src into dst. src must contain the complete message with the exception of the initial 1 byte message -// type identifier and 4 byte message length. -func (dst *Query) Decode(src []byte) error { - i := bytes.IndexByte(src, 0) - if i != len(src)-1 { - return &invalidMessageFormatErr{messageType: "Query"} - } - - dst.String = string(src[:i]) - - return nil -} - -// Encode encodes src into dst. dst will include the 1 byte message type identifier and the 4 byte message length. -func (src *Query) Encode(dst []byte) []byte { - dst = append(dst, 'Q') - dst = pgio.AppendInt32(dst, int32(4+len(src.String)+1)) - - dst = append(dst, src.String...) - dst = append(dst, 0) - - return dst -} - -// MarshalJSON implements encoding/json.Marshaler. -func (src Query) MarshalJSON() ([]byte, error) { - return json.Marshal(struct { - Type string - String string - }{ - Type: "Query", - String: src.String, - }) -} diff --git a/vendor/github.com/jackc/pgproto3/v2/ready_for_query.go b/vendor/github.com/jackc/pgproto3/v2/ready_for_query.go deleted file mode 100644 index 67a39be3..00000000 --- a/vendor/github.com/jackc/pgproto3/v2/ready_for_query.go +++ /dev/null @@ -1,61 +0,0 @@ -package pgproto3 - -import ( - "encoding/json" - "errors" -) - -type ReadyForQuery struct { - TxStatus byte -} - -// Backend identifies this message as sendable by the PostgreSQL backend. -func (*ReadyForQuery) Backend() {} - -// Decode decodes src into dst. src must contain the complete message with the exception of the initial 1 byte message -// type identifier and 4 byte message length. -func (dst *ReadyForQuery) Decode(src []byte) error { - if len(src) != 1 { - return &invalidMessageLenErr{messageType: "ReadyForQuery", expectedLen: 1, actualLen: len(src)} - } - - dst.TxStatus = src[0] - - return nil -} - -// Encode encodes src into dst. dst will include the 1 byte message type identifier and the 4 byte message length. 
-func (src *ReadyForQuery) Encode(dst []byte) []byte { - return append(dst, 'Z', 0, 0, 0, 5, src.TxStatus) -} - -// MarshalJSON implements encoding/json.Marshaler. -func (src ReadyForQuery) MarshalJSON() ([]byte, error) { - return json.Marshal(struct { - Type string - TxStatus string - }{ - Type: "ReadyForQuery", - TxStatus: string(src.TxStatus), - }) -} - -// UnmarshalJSON implements encoding/json.Unmarshaler. -func (dst *ReadyForQuery) UnmarshalJSON(data []byte) error { - // Ignore null, like in the main JSON package. - if string(data) == "null" { - return nil - } - - var msg struct { - TxStatus string - } - if err := json.Unmarshal(data, &msg); err != nil { - return err - } - if len(msg.TxStatus) != 1 { - return errors.New("invalid length for ReadyForQuery.TxStatus") - } - dst.TxStatus = msg.TxStatus[0] - return nil -} diff --git a/vendor/github.com/jackc/pgproto3/v2/row_description.go b/vendor/github.com/jackc/pgproto3/v2/row_description.go deleted file mode 100644 index a2e0d28e..00000000 --- a/vendor/github.com/jackc/pgproto3/v2/row_description.go +++ /dev/null @@ -1,165 +0,0 @@ -package pgproto3 - -import ( - "bytes" - "encoding/binary" - "encoding/json" - - "github.com/jackc/pgio" -) - -const ( - TextFormat = 0 - BinaryFormat = 1 -) - -type FieldDescription struct { - Name []byte - TableOID uint32 - TableAttributeNumber uint16 - DataTypeOID uint32 - DataTypeSize int16 - TypeModifier int32 - Format int16 -} - -// MarshalJSON implements encoding/json.Marshaler. 
-func (fd FieldDescription) MarshalJSON() ([]byte, error) { - return json.Marshal(struct { - Name string - TableOID uint32 - TableAttributeNumber uint16 - DataTypeOID uint32 - DataTypeSize int16 - TypeModifier int32 - Format int16 - }{ - Name: string(fd.Name), - TableOID: fd.TableOID, - TableAttributeNumber: fd.TableAttributeNumber, - DataTypeOID: fd.DataTypeOID, - DataTypeSize: fd.DataTypeSize, - TypeModifier: fd.TypeModifier, - Format: fd.Format, - }) -} - -type RowDescription struct { - Fields []FieldDescription -} - -// Backend identifies this message as sendable by the PostgreSQL backend. -func (*RowDescription) Backend() {} - -// Decode decodes src into dst. src must contain the complete message with the exception of the initial 1 byte message -// type identifier and 4 byte message length. -func (dst *RowDescription) Decode(src []byte) error { - - if len(src) < 2 { - return &invalidMessageFormatErr{messageType: "RowDescription"} - } - fieldCount := int(binary.BigEndian.Uint16(src)) - rp := 2 - - dst.Fields = dst.Fields[0:0] - - for i := 0; i < fieldCount; i++ { - var fd FieldDescription - - idx := bytes.IndexByte(src[rp:], 0) - if idx < 0 { - return &invalidMessageFormatErr{messageType: "RowDescription"} - } - fd.Name = src[rp : rp+idx] - rp += idx + 1 - - // Since buf.Next() doesn't return an error if we hit the end of the buffer - // check Len ahead of time - if len(src[rp:]) < 18 { - return &invalidMessageFormatErr{messageType: "RowDescription"} - } - - fd.TableOID = binary.BigEndian.Uint32(src[rp:]) - rp += 4 - fd.TableAttributeNumber = binary.BigEndian.Uint16(src[rp:]) - rp += 2 - fd.DataTypeOID = binary.BigEndian.Uint32(src[rp:]) - rp += 4 - fd.DataTypeSize = int16(binary.BigEndian.Uint16(src[rp:])) - rp += 2 - fd.TypeModifier = int32(binary.BigEndian.Uint32(src[rp:])) - rp += 4 - fd.Format = int16(binary.BigEndian.Uint16(src[rp:])) - rp += 2 - - dst.Fields = append(dst.Fields, fd) - } - - return nil -} - -// Encode encodes src into dst. 
dst will include the 1 byte message type identifier and the 4 byte message length. -func (src *RowDescription) Encode(dst []byte) []byte { - dst = append(dst, 'T') - sp := len(dst) - dst = pgio.AppendInt32(dst, -1) - - dst = pgio.AppendUint16(dst, uint16(len(src.Fields))) - for _, fd := range src.Fields { - dst = append(dst, fd.Name...) - dst = append(dst, 0) - - dst = pgio.AppendUint32(dst, fd.TableOID) - dst = pgio.AppendUint16(dst, fd.TableAttributeNumber) - dst = pgio.AppendUint32(dst, fd.DataTypeOID) - dst = pgio.AppendInt16(dst, fd.DataTypeSize) - dst = pgio.AppendInt32(dst, fd.TypeModifier) - dst = pgio.AppendInt16(dst, fd.Format) - } - - pgio.SetInt32(dst[sp:], int32(len(dst[sp:]))) - - return dst -} - -// MarshalJSON implements encoding/json.Marshaler. -func (src RowDescription) MarshalJSON() ([]byte, error) { - return json.Marshal(struct { - Type string - Fields []FieldDescription - }{ - Type: "RowDescription", - Fields: src.Fields, - }) -} - -// UnmarshalJSON implements encoding/json.Unmarshaler. 
-func (dst *RowDescription) UnmarshalJSON(data []byte) error { - var msg struct { - Fields []struct { - Name string - TableOID uint32 - TableAttributeNumber uint16 - DataTypeOID uint32 - DataTypeSize int16 - TypeModifier int32 - Format int16 - } - } - if err := json.Unmarshal(data, &msg); err != nil { - return err - } - dst.Fields = make([]FieldDescription, len(msg.Fields)) - for n, field := range msg.Fields { - dst.Fields[n] = FieldDescription{ - Name: []byte(field.Name), - TableOID: field.TableOID, - TableAttributeNumber: field.TableAttributeNumber, - DataTypeOID: field.DataTypeOID, - DataTypeSize: field.DataTypeSize, - TypeModifier: field.TypeModifier, - Format: field.Format, - } - } - return nil -} diff --git a/vendor/github.com/jackc/pgproto3/v2/sasl_initial_response.go b/vendor/github.com/jackc/pgproto3/v2/sasl_initial_response.go deleted file mode 100644 index a6b553e7..00000000 --- a/vendor/github.com/jackc/pgproto3/v2/sasl_initial_response.go +++ /dev/null @@ -1,87 +0,0 @@ -package pgproto3 - -import ( - "bytes" - "encoding/json" - "errors" - - "github.com/jackc/pgio" -) - -type SASLInitialResponse struct { - AuthMechanism string - Data []byte -} - -// Frontend identifies this message as sendable by a PostgreSQL frontend. -func (*SASLInitialResponse) Frontend() {} - -// Decode decodes src into dst. src must contain the complete message with the exception of the initial 1 byte message -// type identifier and 4 byte message length. -func (dst *SASLInitialResponse) Decode(src []byte) error { - *dst = SASLInitialResponse{} - - rp := 0 - - idx := bytes.IndexByte(src, 0) - if idx < 0 { - return errors.New("invalid SASLInitialResponse") - } - - dst.AuthMechanism = string(src[rp:idx]) - rp = idx + 1 - - rp += 4 // The rest of the message is data so we can just skip the size - dst.Data = src[rp:] - - return nil -} - -// Encode encodes src into dst. dst will include the 1 byte message type identifier and the 4 byte message length. 
-func (src *SASLInitialResponse) Encode(dst []byte) []byte { - dst = append(dst, 'p') - sp := len(dst) - dst = pgio.AppendInt32(dst, -1) - - dst = append(dst, []byte(src.AuthMechanism)...) - dst = append(dst, 0) - - dst = pgio.AppendInt32(dst, int32(len(src.Data))) - dst = append(dst, src.Data...) - - pgio.SetInt32(dst[sp:], int32(len(dst[sp:]))) - - return dst -} - -// MarshalJSON implements encoding/json.Marshaler. -func (src SASLInitialResponse) MarshalJSON() ([]byte, error) { - return json.Marshal(struct { - Type string - AuthMechanism string - Data string - }{ - Type: "SASLInitialResponse", - AuthMechanism: src.AuthMechanism, - Data: string(src.Data), - }) -} - -// UnmarshalJSON implements encoding/json.Unmarshaler. -func (dst *SASLInitialResponse) UnmarshalJSON(data []byte) error { - // Ignore null, like in the main JSON package. - if string(data) == "null" { - return nil - } - - var msg struct { - AuthMechanism string - Data string - } - if err := json.Unmarshal(data, &msg); err != nil { - return err - } - dst.AuthMechanism = msg.AuthMechanism - dst.Data = []byte(msg.Data) - return nil -} diff --git a/vendor/github.com/jackc/pgproto3/v2/sasl_response.go b/vendor/github.com/jackc/pgproto3/v2/sasl_response.go deleted file mode 100644 index d3e5d6a5..00000000 --- a/vendor/github.com/jackc/pgproto3/v2/sasl_response.go +++ /dev/null @@ -1,54 +0,0 @@ -package pgproto3 - -import ( - "encoding/json" - - "github.com/jackc/pgio" -) - -type SASLResponse struct { - Data []byte -} - -// Frontend identifies this message as sendable by a PostgreSQL frontend. -func (*SASLResponse) Frontend() {} - -// Decode decodes src into dst. src must contain the complete message with the exception of the initial 1 byte message -// type identifier and 4 byte message length. -func (dst *SASLResponse) Decode(src []byte) error { - *dst = SASLResponse{Data: src} - return nil -} - -// Encode encodes src into dst. 
dst will include the 1 byte message type identifier and the 4 byte message length. -func (src *SASLResponse) Encode(dst []byte) []byte { - dst = append(dst, 'p') - dst = pgio.AppendInt32(dst, int32(4+len(src.Data))) - - dst = append(dst, src.Data...) - - return dst -} - -// MarshalJSON implements encoding/json.Marshaler. -func (src SASLResponse) MarshalJSON() ([]byte, error) { - return json.Marshal(struct { - Type string - Data string - }{ - Type: "SASLResponse", - Data: string(src.Data), - }) -} - -// UnmarshalJSON implements encoding/json.Unmarshaler. -func (dst *SASLResponse) UnmarshalJSON(data []byte) error { - var msg struct { - Data string - } - if err := json.Unmarshal(data, &msg); err != nil { - return err - } - dst.Data = []byte(msg.Data) - return nil -} diff --git a/vendor/github.com/jackc/pgproto3/v2/ssl_request.go b/vendor/github.com/jackc/pgproto3/v2/ssl_request.go deleted file mode 100644 index 96ce489e..00000000 --- a/vendor/github.com/jackc/pgproto3/v2/ssl_request.go +++ /dev/null @@ -1,49 +0,0 @@ -package pgproto3 - -import ( - "encoding/binary" - "encoding/json" - "errors" - - "github.com/jackc/pgio" -) - -const sslRequestNumber = 80877103 - -type SSLRequest struct { -} - -// Frontend identifies this message as sendable by a PostgreSQL frontend. -func (*SSLRequest) Frontend() {} - -func (dst *SSLRequest) Decode(src []byte) error { - if len(src) < 4 { - return errors.New("ssl request too short") - } - - requestCode := binary.BigEndian.Uint32(src) - - if requestCode != sslRequestNumber { - return errors.New("bad ssl request code") - } - - return nil -} - -// Encode encodes src into dst. dst will include the 4 byte message length. -func (src *SSLRequest) Encode(dst []byte) []byte { - dst = pgio.AppendInt32(dst, 8) - dst = pgio.AppendInt32(dst, sslRequestNumber) - return dst -} - -// MarshalJSON implements encoding/json.Marshaler. 
-func (src SSLRequest) MarshalJSON() ([]byte, error) { - return json.Marshal(struct { - Type string - ProtocolVersion uint32 - Parameters map[string]string - }{ - Type: "SSLRequest", - }) -} diff --git a/vendor/github.com/jackc/pgproto3/v2/startup_message.go b/vendor/github.com/jackc/pgproto3/v2/startup_message.go deleted file mode 100644 index 5f1cd24f..00000000 --- a/vendor/github.com/jackc/pgproto3/v2/startup_message.go +++ /dev/null @@ -1,96 +0,0 @@ -package pgproto3 - -import ( - "bytes" - "encoding/binary" - "encoding/json" - "errors" - "fmt" - - "github.com/jackc/pgio" -) - -const ProtocolVersionNumber = 196608 // 3.0 - -type StartupMessage struct { - ProtocolVersion uint32 - Parameters map[string]string -} - -// Frontend identifies this message as sendable by a PostgreSQL frontend. -func (*StartupMessage) Frontend() {} - -// Decode decodes src into dst. src must contain the complete message with the exception of the initial 1 byte message -// type identifier and 4 byte message length. -func (dst *StartupMessage) Decode(src []byte) error { - if len(src) < 4 { - return errors.New("startup message too short") - } - - dst.ProtocolVersion = binary.BigEndian.Uint32(src) - rp := 4 - - if dst.ProtocolVersion != ProtocolVersionNumber { - return fmt.Errorf("Bad startup message version number. Expected %d, got %d", ProtocolVersionNumber, dst.ProtocolVersion) - } - - dst.Parameters = make(map[string]string) - for { - idx := bytes.IndexByte(src[rp:], 0) - if idx < 0 { - return &invalidMessageFormatErr{messageType: "StartupMesage"} - } - key := string(src[rp : rp+idx]) - rp += idx + 1 - - idx = bytes.IndexByte(src[rp:], 0) - if idx < 0 { - return &invalidMessageFormatErr{messageType: "StartupMesage"} - } - value := string(src[rp : rp+idx]) - rp += idx + 1 - - dst.Parameters[key] = value - - if len(src[rp:]) == 1 { - if src[rp] != 0 { - return fmt.Errorf("Bad startup message last byte. 
Expected 0, got %d", src[rp]) - } - break - } - } - - return nil -} - -// Encode encodes src into dst. dst will include the 1 byte message type identifier and the 4 byte message length. -func (src *StartupMessage) Encode(dst []byte) []byte { - sp := len(dst) - dst = pgio.AppendInt32(dst, -1) - - dst = pgio.AppendUint32(dst, src.ProtocolVersion) - for k, v := range src.Parameters { - dst = append(dst, k...) - dst = append(dst, 0) - dst = append(dst, v...) - dst = append(dst, 0) - } - dst = append(dst, 0) - - pgio.SetInt32(dst[sp:], int32(len(dst[sp:]))) - - return dst -} - -// MarshalJSON implements encoding/json.Marshaler. -func (src StartupMessage) MarshalJSON() ([]byte, error) { - return json.Marshal(struct { - Type string - ProtocolVersion uint32 - Parameters map[string]string - }{ - Type: "StartupMessage", - ProtocolVersion: src.ProtocolVersion, - Parameters: src.Parameters, - }) -} diff --git a/vendor/github.com/jackc/pgproto3/v2/sync.go b/vendor/github.com/jackc/pgproto3/v2/sync.go deleted file mode 100644 index 5db8e07a..00000000 --- a/vendor/github.com/jackc/pgproto3/v2/sync.go +++ /dev/null @@ -1,34 +0,0 @@ -package pgproto3 - -import ( - "encoding/json" -) - -type Sync struct{} - -// Frontend identifies this message as sendable by a PostgreSQL frontend. -func (*Sync) Frontend() {} - -// Decode decodes src into dst. src must contain the complete message with the exception of the initial 1 byte message -// type identifier and 4 byte message length. -func (dst *Sync) Decode(src []byte) error { - if len(src) != 0 { - return &invalidMessageLenErr{messageType: "Sync", expectedLen: 0, actualLen: len(src)} - } - - return nil -} - -// Encode encodes src into dst. dst will include the 1 byte message type identifier and the 4 byte message length. -func (src *Sync) Encode(dst []byte) []byte { - return append(dst, 'S', 0, 0, 0, 4) -} - -// MarshalJSON implements encoding/json.Marshaler. 
-func (src Sync) MarshalJSON() ([]byte, error) { - return json.Marshal(struct { - Type string - }{ - Type: "Sync", - }) -} diff --git a/vendor/github.com/jackc/pgproto3/v2/terminate.go b/vendor/github.com/jackc/pgproto3/v2/terminate.go deleted file mode 100644 index 135191ea..00000000 --- a/vendor/github.com/jackc/pgproto3/v2/terminate.go +++ /dev/null @@ -1,34 +0,0 @@ -package pgproto3 - -import ( - "encoding/json" -) - -type Terminate struct{} - -// Frontend identifies this message as sendable by a PostgreSQL frontend. -func (*Terminate) Frontend() {} - -// Decode decodes src into dst. src must contain the complete message with the exception of the initial 1 byte message -// type identifier and 4 byte message length. -func (dst *Terminate) Decode(src []byte) error { - if len(src) != 0 { - return &invalidMessageLenErr{messageType: "Terminate", expectedLen: 0, actualLen: len(src)} - } - - return nil -} - -// Encode encodes src into dst. dst will include the 1 byte message type identifier and the 4 byte message length. -func (src *Terminate) Encode(dst []byte) []byte { - return append(dst, 'X', 0, 0, 0, 4) -} - -// MarshalJSON implements encoding/json.Marshaler. 
-func (src Terminate) MarshalJSON() ([]byte, error) { - return json.Marshal(struct { - Type string - }{ - Type: "Terminate", - }) -} diff --git a/vendor/github.com/jackc/pgtype/CHANGELOG.md b/vendor/github.com/jackc/pgtype/CHANGELOG.md deleted file mode 100644 index a362a1df..00000000 --- a/vendor/github.com/jackc/pgtype/CHANGELOG.md +++ /dev/null @@ -1,164 +0,0 @@ -# 1.14.0 (February 11, 2023) - -* Fix: BC timestamp text format support (jozeflami) -* Add Scanner and Valuer interfaces to CIDR (Yurii Popivniak) -* Fix crash when nilifying pointer to sql.Scanner - -# 1.13.0 (December 1, 2022) - -* Fix: Reset jsonb before unmarshal (Tomas Odinas) -* Fix: return correct zero value when UUID conversion fails (ndrpnt) -* Fix: EncodeText for Lseg includes [ and ] -* Support sql Value and Scan for custom date type (Hubert Krauze) -* Support Ltree binary encoding (AmineChikhaoui) -* Fix: dates with "BC" (jozeflami) - -# 1.12.0 (August 6, 2022) - -* Add JSONArray (Jakob Ackermann) -* Support Inet from fmt.Stringer and encoding.TextMarshaler (Ville Skyttä) -* Support UUID from fmt.Stringer interface (Lasse Hyldahl Jensen) -* Fix: shopspring-numeric extension does not panic on NaN -* Numeric can be assigned to string -* Fix: Do not send IPv4 networks as IPv4-mapped IPv6 (William Storey) -* Fix: PlanScan for interface{}(nil) (James Hartig) -* Fix: *sql.Scanner for NULL handling (James Hartig) -* Timestamp[tz].Set() supports string (Harmen) -* Fix: Hstore AssignTo with map of *string (Diego Becciolini) - -# 1.11.0 (April 21, 2022) - -* Add multirange for numeric, int4, and int8 (Vu) -* JSONBArray now supports json.RawMessage (Jens Emil Schulz Østergaard) -* Add RecordArray (WGH) -* Add UnmarshalJSON to pgtype.Int2 -* Hstore.Set accepts map[string]Text - -# 1.10.0 (February 7, 2022) - -* Normalize UTC timestamps to comply with stdlib (Torkel Rogstad) -* Assign Numeric to *big.Rat (Oleg Lomaka) -* Fix typo in float8 error message (Pinank Solanki) -* Scan type aliases for floating 
point types (Collin Forsyth) - -# 1.9.1 (November 28, 2021) - -* Fix: binary timestamp is assumed to be in UTC (restored behavior changed in v1.9.0) - -# 1.9.0 (November 20, 2021) - -* Fix binary hstore null decoding -* Add shopspring/decimal.NullDecimal support to integration (Eli Treuherz) -* Inet.Set supports bare IP address (Carl Dunham) -* Add zeronull.Float8 -* Fix NULL being lost when scanning unknown OID into sql.Scanner -* Fix BPChar.AssignTo **rune -* Add support for fmt.Stringer and driver.Valuer in String fields encoding (Jan Dubsky) -* Fix really big timestamp(tz)s binary format parsing (e.g. year 294276) (Jim Tsao) -* Support `map[string]*string` as hstore (Adrian Sieger) -* Fix parsing text array with negative bounds -* Add infinity support for numeric (Jim Tsao) - -# 1.8.1 (July 24, 2021) - -* Cleaned up Go module dependency chain - -# 1.8.0 (July 10, 2021) - -* Maintain host bits for inet types (Cameron Daniel) -* Support pointers of wrapping structs (Ivan Daunis) -* Register JSONBArray at NewConnInfo() (Rueian) -* CompositeTextScanner handles backslash escapes - -# 1.7.0 (March 25, 2021) - -* Fix scanning int into **sql.Scanner implementor -* Add tsrange array type (Vasilii Novikov) -* Fix: escaped strings when they start or end with a newline char (Stephane Martin) -* Accept nil *time.Time in Time.Set -* Fix numeric NaN support -* Use Go 1.13 errors instead of xerrors - -# 1.6.2 (December 3, 2020) - -* Fix panic on assigning empty array to non-slice or array -* Fix text array parsing disambiguates NULL and "NULL" -* Fix Timestamptz.DecodeText with too short text - -# 1.6.1 (October 31, 2020) - -* Fix simple protocol empty array support - -# 1.6.0 (October 24, 2020) - -* Fix AssignTo pointer to pointer to slice and named types. 
-* Fix zero length array assignment (Simo Haasanen) -* Add float64, float32 convert to int2, int4, int8 (lqu3j) -* Support setting infinite timestamps (Erik Agsjö) -* Polygon improvements (duohedron) -* Fix Inet.Set with nil (Tomas Volf) - -# 1.5.0 (September 26, 2020) - -* Add slice of slice mapping to multi-dimensional arrays (Simo Haasanen) -* Fix JSONBArray -* Fix selecting empty array -* Text formatted values except bytea can be directly scanned to []byte -* Add JSON marshalling for UUID (bakmataliev) -* Improve point type conversions (bakmataliev) - -# 1.4.2 (July 22, 2020) - -* Fix encoding of a large composite data type (Yaz Saito) - -# 1.4.1 (July 14, 2020) - -* Fix ArrayType DecodeBinary empty array breaks future reads - -# 1.4.0 (June 27, 2020) - -* Add JSON support to ext/gofrs-uuid -* Performance improvements in Scan path -* Improved ext/shopspring-numeric binary decoding performance -* Add composite type support (Maxim Ivanov and Jack Christensen) -* Add better generic enum type support -* Add generic array type support -* Clarify and normalize Value semantics -* Fix hstore with empty string values -* Numeric supports NaN values (leighhopcroft) -* Add slice of pointer support to array types (megaturbo) -* Add jsonb array type (tserakhau) -* Allow converting intervals with months and days to duration - -# 1.3.0 (March 30, 2020) - -* Get implemented on T instead of *T -* Set will call Get on src if possible -* Range types Set method supports its own type, string, and nil -* Date.Set parses string -* Fix correct format verb for unknown type error (Robert Welin) -* Truncate nanoseconds in EncodeText for Timestamptz and Timestamp - -# 1.2.0 (February 5, 2020) - -* Add zeronull package for easier NULL <-> zero conversion -* Add JSON marshalling for shopspring-numeric extension -* Add JSON marshalling for Bool, Date, JSON/B, Timestamptz (Jeffrey Stiles) -* Fix null status in UnmarshalJSON for some types (Jeffrey Stiles) - -# 1.1.0 (January 11, 2020) - -* Add 
PostgreSQL time type support -* Add more automatic conversions of integer arrays of different types (Jean-Philippe Quéméner) - -# 1.0.3 (November 16, 2019) - -* Support initializing Array types from a slice of the value (Alex Gaynor) - -# 1.0.2 (October 22, 2019) - -* Fix scan into null into pointer to pointer implementing Decode* interface. (Jeremy Altavilla) - -# 1.0.1 (September 19, 2019) - -* Fix daterange OID diff --git a/vendor/github.com/jackc/pgtype/LICENSE b/vendor/github.com/jackc/pgtype/LICENSE deleted file mode 100644 index 5c486c39..00000000 --- a/vendor/github.com/jackc/pgtype/LICENSE +++ /dev/null @@ -1,22 +0,0 @@ -Copyright (c) 2013-2021 Jack Christensen - -MIT License - -Permission is hereby granted, free of charge, to any person obtaining -a copy of this software and associated documentation files (the -"Software"), to deal in the Software without restriction, including -without limitation the rights to use, copy, modify, merge, publish, -distribute, sublicense, and/or sell copies of the Software, and to -permit persons to whom the Software is furnished to do so, subject to -the following conditions: - -The above copyright notice and this permission notice shall be -included in all copies or substantial portions of the Software. - -THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, -EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF -MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND -NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE -LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION -OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION -WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE. 
diff --git a/vendor/github.com/jackc/pgtype/README.md b/vendor/github.com/jackc/pgtype/README.md deleted file mode 100644 index 72dadcfc..00000000 --- a/vendor/github.com/jackc/pgtype/README.md +++ /dev/null @@ -1,14 +0,0 @@ -[![](https://godoc.org/github.com/jackc/pgtype?status.svg)](https://godoc.org/github.com/jackc/pgtype) -![CI](https://github.com/jackc/pgtype/workflows/CI/badge.svg) - ---- - -This version is used with pgx `v4`. In pgx `v5` it is part of the https://github.com/jackc/pgx repository. - ---- - -# pgtype - -pgtype implements Go types for over 70 PostgreSQL types. pgtype is the type system underlying the -https://github.com/jackc/pgx PostgreSQL driver. These types support the binary format for enhanced performance with pgx. -They also support the database/sql `Scan` and `Value` interfaces and can be used with https://github.com/lib/pq. diff --git a/vendor/github.com/jackc/pgtype/aclitem.go b/vendor/github.com/jackc/pgtype/aclitem.go deleted file mode 100644 index 9f6587be..00000000 --- a/vendor/github.com/jackc/pgtype/aclitem.go +++ /dev/null @@ -1,138 +0,0 @@ -package pgtype - -import ( - "database/sql/driver" - "fmt" -) - -// ACLItem is used for PostgreSQL's aclitem data type. 
A sample aclitem -// might look like this: -// -// postgres=arwdDxt/postgres -// -// Note, however, that because the user/role name part of an aclitem is -// an identifier, it follows all the usual formatting rules for SQL -// identifiers: if it contains spaces and other special characters, -// it should appear in double-quotes: -// -// postgres=arwdDxt/"role with spaces" -// -type ACLItem struct { - String string - Status Status -} - -func (dst *ACLItem) Set(src interface{}) error { - if src == nil { - *dst = ACLItem{Status: Null} - return nil - } - - if value, ok := src.(interface{ Get() interface{} }); ok { - value2 := value.Get() - if value2 != value { - return dst.Set(value2) - } - } - - switch value := src.(type) { - case string: - *dst = ACLItem{String: value, Status: Present} - case *string: - if value == nil { - *dst = ACLItem{Status: Null} - } else { - *dst = ACLItem{String: *value, Status: Present} - } - default: - if originalSrc, ok := underlyingStringType(src); ok { - return dst.Set(originalSrc) - } - return fmt.Errorf("cannot convert %v to ACLItem", value) - } - - return nil -} - -func (dst ACLItem) Get() interface{} { - switch dst.Status { - case Present: - return dst.String - case Null: - return nil - default: - return dst.Status - } -} - -func (src *ACLItem) AssignTo(dst interface{}) error { - switch src.Status { - case Present: - switch v := dst.(type) { - case *string: - *v = src.String - return nil - default: - if nextDst, retry := GetAssignToDstType(dst); retry { - return src.AssignTo(nextDst) - } - return fmt.Errorf("unable to assign to %T", dst) - } - case Null: - return NullAssignTo(dst) - } - - return fmt.Errorf("cannot decode %#v into %T", src, dst) -} - -func (dst *ACLItem) DecodeText(ci *ConnInfo, src []byte) error { - if src == nil { - *dst = ACLItem{Status: Null} - return nil - } - - *dst = ACLItem{String: string(src), Status: Present} - return nil -} - -func (src ACLItem) EncodeText(ci *ConnInfo, buf []byte) ([]byte, error) { - switch 
src.Status { - case Null: - return nil, nil - case Undefined: - return nil, errUndefined - } - - return append(buf, src.String...), nil -} - -// Scan implements the database/sql Scanner interface. -func (dst *ACLItem) Scan(src interface{}) error { - if src == nil { - *dst = ACLItem{Status: Null} - return nil - } - - switch src := src.(type) { - case string: - return dst.DecodeText(nil, []byte(src)) - case []byte: - srcCopy := make([]byte, len(src)) - copy(srcCopy, src) - return dst.DecodeText(nil, srcCopy) - } - - return fmt.Errorf("cannot scan %T", src) -} - -// Value implements the database/sql/driver Valuer interface. -func (src ACLItem) Value() (driver.Value, error) { - switch src.Status { - case Present: - return src.String, nil - case Null: - return nil, nil - default: - return nil, errUndefined - } -} diff --git a/vendor/github.com/jackc/pgtype/aclitem_array.go b/vendor/github.com/jackc/pgtype/aclitem_array.go deleted file mode 100644 index 4e3be3bd..00000000 --- a/vendor/github.com/jackc/pgtype/aclitem_array.go +++ /dev/null @@ -1,428 +0,0 @@ -// Code generated by erb. DO NOT EDIT. 
- -package pgtype - -import ( - "database/sql/driver" - "fmt" - "reflect" -) - -type ACLItemArray struct { - Elements []ACLItem - Dimensions []ArrayDimension - Status Status -} - -func (dst *ACLItemArray) Set(src interface{}) error { - // untyped nil and typed nil interfaces are different - if src == nil { - *dst = ACLItemArray{Status: Null} - return nil - } - - if value, ok := src.(interface{ Get() interface{} }); ok { - value2 := value.Get() - if value2 != value { - return dst.Set(value2) - } - } - - // Attempt to match to select common types: - switch value := src.(type) { - - case []string: - if value == nil { - *dst = ACLItemArray{Status: Null} - } else if len(value) == 0 { - *dst = ACLItemArray{Status: Present} - } else { - elements := make([]ACLItem, len(value)) - for i := range value { - if err := elements[i].Set(value[i]); err != nil { - return err - } - } - *dst = ACLItemArray{ - Elements: elements, - Dimensions: []ArrayDimension{{Length: int32(len(elements)), LowerBound: 1}}, - Status: Present, - } - } - - case []*string: - if value == nil { - *dst = ACLItemArray{Status: Null} - } else if len(value) == 0 { - *dst = ACLItemArray{Status: Present} - } else { - elements := make([]ACLItem, len(value)) - for i := range value { - if err := elements[i].Set(value[i]); err != nil { - return err - } - } - *dst = ACLItemArray{ - Elements: elements, - Dimensions: []ArrayDimension{{Length: int32(len(elements)), LowerBound: 1}}, - Status: Present, - } - } - - case []ACLItem: - if value == nil { - *dst = ACLItemArray{Status: Null} - } else if len(value) == 0 { - *dst = ACLItemArray{Status: Present} - } else { - *dst = ACLItemArray{ - Elements: value, - Dimensions: []ArrayDimension{{Length: int32(len(value)), LowerBound: 1}}, - Status: Present, - } - } - default: - // Fallback to reflection if an optimised match was not found. 
- // The reflection is necessary for arrays and multidimensional slices, - // but it comes with a 20-50% performance penalty for large arrays/slices - reflectedValue := reflect.ValueOf(src) - if !reflectedValue.IsValid() || reflectedValue.IsZero() { - *dst = ACLItemArray{Status: Null} - return nil - } - - dimensions, elementsLength, ok := findDimensionsFromValue(reflectedValue, nil, 0) - if !ok { - return fmt.Errorf("cannot find dimensions of %v for ACLItemArray", src) - } - if elementsLength == 0 { - *dst = ACLItemArray{Status: Present} - return nil - } - if len(dimensions) == 0 { - if originalSrc, ok := underlyingSliceType(src); ok { - return dst.Set(originalSrc) - } - return fmt.Errorf("cannot convert %v to ACLItemArray", src) - } - - *dst = ACLItemArray{ - Elements: make([]ACLItem, elementsLength), - Dimensions: dimensions, - Status: Present, - } - elementCount, err := dst.setRecursive(reflectedValue, 0, 0) - if err != nil { - // Maybe the target was one dimension too far, try again: - if len(dst.Dimensions) > 1 { - dst.Dimensions = dst.Dimensions[:len(dst.Dimensions)-1] - elementsLength = 0 - for _, dim := range dst.Dimensions { - if elementsLength == 0 { - elementsLength = int(dim.Length) - } else { - elementsLength *= int(dim.Length) - } - } - dst.Elements = make([]ACLItem, elementsLength) - elementCount, err = dst.setRecursive(reflectedValue, 0, 0) - if err != nil { - return err - } - } else { - return err - } - } - if elementCount != len(dst.Elements) { - return fmt.Errorf("cannot convert %v to ACLItemArray, expected %d dst.Elements, but got %d instead", src, len(dst.Elements), elementCount) - } - } - - return nil -} - -func (dst *ACLItemArray) setRecursive(value reflect.Value, index, dimension int) (int, error) { - switch value.Kind() { - case reflect.Array: - fallthrough - case reflect.Slice: - if len(dst.Dimensions) == dimension { - break - } - - valueLen := value.Len() - if int32(valueLen) != dst.Dimensions[dimension].Length { - return 0, 
fmt.Errorf("multidimensional arrays must have array expressions with matching dimensions") - } - for i := 0; i < valueLen; i++ { - var err error - index, err = dst.setRecursive(value.Index(i), index, dimension+1) - if err != nil { - return 0, err - } - } - - return index, nil - } - if !value.CanInterface() { - return 0, fmt.Errorf("cannot convert all values to ACLItemArray") - } - if err := dst.Elements[index].Set(value.Interface()); err != nil { - return 0, fmt.Errorf("%v in ACLItemArray", err) - } - index++ - - return index, nil -} - -func (dst ACLItemArray) Get() interface{} { - switch dst.Status { - case Present: - return dst - case Null: - return nil - default: - return dst.Status - } -} - -func (src *ACLItemArray) AssignTo(dst interface{}) error { - switch src.Status { - case Present: - if len(src.Dimensions) <= 1 { - // Attempt to match to select common types: - switch v := dst.(type) { - - case *[]string: - *v = make([]string, len(src.Elements)) - for i := range src.Elements { - if err := src.Elements[i].AssignTo(&((*v)[i])); err != nil { - return err - } - } - return nil - - case *[]*string: - *v = make([]*string, len(src.Elements)) - for i := range src.Elements { - if err := src.Elements[i].AssignTo(&((*v)[i])); err != nil { - return err - } - } - return nil - - } - } - - // Try to convert to something AssignTo can use directly. - if nextDst, retry := GetAssignToDstType(dst); retry { - return src.AssignTo(nextDst) - } - - // Fallback to reflection if an optimised match was not found. 
- // The reflection is necessary for arrays and multidimensional slices, - // but it comes with a 20-50% performance penalty for large arrays/slices - value := reflect.ValueOf(dst) - if value.Kind() == reflect.Ptr { - value = value.Elem() - } - - switch value.Kind() { - case reflect.Array, reflect.Slice: - default: - return fmt.Errorf("cannot assign %T to %T", src, dst) - } - - if len(src.Elements) == 0 { - if value.Kind() == reflect.Slice { - value.Set(reflect.MakeSlice(value.Type(), 0, 0)) - return nil - } - } - - elementCount, err := src.assignToRecursive(value, 0, 0) - if err != nil { - return err - } - if elementCount != len(src.Elements) { - return fmt.Errorf("cannot assign %v, needed to assign %d elements, but only assigned %d", dst, len(src.Elements), elementCount) - } - - return nil - case Null: - return NullAssignTo(dst) - } - - return fmt.Errorf("cannot decode %#v into %T", src, dst) -} - -func (src *ACLItemArray) assignToRecursive(value reflect.Value, index, dimension int) (int, error) { - switch kind := value.Kind(); kind { - case reflect.Array: - fallthrough - case reflect.Slice: - if len(src.Dimensions) == dimension { - break - } - - length := int(src.Dimensions[dimension].Length) - if reflect.Array == kind { - typ := value.Type() - if typ.Len() != length { - return 0, fmt.Errorf("expected size %d array, but %s has size %d array", length, typ, typ.Len()) - } - value.Set(reflect.New(typ).Elem()) - } else { - value.Set(reflect.MakeSlice(value.Type(), length, length)) - } - - var err error - for i := 0; i < length; i++ { - index, err = src.assignToRecursive(value.Index(i), index, dimension+1) - if err != nil { - return 0, err - } - } - - return index, nil - } - if len(src.Dimensions) != dimension { - return 0, fmt.Errorf("incorrect dimensions, expected %d, found %d", len(src.Dimensions), dimension) - } - if !value.CanAddr() { - return 0, fmt.Errorf("cannot assign all values from ACLItemArray") - } - addr := value.Addr() - if !addr.CanInterface() { - 
return 0, fmt.Errorf("cannot assign all values from ACLItemArray") - } - if err := src.Elements[index].AssignTo(addr.Interface()); err != nil { - return 0, err - } - index++ - return index, nil -} - -func (dst *ACLItemArray) DecodeText(ci *ConnInfo, src []byte) error { - if src == nil { - *dst = ACLItemArray{Status: Null} - return nil - } - - uta, err := ParseUntypedTextArray(string(src)) - if err != nil { - return err - } - - var elements []ACLItem - - if len(uta.Elements) > 0 { - elements = make([]ACLItem, len(uta.Elements)) - - for i, s := range uta.Elements { - var elem ACLItem - var elemSrc []byte - if s != "NULL" || uta.Quoted[i] { - elemSrc = []byte(s) - } - err = elem.DecodeText(ci, elemSrc) - if err != nil { - return err - } - - elements[i] = elem - } - } - - *dst = ACLItemArray{Elements: elements, Dimensions: uta.Dimensions, Status: Present} - - return nil -} - -func (src ACLItemArray) EncodeText(ci *ConnInfo, buf []byte) ([]byte, error) { - switch src.Status { - case Null: - return nil, nil - case Undefined: - return nil, errUndefined - } - - if len(src.Dimensions) == 0 { - return append(buf, '{', '}'), nil - } - - buf = EncodeTextArrayDimensions(buf, src.Dimensions) - - // dimElemCounts is the multiples of elements that each array lies on. For - // example, a single dimension array of length 4 would have a dimElemCounts of - // [4]. A multi-dimensional array of lengths [3,5,2] would have a - // dimElemCounts of [30,10,2]. This is used to simplify when to render a '{' - // or '}'. 
- dimElemCounts := make([]int, len(src.Dimensions)) - dimElemCounts[len(src.Dimensions)-1] = int(src.Dimensions[len(src.Dimensions)-1].Length) - for i := len(src.Dimensions) - 2; i > -1; i-- { - dimElemCounts[i] = int(src.Dimensions[i].Length) * dimElemCounts[i+1] - } - - inElemBuf := make([]byte, 0, 32) - for i, elem := range src.Elements { - if i > 0 { - buf = append(buf, ',') - } - - for _, dec := range dimElemCounts { - if i%dec == 0 { - buf = append(buf, '{') - } - } - - elemBuf, err := elem.EncodeText(ci, inElemBuf) - if err != nil { - return nil, err - } - if elemBuf == nil { - buf = append(buf, `NULL`...) - } else { - buf = append(buf, QuoteArrayElementIfNeeded(string(elemBuf))...) - } - - for _, dec := range dimElemCounts { - if (i+1)%dec == 0 { - buf = append(buf, '}') - } - } - } - - return buf, nil -} - -// Scan implements the database/sql Scanner interface. -func (dst *ACLItemArray) Scan(src interface{}) error { - if src == nil { - return dst.DecodeText(nil, nil) - } - - switch src := src.(type) { - case string: - return dst.DecodeText(nil, []byte(src)) - case []byte: - srcCopy := make([]byte, len(src)) - copy(srcCopy, src) - return dst.DecodeText(nil, srcCopy) - } - - return fmt.Errorf("cannot scan %T", src) -} - -// Value implements the database/sql/driver Valuer interface. 
-func (src ACLItemArray) Value() (driver.Value, error) { - buf, err := src.EncodeText(nil, nil) - if err != nil { - return nil, err - } - if buf == nil { - return nil, nil - } - - return string(buf), nil -} diff --git a/vendor/github.com/jackc/pgtype/array.go b/vendor/github.com/jackc/pgtype/array.go deleted file mode 100644 index 174007c1..00000000 --- a/vendor/github.com/jackc/pgtype/array.go +++ /dev/null @@ -1,381 +0,0 @@ -package pgtype - -import ( - "bytes" - "encoding/binary" - "fmt" - "io" - "reflect" - "strconv" - "strings" - "unicode" - - "github.com/jackc/pgio" -) - -// Information on the internals of PostgreSQL arrays can be found in -// src/include/utils/array.h and src/backend/utils/adt/arrayfuncs.c. Of -// particular interest is the array_send function. - -type ArrayHeader struct { - ContainsNull bool - ElementOID int32 - Dimensions []ArrayDimension -} - -type ArrayDimension struct { - Length int32 - LowerBound int32 -} - -func (dst *ArrayHeader) DecodeBinary(ci *ConnInfo, src []byte) (int, error) { - if len(src) < 12 { - return 0, fmt.Errorf("array header too short: %d", len(src)) - } - - rp := 0 - - numDims := int(binary.BigEndian.Uint32(src[rp:])) - rp += 4 - - dst.ContainsNull = binary.BigEndian.Uint32(src[rp:]) == 1 - rp += 4 - - dst.ElementOID = int32(binary.BigEndian.Uint32(src[rp:])) - rp += 4 - - if numDims > 0 { - dst.Dimensions = make([]ArrayDimension, numDims) - } - if len(src) < 12+numDims*8 { - return 0, fmt.Errorf("array header too short for %d dimensions: %d", numDims, len(src)) - } - for i := range dst.Dimensions { - dst.Dimensions[i].Length = int32(binary.BigEndian.Uint32(src[rp:])) - rp += 4 - - dst.Dimensions[i].LowerBound = int32(binary.BigEndian.Uint32(src[rp:])) - rp += 4 - } - - return rp, nil -} - -func (src ArrayHeader) EncodeBinary(ci *ConnInfo, buf []byte) []byte { - buf = pgio.AppendInt32(buf, int32(len(src.Dimensions))) - - var containsNull int32 - if src.ContainsNull { - containsNull = 1 - } - buf = 
pgio.AppendInt32(buf, containsNull) - - buf = pgio.AppendInt32(buf, src.ElementOID) - - for i := range src.Dimensions { - buf = pgio.AppendInt32(buf, src.Dimensions[i].Length) - buf = pgio.AppendInt32(buf, src.Dimensions[i].LowerBound) - } - - return buf -} - -type UntypedTextArray struct { - Elements []string - Quoted []bool - Dimensions []ArrayDimension -} - -func ParseUntypedTextArray(src string) (*UntypedTextArray, error) { - dst := &UntypedTextArray{} - - buf := bytes.NewBufferString(src) - - skipWhitespace(buf) - - r, _, err := buf.ReadRune() - if err != nil { - return nil, fmt.Errorf("invalid array: %v", err) - } - - var explicitDimensions []ArrayDimension - - // Array has explicit dimensions - if r == '[' { - buf.UnreadRune() - - for { - r, _, err = buf.ReadRune() - if err != nil { - return nil, fmt.Errorf("invalid array: %v", err) - } - - if r == '=' { - break - } else if r != '[' { - return nil, fmt.Errorf("invalid array, expected '[' or '=' got %v", r) - } - - lower, err := arrayParseInteger(buf) - if err != nil { - return nil, fmt.Errorf("invalid array: %v", err) - } - - r, _, err = buf.ReadRune() - if err != nil { - return nil, fmt.Errorf("invalid array: %v", err) - } - - if r != ':' { - return nil, fmt.Errorf("invalid array, expected ':' got %v", r) - } - - upper, err := arrayParseInteger(buf) - if err != nil { - return nil, fmt.Errorf("invalid array: %v", err) - } - - r, _, err = buf.ReadRune() - if err != nil { - return nil, fmt.Errorf("invalid array: %v", err) - } - - if r != ']' { - return nil, fmt.Errorf("invalid array, expected ']' got %v", r) - } - - explicitDimensions = append(explicitDimensions, ArrayDimension{LowerBound: lower, Length: upper - lower + 1}) - } - - r, _, err = buf.ReadRune() - if err != nil { - return nil, fmt.Errorf("invalid array: %v", err) - } - } - - if r != '{' { - return nil, fmt.Errorf("invalid array, expected '{': %v", err) - } - - implicitDimensions := []ArrayDimension{{LowerBound: 1, Length: 0}} - - // Consume all 
initial opening brackets. This provides number of dimensions. - for { - r, _, err = buf.ReadRune() - if err != nil { - return nil, fmt.Errorf("invalid array: %v", err) - } - - if r == '{' { - implicitDimensions[len(implicitDimensions)-1].Length = 1 - implicitDimensions = append(implicitDimensions, ArrayDimension{LowerBound: 1}) - } else { - buf.UnreadRune() - break - } - } - currentDim := len(implicitDimensions) - 1 - counterDim := currentDim - - for { - r, _, err = buf.ReadRune() - if err != nil { - return nil, fmt.Errorf("invalid array: %v", err) - } - - switch r { - case '{': - if currentDim == counterDim { - implicitDimensions[currentDim].Length++ - } - currentDim++ - case ',': - case '}': - currentDim-- - if currentDim < counterDim { - counterDim = currentDim - } - default: - buf.UnreadRune() - value, quoted, err := arrayParseValue(buf) - if err != nil { - return nil, fmt.Errorf("invalid array value: %v", err) - } - if currentDim == counterDim { - implicitDimensions[currentDim].Length++ - } - dst.Quoted = append(dst.Quoted, quoted) - dst.Elements = append(dst.Elements, value) - } - - if currentDim < 0 { - break - } - } - - skipWhitespace(buf) - - if buf.Len() > 0 { - return nil, fmt.Errorf("unexpected trailing data: %v", buf.String()) - } - - if len(dst.Elements) == 0 { - dst.Dimensions = nil - } else if len(explicitDimensions) > 0 { - dst.Dimensions = explicitDimensions - } else { - dst.Dimensions = implicitDimensions - } - - return dst, nil -} - -func skipWhitespace(buf *bytes.Buffer) { - var r rune - var err error - for r, _, _ = buf.ReadRune(); unicode.IsSpace(r); r, _, _ = buf.ReadRune() { - } - - if err != io.EOF { - buf.UnreadRune() - } -} - -func arrayParseValue(buf *bytes.Buffer) (string, bool, error) { - r, _, err := buf.ReadRune() - if err != nil { - return "", false, err - } - if r == '"' { - return arrayParseQuotedValue(buf) - } - buf.UnreadRune() - - s := &bytes.Buffer{} - - for { - r, _, err := buf.ReadRune() - if err != nil { - return "", 
false, err - } - - switch r { - case ',', '}': - buf.UnreadRune() - return s.String(), false, nil - } - - s.WriteRune(r) - } -} - -func arrayParseQuotedValue(buf *bytes.Buffer) (string, bool, error) { - s := &bytes.Buffer{} - - for { - r, _, err := buf.ReadRune() - if err != nil { - return "", false, err - } - - switch r { - case '\\': - r, _, err = buf.ReadRune() - if err != nil { - return "", false, err - } - case '"': - r, _, err = buf.ReadRune() - if err != nil { - return "", false, err - } - buf.UnreadRune() - return s.String(), true, nil - } - s.WriteRune(r) - } -} - -func arrayParseInteger(buf *bytes.Buffer) (int32, error) { - s := &bytes.Buffer{} - - for { - r, _, err := buf.ReadRune() - if err != nil { - return 0, err - } - - if ('0' <= r && r <= '9') || r == '-' { - s.WriteRune(r) - } else { - buf.UnreadRune() - n, err := strconv.ParseInt(s.String(), 10, 32) - if err != nil { - return 0, err - } - return int32(n), nil - } - } -} - -func EncodeTextArrayDimensions(buf []byte, dimensions []ArrayDimension) []byte { - var customDimensions bool - for _, dim := range dimensions { - if dim.LowerBound != 1 { - customDimensions = true - } - } - - if !customDimensions { - return buf - } - - for _, dim := range dimensions { - buf = append(buf, '[') - buf = append(buf, strconv.FormatInt(int64(dim.LowerBound), 10)...) - buf = append(buf, ':') - buf = append(buf, strconv.FormatInt(int64(dim.LowerBound+dim.Length-1), 10)...) 
- buf = append(buf, ']') - } - - return append(buf, '=') -} - -var quoteArrayReplacer = strings.NewReplacer(`\`, `\\`, `"`, `\"`) - -func quoteArrayElement(src string) string { - return `"` + quoteArrayReplacer.Replace(src) + `"` -} - -func isSpace(ch byte) bool { - // see https://github.com/postgres/postgres/blob/REL_12_STABLE/src/backend/parser/scansup.c#L224 - return ch == ' ' || ch == '\t' || ch == '\n' || ch == '\r' || ch == '\f' -} - -func QuoteArrayElementIfNeeded(src string) string { - if src == "" || (len(src) == 4 && strings.ToLower(src) == "null") || isSpace(src[0]) || isSpace(src[len(src)-1]) || strings.ContainsAny(src, `{},"\`) { - return quoteArrayElement(src) - } - return src -} - -func findDimensionsFromValue(value reflect.Value, dimensions []ArrayDimension, elementsLength int) ([]ArrayDimension, int, bool) { - switch value.Kind() { - case reflect.Array: - fallthrough - case reflect.Slice: - length := value.Len() - if 0 == elementsLength { - elementsLength = length - } else { - elementsLength *= length - } - dimensions = append(dimensions, ArrayDimension{Length: int32(length), LowerBound: 1}) - for i := 0; i < length; i++ { - if d, l, ok := findDimensionsFromValue(value.Index(i), dimensions, elementsLength); ok { - return d, l, true - } - } - } - return dimensions, elementsLength, true -} diff --git a/vendor/github.com/jackc/pgtype/array_type.go b/vendor/github.com/jackc/pgtype/array_type.go deleted file mode 100644 index 71466554..00000000 --- a/vendor/github.com/jackc/pgtype/array_type.go +++ /dev/null @@ -1,353 +0,0 @@ -package pgtype - -import ( - "database/sql/driver" - "encoding/binary" - "fmt" - "reflect" - - "github.com/jackc/pgio" -) - -// ArrayType represents an array type. While it implements Value, this is only in service of its type conversion duties -// when registered as a data type in a ConnType. It should not be used directly as a Value. ArrayType is a convenience -// type for types that do not have a concrete array type. 
-type ArrayType struct { - elements []ValueTranscoder - dimensions []ArrayDimension - - typeName string - newElement func() ValueTranscoder - - elementOID uint32 - status Status -} - -func NewArrayType(typeName string, elementOID uint32, newElement func() ValueTranscoder) *ArrayType { - return &ArrayType{typeName: typeName, elementOID: elementOID, newElement: newElement} -} - -func (at *ArrayType) NewTypeValue() Value { - return &ArrayType{ - elements: at.elements, - dimensions: at.dimensions, - status: at.status, - - typeName: at.typeName, - elementOID: at.elementOID, - newElement: at.newElement, - } -} - -func (at *ArrayType) TypeName() string { - return at.typeName -} - -func (dst *ArrayType) setNil() { - dst.elements = nil - dst.dimensions = nil - dst.status = Null -} - -func (dst *ArrayType) Set(src interface{}) error { - // untyped nil and typed nil interfaces are different - if src == nil { - dst.setNil() - return nil - } - - sliceVal := reflect.ValueOf(src) - if sliceVal.Kind() != reflect.Slice { - return fmt.Errorf("cannot set non-slice") - } - - if sliceVal.IsNil() { - dst.setNil() - return nil - } - - dst.elements = make([]ValueTranscoder, sliceVal.Len()) - for i := range dst.elements { - v := dst.newElement() - err := v.Set(sliceVal.Index(i).Interface()) - if err != nil { - return err - } - - dst.elements[i] = v - } - dst.dimensions = []ArrayDimension{{Length: int32(len(dst.elements)), LowerBound: 1}} - dst.status = Present - - return nil -} - -func (dst ArrayType) Get() interface{} { - switch dst.status { - case Present: - elementValues := make([]interface{}, len(dst.elements)) - for i := range dst.elements { - elementValues[i] = dst.elements[i].Get() - } - return elementValues - case Null: - return nil - default: - return dst.status - } -} - -func (src *ArrayType) AssignTo(dst interface{}) error { - ptrSlice := reflect.ValueOf(dst) - if ptrSlice.Kind() != reflect.Ptr { - return fmt.Errorf("cannot assign to non-pointer") - } - - sliceVal := 
ptrSlice.Elem() - sliceType := sliceVal.Type() - - if sliceType.Kind() != reflect.Slice { - return fmt.Errorf("cannot assign to pointer to non-slice") - } - - switch src.status { - case Present: - slice := reflect.MakeSlice(sliceType, len(src.elements), len(src.elements)) - elemType := sliceType.Elem() - - for i := range src.elements { - ptrElem := reflect.New(elemType) - err := src.elements[i].AssignTo(ptrElem.Interface()) - if err != nil { - return err - } - - slice.Index(i).Set(ptrElem.Elem()) - } - - sliceVal.Set(slice) - return nil - case Null: - sliceVal.Set(reflect.Zero(sliceType)) - return nil - } - - return fmt.Errorf("cannot decode %#v into %T", src, dst) -} - -func (dst *ArrayType) DecodeText(ci *ConnInfo, src []byte) error { - if src == nil { - dst.setNil() - return nil - } - - uta, err := ParseUntypedTextArray(string(src)) - if err != nil { - return err - } - - var elements []ValueTranscoder - - if len(uta.Elements) > 0 { - elements = make([]ValueTranscoder, len(uta.Elements)) - - for i, s := range uta.Elements { - elem := dst.newElement() - var elemSrc []byte - if s != "NULL" { - elemSrc = []byte(s) - } - err = elem.DecodeText(ci, elemSrc) - if err != nil { - return err - } - - elements[i] = elem - } - } - - dst.elements = elements - dst.dimensions = uta.Dimensions - dst.status = Present - - return nil -} - -func (dst *ArrayType) DecodeBinary(ci *ConnInfo, src []byte) error { - if src == nil { - dst.setNil() - return nil - } - - var arrayHeader ArrayHeader - rp, err := arrayHeader.DecodeBinary(ci, src) - if err != nil { - return err - } - - var elements []ValueTranscoder - - if len(arrayHeader.Dimensions) == 0 { - dst.elements = elements - dst.dimensions = arrayHeader.Dimensions - dst.status = Present - return nil - } - - elementCount := arrayHeader.Dimensions[0].Length - for _, d := range arrayHeader.Dimensions[1:] { - elementCount *= d.Length - } - - elements = make([]ValueTranscoder, elementCount) - - for i := range elements { - elem := 
dst.newElement() - elemLen := int(int32(binary.BigEndian.Uint32(src[rp:]))) - rp += 4 - var elemSrc []byte - if elemLen >= 0 { - elemSrc = src[rp : rp+elemLen] - rp += elemLen - } - err = elem.DecodeBinary(ci, elemSrc) - if err != nil { - return err - } - - elements[i] = elem - } - - dst.elements = elements - dst.dimensions = arrayHeader.Dimensions - dst.status = Present - - return nil -} - -func (src ArrayType) EncodeText(ci *ConnInfo, buf []byte) ([]byte, error) { - switch src.status { - case Null: - return nil, nil - case Undefined: - return nil, errUndefined - } - - if len(src.dimensions) == 0 { - return append(buf, '{', '}'), nil - } - - buf = EncodeTextArrayDimensions(buf, src.dimensions) - - // dimElemCounts is the multiples of elements that each array lies on. For - // example, a single dimension array of length 4 would have a dimElemCounts of - // [4]. A multi-dimensional array of lengths [3,5,2] would have a - // dimElemCounts of [30,10,2]. This is used to simplify when to render a '{' - // or '}'. - dimElemCounts := make([]int, len(src.dimensions)) - dimElemCounts[len(src.dimensions)-1] = int(src.dimensions[len(src.dimensions)-1].Length) - for i := len(src.dimensions) - 2; i > -1; i-- { - dimElemCounts[i] = int(src.dimensions[i].Length) * dimElemCounts[i+1] - } - - inElemBuf := make([]byte, 0, 32) - for i, elem := range src.elements { - if i > 0 { - buf = append(buf, ',') - } - - for _, dec := range dimElemCounts { - if i%dec == 0 { - buf = append(buf, '{') - } - } - - elemBuf, err := elem.EncodeText(ci, inElemBuf) - if err != nil { - return nil, err - } - if elemBuf == nil { - buf = append(buf, `NULL`...) - } else { - buf = append(buf, QuoteArrayElementIfNeeded(string(elemBuf))...) 
- } - - for _, dec := range dimElemCounts { - if (i+1)%dec == 0 { - buf = append(buf, '}') - } - } - } - - return buf, nil -} - -func (src ArrayType) EncodeBinary(ci *ConnInfo, buf []byte) ([]byte, error) { - switch src.status { - case Null: - return nil, nil - case Undefined: - return nil, errUndefined - } - - arrayHeader := ArrayHeader{ - Dimensions: src.dimensions, - ElementOID: int32(src.elementOID), - } - - for i := range src.elements { - if src.elements[i].Get() == nil { - arrayHeader.ContainsNull = true - break - } - } - - buf = arrayHeader.EncodeBinary(ci, buf) - - for i := range src.elements { - sp := len(buf) - buf = pgio.AppendInt32(buf, -1) - - elemBuf, err := src.elements[i].EncodeBinary(ci, buf) - if err != nil { - return nil, err - } - if elemBuf != nil { - buf = elemBuf - pgio.SetInt32(buf[sp:], int32(len(buf[sp:])-4)) - } - } - - return buf, nil -} - -// Scan implements the database/sql Scanner interface. -func (dst *ArrayType) Scan(src interface{}) error { - if src == nil { - return dst.DecodeText(nil, nil) - } - - switch src := src.(type) { - case string: - return dst.DecodeText(nil, []byte(src)) - case []byte: - srcCopy := make([]byte, len(src)) - copy(srcCopy, src) - return dst.DecodeText(nil, srcCopy) - } - - return fmt.Errorf("cannot scan %T", src) -} - -// Value implements the database/sql/driver Valuer interface. 
-func (src ArrayType) Value() (driver.Value, error) { - buf, err := src.EncodeText(nil, nil) - if err != nil { - return nil, err - } - if buf == nil { - return nil, nil - } - - return string(buf), nil -} diff --git a/vendor/github.com/jackc/pgtype/bit.go b/vendor/github.com/jackc/pgtype/bit.go deleted file mode 100644 index c1709e6b..00000000 --- a/vendor/github.com/jackc/pgtype/bit.go +++ /dev/null @@ -1,45 +0,0 @@ -package pgtype - -import ( - "database/sql/driver" -) - -type Bit Varbit - -func (dst *Bit) Set(src interface{}) error { - return (*Varbit)(dst).Set(src) -} - -func (dst Bit) Get() interface{} { - return (Varbit)(dst).Get() -} - -func (src *Bit) AssignTo(dst interface{}) error { - return (*Varbit)(src).AssignTo(dst) -} - -func (dst *Bit) DecodeBinary(ci *ConnInfo, src []byte) error { - return (*Varbit)(dst).DecodeBinary(ci, src) -} - -func (src Bit) EncodeBinary(ci *ConnInfo, buf []byte) ([]byte, error) { - return (Varbit)(src).EncodeBinary(ci, buf) -} - -func (dst *Bit) DecodeText(ci *ConnInfo, src []byte) error { - return (*Varbit)(dst).DecodeText(ci, src) -} - -func (src Bit) EncodeText(ci *ConnInfo, buf []byte) ([]byte, error) { - return (Varbit)(src).EncodeText(ci, buf) -} - -// Scan implements the database/sql Scanner interface. -func (dst *Bit) Scan(src interface{}) error { - return (*Varbit)(dst).Scan(src) -} - -// Value implements the database/sql/driver Valuer interface. 
-func (src Bit) Value() (driver.Value, error) { - return (Varbit)(src).Value() -} diff --git a/vendor/github.com/jackc/pgtype/bool.go b/vendor/github.com/jackc/pgtype/bool.go deleted file mode 100644 index 676c8e5d..00000000 --- a/vendor/github.com/jackc/pgtype/bool.go +++ /dev/null @@ -1,217 +0,0 @@ -package pgtype - -import ( - "database/sql/driver" - "encoding/json" - "fmt" - "strconv" -) - -type Bool struct { - Bool bool - Status Status -} - -func (dst *Bool) Set(src interface{}) error { - if src == nil { - *dst = Bool{Status: Null} - return nil - } - - if value, ok := src.(interface{ Get() interface{} }); ok { - value2 := value.Get() - if value2 != value { - return dst.Set(value2) - } - } - - switch value := src.(type) { - case bool: - *dst = Bool{Bool: value, Status: Present} - case string: - bb, err := strconv.ParseBool(value) - if err != nil { - return err - } - *dst = Bool{Bool: bb, Status: Present} - case *bool: - if value == nil { - *dst = Bool{Status: Null} - } else { - return dst.Set(*value) - } - case *string: - if value == nil { - *dst = Bool{Status: Null} - } else { - return dst.Set(*value) - } - default: - if originalSrc, ok := underlyingBoolType(src); ok { - return dst.Set(originalSrc) - } - return fmt.Errorf("cannot convert %v to Bool", value) - } - - return nil -} - -func (dst Bool) Get() interface{} { - switch dst.Status { - case Present: - return dst.Bool - case Null: - return nil - default: - return dst.Status - } -} - -func (src *Bool) AssignTo(dst interface{}) error { - switch src.Status { - case Present: - switch v := dst.(type) { - case *bool: - *v = src.Bool - return nil - default: - if nextDst, retry := GetAssignToDstType(dst); retry { - return src.AssignTo(nextDst) - } - return fmt.Errorf("unable to assign to %T", dst) - } - case Null: - return NullAssignTo(dst) - } - - return fmt.Errorf("cannot decode %#v into %T", src, dst) -} - -func (dst *Bool) DecodeText(ci *ConnInfo, src []byte) error { - if src == nil { - *dst = Bool{Status: 
Null} - return nil - } - - if len(src) != 1 { - return fmt.Errorf("invalid length for bool: %v", len(src)) - } - - *dst = Bool{Bool: src[0] == 't', Status: Present} - return nil -} - -func (dst *Bool) DecodeBinary(ci *ConnInfo, src []byte) error { - if src == nil { - *dst = Bool{Status: Null} - return nil - } - - if len(src) != 1 { - return fmt.Errorf("invalid length for bool: %v", len(src)) - } - - *dst = Bool{Bool: src[0] == 1, Status: Present} - return nil -} - -func (src Bool) EncodeText(ci *ConnInfo, buf []byte) ([]byte, error) { - switch src.Status { - case Null: - return nil, nil - case Undefined: - return nil, errUndefined - } - - if src.Bool { - buf = append(buf, 't') - } else { - buf = append(buf, 'f') - } - - return buf, nil -} - -func (src Bool) EncodeBinary(ci *ConnInfo, buf []byte) ([]byte, error) { - switch src.Status { - case Null: - return nil, nil - case Undefined: - return nil, errUndefined - } - - if src.Bool { - buf = append(buf, 1) - } else { - buf = append(buf, 0) - } - - return buf, nil -} - -// Scan implements the database/sql Scanner interface. -func (dst *Bool) Scan(src interface{}) error { - if src == nil { - *dst = Bool{Status: Null} - return nil - } - - switch src := src.(type) { - case bool: - *dst = Bool{Bool: src, Status: Present} - return nil - case string: - return dst.DecodeText(nil, []byte(src)) - case []byte: - srcCopy := make([]byte, len(src)) - copy(srcCopy, src) - return dst.DecodeText(nil, srcCopy) - } - - return fmt.Errorf("cannot scan %T", src) -} - -// Value implements the database/sql/driver Valuer interface. 
-func (src Bool) Value() (driver.Value, error) { - switch src.Status { - case Present: - return src.Bool, nil - case Null: - return nil, nil - default: - return nil, errUndefined - } -} - -func (src Bool) MarshalJSON() ([]byte, error) { - switch src.Status { - case Present: - if src.Bool { - return []byte("true"), nil - } else { - return []byte("false"), nil - } - case Null: - return []byte("null"), nil - case Undefined: - return nil, errUndefined - } - - return nil, errBadStatus -} - -func (dst *Bool) UnmarshalJSON(b []byte) error { - var v *bool - err := json.Unmarshal(b, &v) - if err != nil { - return err - } - - if v == nil { - *dst = Bool{Status: Null} - } else { - *dst = Bool{Bool: *v, Status: Present} - } - - return nil -} diff --git a/vendor/github.com/jackc/pgtype/bool_array.go b/vendor/github.com/jackc/pgtype/bool_array.go deleted file mode 100644 index 6558d971..00000000 --- a/vendor/github.com/jackc/pgtype/bool_array.go +++ /dev/null @@ -1,517 +0,0 @@ -// Code generated by erb. DO NOT EDIT. 
- -package pgtype - -import ( - "database/sql/driver" - "encoding/binary" - "fmt" - "reflect" - - "github.com/jackc/pgio" -) - -type BoolArray struct { - Elements []Bool - Dimensions []ArrayDimension - Status Status -} - -func (dst *BoolArray) Set(src interface{}) error { - // untyped nil and typed nil interfaces are different - if src == nil { - *dst = BoolArray{Status: Null} - return nil - } - - if value, ok := src.(interface{ Get() interface{} }); ok { - value2 := value.Get() - if value2 != value { - return dst.Set(value2) - } - } - - // Attempt to match to select common types: - switch value := src.(type) { - - case []bool: - if value == nil { - *dst = BoolArray{Status: Null} - } else if len(value) == 0 { - *dst = BoolArray{Status: Present} - } else { - elements := make([]Bool, len(value)) - for i := range value { - if err := elements[i].Set(value[i]); err != nil { - return err - } - } - *dst = BoolArray{ - Elements: elements, - Dimensions: []ArrayDimension{{Length: int32(len(elements)), LowerBound: 1}}, - Status: Present, - } - } - - case []*bool: - if value == nil { - *dst = BoolArray{Status: Null} - } else if len(value) == 0 { - *dst = BoolArray{Status: Present} - } else { - elements := make([]Bool, len(value)) - for i := range value { - if err := elements[i].Set(value[i]); err != nil { - return err - } - } - *dst = BoolArray{ - Elements: elements, - Dimensions: []ArrayDimension{{Length: int32(len(elements)), LowerBound: 1}}, - Status: Present, - } - } - - case []Bool: - if value == nil { - *dst = BoolArray{Status: Null} - } else if len(value) == 0 { - *dst = BoolArray{Status: Present} - } else { - *dst = BoolArray{ - Elements: value, - Dimensions: []ArrayDimension{{Length: int32(len(value)), LowerBound: 1}}, - Status: Present, - } - } - default: - // Fallback to reflection if an optimised match was not found. 
- // The reflection is necessary for arrays and multidimensional slices, - // but it comes with a 20-50% performance penalty for large arrays/slices - reflectedValue := reflect.ValueOf(src) - if !reflectedValue.IsValid() || reflectedValue.IsZero() { - *dst = BoolArray{Status: Null} - return nil - } - - dimensions, elementsLength, ok := findDimensionsFromValue(reflectedValue, nil, 0) - if !ok { - return fmt.Errorf("cannot find dimensions of %v for BoolArray", src) - } - if elementsLength == 0 { - *dst = BoolArray{Status: Present} - return nil - } - if len(dimensions) == 0 { - if originalSrc, ok := underlyingSliceType(src); ok { - return dst.Set(originalSrc) - } - return fmt.Errorf("cannot convert %v to BoolArray", src) - } - - *dst = BoolArray{ - Elements: make([]Bool, elementsLength), - Dimensions: dimensions, - Status: Present, - } - elementCount, err := dst.setRecursive(reflectedValue, 0, 0) - if err != nil { - // Maybe the target was one dimension too far, try again: - if len(dst.Dimensions) > 1 { - dst.Dimensions = dst.Dimensions[:len(dst.Dimensions)-1] - elementsLength = 0 - for _, dim := range dst.Dimensions { - if elementsLength == 0 { - elementsLength = int(dim.Length) - } else { - elementsLength *= int(dim.Length) - } - } - dst.Elements = make([]Bool, elementsLength) - elementCount, err = dst.setRecursive(reflectedValue, 0, 0) - if err != nil { - return err - } - } else { - return err - } - } - if elementCount != len(dst.Elements) { - return fmt.Errorf("cannot convert %v to BoolArray, expected %d dst.Elements, but got %d instead", src, len(dst.Elements), elementCount) - } - } - - return nil -} - -func (dst *BoolArray) setRecursive(value reflect.Value, index, dimension int) (int, error) { - switch value.Kind() { - case reflect.Array: - fallthrough - case reflect.Slice: - if len(dst.Dimensions) == dimension { - break - } - - valueLen := value.Len() - if int32(valueLen) != dst.Dimensions[dimension].Length { - return 0, fmt.Errorf("multidimensional arrays must 
have array expressions with matching dimensions") - } - for i := 0; i < valueLen; i++ { - var err error - index, err = dst.setRecursive(value.Index(i), index, dimension+1) - if err != nil { - return 0, err - } - } - - return index, nil - } - if !value.CanInterface() { - return 0, fmt.Errorf("cannot convert all values to BoolArray") - } - if err := dst.Elements[index].Set(value.Interface()); err != nil { - return 0, fmt.Errorf("%v in BoolArray", err) - } - index++ - - return index, nil -} - -func (dst BoolArray) Get() interface{} { - switch dst.Status { - case Present: - return dst - case Null: - return nil - default: - return dst.Status - } -} - -func (src *BoolArray) AssignTo(dst interface{}) error { - switch src.Status { - case Present: - if len(src.Dimensions) <= 1 { - // Attempt to match to select common types: - switch v := dst.(type) { - - case *[]bool: - *v = make([]bool, len(src.Elements)) - for i := range src.Elements { - if err := src.Elements[i].AssignTo(&((*v)[i])); err != nil { - return err - } - } - return nil - - case *[]*bool: - *v = make([]*bool, len(src.Elements)) - for i := range src.Elements { - if err := src.Elements[i].AssignTo(&((*v)[i])); err != nil { - return err - } - } - return nil - - } - } - - // Try to convert to something AssignTo can use directly. - if nextDst, retry := GetAssignToDstType(dst); retry { - return src.AssignTo(nextDst) - } - - // Fallback to reflection if an optimised match was not found. 
- // The reflection is necessary for arrays and multidimensional slices, - // but it comes with a 20-50% performance penalty for large arrays/slices - value := reflect.ValueOf(dst) - if value.Kind() == reflect.Ptr { - value = value.Elem() - } - - switch value.Kind() { - case reflect.Array, reflect.Slice: - default: - return fmt.Errorf("cannot assign %T to %T", src, dst) - } - - if len(src.Elements) == 0 { - if value.Kind() == reflect.Slice { - value.Set(reflect.MakeSlice(value.Type(), 0, 0)) - return nil - } - } - - elementCount, err := src.assignToRecursive(value, 0, 0) - if err != nil { - return err - } - if elementCount != len(src.Elements) { - return fmt.Errorf("cannot assign %v, needed to assign %d elements, but only assigned %d", dst, len(src.Elements), elementCount) - } - - return nil - case Null: - return NullAssignTo(dst) - } - - return fmt.Errorf("cannot decode %#v into %T", src, dst) -} - -func (src *BoolArray) assignToRecursive(value reflect.Value, index, dimension int) (int, error) { - switch kind := value.Kind(); kind { - case reflect.Array: - fallthrough - case reflect.Slice: - if len(src.Dimensions) == dimension { - break - } - - length := int(src.Dimensions[dimension].Length) - if reflect.Array == kind { - typ := value.Type() - if typ.Len() != length { - return 0, fmt.Errorf("expected size %d array, but %s has size %d array", length, typ, typ.Len()) - } - value.Set(reflect.New(typ).Elem()) - } else { - value.Set(reflect.MakeSlice(value.Type(), length, length)) - } - - var err error - for i := 0; i < length; i++ { - index, err = src.assignToRecursive(value.Index(i), index, dimension+1) - if err != nil { - return 0, err - } - } - - return index, nil - } - if len(src.Dimensions) != dimension { - return 0, fmt.Errorf("incorrect dimensions, expected %d, found %d", len(src.Dimensions), dimension) - } - if !value.CanAddr() { - return 0, fmt.Errorf("cannot assign all values from BoolArray") - } - addr := value.Addr() - if !addr.CanInterface() { - return 0, 
fmt.Errorf("cannot assign all values from BoolArray") - } - if err := src.Elements[index].AssignTo(addr.Interface()); err != nil { - return 0, err - } - index++ - return index, nil -} - -func (dst *BoolArray) DecodeText(ci *ConnInfo, src []byte) error { - if src == nil { - *dst = BoolArray{Status: Null} - return nil - } - - uta, err := ParseUntypedTextArray(string(src)) - if err != nil { - return err - } - - var elements []Bool - - if len(uta.Elements) > 0 { - elements = make([]Bool, len(uta.Elements)) - - for i, s := range uta.Elements { - var elem Bool - var elemSrc []byte - if s != "NULL" || uta.Quoted[i] { - elemSrc = []byte(s) - } - err = elem.DecodeText(ci, elemSrc) - if err != nil { - return err - } - - elements[i] = elem - } - } - - *dst = BoolArray{Elements: elements, Dimensions: uta.Dimensions, Status: Present} - - return nil -} - -func (dst *BoolArray) DecodeBinary(ci *ConnInfo, src []byte) error { - if src == nil { - *dst = BoolArray{Status: Null} - return nil - } - - var arrayHeader ArrayHeader - rp, err := arrayHeader.DecodeBinary(ci, src) - if err != nil { - return err - } - - if len(arrayHeader.Dimensions) == 0 { - *dst = BoolArray{Dimensions: arrayHeader.Dimensions, Status: Present} - return nil - } - - elementCount := arrayHeader.Dimensions[0].Length - for _, d := range arrayHeader.Dimensions[1:] { - elementCount *= d.Length - } - - elements := make([]Bool, elementCount) - - for i := range elements { - elemLen := int(int32(binary.BigEndian.Uint32(src[rp:]))) - rp += 4 - var elemSrc []byte - if elemLen >= 0 { - elemSrc = src[rp : rp+elemLen] - rp += elemLen - } - err = elements[i].DecodeBinary(ci, elemSrc) - if err != nil { - return err - } - } - - *dst = BoolArray{Elements: elements, Dimensions: arrayHeader.Dimensions, Status: Present} - return nil -} - -func (src BoolArray) EncodeText(ci *ConnInfo, buf []byte) ([]byte, error) { - switch src.Status { - case Null: - return nil, nil - case Undefined: - return nil, errUndefined - } - - if 
len(src.Dimensions) == 0 { - return append(buf, '{', '}'), nil - } - - buf = EncodeTextArrayDimensions(buf, src.Dimensions) - - // dimElemCounts is the multiples of elements that each array lies on. For - // example, a single dimension array of length 4 would have a dimElemCounts of - // [4]. A multi-dimensional array of lengths [3,5,2] would have a - // dimElemCounts of [30,10,2]. This is used to simplify when to render a '{' - // or '}'. - dimElemCounts := make([]int, len(src.Dimensions)) - dimElemCounts[len(src.Dimensions)-1] = int(src.Dimensions[len(src.Dimensions)-1].Length) - for i := len(src.Dimensions) - 2; i > -1; i-- { - dimElemCounts[i] = int(src.Dimensions[i].Length) * dimElemCounts[i+1] - } - - inElemBuf := make([]byte, 0, 32) - for i, elem := range src.Elements { - if i > 0 { - buf = append(buf, ',') - } - - for _, dec := range dimElemCounts { - if i%dec == 0 { - buf = append(buf, '{') - } - } - - elemBuf, err := elem.EncodeText(ci, inElemBuf) - if err != nil { - return nil, err - } - if elemBuf == nil { - buf = append(buf, `NULL`...) - } else { - buf = append(buf, QuoteArrayElementIfNeeded(string(elemBuf))...) 
- } - - for _, dec := range dimElemCounts { - if (i+1)%dec == 0 { - buf = append(buf, '}') - } - } - } - - return buf, nil -} - -func (src BoolArray) EncodeBinary(ci *ConnInfo, buf []byte) ([]byte, error) { - switch src.Status { - case Null: - return nil, nil - case Undefined: - return nil, errUndefined - } - - arrayHeader := ArrayHeader{ - Dimensions: src.Dimensions, - } - - if dt, ok := ci.DataTypeForName("bool"); ok { - arrayHeader.ElementOID = int32(dt.OID) - } else { - return nil, fmt.Errorf("unable to find oid for type name %v", "bool") - } - - for i := range src.Elements { - if src.Elements[i].Status == Null { - arrayHeader.ContainsNull = true - break - } - } - - buf = arrayHeader.EncodeBinary(ci, buf) - - for i := range src.Elements { - sp := len(buf) - buf = pgio.AppendInt32(buf, -1) - - elemBuf, err := src.Elements[i].EncodeBinary(ci, buf) - if err != nil { - return nil, err - } - if elemBuf != nil { - buf = elemBuf - pgio.SetInt32(buf[sp:], int32(len(buf[sp:])-4)) - } - } - - return buf, nil -} - -// Scan implements the database/sql Scanner interface. -func (dst *BoolArray) Scan(src interface{}) error { - if src == nil { - return dst.DecodeText(nil, nil) - } - - switch src := src.(type) { - case string: - return dst.DecodeText(nil, []byte(src)) - case []byte: - srcCopy := make([]byte, len(src)) - copy(srcCopy, src) - return dst.DecodeText(nil, srcCopy) - } - - return fmt.Errorf("cannot scan %T", src) -} - -// Value implements the database/sql/driver Valuer interface. 
-func (src BoolArray) Value() (driver.Value, error) { - buf, err := src.EncodeText(nil, nil) - if err != nil { - return nil, err - } - if buf == nil { - return nil, nil - } - - return string(buf), nil -} diff --git a/vendor/github.com/jackc/pgtype/box.go b/vendor/github.com/jackc/pgtype/box.go deleted file mode 100644 index 27fb829e..00000000 --- a/vendor/github.com/jackc/pgtype/box.go +++ /dev/null @@ -1,165 +0,0 @@ -package pgtype - -import ( - "database/sql/driver" - "encoding/binary" - "fmt" - "math" - "strconv" - "strings" - - "github.com/jackc/pgio" -) - -type Box struct { - P [2]Vec2 - Status Status -} - -func (dst *Box) Set(src interface{}) error { - return fmt.Errorf("cannot convert %v to Box", src) -} - -func (dst Box) Get() interface{} { - switch dst.Status { - case Present: - return dst - case Null: - return nil - default: - return dst.Status - } -} - -func (src *Box) AssignTo(dst interface{}) error { - return fmt.Errorf("cannot assign %v to %T", src, dst) -} - -func (dst *Box) DecodeText(ci *ConnInfo, src []byte) error { - if src == nil { - *dst = Box{Status: Null} - return nil - } - - if len(src) < 11 { - return fmt.Errorf("invalid length for Box: %v", len(src)) - } - - str := string(src[1:]) - - var end int - end = strings.IndexByte(str, ',') - - x1, err := strconv.ParseFloat(str[:end], 64) - if err != nil { - return err - } - - str = str[end+1:] - end = strings.IndexByte(str, ')') - - y1, err := strconv.ParseFloat(str[:end], 64) - if err != nil { - return err - } - - str = str[end+3:] - end = strings.IndexByte(str, ',') - - x2, err := strconv.ParseFloat(str[:end], 64) - if err != nil { - return err - } - - str = str[end+1 : len(str)-1] - - y2, err := strconv.ParseFloat(str, 64) - if err != nil { - return err - } - - *dst = Box{P: [2]Vec2{{x1, y1}, {x2, y2}}, Status: Present} - return nil -} - -func (dst *Box) DecodeBinary(ci *ConnInfo, src []byte) error { - if src == nil { - *dst = Box{Status: Null} - return nil - } - - if len(src) != 32 { - return 
fmt.Errorf("invalid length for Box: %v", len(src)) - } - - x1 := binary.BigEndian.Uint64(src) - y1 := binary.BigEndian.Uint64(src[8:]) - x2 := binary.BigEndian.Uint64(src[16:]) - y2 := binary.BigEndian.Uint64(src[24:]) - - *dst = Box{ - P: [2]Vec2{ - {math.Float64frombits(x1), math.Float64frombits(y1)}, - {math.Float64frombits(x2), math.Float64frombits(y2)}, - }, - Status: Present, - } - return nil -} - -func (src Box) EncodeText(ci *ConnInfo, buf []byte) ([]byte, error) { - switch src.Status { - case Null: - return nil, nil - case Undefined: - return nil, errUndefined - } - - buf = append(buf, fmt.Sprintf(`(%s,%s),(%s,%s)`, - strconv.FormatFloat(src.P[0].X, 'f', -1, 64), - strconv.FormatFloat(src.P[0].Y, 'f', -1, 64), - strconv.FormatFloat(src.P[1].X, 'f', -1, 64), - strconv.FormatFloat(src.P[1].Y, 'f', -1, 64), - )...) - return buf, nil -} - -func (src Box) EncodeBinary(ci *ConnInfo, buf []byte) ([]byte, error) { - switch src.Status { - case Null: - return nil, nil - case Undefined: - return nil, errUndefined - } - - buf = pgio.AppendUint64(buf, math.Float64bits(src.P[0].X)) - buf = pgio.AppendUint64(buf, math.Float64bits(src.P[0].Y)) - buf = pgio.AppendUint64(buf, math.Float64bits(src.P[1].X)) - buf = pgio.AppendUint64(buf, math.Float64bits(src.P[1].Y)) - - return buf, nil -} - -// Scan implements the database/sql Scanner interface. -func (dst *Box) Scan(src interface{}) error { - if src == nil { - *dst = Box{Status: Null} - return nil - } - - switch src := src.(type) { - case string: - return dst.DecodeText(nil, []byte(src)) - case []byte: - srcCopy := make([]byte, len(src)) - copy(srcCopy, src) - return dst.DecodeText(nil, srcCopy) - } - - return fmt.Errorf("cannot scan %T", src) -} - -// Value implements the database/sql/driver Valuer interface. 
-func (src Box) Value() (driver.Value, error) { - return EncodeValueText(src) -} diff --git a/vendor/github.com/jackc/pgtype/bpchar.go b/vendor/github.com/jackc/pgtype/bpchar.go deleted file mode 100644 index c5fa42ea..00000000 --- a/vendor/github.com/jackc/pgtype/bpchar.go +++ /dev/null @@ -1,93 +0,0 @@ -package pgtype - -import ( - "database/sql/driver" - "fmt" -) - -// BPChar is fixed-length, blank padded char type -// character(n), char(n) -type BPChar Text - -// Set converts from src to dst. -func (dst *BPChar) Set(src interface{}) error { - return (*Text)(dst).Set(src) -} - -// Get returns underlying value -func (dst BPChar) Get() interface{} { - return (Text)(dst).Get() -} - -// AssignTo assigns from src to dst. -func (src *BPChar) AssignTo(dst interface{}) error { - switch src.Status { - case Present: - switch v := dst.(type) { - case *rune: - runes := []rune(src.String) - if len(runes) == 1 { - *v = runes[0] - return nil - } - case *string: - *v = src.String - return nil - case *[]byte: - *v = make([]byte, len(src.String)) - copy(*v, src.String) - return nil - default: - if nextDst, retry := GetAssignToDstType(dst); retry { - return src.AssignTo(nextDst) - } - return fmt.Errorf("unable to assign to %T", dst) - } - case Null: - return NullAssignTo(dst) - } - - return fmt.Errorf("cannot decode %#v into %T", src, dst) -} - -func (BPChar) PreferredResultFormat() int16 { - return TextFormatCode -} - -func (dst *BPChar) DecodeText(ci *ConnInfo, src []byte) error { - return (*Text)(dst).DecodeText(ci, src) -} - -func (dst *BPChar) DecodeBinary(ci *ConnInfo, src []byte) error { - return (*Text)(dst).DecodeBinary(ci, src) -} - -func (BPChar) PreferredParamFormat() int16 { - return TextFormatCode -} - -func (src BPChar) EncodeText(ci *ConnInfo, buf []byte) ([]byte, error) { - return (Text)(src).EncodeText(ci, buf) -} - -func (src BPChar) EncodeBinary(ci *ConnInfo, buf []byte) ([]byte, error) { - return (Text)(src).EncodeBinary(ci, buf) -} - -// Scan implements the 
database/sql Scanner interface. -func (dst *BPChar) Scan(src interface{}) error { - return (*Text)(dst).Scan(src) -} - -// Value implements the database/sql/driver Valuer interface. -func (src BPChar) Value() (driver.Value, error) { - return (Text)(src).Value() -} - -func (src BPChar) MarshalJSON() ([]byte, error) { - return (Text)(src).MarshalJSON() -} - -func (dst *BPChar) UnmarshalJSON(b []byte) error { - return (*Text)(dst).UnmarshalJSON(b) -} diff --git a/vendor/github.com/jackc/pgtype/bpchar_array.go b/vendor/github.com/jackc/pgtype/bpchar_array.go deleted file mode 100644 index 8e792214..00000000 --- a/vendor/github.com/jackc/pgtype/bpchar_array.go +++ /dev/null @@ -1,517 +0,0 @@ -// Code generated by erb. DO NOT EDIT. - -package pgtype - -import ( - "database/sql/driver" - "encoding/binary" - "fmt" - "reflect" - - "github.com/jackc/pgio" -) - -type BPCharArray struct { - Elements []BPChar - Dimensions []ArrayDimension - Status Status -} - -func (dst *BPCharArray) Set(src interface{}) error { - // untyped nil and typed nil interfaces are different - if src == nil { - *dst = BPCharArray{Status: Null} - return nil - } - - if value, ok := src.(interface{ Get() interface{} }); ok { - value2 := value.Get() - if value2 != value { - return dst.Set(value2) - } - } - - // Attempt to match to select common types: - switch value := src.(type) { - - case []string: - if value == nil { - *dst = BPCharArray{Status: Null} - } else if len(value) == 0 { - *dst = BPCharArray{Status: Present} - } else { - elements := make([]BPChar, len(value)) - for i := range value { - if err := elements[i].Set(value[i]); err != nil { - return err - } - } - *dst = BPCharArray{ - Elements: elements, - Dimensions: []ArrayDimension{{Length: int32(len(elements)), LowerBound: 1}}, - Status: Present, - } - } - - case []*string: - if value == nil { - *dst = BPCharArray{Status: Null} - } else if len(value) == 0 { - *dst = BPCharArray{Status: Present} - } else { - elements := make([]BPChar, len(value)) 
- for i := range value { - if err := elements[i].Set(value[i]); err != nil { - return err - } - } - *dst = BPCharArray{ - Elements: elements, - Dimensions: []ArrayDimension{{Length: int32(len(elements)), LowerBound: 1}}, - Status: Present, - } - } - - case []BPChar: - if value == nil { - *dst = BPCharArray{Status: Null} - } else if len(value) == 0 { - *dst = BPCharArray{Status: Present} - } else { - *dst = BPCharArray{ - Elements: value, - Dimensions: []ArrayDimension{{Length: int32(len(value)), LowerBound: 1}}, - Status: Present, - } - } - default: - // Fallback to reflection if an optimised match was not found. - // The reflection is necessary for arrays and multidimensional slices, - // but it comes with a 20-50% performance penalty for large arrays/slices - reflectedValue := reflect.ValueOf(src) - if !reflectedValue.IsValid() || reflectedValue.IsZero() { - *dst = BPCharArray{Status: Null} - return nil - } - - dimensions, elementsLength, ok := findDimensionsFromValue(reflectedValue, nil, 0) - if !ok { - return fmt.Errorf("cannot find dimensions of %v for BPCharArray", src) - } - if elementsLength == 0 { - *dst = BPCharArray{Status: Present} - return nil - } - if len(dimensions) == 0 { - if originalSrc, ok := underlyingSliceType(src); ok { - return dst.Set(originalSrc) - } - return fmt.Errorf("cannot convert %v to BPCharArray", src) - } - - *dst = BPCharArray{ - Elements: make([]BPChar, elementsLength), - Dimensions: dimensions, - Status: Present, - } - elementCount, err := dst.setRecursive(reflectedValue, 0, 0) - if err != nil { - // Maybe the target was one dimension too far, try again: - if len(dst.Dimensions) > 1 { - dst.Dimensions = dst.Dimensions[:len(dst.Dimensions)-1] - elementsLength = 0 - for _, dim := range dst.Dimensions { - if elementsLength == 0 { - elementsLength = int(dim.Length) - } else { - elementsLength *= int(dim.Length) - } - } - dst.Elements = make([]BPChar, elementsLength) - elementCount, err = dst.setRecursive(reflectedValue, 0, 0) - if 
err != nil { - return err - } - } else { - return err - } - } - if elementCount != len(dst.Elements) { - return fmt.Errorf("cannot convert %v to BPCharArray, expected %d dst.Elements, but got %d instead", src, len(dst.Elements), elementCount) - } - } - - return nil -} - -func (dst *BPCharArray) setRecursive(value reflect.Value, index, dimension int) (int, error) { - switch value.Kind() { - case reflect.Array: - fallthrough - case reflect.Slice: - if len(dst.Dimensions) == dimension { - break - } - - valueLen := value.Len() - if int32(valueLen) != dst.Dimensions[dimension].Length { - return 0, fmt.Errorf("multidimensional arrays must have array expressions with matching dimensions") - } - for i := 0; i < valueLen; i++ { - var err error - index, err = dst.setRecursive(value.Index(i), index, dimension+1) - if err != nil { - return 0, err - } - } - - return index, nil - } - if !value.CanInterface() { - return 0, fmt.Errorf("cannot convert all values to BPCharArray") - } - if err := dst.Elements[index].Set(value.Interface()); err != nil { - return 0, fmt.Errorf("%v in BPCharArray", err) - } - index++ - - return index, nil -} - -func (dst BPCharArray) Get() interface{} { - switch dst.Status { - case Present: - return dst - case Null: - return nil - default: - return dst.Status - } -} - -func (src *BPCharArray) AssignTo(dst interface{}) error { - switch src.Status { - case Present: - if len(src.Dimensions) <= 1 { - // Attempt to match to select common types: - switch v := dst.(type) { - - case *[]string: - *v = make([]string, len(src.Elements)) - for i := range src.Elements { - if err := src.Elements[i].AssignTo(&((*v)[i])); err != nil { - return err - } - } - return nil - - case *[]*string: - *v = make([]*string, len(src.Elements)) - for i := range src.Elements { - if err := src.Elements[i].AssignTo(&((*v)[i])); err != nil { - return err - } - } - return nil - - } - } - - // Try to convert to something AssignTo can use directly. 
- if nextDst, retry := GetAssignToDstType(dst); retry { - return src.AssignTo(nextDst) - } - - // Fallback to reflection if an optimised match was not found. - // The reflection is necessary for arrays and multidimensional slices, - // but it comes with a 20-50% performance penalty for large arrays/slices - value := reflect.ValueOf(dst) - if value.Kind() == reflect.Ptr { - value = value.Elem() - } - - switch value.Kind() { - case reflect.Array, reflect.Slice: - default: - return fmt.Errorf("cannot assign %T to %T", src, dst) - } - - if len(src.Elements) == 0 { - if value.Kind() == reflect.Slice { - value.Set(reflect.MakeSlice(value.Type(), 0, 0)) - return nil - } - } - - elementCount, err := src.assignToRecursive(value, 0, 0) - if err != nil { - return err - } - if elementCount != len(src.Elements) { - return fmt.Errorf("cannot assign %v, needed to assign %d elements, but only assigned %d", dst, len(src.Elements), elementCount) - } - - return nil - case Null: - return NullAssignTo(dst) - } - - return fmt.Errorf("cannot decode %#v into %T", src, dst) -} - -func (src *BPCharArray) assignToRecursive(value reflect.Value, index, dimension int) (int, error) { - switch kind := value.Kind(); kind { - case reflect.Array: - fallthrough - case reflect.Slice: - if len(src.Dimensions) == dimension { - break - } - - length := int(src.Dimensions[dimension].Length) - if reflect.Array == kind { - typ := value.Type() - if typ.Len() != length { - return 0, fmt.Errorf("expected size %d array, but %s has size %d array", length, typ, typ.Len()) - } - value.Set(reflect.New(typ).Elem()) - } else { - value.Set(reflect.MakeSlice(value.Type(), length, length)) - } - - var err error - for i := 0; i < length; i++ { - index, err = src.assignToRecursive(value.Index(i), index, dimension+1) - if err != nil { - return 0, err - } - } - - return index, nil - } - if len(src.Dimensions) != dimension { - return 0, fmt.Errorf("incorrect dimensions, expected %d, found %d", len(src.Dimensions), dimension) 
- } - if !value.CanAddr() { - return 0, fmt.Errorf("cannot assign all values from BPCharArray") - } - addr := value.Addr() - if !addr.CanInterface() { - return 0, fmt.Errorf("cannot assign all values from BPCharArray") - } - if err := src.Elements[index].AssignTo(addr.Interface()); err != nil { - return 0, err - } - index++ - return index, nil -} - -func (dst *BPCharArray) DecodeText(ci *ConnInfo, src []byte) error { - if src == nil { - *dst = BPCharArray{Status: Null} - return nil - } - - uta, err := ParseUntypedTextArray(string(src)) - if err != nil { - return err - } - - var elements []BPChar - - if len(uta.Elements) > 0 { - elements = make([]BPChar, len(uta.Elements)) - - for i, s := range uta.Elements { - var elem BPChar - var elemSrc []byte - if s != "NULL" || uta.Quoted[i] { - elemSrc = []byte(s) - } - err = elem.DecodeText(ci, elemSrc) - if err != nil { - return err - } - - elements[i] = elem - } - } - - *dst = BPCharArray{Elements: elements, Dimensions: uta.Dimensions, Status: Present} - - return nil -} - -func (dst *BPCharArray) DecodeBinary(ci *ConnInfo, src []byte) error { - if src == nil { - *dst = BPCharArray{Status: Null} - return nil - } - - var arrayHeader ArrayHeader - rp, err := arrayHeader.DecodeBinary(ci, src) - if err != nil { - return err - } - - if len(arrayHeader.Dimensions) == 0 { - *dst = BPCharArray{Dimensions: arrayHeader.Dimensions, Status: Present} - return nil - } - - elementCount := arrayHeader.Dimensions[0].Length - for _, d := range arrayHeader.Dimensions[1:] { - elementCount *= d.Length - } - - elements := make([]BPChar, elementCount) - - for i := range elements { - elemLen := int(int32(binary.BigEndian.Uint32(src[rp:]))) - rp += 4 - var elemSrc []byte - if elemLen >= 0 { - elemSrc = src[rp : rp+elemLen] - rp += elemLen - } - err = elements[i].DecodeBinary(ci, elemSrc) - if err != nil { - return err - } - } - - *dst = BPCharArray{Elements: elements, Dimensions: arrayHeader.Dimensions, Status: Present} - return nil -} - -func (src 
BPCharArray) EncodeText(ci *ConnInfo, buf []byte) ([]byte, error) { - switch src.Status { - case Null: - return nil, nil - case Undefined: - return nil, errUndefined - } - - if len(src.Dimensions) == 0 { - return append(buf, '{', '}'), nil - } - - buf = EncodeTextArrayDimensions(buf, src.Dimensions) - - // dimElemCounts is the multiples of elements that each array lies on. For - // example, a single dimension array of length 4 would have a dimElemCounts of - // [4]. A multi-dimensional array of lengths [3,5,2] would have a - // dimElemCounts of [30,10,2]. This is used to simplify when to render a '{' - // or '}'. - dimElemCounts := make([]int, len(src.Dimensions)) - dimElemCounts[len(src.Dimensions)-1] = int(src.Dimensions[len(src.Dimensions)-1].Length) - for i := len(src.Dimensions) - 2; i > -1; i-- { - dimElemCounts[i] = int(src.Dimensions[i].Length) * dimElemCounts[i+1] - } - - inElemBuf := make([]byte, 0, 32) - for i, elem := range src.Elements { - if i > 0 { - buf = append(buf, ',') - } - - for _, dec := range dimElemCounts { - if i%dec == 0 { - buf = append(buf, '{') - } - } - - elemBuf, err := elem.EncodeText(ci, inElemBuf) - if err != nil { - return nil, err - } - if elemBuf == nil { - buf = append(buf, `NULL`...) - } else { - buf = append(buf, QuoteArrayElementIfNeeded(string(elemBuf))...) 
- } - - for _, dec := range dimElemCounts { - if (i+1)%dec == 0 { - buf = append(buf, '}') - } - } - } - - return buf, nil -} - -func (src BPCharArray) EncodeBinary(ci *ConnInfo, buf []byte) ([]byte, error) { - switch src.Status { - case Null: - return nil, nil - case Undefined: - return nil, errUndefined - } - - arrayHeader := ArrayHeader{ - Dimensions: src.Dimensions, - } - - if dt, ok := ci.DataTypeForName("bpchar"); ok { - arrayHeader.ElementOID = int32(dt.OID) - } else { - return nil, fmt.Errorf("unable to find oid for type name %v", "bpchar") - } - - for i := range src.Elements { - if src.Elements[i].Status == Null { - arrayHeader.ContainsNull = true - break - } - } - - buf = arrayHeader.EncodeBinary(ci, buf) - - for i := range src.Elements { - sp := len(buf) - buf = pgio.AppendInt32(buf, -1) - - elemBuf, err := src.Elements[i].EncodeBinary(ci, buf) - if err != nil { - return nil, err - } - if elemBuf != nil { - buf = elemBuf - pgio.SetInt32(buf[sp:], int32(len(buf[sp:])-4)) - } - } - - return buf, nil -} - -// Scan implements the database/sql Scanner interface. -func (dst *BPCharArray) Scan(src interface{}) error { - if src == nil { - return dst.DecodeText(nil, nil) - } - - switch src := src.(type) { - case string: - return dst.DecodeText(nil, []byte(src)) - case []byte: - srcCopy := make([]byte, len(src)) - copy(srcCopy, src) - return dst.DecodeText(nil, srcCopy) - } - - return fmt.Errorf("cannot scan %T", src) -} - -// Value implements the database/sql/driver Valuer interface. 
-func (src BPCharArray) Value() (driver.Value, error) { - buf, err := src.EncodeText(nil, nil) - if err != nil { - return nil, err - } - if buf == nil { - return nil, nil - } - - return string(buf), nil -} diff --git a/vendor/github.com/jackc/pgtype/bytea.go b/vendor/github.com/jackc/pgtype/bytea.go deleted file mode 100644 index 67eba350..00000000 --- a/vendor/github.com/jackc/pgtype/bytea.go +++ /dev/null @@ -1,163 +0,0 @@ -package pgtype - -import ( - "database/sql/driver" - "encoding/hex" - "fmt" -) - -type Bytea struct { - Bytes []byte - Status Status -} - -func (dst *Bytea) Set(src interface{}) error { - if src == nil { - *dst = Bytea{Status: Null} - return nil - } - - if value, ok := src.(interface{ Get() interface{} }); ok { - value2 := value.Get() - if value2 != value { - return dst.Set(value2) - } - } - - switch value := src.(type) { - case []byte: - if value != nil { - *dst = Bytea{Bytes: value, Status: Present} - } else { - *dst = Bytea{Status: Null} - } - default: - if originalSrc, ok := underlyingBytesType(src); ok { - return dst.Set(originalSrc) - } - return fmt.Errorf("cannot convert %v to Bytea", value) - } - - return nil -} - -func (dst Bytea) Get() interface{} { - switch dst.Status { - case Present: - return dst.Bytes - case Null: - return nil - default: - return dst.Status - } -} - -func (src *Bytea) AssignTo(dst interface{}) error { - switch src.Status { - case Present: - switch v := dst.(type) { - case *[]byte: - buf := make([]byte, len(src.Bytes)) - copy(buf, src.Bytes) - *v = buf - return nil - default: - if nextDst, retry := GetAssignToDstType(dst); retry { - return src.AssignTo(nextDst) - } - return fmt.Errorf("unable to assign to %T", dst) - } - case Null: - return NullAssignTo(dst) - } - - return fmt.Errorf("cannot decode %#v into %T", src, dst) -} - -// DecodeText only supports the hex format. This has been the default since -// PostgreSQL 9.0. 
-func (dst *Bytea) DecodeText(ci *ConnInfo, src []byte) error { - if src == nil { - *dst = Bytea{Status: Null} - return nil - } - - if len(src) < 2 || src[0] != '\\' || src[1] != 'x' { - return fmt.Errorf("invalid hex format") - } - - buf := make([]byte, (len(src)-2)/2) - _, err := hex.Decode(buf, src[2:]) - if err != nil { - return err - } - - *dst = Bytea{Bytes: buf, Status: Present} - return nil -} - -func (dst *Bytea) DecodeBinary(ci *ConnInfo, src []byte) error { - if src == nil { - *dst = Bytea{Status: Null} - return nil - } - - *dst = Bytea{Bytes: src, Status: Present} - return nil -} - -func (src Bytea) EncodeText(ci *ConnInfo, buf []byte) ([]byte, error) { - switch src.Status { - case Null: - return nil, nil - case Undefined: - return nil, errUndefined - } - - buf = append(buf, `\x`...) - buf = append(buf, hex.EncodeToString(src.Bytes)...) - return buf, nil -} - -func (src Bytea) EncodeBinary(ci *ConnInfo, buf []byte) ([]byte, error) { - switch src.Status { - case Null: - return nil, nil - case Undefined: - return nil, errUndefined - } - - return append(buf, src.Bytes...), nil -} - -// Scan implements the database/sql Scanner interface. -func (dst *Bytea) Scan(src interface{}) error { - if src == nil { - *dst = Bytea{Status: Null} - return nil - } - - switch src := src.(type) { - case string: - return dst.DecodeText(nil, []byte(src)) - case []byte: - buf := make([]byte, len(src)) - copy(buf, src) - *dst = Bytea{Bytes: buf, Status: Present} - return nil - } - - return fmt.Errorf("cannot scan %T", src) -} - -// Value implements the database/sql/driver Valuer interface. 
-func (src Bytea) Value() (driver.Value, error) { - switch src.Status { - case Present: - return src.Bytes, nil - case Null: - return nil, nil - default: - return nil, errUndefined - } -} diff --git a/vendor/github.com/jackc/pgtype/bytea_array.go b/vendor/github.com/jackc/pgtype/bytea_array.go deleted file mode 100644 index 69d1ceb9..00000000 --- a/vendor/github.com/jackc/pgtype/bytea_array.go +++ /dev/null @@ -1,489 +0,0 @@ -// Code generated by erb. DO NOT EDIT. - -package pgtype - -import ( - "database/sql/driver" - "encoding/binary" - "fmt" - "reflect" - - "github.com/jackc/pgio" -) - -type ByteaArray struct { - Elements []Bytea - Dimensions []ArrayDimension - Status Status -} - -func (dst *ByteaArray) Set(src interface{}) error { - // untyped nil and typed nil interfaces are different - if src == nil { - *dst = ByteaArray{Status: Null} - return nil - } - - if value, ok := src.(interface{ Get() interface{} }); ok { - value2 := value.Get() - if value2 != value { - return dst.Set(value2) - } - } - - // Attempt to match to select common types: - switch value := src.(type) { - - case [][]byte: - if value == nil { - *dst = ByteaArray{Status: Null} - } else if len(value) == 0 { - *dst = ByteaArray{Status: Present} - } else { - elements := make([]Bytea, len(value)) - for i := range value { - if err := elements[i].Set(value[i]); err != nil { - return err - } - } - *dst = ByteaArray{ - Elements: elements, - Dimensions: []ArrayDimension{{Length: int32(len(elements)), LowerBound: 1}}, - Status: Present, - } - } - - case []Bytea: - if value == nil { - *dst = ByteaArray{Status: Null} - } else if len(value) == 0 { - *dst = ByteaArray{Status: Present} - } else { - *dst = ByteaArray{ - Elements: value, - Dimensions: []ArrayDimension{{Length: int32(len(value)), LowerBound: 1}}, - Status: Present, - } - } - default: - // Fallback to reflection if an optimised match was not found. 
- // The reflection is necessary for arrays and multidimensional slices, - // but it comes with a 20-50% performance penalty for large arrays/slices - reflectedValue := reflect.ValueOf(src) - if !reflectedValue.IsValid() || reflectedValue.IsZero() { - *dst = ByteaArray{Status: Null} - return nil - } - - dimensions, elementsLength, ok := findDimensionsFromValue(reflectedValue, nil, 0) - if !ok { - return fmt.Errorf("cannot find dimensions of %v for ByteaArray", src) - } - if elementsLength == 0 { - *dst = ByteaArray{Status: Present} - return nil - } - if len(dimensions) == 0 { - if originalSrc, ok := underlyingSliceType(src); ok { - return dst.Set(originalSrc) - } - return fmt.Errorf("cannot convert %v to ByteaArray", src) - } - - *dst = ByteaArray{ - Elements: make([]Bytea, elementsLength), - Dimensions: dimensions, - Status: Present, - } - elementCount, err := dst.setRecursive(reflectedValue, 0, 0) - if err != nil { - // Maybe the target was one dimension too far, try again: - if len(dst.Dimensions) > 1 { - dst.Dimensions = dst.Dimensions[:len(dst.Dimensions)-1] - elementsLength = 0 - for _, dim := range dst.Dimensions { - if elementsLength == 0 { - elementsLength = int(dim.Length) - } else { - elementsLength *= int(dim.Length) - } - } - dst.Elements = make([]Bytea, elementsLength) - elementCount, err = dst.setRecursive(reflectedValue, 0, 0) - if err != nil { - return err - } - } else { - return err - } - } - if elementCount != len(dst.Elements) { - return fmt.Errorf("cannot convert %v to ByteaArray, expected %d dst.Elements, but got %d instead", src, len(dst.Elements), elementCount) - } - } - - return nil -} - -func (dst *ByteaArray) setRecursive(value reflect.Value, index, dimension int) (int, error) { - switch value.Kind() { - case reflect.Array: - fallthrough - case reflect.Slice: - if len(dst.Dimensions) == dimension { - break - } - - valueLen := value.Len() - if int32(valueLen) != dst.Dimensions[dimension].Length { - return 0, fmt.Errorf("multidimensional 
arrays must have array expressions with matching dimensions") - } - for i := 0; i < valueLen; i++ { - var err error - index, err = dst.setRecursive(value.Index(i), index, dimension+1) - if err != nil { - return 0, err - } - } - - return index, nil - } - if !value.CanInterface() { - return 0, fmt.Errorf("cannot convert all values to ByteaArray") - } - if err := dst.Elements[index].Set(value.Interface()); err != nil { - return 0, fmt.Errorf("%v in ByteaArray", err) - } - index++ - - return index, nil -} - -func (dst ByteaArray) Get() interface{} { - switch dst.Status { - case Present: - return dst - case Null: - return nil - default: - return dst.Status - } -} - -func (src *ByteaArray) AssignTo(dst interface{}) error { - switch src.Status { - case Present: - if len(src.Dimensions) <= 1 { - // Attempt to match to select common types: - switch v := dst.(type) { - - case *[][]byte: - *v = make([][]byte, len(src.Elements)) - for i := range src.Elements { - if err := src.Elements[i].AssignTo(&((*v)[i])); err != nil { - return err - } - } - return nil - - } - } - - // Try to convert to something AssignTo can use directly. - if nextDst, retry := GetAssignToDstType(dst); retry { - return src.AssignTo(nextDst) - } - - // Fallback to reflection if an optimised match was not found. 
- // The reflection is necessary for arrays and multidimensional slices, - // but it comes with a 20-50% performance penalty for large arrays/slices - value := reflect.ValueOf(dst) - if value.Kind() == reflect.Ptr { - value = value.Elem() - } - - switch value.Kind() { - case reflect.Array, reflect.Slice: - default: - return fmt.Errorf("cannot assign %T to %T", src, dst) - } - - if len(src.Elements) == 0 { - if value.Kind() == reflect.Slice { - value.Set(reflect.MakeSlice(value.Type(), 0, 0)) - return nil - } - } - - elementCount, err := src.assignToRecursive(value, 0, 0) - if err != nil { - return err - } - if elementCount != len(src.Elements) { - return fmt.Errorf("cannot assign %v, needed to assign %d elements, but only assigned %d", dst, len(src.Elements), elementCount) - } - - return nil - case Null: - return NullAssignTo(dst) - } - - return fmt.Errorf("cannot decode %#v into %T", src, dst) -} - -func (src *ByteaArray) assignToRecursive(value reflect.Value, index, dimension int) (int, error) { - switch kind := value.Kind(); kind { - case reflect.Array: - fallthrough - case reflect.Slice: - if len(src.Dimensions) == dimension { - break - } - - length := int(src.Dimensions[dimension].Length) - if reflect.Array == kind { - typ := value.Type() - if typ.Len() != length { - return 0, fmt.Errorf("expected size %d array, but %s has size %d array", length, typ, typ.Len()) - } - value.Set(reflect.New(typ).Elem()) - } else { - value.Set(reflect.MakeSlice(value.Type(), length, length)) - } - - var err error - for i := 0; i < length; i++ { - index, err = src.assignToRecursive(value.Index(i), index, dimension+1) - if err != nil { - return 0, err - } - } - - return index, nil - } - if len(src.Dimensions) != dimension { - return 0, fmt.Errorf("incorrect dimensions, expected %d, found %d", len(src.Dimensions), dimension) - } - if !value.CanAddr() { - return 0, fmt.Errorf("cannot assign all values from ByteaArray") - } - addr := value.Addr() - if !addr.CanInterface() { - return 
0, fmt.Errorf("cannot assign all values from ByteaArray") - } - if err := src.Elements[index].AssignTo(addr.Interface()); err != nil { - return 0, err - } - index++ - return index, nil -} - -func (dst *ByteaArray) DecodeText(ci *ConnInfo, src []byte) error { - if src == nil { - *dst = ByteaArray{Status: Null} - return nil - } - - uta, err := ParseUntypedTextArray(string(src)) - if err != nil { - return err - } - - var elements []Bytea - - if len(uta.Elements) > 0 { - elements = make([]Bytea, len(uta.Elements)) - - for i, s := range uta.Elements { - var elem Bytea - var elemSrc []byte - if s != "NULL" || uta.Quoted[i] { - elemSrc = []byte(s) - } - err = elem.DecodeText(ci, elemSrc) - if err != nil { - return err - } - - elements[i] = elem - } - } - - *dst = ByteaArray{Elements: elements, Dimensions: uta.Dimensions, Status: Present} - - return nil -} - -func (dst *ByteaArray) DecodeBinary(ci *ConnInfo, src []byte) error { - if src == nil { - *dst = ByteaArray{Status: Null} - return nil - } - - var arrayHeader ArrayHeader - rp, err := arrayHeader.DecodeBinary(ci, src) - if err != nil { - return err - } - - if len(arrayHeader.Dimensions) == 0 { - *dst = ByteaArray{Dimensions: arrayHeader.Dimensions, Status: Present} - return nil - } - - elementCount := arrayHeader.Dimensions[0].Length - for _, d := range arrayHeader.Dimensions[1:] { - elementCount *= d.Length - } - - elements := make([]Bytea, elementCount) - - for i := range elements { - elemLen := int(int32(binary.BigEndian.Uint32(src[rp:]))) - rp += 4 - var elemSrc []byte - if elemLen >= 0 { - elemSrc = src[rp : rp+elemLen] - rp += elemLen - } - err = elements[i].DecodeBinary(ci, elemSrc) - if err != nil { - return err - } - } - - *dst = ByteaArray{Elements: elements, Dimensions: arrayHeader.Dimensions, Status: Present} - return nil -} - -func (src ByteaArray) EncodeText(ci *ConnInfo, buf []byte) ([]byte, error) { - switch src.Status { - case Null: - return nil, nil - case Undefined: - return nil, errUndefined - } - 
- if len(src.Dimensions) == 0 { - return append(buf, '{', '}'), nil - } - - buf = EncodeTextArrayDimensions(buf, src.Dimensions) - - // dimElemCounts is the multiples of elements that each array lies on. For - // example, a single dimension array of length 4 would have a dimElemCounts of - // [4]. A multi-dimensional array of lengths [3,5,2] would have a - // dimElemCounts of [30,10,2]. This is used to simplify when to render a '{' - // or '}'. - dimElemCounts := make([]int, len(src.Dimensions)) - dimElemCounts[len(src.Dimensions)-1] = int(src.Dimensions[len(src.Dimensions)-1].Length) - for i := len(src.Dimensions) - 2; i > -1; i-- { - dimElemCounts[i] = int(src.Dimensions[i].Length) * dimElemCounts[i+1] - } - - inElemBuf := make([]byte, 0, 32) - for i, elem := range src.Elements { - if i > 0 { - buf = append(buf, ',') - } - - for _, dec := range dimElemCounts { - if i%dec == 0 { - buf = append(buf, '{') - } - } - - elemBuf, err := elem.EncodeText(ci, inElemBuf) - if err != nil { - return nil, err - } - if elemBuf == nil { - buf = append(buf, `NULL`...) - } else { - buf = append(buf, QuoteArrayElementIfNeeded(string(elemBuf))...) 
- } - - for _, dec := range dimElemCounts { - if (i+1)%dec == 0 { - buf = append(buf, '}') - } - } - } - - return buf, nil -} - -func (src ByteaArray) EncodeBinary(ci *ConnInfo, buf []byte) ([]byte, error) { - switch src.Status { - case Null: - return nil, nil - case Undefined: - return nil, errUndefined - } - - arrayHeader := ArrayHeader{ - Dimensions: src.Dimensions, - } - - if dt, ok := ci.DataTypeForName("bytea"); ok { - arrayHeader.ElementOID = int32(dt.OID) - } else { - return nil, fmt.Errorf("unable to find oid for type name %v", "bytea") - } - - for i := range src.Elements { - if src.Elements[i].Status == Null { - arrayHeader.ContainsNull = true - break - } - } - - buf = arrayHeader.EncodeBinary(ci, buf) - - for i := range src.Elements { - sp := len(buf) - buf = pgio.AppendInt32(buf, -1) - - elemBuf, err := src.Elements[i].EncodeBinary(ci, buf) - if err != nil { - return nil, err - } - if elemBuf != nil { - buf = elemBuf - pgio.SetInt32(buf[sp:], int32(len(buf[sp:])-4)) - } - } - - return buf, nil -} - -// Scan implements the database/sql Scanner interface. -func (dst *ByteaArray) Scan(src interface{}) error { - if src == nil { - return dst.DecodeText(nil, nil) - } - - switch src := src.(type) { - case string: - return dst.DecodeText(nil, []byte(src)) - case []byte: - srcCopy := make([]byte, len(src)) - copy(srcCopy, src) - return dst.DecodeText(nil, srcCopy) - } - - return fmt.Errorf("cannot scan %T", src) -} - -// Value implements the database/sql/driver Valuer interface. 
-func (src ByteaArray) Value() (driver.Value, error) { - buf, err := src.EncodeText(nil, nil) - if err != nil { - return nil, err - } - if buf == nil { - return nil, nil - } - - return string(buf), nil -} diff --git a/vendor/github.com/jackc/pgtype/cid.go b/vendor/github.com/jackc/pgtype/cid.go deleted file mode 100644 index b944748c..00000000 --- a/vendor/github.com/jackc/pgtype/cid.go +++ /dev/null @@ -1,61 +0,0 @@ -package pgtype - -import ( - "database/sql/driver" -) - -// CID is PostgreSQL's Command Identifier type. -// -// When one does -// -// select cmin, cmax, * from some_table; -// -// it is the data type of the cmin and cmax hidden system columns. -// -// It is currently implemented as an unsigned four byte integer. -// Its definition can be found in src/include/c.h as CommandId -// in the PostgreSQL sources. -type CID pguint32 - -// Set converts from src to dst. Note that as CID is not a general -// number type Set does not do automatic type conversion as other number -// types do. -func (dst *CID) Set(src interface{}) error { - return (*pguint32)(dst).Set(src) -} - -func (dst CID) Get() interface{} { - return (pguint32)(dst).Get() -} - -// AssignTo assigns from src to dst. Note that as CID is not a general number -// type AssignTo does not do automatic type conversion as other number types do. -func (src *CID) AssignTo(dst interface{}) error { - return (*pguint32)(src).AssignTo(dst) -} - -func (dst *CID) DecodeText(ci *ConnInfo, src []byte) error { - return (*pguint32)(dst).DecodeText(ci, src) -} - -func (dst *CID) DecodeBinary(ci *ConnInfo, src []byte) error { - return (*pguint32)(dst).DecodeBinary(ci, src) -} - -func (src CID) EncodeText(ci *ConnInfo, buf []byte) ([]byte, error) { - return (pguint32)(src).EncodeText(ci, buf) -} - -func (src CID) EncodeBinary(ci *ConnInfo, buf []byte) ([]byte, error) { - return (pguint32)(src).EncodeBinary(ci, buf) -} - -// Scan implements the database/sql Scanner interface. 
-func (dst *CID) Scan(src interface{}) error { - return (*pguint32)(dst).Scan(src) -} - -// Value implements the database/sql/driver Valuer interface. -func (src CID) Value() (driver.Value, error) { - return (pguint32)(src).Value() -} diff --git a/vendor/github.com/jackc/pgtype/cidr.go b/vendor/github.com/jackc/pgtype/cidr.go deleted file mode 100644 index 7c562cf2..00000000 --- a/vendor/github.com/jackc/pgtype/cidr.go +++ /dev/null @@ -1,43 +0,0 @@ -package pgtype - -import "database/sql/driver" - -type CIDR Inet - -func (dst *CIDR) Set(src interface{}) error { - return (*Inet)(dst).Set(src) -} - -func (dst CIDR) Get() interface{} { - return (Inet)(dst).Get() -} - -func (src *CIDR) AssignTo(dst interface{}) error { - return (*Inet)(src).AssignTo(dst) -} - -func (dst *CIDR) DecodeText(ci *ConnInfo, src []byte) error { - return (*Inet)(dst).DecodeText(ci, src) -} - -func (dst *CIDR) DecodeBinary(ci *ConnInfo, src []byte) error { - return (*Inet)(dst).DecodeBinary(ci, src) -} - -func (src CIDR) EncodeText(ci *ConnInfo, buf []byte) ([]byte, error) { - return (Inet)(src).EncodeText(ci, buf) -} - -func (src CIDR) EncodeBinary(ci *ConnInfo, buf []byte) ([]byte, error) { - return (Inet)(src).EncodeBinary(ci, buf) -} - -// Scan implements the database/sql Scanner interface. -func (dst *CIDR) Scan(src interface{}) error { - return (*Inet)(dst).Scan(src) -} - -// Value implements the database/sql/driver Valuer interface. -func (src CIDR) Value() (driver.Value, error) { - return (Inet)(src).Value() -} diff --git a/vendor/github.com/jackc/pgtype/cidr_array.go b/vendor/github.com/jackc/pgtype/cidr_array.go deleted file mode 100644 index 783c599c..00000000 --- a/vendor/github.com/jackc/pgtype/cidr_array.go +++ /dev/null @@ -1,546 +0,0 @@ -// Code generated by erb. DO NOT EDIT. 
- -package pgtype - -import ( - "database/sql/driver" - "encoding/binary" - "fmt" - "net" - "reflect" - - "github.com/jackc/pgio" -) - -type CIDRArray struct { - Elements []CIDR - Dimensions []ArrayDimension - Status Status -} - -func (dst *CIDRArray) Set(src interface{}) error { - // untyped nil and typed nil interfaces are different - if src == nil { - *dst = CIDRArray{Status: Null} - return nil - } - - if value, ok := src.(interface{ Get() interface{} }); ok { - value2 := value.Get() - if value2 != value { - return dst.Set(value2) - } - } - - // Attempt to match to select common types: - switch value := src.(type) { - - case []*net.IPNet: - if value == nil { - *dst = CIDRArray{Status: Null} - } else if len(value) == 0 { - *dst = CIDRArray{Status: Present} - } else { - elements := make([]CIDR, len(value)) - for i := range value { - if err := elements[i].Set(value[i]); err != nil { - return err - } - } - *dst = CIDRArray{ - Elements: elements, - Dimensions: []ArrayDimension{{Length: int32(len(elements)), LowerBound: 1}}, - Status: Present, - } - } - - case []net.IP: - if value == nil { - *dst = CIDRArray{Status: Null} - } else if len(value) == 0 { - *dst = CIDRArray{Status: Present} - } else { - elements := make([]CIDR, len(value)) - for i := range value { - if err := elements[i].Set(value[i]); err != nil { - return err - } - } - *dst = CIDRArray{ - Elements: elements, - Dimensions: []ArrayDimension{{Length: int32(len(elements)), LowerBound: 1}}, - Status: Present, - } - } - - case []*net.IP: - if value == nil { - *dst = CIDRArray{Status: Null} - } else if len(value) == 0 { - *dst = CIDRArray{Status: Present} - } else { - elements := make([]CIDR, len(value)) - for i := range value { - if err := elements[i].Set(value[i]); err != nil { - return err - } - } - *dst = CIDRArray{ - Elements: elements, - Dimensions: []ArrayDimension{{Length: int32(len(elements)), LowerBound: 1}}, - Status: Present, - } - } - - case []CIDR: - if value == nil { - *dst = CIDRArray{Status: 
Null} - } else if len(value) == 0 { - *dst = CIDRArray{Status: Present} - } else { - *dst = CIDRArray{ - Elements: value, - Dimensions: []ArrayDimension{{Length: int32(len(value)), LowerBound: 1}}, - Status: Present, - } - } - default: - // Fallback to reflection if an optimised match was not found. - // The reflection is necessary for arrays and multidimensional slices, - // but it comes with a 20-50% performance penalty for large arrays/slices - reflectedValue := reflect.ValueOf(src) - if !reflectedValue.IsValid() || reflectedValue.IsZero() { - *dst = CIDRArray{Status: Null} - return nil - } - - dimensions, elementsLength, ok := findDimensionsFromValue(reflectedValue, nil, 0) - if !ok { - return fmt.Errorf("cannot find dimensions of %v for CIDRArray", src) - } - if elementsLength == 0 { - *dst = CIDRArray{Status: Present} - return nil - } - if len(dimensions) == 0 { - if originalSrc, ok := underlyingSliceType(src); ok { - return dst.Set(originalSrc) - } - return fmt.Errorf("cannot convert %v to CIDRArray", src) - } - - *dst = CIDRArray{ - Elements: make([]CIDR, elementsLength), - Dimensions: dimensions, - Status: Present, - } - elementCount, err := dst.setRecursive(reflectedValue, 0, 0) - if err != nil { - // Maybe the target was one dimension too far, try again: - if len(dst.Dimensions) > 1 { - dst.Dimensions = dst.Dimensions[:len(dst.Dimensions)-1] - elementsLength = 0 - for _, dim := range dst.Dimensions { - if elementsLength == 0 { - elementsLength = int(dim.Length) - } else { - elementsLength *= int(dim.Length) - } - } - dst.Elements = make([]CIDR, elementsLength) - elementCount, err = dst.setRecursive(reflectedValue, 0, 0) - if err != nil { - return err - } - } else { - return err - } - } - if elementCount != len(dst.Elements) { - return fmt.Errorf("cannot convert %v to CIDRArray, expected %d dst.Elements, but got %d instead", src, len(dst.Elements), elementCount) - } - } - - return nil -} - -func (dst *CIDRArray) setRecursive(value reflect.Value, index, 
dimension int) (int, error) { - switch value.Kind() { - case reflect.Array: - fallthrough - case reflect.Slice: - if len(dst.Dimensions) == dimension { - break - } - - valueLen := value.Len() - if int32(valueLen) != dst.Dimensions[dimension].Length { - return 0, fmt.Errorf("multidimensional arrays must have array expressions with matching dimensions") - } - for i := 0; i < valueLen; i++ { - var err error - index, err = dst.setRecursive(value.Index(i), index, dimension+1) - if err != nil { - return 0, err - } - } - - return index, nil - } - if !value.CanInterface() { - return 0, fmt.Errorf("cannot convert all values to CIDRArray") - } - if err := dst.Elements[index].Set(value.Interface()); err != nil { - return 0, fmt.Errorf("%v in CIDRArray", err) - } - index++ - - return index, nil -} - -func (dst CIDRArray) Get() interface{} { - switch dst.Status { - case Present: - return dst - case Null: - return nil - default: - return dst.Status - } -} - -func (src *CIDRArray) AssignTo(dst interface{}) error { - switch src.Status { - case Present: - if len(src.Dimensions) <= 1 { - // Attempt to match to select common types: - switch v := dst.(type) { - - case *[]*net.IPNet: - *v = make([]*net.IPNet, len(src.Elements)) - for i := range src.Elements { - if err := src.Elements[i].AssignTo(&((*v)[i])); err != nil { - return err - } - } - return nil - - case *[]net.IP: - *v = make([]net.IP, len(src.Elements)) - for i := range src.Elements { - if err := src.Elements[i].AssignTo(&((*v)[i])); err != nil { - return err - } - } - return nil - - case *[]*net.IP: - *v = make([]*net.IP, len(src.Elements)) - for i := range src.Elements { - if err := src.Elements[i].AssignTo(&((*v)[i])); err != nil { - return err - } - } - return nil - - } - } - - // Try to convert to something AssignTo can use directly. - if nextDst, retry := GetAssignToDstType(dst); retry { - return src.AssignTo(nextDst) - } - - // Fallback to reflection if an optimised match was not found. 
- // The reflection is necessary for arrays and multidimensional slices, - // but it comes with a 20-50% performance penalty for large arrays/slices - value := reflect.ValueOf(dst) - if value.Kind() == reflect.Ptr { - value = value.Elem() - } - - switch value.Kind() { - case reflect.Array, reflect.Slice: - default: - return fmt.Errorf("cannot assign %T to %T", src, dst) - } - - if len(src.Elements) == 0 { - if value.Kind() == reflect.Slice { - value.Set(reflect.MakeSlice(value.Type(), 0, 0)) - return nil - } - } - - elementCount, err := src.assignToRecursive(value, 0, 0) - if err != nil { - return err - } - if elementCount != len(src.Elements) { - return fmt.Errorf("cannot assign %v, needed to assign %d elements, but only assigned %d", dst, len(src.Elements), elementCount) - } - - return nil - case Null: - return NullAssignTo(dst) - } - - return fmt.Errorf("cannot decode %#v into %T", src, dst) -} - -func (src *CIDRArray) assignToRecursive(value reflect.Value, index, dimension int) (int, error) { - switch kind := value.Kind(); kind { - case reflect.Array: - fallthrough - case reflect.Slice: - if len(src.Dimensions) == dimension { - break - } - - length := int(src.Dimensions[dimension].Length) - if reflect.Array == kind { - typ := value.Type() - if typ.Len() != length { - return 0, fmt.Errorf("expected size %d array, but %s has size %d array", length, typ, typ.Len()) - } - value.Set(reflect.New(typ).Elem()) - } else { - value.Set(reflect.MakeSlice(value.Type(), length, length)) - } - - var err error - for i := 0; i < length; i++ { - index, err = src.assignToRecursive(value.Index(i), index, dimension+1) - if err != nil { - return 0, err - } - } - - return index, nil - } - if len(src.Dimensions) != dimension { - return 0, fmt.Errorf("incorrect dimensions, expected %d, found %d", len(src.Dimensions), dimension) - } - if !value.CanAddr() { - return 0, fmt.Errorf("cannot assign all values from CIDRArray") - } - addr := value.Addr() - if !addr.CanInterface() { - return 0, 
fmt.Errorf("cannot assign all values from CIDRArray") - } - if err := src.Elements[index].AssignTo(addr.Interface()); err != nil { - return 0, err - } - index++ - return index, nil -} - -func (dst *CIDRArray) DecodeText(ci *ConnInfo, src []byte) error { - if src == nil { - *dst = CIDRArray{Status: Null} - return nil - } - - uta, err := ParseUntypedTextArray(string(src)) - if err != nil { - return err - } - - var elements []CIDR - - if len(uta.Elements) > 0 { - elements = make([]CIDR, len(uta.Elements)) - - for i, s := range uta.Elements { - var elem CIDR - var elemSrc []byte - if s != "NULL" || uta.Quoted[i] { - elemSrc = []byte(s) - } - err = elem.DecodeText(ci, elemSrc) - if err != nil { - return err - } - - elements[i] = elem - } - } - - *dst = CIDRArray{Elements: elements, Dimensions: uta.Dimensions, Status: Present} - - return nil -} - -func (dst *CIDRArray) DecodeBinary(ci *ConnInfo, src []byte) error { - if src == nil { - *dst = CIDRArray{Status: Null} - return nil - } - - var arrayHeader ArrayHeader - rp, err := arrayHeader.DecodeBinary(ci, src) - if err != nil { - return err - } - - if len(arrayHeader.Dimensions) == 0 { - *dst = CIDRArray{Dimensions: arrayHeader.Dimensions, Status: Present} - return nil - } - - elementCount := arrayHeader.Dimensions[0].Length - for _, d := range arrayHeader.Dimensions[1:] { - elementCount *= d.Length - } - - elements := make([]CIDR, elementCount) - - for i := range elements { - elemLen := int(int32(binary.BigEndian.Uint32(src[rp:]))) - rp += 4 - var elemSrc []byte - if elemLen >= 0 { - elemSrc = src[rp : rp+elemLen] - rp += elemLen - } - err = elements[i].DecodeBinary(ci, elemSrc) - if err != nil { - return err - } - } - - *dst = CIDRArray{Elements: elements, Dimensions: arrayHeader.Dimensions, Status: Present} - return nil -} - -func (src CIDRArray) EncodeText(ci *ConnInfo, buf []byte) ([]byte, error) { - switch src.Status { - case Null: - return nil, nil - case Undefined: - return nil, errUndefined - } - - if 
len(src.Dimensions) == 0 { - return append(buf, '{', '}'), nil - } - - buf = EncodeTextArrayDimensions(buf, src.Dimensions) - - // dimElemCounts is the multiples of elements that each array lies on. For - // example, a single dimension array of length 4 would have a dimElemCounts of - // [4]. A multi-dimensional array of lengths [3,5,2] would have a - // dimElemCounts of [30,10,2]. This is used to simplify when to render a '{' - // or '}'. - dimElemCounts := make([]int, len(src.Dimensions)) - dimElemCounts[len(src.Dimensions)-1] = int(src.Dimensions[len(src.Dimensions)-1].Length) - for i := len(src.Dimensions) - 2; i > -1; i-- { - dimElemCounts[i] = int(src.Dimensions[i].Length) * dimElemCounts[i+1] - } - - inElemBuf := make([]byte, 0, 32) - for i, elem := range src.Elements { - if i > 0 { - buf = append(buf, ',') - } - - for _, dec := range dimElemCounts { - if i%dec == 0 { - buf = append(buf, '{') - } - } - - elemBuf, err := elem.EncodeText(ci, inElemBuf) - if err != nil { - return nil, err - } - if elemBuf == nil { - buf = append(buf, `NULL`...) - } else { - buf = append(buf, QuoteArrayElementIfNeeded(string(elemBuf))...) 
- } - - for _, dec := range dimElemCounts { - if (i+1)%dec == 0 { - buf = append(buf, '}') - } - } - } - - return buf, nil -} - -func (src CIDRArray) EncodeBinary(ci *ConnInfo, buf []byte) ([]byte, error) { - switch src.Status { - case Null: - return nil, nil - case Undefined: - return nil, errUndefined - } - - arrayHeader := ArrayHeader{ - Dimensions: src.Dimensions, - } - - if dt, ok := ci.DataTypeForName("cidr"); ok { - arrayHeader.ElementOID = int32(dt.OID) - } else { - return nil, fmt.Errorf("unable to find oid for type name %v", "cidr") - } - - for i := range src.Elements { - if src.Elements[i].Status == Null { - arrayHeader.ContainsNull = true - break - } - } - - buf = arrayHeader.EncodeBinary(ci, buf) - - for i := range src.Elements { - sp := len(buf) - buf = pgio.AppendInt32(buf, -1) - - elemBuf, err := src.Elements[i].EncodeBinary(ci, buf) - if err != nil { - return nil, err - } - if elemBuf != nil { - buf = elemBuf - pgio.SetInt32(buf[sp:], int32(len(buf[sp:])-4)) - } - } - - return buf, nil -} - -// Scan implements the database/sql Scanner interface. -func (dst *CIDRArray) Scan(src interface{}) error { - if src == nil { - return dst.DecodeText(nil, nil) - } - - switch src := src.(type) { - case string: - return dst.DecodeText(nil, []byte(src)) - case []byte: - srcCopy := make([]byte, len(src)) - copy(srcCopy, src) - return dst.DecodeText(nil, srcCopy) - } - - return fmt.Errorf("cannot scan %T", src) -} - -// Value implements the database/sql/driver Valuer interface. 
-func (src CIDRArray) Value() (driver.Value, error) { - buf, err := src.EncodeText(nil, nil) - if err != nil { - return nil, err - } - if buf == nil { - return nil, nil - } - - return string(buf), nil -} diff --git a/vendor/github.com/jackc/pgtype/circle.go b/vendor/github.com/jackc/pgtype/circle.go deleted file mode 100644 index 4279650e..00000000 --- a/vendor/github.com/jackc/pgtype/circle.go +++ /dev/null @@ -1,150 +0,0 @@ -package pgtype - -import ( - "database/sql/driver" - "encoding/binary" - "fmt" - "math" - "strconv" - "strings" - - "github.com/jackc/pgio" -) - -type Circle struct { - P Vec2 - R float64 - Status Status -} - -func (dst *Circle) Set(src interface{}) error { - return fmt.Errorf("cannot convert %v to Circle", src) -} - -func (dst Circle) Get() interface{} { - switch dst.Status { - case Present: - return dst - case Null: - return nil - default: - return dst.Status - } -} - -func (src *Circle) AssignTo(dst interface{}) error { - return fmt.Errorf("cannot assign %v to %T", src, dst) -} - -func (dst *Circle) DecodeText(ci *ConnInfo, src []byte) error { - if src == nil { - *dst = Circle{Status: Null} - return nil - } - - if len(src) < 9 { - return fmt.Errorf("invalid length for Circle: %v", len(src)) - } - - str := string(src[2:]) - end := strings.IndexByte(str, ',') - x, err := strconv.ParseFloat(str[:end], 64) - if err != nil { - return err - } - - str = str[end+1:] - end = strings.IndexByte(str, ')') - - y, err := strconv.ParseFloat(str[:end], 64) - if err != nil { - return err - } - - str = str[end+2 : len(str)-1] - - r, err := strconv.ParseFloat(str, 64) - if err != nil { - return err - } - - *dst = Circle{P: Vec2{x, y}, R: r, Status: Present} - return nil -} - -func (dst *Circle) DecodeBinary(ci *ConnInfo, src []byte) error { - if src == nil { - *dst = Circle{Status: Null} - return nil - } - - if len(src) != 24 { - return fmt.Errorf("invalid length for Circle: %v", len(src)) - } - - x := binary.BigEndian.Uint64(src) - y := 
binary.BigEndian.Uint64(src[8:]) - r := binary.BigEndian.Uint64(src[16:]) - - *dst = Circle{ - P: Vec2{math.Float64frombits(x), math.Float64frombits(y)}, - R: math.Float64frombits(r), - Status: Present, - } - return nil -} - -func (src Circle) EncodeText(ci *ConnInfo, buf []byte) ([]byte, error) { - switch src.Status { - case Null: - return nil, nil - case Undefined: - return nil, errUndefined - } - - buf = append(buf, fmt.Sprintf(`<(%s,%s),%s>`, - strconv.FormatFloat(src.P.X, 'f', -1, 64), - strconv.FormatFloat(src.P.Y, 'f', -1, 64), - strconv.FormatFloat(src.R, 'f', -1, 64), - )...) - - return buf, nil -} - -func (src Circle) EncodeBinary(ci *ConnInfo, buf []byte) ([]byte, error) { - switch src.Status { - case Null: - return nil, nil - case Undefined: - return nil, errUndefined - } - - buf = pgio.AppendUint64(buf, math.Float64bits(src.P.X)) - buf = pgio.AppendUint64(buf, math.Float64bits(src.P.Y)) - buf = pgio.AppendUint64(buf, math.Float64bits(src.R)) - return buf, nil -} - -// Scan implements the database/sql Scanner interface. -func (dst *Circle) Scan(src interface{}) error { - if src == nil { - *dst = Circle{Status: Null} - return nil - } - - switch src := src.(type) { - case string: - return dst.DecodeText(nil, []byte(src)) - case []byte: - srcCopy := make([]byte, len(src)) - copy(srcCopy, src) - return dst.DecodeText(nil, srcCopy) - } - - return fmt.Errorf("cannot scan %T", src) -} - -// Value implements the database/sql/driver Valuer interface. -func (src Circle) Value() (driver.Value, error) { - return EncodeValueText(src) -} diff --git a/vendor/github.com/jackc/pgtype/composite_fields.go b/vendor/github.com/jackc/pgtype/composite_fields.go deleted file mode 100644 index b6d09fcf..00000000 --- a/vendor/github.com/jackc/pgtype/composite_fields.go +++ /dev/null @@ -1,107 +0,0 @@ -package pgtype - -import "fmt" - -// CompositeFields scans the fields of a composite type into the elements of the CompositeFields value. 
To scan a -// nullable value use a *CompositeFields. It will be set to nil in case of null. -// -// CompositeFields implements EncodeBinary and EncodeText. However, functionality is limited due to CompositeFields not -// knowing the PostgreSQL schema of the composite type. Prefer using a registered CompositeType. -type CompositeFields []interface{} - -func (cf CompositeFields) DecodeBinary(ci *ConnInfo, src []byte) error { - if len(cf) == 0 { - return fmt.Errorf("cannot decode into empty CompositeFields") - } - - if src == nil { - return fmt.Errorf("cannot decode unexpected null into CompositeFields") - } - - scanner := NewCompositeBinaryScanner(ci, src) - - for _, f := range cf { - scanner.ScanValue(f) - } - - if scanner.Err() != nil { - return scanner.Err() - } - - return nil -} - -func (cf CompositeFields) DecodeText(ci *ConnInfo, src []byte) error { - if len(cf) == 0 { - return fmt.Errorf("cannot decode into empty CompositeFields") - } - - if src == nil { - return fmt.Errorf("cannot decode unexpected null into CompositeFields") - } - - scanner := NewCompositeTextScanner(ci, src) - - for _, f := range cf { - scanner.ScanValue(f) - } - - if scanner.Err() != nil { - return scanner.Err() - } - - return nil -} - -// EncodeText encodes composite fields into the text format. Prefer registering a CompositeType to using -// CompositeFields to encode directly. -func (cf CompositeFields) EncodeText(ci *ConnInfo, buf []byte) ([]byte, error) { - b := NewCompositeTextBuilder(ci, buf) - - for _, f := range cf { - if textEncoder, ok := f.(TextEncoder); ok { - b.AppendEncoder(textEncoder) - } else { - b.AppendValue(f) - } - } - - return b.Finish() -} - -// EncodeBinary encodes composite fields into the binary format. Unlike CompositeType the schema of the destination is -// unknown. Prefer registering a CompositeType to using CompositeFields to encode directly. 
Because the binary -// composite format requires the OID of each field to be specified the only types that will work are those known to -// ConnInfo. -// -// In particular: -// -// * Nil cannot be used because there is no way to determine what type it. -// * Integer types must be exact matches. e.g. A Go int32 into a PostgreSQL bigint will fail. -// * No dereferencing will be done. e.g. *Text must be used instead of Text. -func (cf CompositeFields) EncodeBinary(ci *ConnInfo, buf []byte) ([]byte, error) { - b := NewCompositeBinaryBuilder(ci, buf) - - for _, f := range cf { - dt, ok := ci.DataTypeForValue(f) - if !ok { - return nil, fmt.Errorf("Unknown OID for %#v", f) - } - - if binaryEncoder, ok := f.(BinaryEncoder); ok { - b.AppendEncoder(dt.OID, binaryEncoder) - } else { - err := dt.Value.Set(f) - if err != nil { - return nil, err - } - if binaryEncoder, ok := dt.Value.(BinaryEncoder); ok { - b.AppendEncoder(dt.OID, binaryEncoder) - } else { - return nil, fmt.Errorf("Cannot encode binary format for %v", f) - } - } - } - - return b.Finish() -} diff --git a/vendor/github.com/jackc/pgtype/composite_type.go b/vendor/github.com/jackc/pgtype/composite_type.go deleted file mode 100644 index 32e0aa26..00000000 --- a/vendor/github.com/jackc/pgtype/composite_type.go +++ /dev/null @@ -1,682 +0,0 @@ -package pgtype - -import ( - "encoding/binary" - "errors" - "fmt" - "reflect" - "strings" - - "github.com/jackc/pgio" -) - -type CompositeTypeField struct { - Name string - OID uint32 -} - -type CompositeType struct { - status Status - - typeName string - - fields []CompositeTypeField - valueTranscoders []ValueTranscoder -} - -// NewCompositeType creates a CompositeType from fields and ci. ci is used to find the ValueTranscoders used -// for fields. All field OIDs must be previously registered in ci. 
-func NewCompositeType(typeName string, fields []CompositeTypeField, ci *ConnInfo) (*CompositeType, error) { - valueTranscoders := make([]ValueTranscoder, len(fields)) - - for i := range fields { - dt, ok := ci.DataTypeForOID(fields[i].OID) - if !ok { - return nil, fmt.Errorf("no data type registered for oid: %d", fields[i].OID) - } - - value := NewValue(dt.Value) - valueTranscoder, ok := value.(ValueTranscoder) - if !ok { - return nil, fmt.Errorf("data type for oid does not implement ValueTranscoder: %d", fields[i].OID) - } - - valueTranscoders[i] = valueTranscoder - } - - return &CompositeType{typeName: typeName, fields: fields, valueTranscoders: valueTranscoders}, nil -} - -// NewCompositeTypeValues creates a CompositeType from fields and values. fields and values must have the same length. -// Prefer NewCompositeType unless overriding the transcoding of fields is required. -func NewCompositeTypeValues(typeName string, fields []CompositeTypeField, values []ValueTranscoder) (*CompositeType, error) { - if len(fields) != len(values) { - return nil, errors.New("fields and valueTranscoders must have same length") - } - - return &CompositeType{typeName: typeName, fields: fields, valueTranscoders: values}, nil -} - -func (src CompositeType) Get() interface{} { - switch src.status { - case Present: - results := make(map[string]interface{}, len(src.valueTranscoders)) - for i := range src.valueTranscoders { - results[src.fields[i].Name] = src.valueTranscoders[i].Get() - } - return results - case Null: - return nil - default: - return src.status - } -} - -func (ct *CompositeType) NewTypeValue() Value { - a := &CompositeType{ - typeName: ct.typeName, - fields: ct.fields, - valueTranscoders: make([]ValueTranscoder, len(ct.valueTranscoders)), - } - - for i := range ct.valueTranscoders { - a.valueTranscoders[i] = NewValue(ct.valueTranscoders[i]).(ValueTranscoder) - } - - return a -} - -func (ct *CompositeType) TypeName() string { - return ct.typeName -} - -func (ct 
*CompositeType) Fields() []CompositeTypeField { - return ct.fields -} - -func (dst *CompositeType) Set(src interface{}) error { - if src == nil { - dst.status = Null - return nil - } - - switch value := src.(type) { - case []interface{}: - if len(value) != len(dst.valueTranscoders) { - return fmt.Errorf("Number of fields don't match. CompositeType has %d fields", len(dst.valueTranscoders)) - } - for i, v := range value { - if err := dst.valueTranscoders[i].Set(v); err != nil { - return err - } - } - dst.status = Present - case *[]interface{}: - if value == nil { - dst.status = Null - return nil - } - return dst.Set(*value) - default: - return fmt.Errorf("Can not convert %v to Composite", src) - } - - return nil -} - -// AssignTo should never be called on composite value directly -func (src CompositeType) AssignTo(dst interface{}) error { - switch src.status { - case Present: - switch v := dst.(type) { - case []interface{}: - if len(v) != len(src.valueTranscoders) { - return fmt.Errorf("Number of fields don't match. CompositeType has %d fields", len(src.valueTranscoders)) - } - for i := range src.valueTranscoders { - if v[i] == nil { - continue - } - - err := assignToOrSet(src.valueTranscoders[i], v[i]) - if err != nil { - return fmt.Errorf("unable to assign to dst[%d]: %v", i, err) - } - } - return nil - case *[]interface{}: - return src.AssignTo(*v) - default: - if isPtrStruct, err := src.assignToPtrStruct(dst); isPtrStruct { - return err - } - - if nextDst, retry := GetAssignToDstType(dst); retry { - return src.AssignTo(nextDst) - } - return fmt.Errorf("unable to assign to %T", dst) - } - case Null: - return NullAssignTo(dst) - } - return fmt.Errorf("cannot decode %#v into %T", src, dst) -} - -func assignToOrSet(src Value, dst interface{}) error { - assignToErr := src.AssignTo(dst) - if assignToErr != nil { - // Try to use get / set instead -- this avoids every type having to be able to AssignTo type of self. 
- setSucceeded := false - if setter, ok := dst.(Value); ok { - err := setter.Set(src.Get()) - setSucceeded = err == nil - } - if !setSucceeded { - return assignToErr - } - } - - return nil -} - -func (src CompositeType) assignToPtrStruct(dst interface{}) (bool, error) { - dstValue := reflect.ValueOf(dst) - if dstValue.Kind() != reflect.Ptr { - return false, nil - } - - if dstValue.IsNil() { - return false, nil - } - - dstElemValue := dstValue.Elem() - dstElemType := dstElemValue.Type() - - if dstElemType.Kind() != reflect.Struct { - return false, nil - } - - exportedFields := make([]int, 0, dstElemType.NumField()) - for i := 0; i < dstElemType.NumField(); i++ { - sf := dstElemType.Field(i) - if sf.PkgPath == "" { - exportedFields = append(exportedFields, i) - } - } - - if len(exportedFields) != len(src.valueTranscoders) { - return false, nil - } - - for i := range exportedFields { - err := assignToOrSet(src.valueTranscoders[i], dstElemValue.Field(exportedFields[i]).Addr().Interface()) - if err != nil { - return true, fmt.Errorf("unable to assign to field %s: %v", dstElemType.Field(exportedFields[i]).Name, err) - } - } - - return true, nil -} - -func (src CompositeType) EncodeBinary(ci *ConnInfo, buf []byte) (newBuf []byte, err error) { - switch src.status { - case Null: - return nil, nil - case Undefined: - return nil, errUndefined - } - - b := NewCompositeBinaryBuilder(ci, buf) - for i := range src.valueTranscoders { - b.AppendEncoder(src.fields[i].OID, src.valueTranscoders[i]) - } - - return b.Finish() -} - -// DecodeBinary implements BinaryDecoder interface. 
-// Opposite to Record, fields in a composite act as a "schema" -// and decoding fails if SQL value can't be assigned due to -// type mismatch -func (dst *CompositeType) DecodeBinary(ci *ConnInfo, buf []byte) error { - if buf == nil { - dst.status = Null - return nil - } - - scanner := NewCompositeBinaryScanner(ci, buf) - - for _, f := range dst.valueTranscoders { - scanner.ScanDecoder(f) - } - - if scanner.Err() != nil { - return scanner.Err() - } - - dst.status = Present - - return nil -} - -func (dst *CompositeType) DecodeText(ci *ConnInfo, buf []byte) error { - if buf == nil { - dst.status = Null - return nil - } - - scanner := NewCompositeTextScanner(ci, buf) - - for _, f := range dst.valueTranscoders { - scanner.ScanDecoder(f) - } - - if scanner.Err() != nil { - return scanner.Err() - } - - dst.status = Present - - return nil -} - -func (src CompositeType) EncodeText(ci *ConnInfo, buf []byte) (newBuf []byte, err error) { - switch src.status { - case Null: - return nil, nil - case Undefined: - return nil, errUndefined - } - - b := NewCompositeTextBuilder(ci, buf) - for _, f := range src.valueTranscoders { - b.AppendEncoder(f) - } - - return b.Finish() -} - -type CompositeBinaryScanner struct { - ci *ConnInfo - rp int - src []byte - - fieldCount int32 - fieldBytes []byte - fieldOID uint32 - err error -} - -// NewCompositeBinaryScanner a scanner over a binary encoded composite balue. -func NewCompositeBinaryScanner(ci *ConnInfo, src []byte) *CompositeBinaryScanner { - rp := 0 - if len(src[rp:]) < 4 { - return &CompositeBinaryScanner{err: fmt.Errorf("Record incomplete %v", src)} - } - - fieldCount := int32(binary.BigEndian.Uint32(src[rp:])) - rp += 4 - - return &CompositeBinaryScanner{ - ci: ci, - rp: rp, - src: src, - fieldCount: fieldCount, - } -} - -// ScanDecoder calls Next and decodes the result with d. 
-func (cfs *CompositeBinaryScanner) ScanDecoder(d BinaryDecoder) { - if cfs.err != nil { - return - } - - if cfs.Next() { - cfs.err = d.DecodeBinary(cfs.ci, cfs.fieldBytes) - } else { - cfs.err = errors.New("read past end of composite") - } -} - -// ScanDecoder calls Next and scans the result into d. -func (cfs *CompositeBinaryScanner) ScanValue(d interface{}) { - if cfs.err != nil { - return - } - - if cfs.Next() { - cfs.err = cfs.ci.Scan(cfs.OID(), BinaryFormatCode, cfs.Bytes(), d) - } else { - cfs.err = errors.New("read past end of composite") - } -} - -// Next advances the scanner to the next field. It returns false after the last field is read or an error occurs. After -// Next returns false, the Err method can be called to check if any errors occurred. -func (cfs *CompositeBinaryScanner) Next() bool { - if cfs.err != nil { - return false - } - - if cfs.rp == len(cfs.src) { - return false - } - - if len(cfs.src[cfs.rp:]) < 8 { - cfs.err = fmt.Errorf("Record incomplete %v", cfs.src) - return false - } - cfs.fieldOID = binary.BigEndian.Uint32(cfs.src[cfs.rp:]) - cfs.rp += 4 - - fieldLen := int(int32(binary.BigEndian.Uint32(cfs.src[cfs.rp:]))) - cfs.rp += 4 - - if fieldLen >= 0 { - if len(cfs.src[cfs.rp:]) < fieldLen { - cfs.err = fmt.Errorf("Record incomplete rp=%d src=%v", cfs.rp, cfs.src) - return false - } - cfs.fieldBytes = cfs.src[cfs.rp : cfs.rp+fieldLen] - cfs.rp += fieldLen - } else { - cfs.fieldBytes = nil - } - - return true -} - -func (cfs *CompositeBinaryScanner) FieldCount() int { - return int(cfs.fieldCount) -} - -// Bytes returns the bytes of the field most recently read by Scan(). -func (cfs *CompositeBinaryScanner) Bytes() []byte { - return cfs.fieldBytes -} - -// OID returns the OID of the field most recently read by Scan(). -func (cfs *CompositeBinaryScanner) OID() uint32 { - return cfs.fieldOID -} - -// Err returns any error encountered by the scanner. 
-func (cfs *CompositeBinaryScanner) Err() error { - return cfs.err -} - -type CompositeTextScanner struct { - ci *ConnInfo - rp int - src []byte - - fieldBytes []byte - err error -} - -// NewCompositeTextScanner a scanner over a text encoded composite value. -func NewCompositeTextScanner(ci *ConnInfo, src []byte) *CompositeTextScanner { - if len(src) < 2 { - return &CompositeTextScanner{err: fmt.Errorf("Record incomplete %v", src)} - } - - if src[0] != '(' { - return &CompositeTextScanner{err: fmt.Errorf("composite text format must start with '('")} - } - - if src[len(src)-1] != ')' { - return &CompositeTextScanner{err: fmt.Errorf("composite text format must end with ')'")} - } - - return &CompositeTextScanner{ - ci: ci, - rp: 1, - src: src, - } -} - -// ScanDecoder calls Next and decodes the result with d. -func (cfs *CompositeTextScanner) ScanDecoder(d TextDecoder) { - if cfs.err != nil { - return - } - - if cfs.Next() { - cfs.err = d.DecodeText(cfs.ci, cfs.fieldBytes) - } else { - cfs.err = errors.New("read past end of composite") - } -} - -// ScanDecoder calls Next and scans the result into d. -func (cfs *CompositeTextScanner) ScanValue(d interface{}) { - if cfs.err != nil { - return - } - - if cfs.Next() { - cfs.err = cfs.ci.Scan(0, TextFormatCode, cfs.Bytes(), d) - } else { - cfs.err = errors.New("read past end of composite") - } -} - -// Next advances the scanner to the next field. It returns false after the last field is read or an error occurs. After -// Next returns false, the Err method can be called to check if any errors occurred. 
-func (cfs *CompositeTextScanner) Next() bool { - if cfs.err != nil { - return false - } - - if cfs.rp == len(cfs.src) { - return false - } - - switch cfs.src[cfs.rp] { - case ',', ')': // null - cfs.rp++ - cfs.fieldBytes = nil - return true - case '"': // quoted value - cfs.rp++ - cfs.fieldBytes = make([]byte, 0, 16) - for { - ch := cfs.src[cfs.rp] - - if ch == '"' { - cfs.rp++ - if cfs.src[cfs.rp] == '"' { - cfs.fieldBytes = append(cfs.fieldBytes, '"') - cfs.rp++ - } else { - break - } - } else if ch == '\\' { - cfs.rp++ - cfs.fieldBytes = append(cfs.fieldBytes, cfs.src[cfs.rp]) - cfs.rp++ - } else { - cfs.fieldBytes = append(cfs.fieldBytes, ch) - cfs.rp++ - } - } - cfs.rp++ - return true - default: // unquoted value - start := cfs.rp - for { - ch := cfs.src[cfs.rp] - if ch == ',' || ch == ')' { - break - } - cfs.rp++ - } - cfs.fieldBytes = cfs.src[start:cfs.rp] - cfs.rp++ - return true - } -} - -// Bytes returns the bytes of the field most recently read by Scan(). -func (cfs *CompositeTextScanner) Bytes() []byte { - return cfs.fieldBytes -} - -// Err returns any error encountered by the scanner. 
-func (cfs *CompositeTextScanner) Err() error { - return cfs.err -} - -type CompositeBinaryBuilder struct { - ci *ConnInfo - buf []byte - startIdx int - fieldCount uint32 - err error -} - -func NewCompositeBinaryBuilder(ci *ConnInfo, buf []byte) *CompositeBinaryBuilder { - startIdx := len(buf) - buf = append(buf, 0, 0, 0, 0) // allocate room for number of fields - return &CompositeBinaryBuilder{ci: ci, buf: buf, startIdx: startIdx} -} - -func (b *CompositeBinaryBuilder) AppendValue(oid uint32, field interface{}) { - if b.err != nil { - return - } - - dt, ok := b.ci.DataTypeForOID(oid) - if !ok { - b.err = fmt.Errorf("unknown data type for OID: %d", oid) - return - } - - err := dt.Value.Set(field) - if err != nil { - b.err = err - return - } - - binaryEncoder, ok := dt.Value.(BinaryEncoder) - if !ok { - b.err = fmt.Errorf("unable to encode binary for OID: %d", oid) - return - } - - b.AppendEncoder(oid, binaryEncoder) -} - -func (b *CompositeBinaryBuilder) AppendEncoder(oid uint32, field BinaryEncoder) { - if b.err != nil { - return - } - - b.buf = pgio.AppendUint32(b.buf, oid) - lengthPos := len(b.buf) - b.buf = pgio.AppendInt32(b.buf, -1) - fieldBuf, err := field.EncodeBinary(b.ci, b.buf) - if err != nil { - b.err = err - return - } - if fieldBuf != nil { - binary.BigEndian.PutUint32(fieldBuf[lengthPos:], uint32(len(fieldBuf)-len(b.buf))) - b.buf = fieldBuf - } - - b.fieldCount++ -} - -func (b *CompositeBinaryBuilder) Finish() ([]byte, error) { - if b.err != nil { - return nil, b.err - } - - binary.BigEndian.PutUint32(b.buf[b.startIdx:], b.fieldCount) - return b.buf, nil -} - -type CompositeTextBuilder struct { - ci *ConnInfo - buf []byte - startIdx int - fieldCount uint32 - err error - fieldBuf [32]byte -} - -func NewCompositeTextBuilder(ci *ConnInfo, buf []byte) *CompositeTextBuilder { - buf = append(buf, '(') // allocate room for number of fields - return &CompositeTextBuilder{ci: ci, buf: buf} -} - -func (b *CompositeTextBuilder) AppendValue(field interface{}) 
{ - if b.err != nil { - return - } - - if field == nil { - b.buf = append(b.buf, ',') - return - } - - dt, ok := b.ci.DataTypeForValue(field) - if !ok { - b.err = fmt.Errorf("unknown data type for field: %v", field) - return - } - - err := dt.Value.Set(field) - if err != nil { - b.err = err - return - } - - textEncoder, ok := dt.Value.(TextEncoder) - if !ok { - b.err = fmt.Errorf("unable to encode text for value: %v", field) - return - } - - b.AppendEncoder(textEncoder) -} - -func (b *CompositeTextBuilder) AppendEncoder(field TextEncoder) { - if b.err != nil { - return - } - - fieldBuf, err := field.EncodeText(b.ci, b.fieldBuf[0:0]) - if err != nil { - b.err = err - return - } - if fieldBuf != nil { - b.buf = append(b.buf, quoteCompositeFieldIfNeeded(string(fieldBuf))...) - } - - b.buf = append(b.buf, ',') -} - -func (b *CompositeTextBuilder) Finish() ([]byte, error) { - if b.err != nil { - return nil, b.err - } - - b.buf[len(b.buf)-1] = ')' - return b.buf, nil -} - -var quoteCompositeReplacer = strings.NewReplacer(`\`, `\\`, `"`, `\"`) - -func quoteCompositeField(src string) string { - return `"` + quoteCompositeReplacer.Replace(src) + `"` -} - -func quoteCompositeFieldIfNeeded(src string) string { - if src == "" || src[0] == ' ' || src[len(src)-1] == ' ' || strings.ContainsAny(src, `(),"\`) { - return quoteCompositeField(src) - } - return src -} diff --git a/vendor/github.com/jackc/pgtype/convert.go b/vendor/github.com/jackc/pgtype/convert.go deleted file mode 100644 index 377fe3ea..00000000 --- a/vendor/github.com/jackc/pgtype/convert.go +++ /dev/null @@ -1,476 +0,0 @@ -package pgtype - -import ( - "database/sql" - "fmt" - "math" - "reflect" - "time" -) - -const ( - maxUint = ^uint(0) - maxInt = int(maxUint >> 1) - minInt = -maxInt - 1 -) - -// underlyingNumberType gets the underlying type that can be converted to Int2, Int4, Int8, Float4, or Float8 -func underlyingNumberType(val interface{}) (interface{}, bool) { - refVal := reflect.ValueOf(val) - - switch 
refVal.Kind() { - case reflect.Ptr: - if refVal.IsNil() { - return nil, false - } - convVal := refVal.Elem().Interface() - return convVal, true - case reflect.Int: - convVal := int(refVal.Int()) - return convVal, reflect.TypeOf(convVal) != refVal.Type() - case reflect.Int8: - convVal := int8(refVal.Int()) - return convVal, reflect.TypeOf(convVal) != refVal.Type() - case reflect.Int16: - convVal := int16(refVal.Int()) - return convVal, reflect.TypeOf(convVal) != refVal.Type() - case reflect.Int32: - convVal := int32(refVal.Int()) - return convVal, reflect.TypeOf(convVal) != refVal.Type() - case reflect.Int64: - convVal := int64(refVal.Int()) - return convVal, reflect.TypeOf(convVal) != refVal.Type() - case reflect.Uint: - convVal := uint(refVal.Uint()) - return convVal, reflect.TypeOf(convVal) != refVal.Type() - case reflect.Uint8: - convVal := uint8(refVal.Uint()) - return convVal, reflect.TypeOf(convVal) != refVal.Type() - case reflect.Uint16: - convVal := uint16(refVal.Uint()) - return convVal, reflect.TypeOf(convVal) != refVal.Type() - case reflect.Uint32: - convVal := uint32(refVal.Uint()) - return convVal, reflect.TypeOf(convVal) != refVal.Type() - case reflect.Uint64: - convVal := uint64(refVal.Uint()) - return convVal, reflect.TypeOf(convVal) != refVal.Type() - case reflect.Float32: - convVal := float32(refVal.Float()) - return convVal, reflect.TypeOf(convVal) != refVal.Type() - case reflect.Float64: - convVal := refVal.Float() - return convVal, reflect.TypeOf(convVal) != refVal.Type() - case reflect.String: - convVal := refVal.String() - return convVal, reflect.TypeOf(convVal) != refVal.Type() - } - - return nil, false -} - -// underlyingBoolType gets the underlying type that can be converted to Bool -func underlyingBoolType(val interface{}) (interface{}, bool) { - refVal := reflect.ValueOf(val) - - switch refVal.Kind() { - case reflect.Ptr: - if refVal.IsNil() { - return nil, false - } - convVal := refVal.Elem().Interface() - return convVal, true - case 
reflect.Bool: - convVal := refVal.Bool() - return convVal, reflect.TypeOf(convVal) != refVal.Type() - } - - return nil, false -} - -// underlyingBytesType gets the underlying type that can be converted to []byte -func underlyingBytesType(val interface{}) (interface{}, bool) { - refVal := reflect.ValueOf(val) - - switch refVal.Kind() { - case reflect.Ptr: - if refVal.IsNil() { - return nil, false - } - convVal := refVal.Elem().Interface() - return convVal, true - case reflect.Slice: - if refVal.Type().Elem().Kind() == reflect.Uint8 { - convVal := refVal.Bytes() - return convVal, reflect.TypeOf(convVal) != refVal.Type() - } - } - - return nil, false -} - -// underlyingStringType gets the underlying type that can be converted to String -func underlyingStringType(val interface{}) (interface{}, bool) { - refVal := reflect.ValueOf(val) - - switch refVal.Kind() { - case reflect.Ptr: - if refVal.IsNil() { - return nil, false - } - convVal := refVal.Elem().Interface() - return convVal, true - case reflect.String: - convVal := refVal.String() - return convVal, reflect.TypeOf(convVal) != refVal.Type() - } - - return nil, false -} - -// underlyingPtrType dereferences a pointer -func underlyingPtrType(val interface{}) (interface{}, bool) { - refVal := reflect.ValueOf(val) - - switch refVal.Kind() { - case reflect.Ptr: - if refVal.IsNil() { - return nil, false - } - convVal := refVal.Elem().Interface() - return convVal, true - } - - return nil, false -} - -// underlyingTimeType gets the underlying type that can be converted to time.Time -func underlyingTimeType(val interface{}) (interface{}, bool) { - refVal := reflect.ValueOf(val) - - switch refVal.Kind() { - case reflect.Ptr: - if refVal.IsNil() { - return nil, false - } - convVal := refVal.Elem().Interface() - return convVal, true - } - - timeType := reflect.TypeOf(time.Time{}) - if refVal.Type().ConvertibleTo(timeType) { - return refVal.Convert(timeType).Interface(), true - } - - return nil, false -} - -// underlyingUUIDType 
gets the underlying type that can be converted to [16]byte -func underlyingUUIDType(val interface{}) (interface{}, bool) { - refVal := reflect.ValueOf(val) - - switch refVal.Kind() { - case reflect.Ptr: - if refVal.IsNil() { - return nil, false - } - convVal := refVal.Elem().Interface() - return convVal, true - } - - uuidType := reflect.TypeOf([16]byte{}) - if refVal.Type().ConvertibleTo(uuidType) { - return refVal.Convert(uuidType).Interface(), true - } - - return nil, false -} - -// underlyingSliceType gets the underlying slice type -func underlyingSliceType(val interface{}) (interface{}, bool) { - refVal := reflect.ValueOf(val) - - switch refVal.Kind() { - case reflect.Ptr: - if refVal.IsNil() { - return nil, false - } - convVal := refVal.Elem().Interface() - return convVal, true - case reflect.Slice: - baseSliceType := reflect.SliceOf(refVal.Type().Elem()) - if refVal.Type().ConvertibleTo(baseSliceType) { - convVal := refVal.Convert(baseSliceType) - return convVal.Interface(), reflect.TypeOf(convVal.Interface()) != refVal.Type() - } - } - - return nil, false -} - -func int64AssignTo(srcVal int64, srcStatus Status, dst interface{}) error { - if srcStatus == Present { - switch v := dst.(type) { - case *int: - if srcVal < int64(minInt) { - return fmt.Errorf("%d is less than minimum value for int", srcVal) - } else if srcVal > int64(maxInt) { - return fmt.Errorf("%d is greater than maximum value for int", srcVal) - } - *v = int(srcVal) - case *int8: - if srcVal < math.MinInt8 { - return fmt.Errorf("%d is less than minimum value for int8", srcVal) - } else if srcVal > math.MaxInt8 { - return fmt.Errorf("%d is greater than maximum value for int8", srcVal) - } - *v = int8(srcVal) - case *int16: - if srcVal < math.MinInt16 { - return fmt.Errorf("%d is less than minimum value for int16", srcVal) - } else if srcVal > math.MaxInt16 { - return fmt.Errorf("%d is greater than maximum value for int16", srcVal) - } - *v = int16(srcVal) - case *int32: - if srcVal < 
math.MinInt32 { - return fmt.Errorf("%d is less than minimum value for int32", srcVal) - } else if srcVal > math.MaxInt32 { - return fmt.Errorf("%d is greater than maximum value for int32", srcVal) - } - *v = int32(srcVal) - case *int64: - if srcVal < math.MinInt64 { - return fmt.Errorf("%d is less than minimum value for int64", srcVal) - } else if srcVal > math.MaxInt64 { - return fmt.Errorf("%d is greater than maximum value for int64", srcVal) - } - *v = int64(srcVal) - case *uint: - if srcVal < 0 { - return fmt.Errorf("%d is less than zero for uint", srcVal) - } else if uint64(srcVal) > uint64(maxUint) { - return fmt.Errorf("%d is greater than maximum value for uint", srcVal) - } - *v = uint(srcVal) - case *uint8: - if srcVal < 0 { - return fmt.Errorf("%d is less than zero for uint8", srcVal) - } else if srcVal > math.MaxUint8 { - return fmt.Errorf("%d is greater than maximum value for uint8", srcVal) - } - *v = uint8(srcVal) - case *uint16: - if srcVal < 0 { - return fmt.Errorf("%d is less than zero for uint32", srcVal) - } else if srcVal > math.MaxUint16 { - return fmt.Errorf("%d is greater than maximum value for uint16", srcVal) - } - *v = uint16(srcVal) - case *uint32: - if srcVal < 0 { - return fmt.Errorf("%d is less than zero for uint32", srcVal) - } else if srcVal > math.MaxUint32 { - return fmt.Errorf("%d is greater than maximum value for uint32", srcVal) - } - *v = uint32(srcVal) - case *uint64: - if srcVal < 0 { - return fmt.Errorf("%d is less than zero for uint64", srcVal) - } - *v = uint64(srcVal) - case sql.Scanner: - return v.Scan(srcVal) - default: - if v := reflect.ValueOf(dst); v.Kind() == reflect.Ptr { - el := v.Elem() - switch el.Kind() { - // if dst is a pointer to pointer, strip the pointer and try again - case reflect.Ptr: - if el.IsNil() { - // allocate destination - el.Set(reflect.New(el.Type().Elem())) - } - return int64AssignTo(srcVal, srcStatus, el.Interface()) - case reflect.Int, reflect.Int8, reflect.Int16, reflect.Int32, 
reflect.Int64: - if el.OverflowInt(int64(srcVal)) { - return fmt.Errorf("cannot put %d into %T", srcVal, dst) - } - el.SetInt(int64(srcVal)) - return nil - case reflect.Uint, reflect.Uint8, reflect.Uint16, reflect.Uint32, reflect.Uint64: - if srcVal < 0 { - return fmt.Errorf("%d is less than zero for %T", srcVal, dst) - } - if el.OverflowUint(uint64(srcVal)) { - return fmt.Errorf("cannot put %d into %T", srcVal, dst) - } - el.SetUint(uint64(srcVal)) - return nil - } - } - return fmt.Errorf("cannot assign %v into %T", srcVal, dst) - } - return nil - } - - // if dst is a pointer to pointer and srcStatus is not Present, nil it out - if v := reflect.ValueOf(dst); v.Kind() == reflect.Ptr { - el := v.Elem() - if el.Kind() == reflect.Ptr { - el.Set(reflect.Zero(el.Type())) - return nil - } - } - - return fmt.Errorf("cannot assign %v %v into %T", srcVal, srcStatus, dst) -} - -func float64AssignTo(srcVal float64, srcStatus Status, dst interface{}) error { - if srcStatus == Present { - switch v := dst.(type) { - case *float32: - *v = float32(srcVal) - case *float64: - *v = srcVal - default: - if v := reflect.ValueOf(dst); v.Kind() == reflect.Ptr { - el := v.Elem() - switch el.Kind() { - // if dst is a type alias of a float32 or 64, set dst val - case reflect.Float32, reflect.Float64: - el.SetFloat(srcVal) - return nil - // if dst is a pointer to pointer, strip the pointer and try again - case reflect.Ptr: - if el.IsNil() { - // allocate destination - el.Set(reflect.New(el.Type().Elem())) - } - return float64AssignTo(srcVal, srcStatus, el.Interface()) - case reflect.Int, reflect.Int8, reflect.Int16, reflect.Int32, reflect.Int64, reflect.Uint, reflect.Uint8, reflect.Uint16, reflect.Uint32, reflect.Uint64: - i64 := int64(srcVal) - if float64(i64) == srcVal { - return int64AssignTo(i64, srcStatus, dst) - } - } - } - return fmt.Errorf("cannot assign %v into %T", srcVal, dst) - } - return nil - } - - // if dst is a pointer to pointer and srcStatus is not Present, nil it out - if v 
:= reflect.ValueOf(dst); v.Kind() == reflect.Ptr { - el := v.Elem() - if el.Kind() == reflect.Ptr { - el.Set(reflect.Zero(el.Type())) - return nil - } - } - - return fmt.Errorf("cannot assign %v %v into %T", srcVal, srcStatus, dst) -} - -func NullAssignTo(dst interface{}) error { - dstPtr := reflect.ValueOf(dst) - - // AssignTo dst must always be a pointer - if dstPtr.Kind() != reflect.Ptr { - return &nullAssignmentError{dst: dst} - } - - dstVal := dstPtr.Elem() - - switch dstVal.Kind() { - case reflect.Ptr, reflect.Slice, reflect.Map: - dstVal.Set(reflect.Zero(dstVal.Type())) - return nil - } - - return &nullAssignmentError{dst: dst} -} - -var kindTypes map[reflect.Kind]reflect.Type - -func toInterface(dst reflect.Value, t reflect.Type) (interface{}, bool) { - nextDst := dst.Convert(t) - return nextDst.Interface(), dst.Type() != nextDst.Type() -} - -// GetAssignToDstType attempts to convert dst to something AssignTo can assign -// to. If dst is a pointer to pointer it allocates a value and returns the -// dereferences pointer. If dst is a named type such as *Foo where Foo is type -// Foo int16, it converts dst to *int16. -// -// GetAssignToDstType returns the converted dst and a bool representing if any -// change was made. 
-func GetAssignToDstType(dst interface{}) (interface{}, bool) { - dstPtr := reflect.ValueOf(dst) - - // AssignTo dst must always be a pointer - if dstPtr.Kind() != reflect.Ptr { - return nil, false - } - - dstVal := dstPtr.Elem() - - // if dst is a pointer to pointer, allocate space try again with the dereferenced pointer - if dstVal.Kind() == reflect.Ptr { - dstVal.Set(reflect.New(dstVal.Type().Elem())) - return dstVal.Interface(), true - } - - // if dst is pointer to a base type that has been renamed - if baseValType, ok := kindTypes[dstVal.Kind()]; ok { - return toInterface(dstPtr, reflect.PtrTo(baseValType)) - } - - if dstVal.Kind() == reflect.Slice { - if baseElemType, ok := kindTypes[dstVal.Type().Elem().Kind()]; ok { - return toInterface(dstPtr, reflect.PtrTo(reflect.SliceOf(baseElemType))) - } - } - - if dstVal.Kind() == reflect.Array { - if baseElemType, ok := kindTypes[dstVal.Type().Elem().Kind()]; ok { - return toInterface(dstPtr, reflect.PtrTo(reflect.ArrayOf(dstVal.Len(), baseElemType))) - } - } - - if dstVal.Kind() == reflect.Struct { - if dstVal.Type().NumField() == 1 && dstVal.Type().Field(0).Anonymous { - dstPtr = dstVal.Field(0).Addr() - nested := dstVal.Type().Field(0).Type - if nested.Kind() == reflect.Array { - if baseElemType, ok := kindTypes[nested.Elem().Kind()]; ok { - return toInterface(dstPtr, reflect.PtrTo(reflect.ArrayOf(nested.Len(), baseElemType))) - } - } - if _, ok := kindTypes[nested.Kind()]; ok && dstPtr.CanInterface() { - return dstPtr.Interface(), true - } - } - } - - return nil, false -} - -func init() { - kindTypes = map[reflect.Kind]reflect.Type{ - reflect.Bool: reflect.TypeOf(false), - reflect.Float32: reflect.TypeOf(float32(0)), - reflect.Float64: reflect.TypeOf(float64(0)), - reflect.Int: reflect.TypeOf(int(0)), - reflect.Int8: reflect.TypeOf(int8(0)), - reflect.Int16: reflect.TypeOf(int16(0)), - reflect.Int32: reflect.TypeOf(int32(0)), - reflect.Int64: reflect.TypeOf(int64(0)), - reflect.Uint: reflect.TypeOf(uint(0)), - 
reflect.Uint8: reflect.TypeOf(uint8(0)), - reflect.Uint16: reflect.TypeOf(uint16(0)), - reflect.Uint32: reflect.TypeOf(uint32(0)), - reflect.Uint64: reflect.TypeOf(uint64(0)), - reflect.String: reflect.TypeOf(""), - } -} diff --git a/vendor/github.com/jackc/pgtype/database_sql.go b/vendor/github.com/jackc/pgtype/database_sql.go deleted file mode 100644 index 9d1cf822..00000000 --- a/vendor/github.com/jackc/pgtype/database_sql.go +++ /dev/null @@ -1,41 +0,0 @@ -package pgtype - -import ( - "database/sql/driver" - "errors" -) - -func DatabaseSQLValue(ci *ConnInfo, src Value) (interface{}, error) { - if valuer, ok := src.(driver.Valuer); ok { - return valuer.Value() - } - - if textEncoder, ok := src.(TextEncoder); ok { - buf, err := textEncoder.EncodeText(ci, nil) - if err != nil { - return nil, err - } - return string(buf), nil - } - - if binaryEncoder, ok := src.(BinaryEncoder); ok { - buf, err := binaryEncoder.EncodeBinary(ci, nil) - if err != nil { - return nil, err - } - return buf, nil - } - - return nil, errors.New("cannot convert to database/sql compatible value") -} - -func EncodeValueText(src TextEncoder) (interface{}, error) { - buf, err := src.EncodeText(nil, make([]byte, 0, 32)) - if err != nil { - return nil, err - } - if buf == nil { - return nil, nil - } - return string(buf), err -} diff --git a/vendor/github.com/jackc/pgtype/date.go b/vendor/github.com/jackc/pgtype/date.go deleted file mode 100644 index e68abf01..00000000 --- a/vendor/github.com/jackc/pgtype/date.go +++ /dev/null @@ -1,324 +0,0 @@ -package pgtype - -import ( - "database/sql" - "database/sql/driver" - "encoding/binary" - "encoding/json" - "fmt" - "strings" - "time" - - "github.com/jackc/pgio" -) - -type Date struct { - Time time.Time - Status Status - InfinityModifier InfinityModifier -} - -const ( - negativeInfinityDayOffset = -2147483648 - infinityDayOffset = 2147483647 -) - -func (dst *Date) Set(src interface{}) error { - if src == nil { - *dst = Date{Status: Null} - return nil - } 
- - if value, ok := src.(interface{ Get() interface{} }); ok { - value2 := value.Get() - if value2 != value { - return dst.Set(value2) - } - } - - if value, ok := src.(interface{ Value() (driver.Value, error) }); ok { - v, err := value.Value() - if err != nil { - return fmt.Errorf("cannot get value %v for Date: %v", value, err) - } - return dst.Set(v) - } - - switch value := src.(type) { - case time.Time: - *dst = Date{Time: value, Status: Present} - case *time.Time: - if value == nil { - *dst = Date{Status: Null} - } else { - return dst.Set(*value) - } - case string: - return dst.DecodeText(nil, []byte(value)) - case *string: - if value == nil { - *dst = Date{Status: Null} - } else { - return dst.Set(*value) - } - default: - if originalSrc, ok := underlyingTimeType(src); ok { - return dst.Set(originalSrc) - } - return fmt.Errorf("cannot convert %v to Date", value) - } - - return nil -} - -func (dst Date) Get() interface{} { - switch dst.Status { - case Present: - if dst.InfinityModifier != None { - return dst.InfinityModifier - } - return dst.Time - case Null: - return nil - default: - return dst.Status - } -} - -func (src *Date) AssignTo(dst interface{}) error { - if scanner, ok := dst.(sql.Scanner); ok { - var err error - switch src.Status { - case Present: - if src.InfinityModifier != None { - err = scanner.Scan(src.InfinityModifier.String()) - } else { - err = scanner.Scan(src.Time) - } - case Null: - err = scanner.Scan(nil) - } - if err != nil { - return fmt.Errorf("unable assign %v to %T: %s", src, dst, err) - } - return nil - } - - switch src.Status { - case Present: - switch v := dst.(type) { - case *time.Time: - if src.InfinityModifier != None { - return fmt.Errorf("cannot assign %v to %T", src, dst) - } - *v = src.Time - return nil - default: - if nextDst, retry := GetAssignToDstType(dst); retry { - return src.AssignTo(nextDst) - } - return fmt.Errorf("unable to assign to %T", dst) - } - case Null: - return NullAssignTo(dst) - } - - return 
fmt.Errorf("cannot decode %#v into %T", src, dst) -} - -func (dst *Date) DecodeText(ci *ConnInfo, src []byte) error { - if src == nil { - *dst = Date{Status: Null} - return nil - } - - sbuf := string(src) - switch sbuf { - case "infinity": - *dst = Date{Status: Present, InfinityModifier: Infinity} - case "-infinity": - *dst = Date{Status: Present, InfinityModifier: -Infinity} - default: - if strings.HasSuffix(sbuf, " BC") { - t, err := time.ParseInLocation("2006-01-02", strings.TrimRight(sbuf, " BC"), time.UTC) - t2 := time.Date(1-t.Year(), t.Month(), t.Day(), t.Hour(), t.Minute(), t.Second(), t.Nanosecond(), t.Location()) - if err != nil { - return err - } - *dst = Date{Time: t2, Status: Present} - return nil - } - t, err := time.ParseInLocation("2006-01-02", sbuf, time.UTC) - if err != nil { - return err - } - - *dst = Date{Time: t, Status: Present} - } - - return nil -} - -func (dst *Date) DecodeBinary(ci *ConnInfo, src []byte) error { - if src == nil { - *dst = Date{Status: Null} - return nil - } - - if len(src) != 4 { - return fmt.Errorf("invalid length for date: %v", len(src)) - } - - dayOffset := int32(binary.BigEndian.Uint32(src)) - - switch dayOffset { - case infinityDayOffset: - *dst = Date{Status: Present, InfinityModifier: Infinity} - case negativeInfinityDayOffset: - *dst = Date{Status: Present, InfinityModifier: -Infinity} - default: - t := time.Date(2000, 1, int(1+dayOffset), 0, 0, 0, 0, time.UTC) - *dst = Date{Time: t, Status: Present} - } - - return nil -} - -func (src Date) EncodeText(ci *ConnInfo, buf []byte) ([]byte, error) { - switch src.Status { - case Null: - return nil, nil - case Undefined: - return nil, errUndefined - } - - var s string - - switch src.InfinityModifier { - case None: - s = src.Time.Format("2006-01-02") - case Infinity: - s = "infinity" - case NegativeInfinity: - s = "-infinity" - } - - return append(buf, s...), nil -} - -func (src Date) EncodeBinary(ci *ConnInfo, buf []byte) ([]byte, error) { - switch src.Status { - case 
Null: - return nil, nil - case Undefined: - return nil, errUndefined - } - - var daysSinceDateEpoch int32 - switch src.InfinityModifier { - case None: - tUnix := time.Date(src.Time.Year(), src.Time.Month(), src.Time.Day(), 0, 0, 0, 0, time.UTC).Unix() - dateEpoch := time.Date(2000, 1, 1, 0, 0, 0, 0, time.UTC).Unix() - - secSinceDateEpoch := tUnix - dateEpoch - daysSinceDateEpoch = int32(secSinceDateEpoch / 86400) - case Infinity: - daysSinceDateEpoch = infinityDayOffset - case NegativeInfinity: - daysSinceDateEpoch = negativeInfinityDayOffset - } - - return pgio.AppendInt32(buf, daysSinceDateEpoch), nil -} - -// Scan implements the database/sql Scanner interface. -func (dst *Date) Scan(src interface{}) error { - if src == nil { - *dst = Date{Status: Null} - return nil - } - - switch src := src.(type) { - case string: - return dst.DecodeText(nil, []byte(src)) - case []byte: - srcCopy := make([]byte, len(src)) - copy(srcCopy, src) - return dst.DecodeText(nil, srcCopy) - case time.Time: - *dst = Date{Time: src, Status: Present} - return nil - } - - return fmt.Errorf("cannot scan %T", src) -} - -// Value implements the database/sql/driver Valuer interface. 
-func (src Date) Value() (driver.Value, error) { - switch src.Status { - case Present: - if src.InfinityModifier != None { - return src.InfinityModifier.String(), nil - } - return src.Time, nil - case Null: - return nil, nil - default: - return nil, errUndefined - } -} - -func (src Date) MarshalJSON() ([]byte, error) { - switch src.Status { - case Null: - return []byte("null"), nil - case Undefined: - return nil, errUndefined - } - - if src.Status != Present { - return nil, errBadStatus - } - - var s string - - switch src.InfinityModifier { - case None: - s = src.Time.Format("2006-01-02") - case Infinity: - s = "infinity" - case NegativeInfinity: - s = "-infinity" - } - - return json.Marshal(s) -} - -func (dst *Date) UnmarshalJSON(b []byte) error { - var s *string - err := json.Unmarshal(b, &s) - if err != nil { - return err - } - - if s == nil { - *dst = Date{Status: Null} - return nil - } - - switch *s { - case "infinity": - *dst = Date{Status: Present, InfinityModifier: Infinity} - case "-infinity": - *dst = Date{Status: Present, InfinityModifier: -Infinity} - default: - t, err := time.ParseInLocation("2006-01-02", *s, time.UTC) - if err != nil { - return err - } - - *dst = Date{Time: t, Status: Present} - } - - return nil -} diff --git a/vendor/github.com/jackc/pgtype/date_array.go b/vendor/github.com/jackc/pgtype/date_array.go deleted file mode 100644 index 24152fa0..00000000 --- a/vendor/github.com/jackc/pgtype/date_array.go +++ /dev/null @@ -1,518 +0,0 @@ -// Code generated by erb. DO NOT EDIT. 
- -package pgtype - -import ( - "database/sql/driver" - "encoding/binary" - "fmt" - "reflect" - "time" - - "github.com/jackc/pgio" -) - -type DateArray struct { - Elements []Date - Dimensions []ArrayDimension - Status Status -} - -func (dst *DateArray) Set(src interface{}) error { - // untyped nil and typed nil interfaces are different - if src == nil { - *dst = DateArray{Status: Null} - return nil - } - - if value, ok := src.(interface{ Get() interface{} }); ok { - value2 := value.Get() - if value2 != value { - return dst.Set(value2) - } - } - - // Attempt to match to select common types: - switch value := src.(type) { - - case []time.Time: - if value == nil { - *dst = DateArray{Status: Null} - } else if len(value) == 0 { - *dst = DateArray{Status: Present} - } else { - elements := make([]Date, len(value)) - for i := range value { - if err := elements[i].Set(value[i]); err != nil { - return err - } - } - *dst = DateArray{ - Elements: elements, - Dimensions: []ArrayDimension{{Length: int32(len(elements)), LowerBound: 1}}, - Status: Present, - } - } - - case []*time.Time: - if value == nil { - *dst = DateArray{Status: Null} - } else if len(value) == 0 { - *dst = DateArray{Status: Present} - } else { - elements := make([]Date, len(value)) - for i := range value { - if err := elements[i].Set(value[i]); err != nil { - return err - } - } - *dst = DateArray{ - Elements: elements, - Dimensions: []ArrayDimension{{Length: int32(len(elements)), LowerBound: 1}}, - Status: Present, - } - } - - case []Date: - if value == nil { - *dst = DateArray{Status: Null} - } else if len(value) == 0 { - *dst = DateArray{Status: Present} - } else { - *dst = DateArray{ - Elements: value, - Dimensions: []ArrayDimension{{Length: int32(len(value)), LowerBound: 1}}, - Status: Present, - } - } - default: - // Fallback to reflection if an optimised match was not found. 
- // The reflection is necessary for arrays and multidimensional slices, - // but it comes with a 20-50% performance penalty for large arrays/slices - reflectedValue := reflect.ValueOf(src) - if !reflectedValue.IsValid() || reflectedValue.IsZero() { - *dst = DateArray{Status: Null} - return nil - } - - dimensions, elementsLength, ok := findDimensionsFromValue(reflectedValue, nil, 0) - if !ok { - return fmt.Errorf("cannot find dimensions of %v for DateArray", src) - } - if elementsLength == 0 { - *dst = DateArray{Status: Present} - return nil - } - if len(dimensions) == 0 { - if originalSrc, ok := underlyingSliceType(src); ok { - return dst.Set(originalSrc) - } - return fmt.Errorf("cannot convert %v to DateArray", src) - } - - *dst = DateArray{ - Elements: make([]Date, elementsLength), - Dimensions: dimensions, - Status: Present, - } - elementCount, err := dst.setRecursive(reflectedValue, 0, 0) - if err != nil { - // Maybe the target was one dimension too far, try again: - if len(dst.Dimensions) > 1 { - dst.Dimensions = dst.Dimensions[:len(dst.Dimensions)-1] - elementsLength = 0 - for _, dim := range dst.Dimensions { - if elementsLength == 0 { - elementsLength = int(dim.Length) - } else { - elementsLength *= int(dim.Length) - } - } - dst.Elements = make([]Date, elementsLength) - elementCount, err = dst.setRecursive(reflectedValue, 0, 0) - if err != nil { - return err - } - } else { - return err - } - } - if elementCount != len(dst.Elements) { - return fmt.Errorf("cannot convert %v to DateArray, expected %d dst.Elements, but got %d instead", src, len(dst.Elements), elementCount) - } - } - - return nil -} - -func (dst *DateArray) setRecursive(value reflect.Value, index, dimension int) (int, error) { - switch value.Kind() { - case reflect.Array: - fallthrough - case reflect.Slice: - if len(dst.Dimensions) == dimension { - break - } - - valueLen := value.Len() - if int32(valueLen) != dst.Dimensions[dimension].Length { - return 0, fmt.Errorf("multidimensional arrays must 
have array expressions with matching dimensions") - } - for i := 0; i < valueLen; i++ { - var err error - index, err = dst.setRecursive(value.Index(i), index, dimension+1) - if err != nil { - return 0, err - } - } - - return index, nil - } - if !value.CanInterface() { - return 0, fmt.Errorf("cannot convert all values to DateArray") - } - if err := dst.Elements[index].Set(value.Interface()); err != nil { - return 0, fmt.Errorf("%v in DateArray", err) - } - index++ - - return index, nil -} - -func (dst DateArray) Get() interface{} { - switch dst.Status { - case Present: - return dst - case Null: - return nil - default: - return dst.Status - } -} - -func (src *DateArray) AssignTo(dst interface{}) error { - switch src.Status { - case Present: - if len(src.Dimensions) <= 1 { - // Attempt to match to select common types: - switch v := dst.(type) { - - case *[]time.Time: - *v = make([]time.Time, len(src.Elements)) - for i := range src.Elements { - if err := src.Elements[i].AssignTo(&((*v)[i])); err != nil { - return err - } - } - return nil - - case *[]*time.Time: - *v = make([]*time.Time, len(src.Elements)) - for i := range src.Elements { - if err := src.Elements[i].AssignTo(&((*v)[i])); err != nil { - return err - } - } - return nil - - } - } - - // Try to convert to something AssignTo can use directly. - if nextDst, retry := GetAssignToDstType(dst); retry { - return src.AssignTo(nextDst) - } - - // Fallback to reflection if an optimised match was not found. 
- // The reflection is necessary for arrays and multidimensional slices, - // but it comes with a 20-50% performance penalty for large arrays/slices - value := reflect.ValueOf(dst) - if value.Kind() == reflect.Ptr { - value = value.Elem() - } - - switch value.Kind() { - case reflect.Array, reflect.Slice: - default: - return fmt.Errorf("cannot assign %T to %T", src, dst) - } - - if len(src.Elements) == 0 { - if value.Kind() == reflect.Slice { - value.Set(reflect.MakeSlice(value.Type(), 0, 0)) - return nil - } - } - - elementCount, err := src.assignToRecursive(value, 0, 0) - if err != nil { - return err - } - if elementCount != len(src.Elements) { - return fmt.Errorf("cannot assign %v, needed to assign %d elements, but only assigned %d", dst, len(src.Elements), elementCount) - } - - return nil - case Null: - return NullAssignTo(dst) - } - - return fmt.Errorf("cannot decode %#v into %T", src, dst) -} - -func (src *DateArray) assignToRecursive(value reflect.Value, index, dimension int) (int, error) { - switch kind := value.Kind(); kind { - case reflect.Array: - fallthrough - case reflect.Slice: - if len(src.Dimensions) == dimension { - break - } - - length := int(src.Dimensions[dimension].Length) - if reflect.Array == kind { - typ := value.Type() - if typ.Len() != length { - return 0, fmt.Errorf("expected size %d array, but %s has size %d array", length, typ, typ.Len()) - } - value.Set(reflect.New(typ).Elem()) - } else { - value.Set(reflect.MakeSlice(value.Type(), length, length)) - } - - var err error - for i := 0; i < length; i++ { - index, err = src.assignToRecursive(value.Index(i), index, dimension+1) - if err != nil { - return 0, err - } - } - - return index, nil - } - if len(src.Dimensions) != dimension { - return 0, fmt.Errorf("incorrect dimensions, expected %d, found %d", len(src.Dimensions), dimension) - } - if !value.CanAddr() { - return 0, fmt.Errorf("cannot assign all values from DateArray") - } - addr := value.Addr() - if !addr.CanInterface() { - return 0, 
fmt.Errorf("cannot assign all values from DateArray") - } - if err := src.Elements[index].AssignTo(addr.Interface()); err != nil { - return 0, err - } - index++ - return index, nil -} - -func (dst *DateArray) DecodeText(ci *ConnInfo, src []byte) error { - if src == nil { - *dst = DateArray{Status: Null} - return nil - } - - uta, err := ParseUntypedTextArray(string(src)) - if err != nil { - return err - } - - var elements []Date - - if len(uta.Elements) > 0 { - elements = make([]Date, len(uta.Elements)) - - for i, s := range uta.Elements { - var elem Date - var elemSrc []byte - if s != "NULL" || uta.Quoted[i] { - elemSrc = []byte(s) - } - err = elem.DecodeText(ci, elemSrc) - if err != nil { - return err - } - - elements[i] = elem - } - } - - *dst = DateArray{Elements: elements, Dimensions: uta.Dimensions, Status: Present} - - return nil -} - -func (dst *DateArray) DecodeBinary(ci *ConnInfo, src []byte) error { - if src == nil { - *dst = DateArray{Status: Null} - return nil - } - - var arrayHeader ArrayHeader - rp, err := arrayHeader.DecodeBinary(ci, src) - if err != nil { - return err - } - - if len(arrayHeader.Dimensions) == 0 { - *dst = DateArray{Dimensions: arrayHeader.Dimensions, Status: Present} - return nil - } - - elementCount := arrayHeader.Dimensions[0].Length - for _, d := range arrayHeader.Dimensions[1:] { - elementCount *= d.Length - } - - elements := make([]Date, elementCount) - - for i := range elements { - elemLen := int(int32(binary.BigEndian.Uint32(src[rp:]))) - rp += 4 - var elemSrc []byte - if elemLen >= 0 { - elemSrc = src[rp : rp+elemLen] - rp += elemLen - } - err = elements[i].DecodeBinary(ci, elemSrc) - if err != nil { - return err - } - } - - *dst = DateArray{Elements: elements, Dimensions: arrayHeader.Dimensions, Status: Present} - return nil -} - -func (src DateArray) EncodeText(ci *ConnInfo, buf []byte) ([]byte, error) { - switch src.Status { - case Null: - return nil, nil - case Undefined: - return nil, errUndefined - } - - if 
len(src.Dimensions) == 0 { - return append(buf, '{', '}'), nil - } - - buf = EncodeTextArrayDimensions(buf, src.Dimensions) - - // dimElemCounts is the multiples of elements that each array lies on. For - // example, a single dimension array of length 4 would have a dimElemCounts of - // [4]. A multi-dimensional array of lengths [3,5,2] would have a - // dimElemCounts of [30,10,2]. This is used to simplify when to render a '{' - // or '}'. - dimElemCounts := make([]int, len(src.Dimensions)) - dimElemCounts[len(src.Dimensions)-1] = int(src.Dimensions[len(src.Dimensions)-1].Length) - for i := len(src.Dimensions) - 2; i > -1; i-- { - dimElemCounts[i] = int(src.Dimensions[i].Length) * dimElemCounts[i+1] - } - - inElemBuf := make([]byte, 0, 32) - for i, elem := range src.Elements { - if i > 0 { - buf = append(buf, ',') - } - - for _, dec := range dimElemCounts { - if i%dec == 0 { - buf = append(buf, '{') - } - } - - elemBuf, err := elem.EncodeText(ci, inElemBuf) - if err != nil { - return nil, err - } - if elemBuf == nil { - buf = append(buf, `NULL`...) - } else { - buf = append(buf, QuoteArrayElementIfNeeded(string(elemBuf))...) 
- } - - for _, dec := range dimElemCounts { - if (i+1)%dec == 0 { - buf = append(buf, '}') - } - } - } - - return buf, nil -} - -func (src DateArray) EncodeBinary(ci *ConnInfo, buf []byte) ([]byte, error) { - switch src.Status { - case Null: - return nil, nil - case Undefined: - return nil, errUndefined - } - - arrayHeader := ArrayHeader{ - Dimensions: src.Dimensions, - } - - if dt, ok := ci.DataTypeForName("date"); ok { - arrayHeader.ElementOID = int32(dt.OID) - } else { - return nil, fmt.Errorf("unable to find oid for type name %v", "date") - } - - for i := range src.Elements { - if src.Elements[i].Status == Null { - arrayHeader.ContainsNull = true - break - } - } - - buf = arrayHeader.EncodeBinary(ci, buf) - - for i := range src.Elements { - sp := len(buf) - buf = pgio.AppendInt32(buf, -1) - - elemBuf, err := src.Elements[i].EncodeBinary(ci, buf) - if err != nil { - return nil, err - } - if elemBuf != nil { - buf = elemBuf - pgio.SetInt32(buf[sp:], int32(len(buf[sp:])-4)) - } - } - - return buf, nil -} - -// Scan implements the database/sql Scanner interface. -func (dst *DateArray) Scan(src interface{}) error { - if src == nil { - return dst.DecodeText(nil, nil) - } - - switch src := src.(type) { - case string: - return dst.DecodeText(nil, []byte(src)) - case []byte: - srcCopy := make([]byte, len(src)) - copy(srcCopy, src) - return dst.DecodeText(nil, srcCopy) - } - - return fmt.Errorf("cannot scan %T", src) -} - -// Value implements the database/sql/driver Valuer interface. 
-func (src DateArray) Value() (driver.Value, error) { - buf, err := src.EncodeText(nil, nil) - if err != nil { - return nil, err - } - if buf == nil { - return nil, nil - } - - return string(buf), nil -} diff --git a/vendor/github.com/jackc/pgtype/daterange.go b/vendor/github.com/jackc/pgtype/daterange.go deleted file mode 100644 index 63164a5a..00000000 --- a/vendor/github.com/jackc/pgtype/daterange.go +++ /dev/null @@ -1,267 +0,0 @@ -package pgtype - -import ( - "database/sql/driver" - "fmt" - - "github.com/jackc/pgio" -) - -type Daterange struct { - Lower Date - Upper Date - LowerType BoundType - UpperType BoundType - Status Status -} - -func (dst *Daterange) Set(src interface{}) error { - // untyped nil and typed nil interfaces are different - if src == nil { - *dst = Daterange{Status: Null} - return nil - } - - switch value := src.(type) { - case Daterange: - *dst = value - case *Daterange: - *dst = *value - case string: - return dst.DecodeText(nil, []byte(value)) - default: - return fmt.Errorf("cannot convert %v to Daterange", src) - } - - return nil -} - -func (dst Daterange) Get() interface{} { - switch dst.Status { - case Present: - return dst - case Null: - return nil - default: - return dst.Status - } -} - -func (src *Daterange) AssignTo(dst interface{}) error { - return fmt.Errorf("cannot assign %v to %T", src, dst) -} - -func (dst *Daterange) DecodeText(ci *ConnInfo, src []byte) error { - if src == nil { - *dst = Daterange{Status: Null} - return nil - } - - utr, err := ParseUntypedTextRange(string(src)) - if err != nil { - return err - } - - *dst = Daterange{Status: Present} - - dst.LowerType = utr.LowerType - dst.UpperType = utr.UpperType - - if dst.LowerType == Empty { - return nil - } - - if dst.LowerType == Inclusive || dst.LowerType == Exclusive { - if err := dst.Lower.DecodeText(ci, []byte(utr.Lower)); err != nil { - return err - } - } - - if dst.UpperType == Inclusive || dst.UpperType == Exclusive { - if err := dst.Upper.DecodeText(ci, 
[]byte(utr.Upper)); err != nil { - return err - } - } - - return nil -} - -func (dst *Daterange) DecodeBinary(ci *ConnInfo, src []byte) error { - if src == nil { - *dst = Daterange{Status: Null} - return nil - } - - ubr, err := ParseUntypedBinaryRange(src) - if err != nil { - return err - } - - *dst = Daterange{Status: Present} - - dst.LowerType = ubr.LowerType - dst.UpperType = ubr.UpperType - - if dst.LowerType == Empty { - return nil - } - - if dst.LowerType == Inclusive || dst.LowerType == Exclusive { - if err := dst.Lower.DecodeBinary(ci, ubr.Lower); err != nil { - return err - } - } - - if dst.UpperType == Inclusive || dst.UpperType == Exclusive { - if err := dst.Upper.DecodeBinary(ci, ubr.Upper); err != nil { - return err - } - } - - return nil -} - -func (src Daterange) EncodeText(ci *ConnInfo, buf []byte) ([]byte, error) { - switch src.Status { - case Null: - return nil, nil - case Undefined: - return nil, errUndefined - } - - switch src.LowerType { - case Exclusive, Unbounded: - buf = append(buf, '(') - case Inclusive: - buf = append(buf, '[') - case Empty: - return append(buf, "empty"...), nil - default: - return nil, fmt.Errorf("unknown lower bound type %v", src.LowerType) - } - - var err error - - if src.LowerType != Unbounded { - buf, err = src.Lower.EncodeText(ci, buf) - if err != nil { - return nil, err - } else if buf == nil { - return nil, fmt.Errorf("Lower cannot be null unless LowerType is Unbounded") - } - } - - buf = append(buf, ',') - - if src.UpperType != Unbounded { - buf, err = src.Upper.EncodeText(ci, buf) - if err != nil { - return nil, err - } else if buf == nil { - return nil, fmt.Errorf("Upper cannot be null unless UpperType is Unbounded") - } - } - - switch src.UpperType { - case Exclusive, Unbounded: - buf = append(buf, ')') - case Inclusive: - buf = append(buf, ']') - default: - return nil, fmt.Errorf("unknown upper bound type %v", src.UpperType) - } - - return buf, nil -} - -func (src Daterange) EncodeBinary(ci *ConnInfo, buf 
[]byte) ([]byte, error) { - switch src.Status { - case Null: - return nil, nil - case Undefined: - return nil, errUndefined - } - - var rangeType byte - switch src.LowerType { - case Inclusive: - rangeType |= lowerInclusiveMask - case Unbounded: - rangeType |= lowerUnboundedMask - case Exclusive: - case Empty: - return append(buf, emptyMask), nil - default: - return nil, fmt.Errorf("unknown LowerType: %v", src.LowerType) - } - - switch src.UpperType { - case Inclusive: - rangeType |= upperInclusiveMask - case Unbounded: - rangeType |= upperUnboundedMask - case Exclusive: - default: - return nil, fmt.Errorf("unknown UpperType: %v", src.UpperType) - } - - buf = append(buf, rangeType) - - var err error - - if src.LowerType != Unbounded { - sp := len(buf) - buf = pgio.AppendInt32(buf, -1) - - buf, err = src.Lower.EncodeBinary(ci, buf) - if err != nil { - return nil, err - } - if buf == nil { - return nil, fmt.Errorf("Lower cannot be null unless LowerType is Unbounded") - } - - pgio.SetInt32(buf[sp:], int32(len(buf[sp:])-4)) - } - - if src.UpperType != Unbounded { - sp := len(buf) - buf = pgio.AppendInt32(buf, -1) - - buf, err = src.Upper.EncodeBinary(ci, buf) - if err != nil { - return nil, err - } - if buf == nil { - return nil, fmt.Errorf("Upper cannot be null unless UpperType is Unbounded") - } - - pgio.SetInt32(buf[sp:], int32(len(buf[sp:])-4)) - } - - return buf, nil -} - -// Scan implements the database/sql Scanner interface. -func (dst *Daterange) Scan(src interface{}) error { - if src == nil { - *dst = Daterange{Status: Null} - return nil - } - - switch src := src.(type) { - case string: - return dst.DecodeText(nil, []byte(src)) - case []byte: - srcCopy := make([]byte, len(src)) - copy(srcCopy, src) - return dst.DecodeText(nil, srcCopy) - } - - return fmt.Errorf("cannot scan %T", src) -} - -// Value implements the database/sql/driver Valuer interface. 
-func (src Daterange) Value() (driver.Value, error) { - return EncodeValueText(src) -} diff --git a/vendor/github.com/jackc/pgtype/enum_array.go b/vendor/github.com/jackc/pgtype/enum_array.go deleted file mode 100644 index 59b5a3ed..00000000 --- a/vendor/github.com/jackc/pgtype/enum_array.go +++ /dev/null @@ -1,428 +0,0 @@ -// Code generated by erb. DO NOT EDIT. - -package pgtype - -import ( - "database/sql/driver" - "fmt" - "reflect" -) - -type EnumArray struct { - Elements []GenericText - Dimensions []ArrayDimension - Status Status -} - -func (dst *EnumArray) Set(src interface{}) error { - // untyped nil and typed nil interfaces are different - if src == nil { - *dst = EnumArray{Status: Null} - return nil - } - - if value, ok := src.(interface{ Get() interface{} }); ok { - value2 := value.Get() - if value2 != value { - return dst.Set(value2) - } - } - - // Attempt to match to select common types: - switch value := src.(type) { - - case []string: - if value == nil { - *dst = EnumArray{Status: Null} - } else if len(value) == 0 { - *dst = EnumArray{Status: Present} - } else { - elements := make([]GenericText, len(value)) - for i := range value { - if err := elements[i].Set(value[i]); err != nil { - return err - } - } - *dst = EnumArray{ - Elements: elements, - Dimensions: []ArrayDimension{{Length: int32(len(elements)), LowerBound: 1}}, - Status: Present, - } - } - - case []*string: - if value == nil { - *dst = EnumArray{Status: Null} - } else if len(value) == 0 { - *dst = EnumArray{Status: Present} - } else { - elements := make([]GenericText, len(value)) - for i := range value { - if err := elements[i].Set(value[i]); err != nil { - return err - } - } - *dst = EnumArray{ - Elements: elements, - Dimensions: []ArrayDimension{{Length: int32(len(elements)), LowerBound: 1}}, - Status: Present, - } - } - - case []GenericText: - if value == nil { - *dst = EnumArray{Status: Null} - } else if len(value) == 0 { - *dst = EnumArray{Status: Present} - } else { - *dst = EnumArray{ 
- Elements: value, - Dimensions: []ArrayDimension{{Length: int32(len(value)), LowerBound: 1}}, - Status: Present, - } - } - default: - // Fallback to reflection if an optimised match was not found. - // The reflection is necessary for arrays and multidimensional slices, - // but it comes with a 20-50% performance penalty for large arrays/slices - reflectedValue := reflect.ValueOf(src) - if !reflectedValue.IsValid() || reflectedValue.IsZero() { - *dst = EnumArray{Status: Null} - return nil - } - - dimensions, elementsLength, ok := findDimensionsFromValue(reflectedValue, nil, 0) - if !ok { - return fmt.Errorf("cannot find dimensions of %v for EnumArray", src) - } - if elementsLength == 0 { - *dst = EnumArray{Status: Present} - return nil - } - if len(dimensions) == 0 { - if originalSrc, ok := underlyingSliceType(src); ok { - return dst.Set(originalSrc) - } - return fmt.Errorf("cannot convert %v to EnumArray", src) - } - - *dst = EnumArray{ - Elements: make([]GenericText, elementsLength), - Dimensions: dimensions, - Status: Present, - } - elementCount, err := dst.setRecursive(reflectedValue, 0, 0) - if err != nil { - // Maybe the target was one dimension too far, try again: - if len(dst.Dimensions) > 1 { - dst.Dimensions = dst.Dimensions[:len(dst.Dimensions)-1] - elementsLength = 0 - for _, dim := range dst.Dimensions { - if elementsLength == 0 { - elementsLength = int(dim.Length) - } else { - elementsLength *= int(dim.Length) - } - } - dst.Elements = make([]GenericText, elementsLength) - elementCount, err = dst.setRecursive(reflectedValue, 0, 0) - if err != nil { - return err - } - } else { - return err - } - } - if elementCount != len(dst.Elements) { - return fmt.Errorf("cannot convert %v to EnumArray, expected %d dst.Elements, but got %d instead", src, len(dst.Elements), elementCount) - } - } - - return nil -} - -func (dst *EnumArray) setRecursive(value reflect.Value, index, dimension int) (int, error) { - switch value.Kind() { - case reflect.Array: - fallthrough - 
case reflect.Slice: - if len(dst.Dimensions) == dimension { - break - } - - valueLen := value.Len() - if int32(valueLen) != dst.Dimensions[dimension].Length { - return 0, fmt.Errorf("multidimensional arrays must have array expressions with matching dimensions") - } - for i := 0; i < valueLen; i++ { - var err error - index, err = dst.setRecursive(value.Index(i), index, dimension+1) - if err != nil { - return 0, err - } - } - - return index, nil - } - if !value.CanInterface() { - return 0, fmt.Errorf("cannot convert all values to EnumArray") - } - if err := dst.Elements[index].Set(value.Interface()); err != nil { - return 0, fmt.Errorf("%v in EnumArray", err) - } - index++ - - return index, nil -} - -func (dst EnumArray) Get() interface{} { - switch dst.Status { - case Present: - return dst - case Null: - return nil - default: - return dst.Status - } -} - -func (src *EnumArray) AssignTo(dst interface{}) error { - switch src.Status { - case Present: - if len(src.Dimensions) <= 1 { - // Attempt to match to select common types: - switch v := dst.(type) { - - case *[]string: - *v = make([]string, len(src.Elements)) - for i := range src.Elements { - if err := src.Elements[i].AssignTo(&((*v)[i])); err != nil { - return err - } - } - return nil - - case *[]*string: - *v = make([]*string, len(src.Elements)) - for i := range src.Elements { - if err := src.Elements[i].AssignTo(&((*v)[i])); err != nil { - return err - } - } - return nil - - } - } - - // Try to convert to something AssignTo can use directly. - if nextDst, retry := GetAssignToDstType(dst); retry { - return src.AssignTo(nextDst) - } - - // Fallback to reflection if an optimised match was not found. 
- // The reflection is necessary for arrays and multidimensional slices, - // but it comes with a 20-50% performance penalty for large arrays/slices - value := reflect.ValueOf(dst) - if value.Kind() == reflect.Ptr { - value = value.Elem() - } - - switch value.Kind() { - case reflect.Array, reflect.Slice: - default: - return fmt.Errorf("cannot assign %T to %T", src, dst) - } - - if len(src.Elements) == 0 { - if value.Kind() == reflect.Slice { - value.Set(reflect.MakeSlice(value.Type(), 0, 0)) - return nil - } - } - - elementCount, err := src.assignToRecursive(value, 0, 0) - if err != nil { - return err - } - if elementCount != len(src.Elements) { - return fmt.Errorf("cannot assign %v, needed to assign %d elements, but only assigned %d", dst, len(src.Elements), elementCount) - } - - return nil - case Null: - return NullAssignTo(dst) - } - - return fmt.Errorf("cannot decode %#v into %T", src, dst) -} - -func (src *EnumArray) assignToRecursive(value reflect.Value, index, dimension int) (int, error) { - switch kind := value.Kind(); kind { - case reflect.Array: - fallthrough - case reflect.Slice: - if len(src.Dimensions) == dimension { - break - } - - length := int(src.Dimensions[dimension].Length) - if reflect.Array == kind { - typ := value.Type() - if typ.Len() != length { - return 0, fmt.Errorf("expected size %d array, but %s has size %d array", length, typ, typ.Len()) - } - value.Set(reflect.New(typ).Elem()) - } else { - value.Set(reflect.MakeSlice(value.Type(), length, length)) - } - - var err error - for i := 0; i < length; i++ { - index, err = src.assignToRecursive(value.Index(i), index, dimension+1) - if err != nil { - return 0, err - } - } - - return index, nil - } - if len(src.Dimensions) != dimension { - return 0, fmt.Errorf("incorrect dimensions, expected %d, found %d", len(src.Dimensions), dimension) - } - if !value.CanAddr() { - return 0, fmt.Errorf("cannot assign all values from EnumArray") - } - addr := value.Addr() - if !addr.CanInterface() { - return 0, 
fmt.Errorf("cannot assign all values from EnumArray") - } - if err := src.Elements[index].AssignTo(addr.Interface()); err != nil { - return 0, err - } - index++ - return index, nil -} - -func (dst *EnumArray) DecodeText(ci *ConnInfo, src []byte) error { - if src == nil { - *dst = EnumArray{Status: Null} - return nil - } - - uta, err := ParseUntypedTextArray(string(src)) - if err != nil { - return err - } - - var elements []GenericText - - if len(uta.Elements) > 0 { - elements = make([]GenericText, len(uta.Elements)) - - for i, s := range uta.Elements { - var elem GenericText - var elemSrc []byte - if s != "NULL" || uta.Quoted[i] { - elemSrc = []byte(s) - } - err = elem.DecodeText(ci, elemSrc) - if err != nil { - return err - } - - elements[i] = elem - } - } - - *dst = EnumArray{Elements: elements, Dimensions: uta.Dimensions, Status: Present} - - return nil -} - -func (src EnumArray) EncodeText(ci *ConnInfo, buf []byte) ([]byte, error) { - switch src.Status { - case Null: - return nil, nil - case Undefined: - return nil, errUndefined - } - - if len(src.Dimensions) == 0 { - return append(buf, '{', '}'), nil - } - - buf = EncodeTextArrayDimensions(buf, src.Dimensions) - - // dimElemCounts is the multiples of elements that each array lies on. For - // example, a single dimension array of length 4 would have a dimElemCounts of - // [4]. A multi-dimensional array of lengths [3,5,2] would have a - // dimElemCounts of [30,10,2]. This is used to simplify when to render a '{' - // or '}'. 
- dimElemCounts := make([]int, len(src.Dimensions)) - dimElemCounts[len(src.Dimensions)-1] = int(src.Dimensions[len(src.Dimensions)-1].Length) - for i := len(src.Dimensions) - 2; i > -1; i-- { - dimElemCounts[i] = int(src.Dimensions[i].Length) * dimElemCounts[i+1] - } - - inElemBuf := make([]byte, 0, 32) - for i, elem := range src.Elements { - if i > 0 { - buf = append(buf, ',') - } - - for _, dec := range dimElemCounts { - if i%dec == 0 { - buf = append(buf, '{') - } - } - - elemBuf, err := elem.EncodeText(ci, inElemBuf) - if err != nil { - return nil, err - } - if elemBuf == nil { - buf = append(buf, `NULL`...) - } else { - buf = append(buf, QuoteArrayElementIfNeeded(string(elemBuf))...) - } - - for _, dec := range dimElemCounts { - if (i+1)%dec == 0 { - buf = append(buf, '}') - } - } - } - - return buf, nil -} - -// Scan implements the database/sql Scanner interface. -func (dst *EnumArray) Scan(src interface{}) error { - if src == nil { - return dst.DecodeText(nil, nil) - } - - switch src := src.(type) { - case string: - return dst.DecodeText(nil, []byte(src)) - case []byte: - srcCopy := make([]byte, len(src)) - copy(srcCopy, src) - return dst.DecodeText(nil, srcCopy) - } - - return fmt.Errorf("cannot scan %T", src) -} - -// Value implements the database/sql/driver Valuer interface. -func (src EnumArray) Value() (driver.Value, error) { - buf, err := src.EncodeText(nil, nil) - if err != nil { - return nil, err - } - if buf == nil { - return nil, nil - } - - return string(buf), nil -} diff --git a/vendor/github.com/jackc/pgtype/enum_type.go b/vendor/github.com/jackc/pgtype/enum_type.go deleted file mode 100644 index 52657822..00000000 --- a/vendor/github.com/jackc/pgtype/enum_type.go +++ /dev/null @@ -1,168 +0,0 @@ -package pgtype - -import "fmt" - -// EnumType represents an enum type. While it implements Value, this is only in service of its type conversion duties -// when registered as a data type in a ConnType. It should not be used directly as a Value. 
-type EnumType struct { - value string - status Status - - typeName string // PostgreSQL type name - members []string // enum members - membersMap map[string]string // map to quickly lookup member and reuse string instead of allocating -} - -// NewEnumType initializes a new EnumType. It retains a read-only reference to members. members must not be changed. -func NewEnumType(typeName string, members []string) *EnumType { - et := &EnumType{typeName: typeName, members: members} - et.membersMap = make(map[string]string, len(members)) - for _, m := range members { - et.membersMap[m] = m - } - return et -} - -func (et *EnumType) NewTypeValue() Value { - return &EnumType{ - value: et.value, - status: et.status, - - typeName: et.typeName, - members: et.members, - membersMap: et.membersMap, - } -} - -func (et *EnumType) TypeName() string { - return et.typeName -} - -func (et *EnumType) Members() []string { - return et.members -} - -// Set assigns src to dst. Set purposely does not check that src is a member. This allows continued error free -// operation in the event the PostgreSQL enum type is modified during a connection. 
-func (dst *EnumType) Set(src interface{}) error { - if src == nil { - dst.status = Null - return nil - } - - if value, ok := src.(interface{ Get() interface{} }); ok { - value2 := value.Get() - if value2 != value { - return dst.Set(value2) - } - } - - switch value := src.(type) { - case string: - dst.value = value - dst.status = Present - case *string: - if value == nil { - dst.status = Null - } else { - dst.value = *value - dst.status = Present - } - case []byte: - if value == nil { - dst.status = Null - } else { - dst.value = string(value) - dst.status = Present - } - default: - if originalSrc, ok := underlyingStringType(src); ok { - return dst.Set(originalSrc) - } - return fmt.Errorf("cannot convert %v to enum %s", value, dst.typeName) - } - - return nil -} - -func (dst EnumType) Get() interface{} { - switch dst.status { - case Present: - return dst.value - case Null: - return nil - default: - return dst.status - } -} - -func (src *EnumType) AssignTo(dst interface{}) error { - switch src.status { - case Present: - switch v := dst.(type) { - case *string: - *v = src.value - return nil - case *[]byte: - *v = make([]byte, len(src.value)) - copy(*v, src.value) - return nil - default: - if nextDst, retry := GetAssignToDstType(dst); retry { - return src.AssignTo(nextDst) - } - return fmt.Errorf("unable to assign to %T", dst) - } - case Null: - return NullAssignTo(dst) - } - - return fmt.Errorf("cannot decode %#v into %T", src, dst) -} - -func (EnumType) PreferredResultFormat() int16 { - return TextFormatCode -} - -func (dst *EnumType) DecodeText(ci *ConnInfo, src []byte) error { - if src == nil { - dst.status = Null - return nil - } - - // Lookup the string in membersMap to avoid an allocation. - if s, found := dst.membersMap[string(src)]; found { - dst.value = s - } else { - // If an enum type is modified after the initial connection it is possible to receive an unexpected value. - // Gracefully handle this situation. 
Purposely NOT modifying members and membersMap to allow for sharing members - // and membersMap between connections. - dst.value = string(src) - } - dst.status = Present - - return nil -} - -func (dst *EnumType) DecodeBinary(ci *ConnInfo, src []byte) error { - return dst.DecodeText(ci, src) -} - -func (EnumType) PreferredParamFormat() int16 { - return TextFormatCode -} - -func (src EnumType) EncodeText(ci *ConnInfo, buf []byte) ([]byte, error) { - switch src.status { - case Null: - return nil, nil - case Undefined: - return nil, errUndefined - } - - return append(buf, src.value...), nil -} - -func (src EnumType) EncodeBinary(ci *ConnInfo, buf []byte) ([]byte, error) { - return src.EncodeText(ci, buf) -} diff --git a/vendor/github.com/jackc/pgtype/float4.go b/vendor/github.com/jackc/pgtype/float4.go deleted file mode 100644 index 89b9e8fa..00000000 --- a/vendor/github.com/jackc/pgtype/float4.go +++ /dev/null @@ -1,282 +0,0 @@ -package pgtype - -import ( - "database/sql/driver" - "encoding/binary" - "fmt" - "math" - "strconv" - - "github.com/jackc/pgio" -) - -type Float4 struct { - Float float32 - Status Status -} - -func (dst *Float4) Set(src interface{}) error { - if src == nil { - *dst = Float4{Status: Null} - return nil - } - - if value, ok := src.(interface{ Get() interface{} }); ok { - value2 := value.Get() - if value2 != value { - return dst.Set(value2) - } - } - - switch value := src.(type) { - case float32: - *dst = Float4{Float: value, Status: Present} - case float64: - *dst = Float4{Float: float32(value), Status: Present} - case int8: - *dst = Float4{Float: float32(value), Status: Present} - case uint8: - *dst = Float4{Float: float32(value), Status: Present} - case int16: - *dst = Float4{Float: float32(value), Status: Present} - case uint16: - *dst = Float4{Float: float32(value), Status: Present} - case int32: - f32 := float32(value) - if int32(f32) == value { - *dst = Float4{Float: f32, Status: Present} - } else { - return fmt.Errorf("%v cannot be exactly 
represented as float32", value) - } - case uint32: - f32 := float32(value) - if uint32(f32) == value { - *dst = Float4{Float: f32, Status: Present} - } else { - return fmt.Errorf("%v cannot be exactly represented as float32", value) - } - case int64: - f32 := float32(value) - if int64(f32) == value { - *dst = Float4{Float: f32, Status: Present} - } else { - return fmt.Errorf("%v cannot be exactly represented as float32", value) - } - case uint64: - f32 := float32(value) - if uint64(f32) == value { - *dst = Float4{Float: f32, Status: Present} - } else { - return fmt.Errorf("%v cannot be exactly represented as float32", value) - } - case int: - f32 := float32(value) - if int(f32) == value { - *dst = Float4{Float: f32, Status: Present} - } else { - return fmt.Errorf("%v cannot be exactly represented as float32", value) - } - case uint: - f32 := float32(value) - if uint(f32) == value { - *dst = Float4{Float: f32, Status: Present} - } else { - return fmt.Errorf("%v cannot be exactly represented as float32", value) - } - case string: - num, err := strconv.ParseFloat(value, 32) - if err != nil { - return err - } - *dst = Float4{Float: float32(num), Status: Present} - case *float64: - if value == nil { - *dst = Float4{Status: Null} - } else { - return dst.Set(*value) - } - case *float32: - if value == nil { - *dst = Float4{Status: Null} - } else { - return dst.Set(*value) - } - case *int8: - if value == nil { - *dst = Float4{Status: Null} - } else { - return dst.Set(*value) - } - case *uint8: - if value == nil { - *dst = Float4{Status: Null} - } else { - return dst.Set(*value) - } - case *int16: - if value == nil { - *dst = Float4{Status: Null} - } else { - return dst.Set(*value) - } - case *uint16: - if value == nil { - *dst = Float4{Status: Null} - } else { - return dst.Set(*value) - } - case *int32: - if value == nil { - *dst = Float4{Status: Null} - } else { - return dst.Set(*value) - } - case *uint32: - if value == nil { - *dst = Float4{Status: Null} - } else { - 
return dst.Set(*value) - } - case *int64: - if value == nil { - *dst = Float4{Status: Null} - } else { - return dst.Set(*value) - } - case *uint64: - if value == nil { - *dst = Float4{Status: Null} - } else { - return dst.Set(*value) - } - case *int: - if value == nil { - *dst = Float4{Status: Null} - } else { - return dst.Set(*value) - } - case *uint: - if value == nil { - *dst = Float4{Status: Null} - } else { - return dst.Set(*value) - } - case *string: - if value == nil { - *dst = Float4{Status: Null} - } else { - return dst.Set(*value) - } - default: - if originalSrc, ok := underlyingNumberType(src); ok { - return dst.Set(originalSrc) - } - return fmt.Errorf("cannot convert %v to Float8", value) - } - - return nil -} - -func (dst Float4) Get() interface{} { - switch dst.Status { - case Present: - return dst.Float - case Null: - return nil - default: - return dst.Status - } -} - -func (src *Float4) AssignTo(dst interface{}) error { - return float64AssignTo(float64(src.Float), src.Status, dst) -} - -func (dst *Float4) DecodeText(ci *ConnInfo, src []byte) error { - if src == nil { - *dst = Float4{Status: Null} - return nil - } - - n, err := strconv.ParseFloat(string(src), 32) - if err != nil { - return err - } - - *dst = Float4{Float: float32(n), Status: Present} - return nil -} - -func (dst *Float4) DecodeBinary(ci *ConnInfo, src []byte) error { - if src == nil { - *dst = Float4{Status: Null} - return nil - } - - if len(src) != 4 { - return fmt.Errorf("invalid length for float4: %v", len(src)) - } - - n := int32(binary.BigEndian.Uint32(src)) - - *dst = Float4{Float: math.Float32frombits(uint32(n)), Status: Present} - return nil -} - -func (src Float4) EncodeText(ci *ConnInfo, buf []byte) ([]byte, error) { - switch src.Status { - case Null: - return nil, nil - case Undefined: - return nil, errUndefined - } - - buf = append(buf, strconv.FormatFloat(float64(src.Float), 'f', -1, 32)...) 
- return buf, nil -} - -func (src Float4) EncodeBinary(ci *ConnInfo, buf []byte) ([]byte, error) { - switch src.Status { - case Null: - return nil, nil - case Undefined: - return nil, errUndefined - } - - buf = pgio.AppendUint32(buf, math.Float32bits(src.Float)) - return buf, nil -} - -// Scan implements the database/sql Scanner interface. -func (dst *Float4) Scan(src interface{}) error { - if src == nil { - *dst = Float4{Status: Null} - return nil - } - - switch src := src.(type) { - case float64: - *dst = Float4{Float: float32(src), Status: Present} - return nil - case string: - return dst.DecodeText(nil, []byte(src)) - case []byte: - srcCopy := make([]byte, len(src)) - copy(srcCopy, src) - return dst.DecodeText(nil, srcCopy) - } - - return fmt.Errorf("cannot scan %T", src) -} - -// Value implements the database/sql/driver Valuer interface. -func (src Float4) Value() (driver.Value, error) { - switch src.Status { - case Present: - return float64(src.Float), nil - case Null: - return nil, nil - default: - return nil, errUndefined - } -} diff --git a/vendor/github.com/jackc/pgtype/float4_array.go b/vendor/github.com/jackc/pgtype/float4_array.go deleted file mode 100644 index 41f2ec8f..00000000 --- a/vendor/github.com/jackc/pgtype/float4_array.go +++ /dev/null @@ -1,517 +0,0 @@ -// Code generated by erb. DO NOT EDIT. 
- -package pgtype - -import ( - "database/sql/driver" - "encoding/binary" - "fmt" - "reflect" - - "github.com/jackc/pgio" -) - -type Float4Array struct { - Elements []Float4 - Dimensions []ArrayDimension - Status Status -} - -func (dst *Float4Array) Set(src interface{}) error { - // untyped nil and typed nil interfaces are different - if src == nil { - *dst = Float4Array{Status: Null} - return nil - } - - if value, ok := src.(interface{ Get() interface{} }); ok { - value2 := value.Get() - if value2 != value { - return dst.Set(value2) - } - } - - // Attempt to match to select common types: - switch value := src.(type) { - - case []float32: - if value == nil { - *dst = Float4Array{Status: Null} - } else if len(value) == 0 { - *dst = Float4Array{Status: Present} - } else { - elements := make([]Float4, len(value)) - for i := range value { - if err := elements[i].Set(value[i]); err != nil { - return err - } - } - *dst = Float4Array{ - Elements: elements, - Dimensions: []ArrayDimension{{Length: int32(len(elements)), LowerBound: 1}}, - Status: Present, - } - } - - case []*float32: - if value == nil { - *dst = Float4Array{Status: Null} - } else if len(value) == 0 { - *dst = Float4Array{Status: Present} - } else { - elements := make([]Float4, len(value)) - for i := range value { - if err := elements[i].Set(value[i]); err != nil { - return err - } - } - *dst = Float4Array{ - Elements: elements, - Dimensions: []ArrayDimension{{Length: int32(len(elements)), LowerBound: 1}}, - Status: Present, - } - } - - case []Float4: - if value == nil { - *dst = Float4Array{Status: Null} - } else if len(value) == 0 { - *dst = Float4Array{Status: Present} - } else { - *dst = Float4Array{ - Elements: value, - Dimensions: []ArrayDimension{{Length: int32(len(value)), LowerBound: 1}}, - Status: Present, - } - } - default: - // Fallback to reflection if an optimised match was not found. 
- // The reflection is necessary for arrays and multidimensional slices, - // but it comes with a 20-50% performance penalty for large arrays/slices - reflectedValue := reflect.ValueOf(src) - if !reflectedValue.IsValid() || reflectedValue.IsZero() { - *dst = Float4Array{Status: Null} - return nil - } - - dimensions, elementsLength, ok := findDimensionsFromValue(reflectedValue, nil, 0) - if !ok { - return fmt.Errorf("cannot find dimensions of %v for Float4Array", src) - } - if elementsLength == 0 { - *dst = Float4Array{Status: Present} - return nil - } - if len(dimensions) == 0 { - if originalSrc, ok := underlyingSliceType(src); ok { - return dst.Set(originalSrc) - } - return fmt.Errorf("cannot convert %v to Float4Array", src) - } - - *dst = Float4Array{ - Elements: make([]Float4, elementsLength), - Dimensions: dimensions, - Status: Present, - } - elementCount, err := dst.setRecursive(reflectedValue, 0, 0) - if err != nil { - // Maybe the target was one dimension too far, try again: - if len(dst.Dimensions) > 1 { - dst.Dimensions = dst.Dimensions[:len(dst.Dimensions)-1] - elementsLength = 0 - for _, dim := range dst.Dimensions { - if elementsLength == 0 { - elementsLength = int(dim.Length) - } else { - elementsLength *= int(dim.Length) - } - } - dst.Elements = make([]Float4, elementsLength) - elementCount, err = dst.setRecursive(reflectedValue, 0, 0) - if err != nil { - return err - } - } else { - return err - } - } - if elementCount != len(dst.Elements) { - return fmt.Errorf("cannot convert %v to Float4Array, expected %d dst.Elements, but got %d instead", src, len(dst.Elements), elementCount) - } - } - - return nil -} - -func (dst *Float4Array) setRecursive(value reflect.Value, index, dimension int) (int, error) { - switch value.Kind() { - case reflect.Array: - fallthrough - case reflect.Slice: - if len(dst.Dimensions) == dimension { - break - } - - valueLen := value.Len() - if int32(valueLen) != dst.Dimensions[dimension].Length { - return 0, 
fmt.Errorf("multidimensional arrays must have array expressions with matching dimensions") - } - for i := 0; i < valueLen; i++ { - var err error - index, err = dst.setRecursive(value.Index(i), index, dimension+1) - if err != nil { - return 0, err - } - } - - return index, nil - } - if !value.CanInterface() { - return 0, fmt.Errorf("cannot convert all values to Float4Array") - } - if err := dst.Elements[index].Set(value.Interface()); err != nil { - return 0, fmt.Errorf("%v in Float4Array", err) - } - index++ - - return index, nil -} - -func (dst Float4Array) Get() interface{} { - switch dst.Status { - case Present: - return dst - case Null: - return nil - default: - return dst.Status - } -} - -func (src *Float4Array) AssignTo(dst interface{}) error { - switch src.Status { - case Present: - if len(src.Dimensions) <= 1 { - // Attempt to match to select common types: - switch v := dst.(type) { - - case *[]float32: - *v = make([]float32, len(src.Elements)) - for i := range src.Elements { - if err := src.Elements[i].AssignTo(&((*v)[i])); err != nil { - return err - } - } - return nil - - case *[]*float32: - *v = make([]*float32, len(src.Elements)) - for i := range src.Elements { - if err := src.Elements[i].AssignTo(&((*v)[i])); err != nil { - return err - } - } - return nil - - } - } - - // Try to convert to something AssignTo can use directly. - if nextDst, retry := GetAssignToDstType(dst); retry { - return src.AssignTo(nextDst) - } - - // Fallback to reflection if an optimised match was not found. 
- // The reflection is necessary for arrays and multidimensional slices, - // but it comes with a 20-50% performance penalty for large arrays/slices - value := reflect.ValueOf(dst) - if value.Kind() == reflect.Ptr { - value = value.Elem() - } - - switch value.Kind() { - case reflect.Array, reflect.Slice: - default: - return fmt.Errorf("cannot assign %T to %T", src, dst) - } - - if len(src.Elements) == 0 { - if value.Kind() == reflect.Slice { - value.Set(reflect.MakeSlice(value.Type(), 0, 0)) - return nil - } - } - - elementCount, err := src.assignToRecursive(value, 0, 0) - if err != nil { - return err - } - if elementCount != len(src.Elements) { - return fmt.Errorf("cannot assign %v, needed to assign %d elements, but only assigned %d", dst, len(src.Elements), elementCount) - } - - return nil - case Null: - return NullAssignTo(dst) - } - - return fmt.Errorf("cannot decode %#v into %T", src, dst) -} - -func (src *Float4Array) assignToRecursive(value reflect.Value, index, dimension int) (int, error) { - switch kind := value.Kind(); kind { - case reflect.Array: - fallthrough - case reflect.Slice: - if len(src.Dimensions) == dimension { - break - } - - length := int(src.Dimensions[dimension].Length) - if reflect.Array == kind { - typ := value.Type() - if typ.Len() != length { - return 0, fmt.Errorf("expected size %d array, but %s has size %d array", length, typ, typ.Len()) - } - value.Set(reflect.New(typ).Elem()) - } else { - value.Set(reflect.MakeSlice(value.Type(), length, length)) - } - - var err error - for i := 0; i < length; i++ { - index, err = src.assignToRecursive(value.Index(i), index, dimension+1) - if err != nil { - return 0, err - } - } - - return index, nil - } - if len(src.Dimensions) != dimension { - return 0, fmt.Errorf("incorrect dimensions, expected %d, found %d", len(src.Dimensions), dimension) - } - if !value.CanAddr() { - return 0, fmt.Errorf("cannot assign all values from Float4Array") - } - addr := value.Addr() - if !addr.CanInterface() { - 
return 0, fmt.Errorf("cannot assign all values from Float4Array") - } - if err := src.Elements[index].AssignTo(addr.Interface()); err != nil { - return 0, err - } - index++ - return index, nil -} - -func (dst *Float4Array) DecodeText(ci *ConnInfo, src []byte) error { - if src == nil { - *dst = Float4Array{Status: Null} - return nil - } - - uta, err := ParseUntypedTextArray(string(src)) - if err != nil { - return err - } - - var elements []Float4 - - if len(uta.Elements) > 0 { - elements = make([]Float4, len(uta.Elements)) - - for i, s := range uta.Elements { - var elem Float4 - var elemSrc []byte - if s != "NULL" || uta.Quoted[i] { - elemSrc = []byte(s) - } - err = elem.DecodeText(ci, elemSrc) - if err != nil { - return err - } - - elements[i] = elem - } - } - - *dst = Float4Array{Elements: elements, Dimensions: uta.Dimensions, Status: Present} - - return nil -} - -func (dst *Float4Array) DecodeBinary(ci *ConnInfo, src []byte) error { - if src == nil { - *dst = Float4Array{Status: Null} - return nil - } - - var arrayHeader ArrayHeader - rp, err := arrayHeader.DecodeBinary(ci, src) - if err != nil { - return err - } - - if len(arrayHeader.Dimensions) == 0 { - *dst = Float4Array{Dimensions: arrayHeader.Dimensions, Status: Present} - return nil - } - - elementCount := arrayHeader.Dimensions[0].Length - for _, d := range arrayHeader.Dimensions[1:] { - elementCount *= d.Length - } - - elements := make([]Float4, elementCount) - - for i := range elements { - elemLen := int(int32(binary.BigEndian.Uint32(src[rp:]))) - rp += 4 - var elemSrc []byte - if elemLen >= 0 { - elemSrc = src[rp : rp+elemLen] - rp += elemLen - } - err = elements[i].DecodeBinary(ci, elemSrc) - if err != nil { - return err - } - } - - *dst = Float4Array{Elements: elements, Dimensions: arrayHeader.Dimensions, Status: Present} - return nil -} - -func (src Float4Array) EncodeText(ci *ConnInfo, buf []byte) ([]byte, error) { - switch src.Status { - case Null: - return nil, nil - case Undefined: - return nil, 
errUndefined - } - - if len(src.Dimensions) == 0 { - return append(buf, '{', '}'), nil - } - - buf = EncodeTextArrayDimensions(buf, src.Dimensions) - - // dimElemCounts is the multiples of elements that each array lies on. For - // example, a single dimension array of length 4 would have a dimElemCounts of - // [4]. A multi-dimensional array of lengths [3,5,2] would have a - // dimElemCounts of [30,10,2]. This is used to simplify when to render a '{' - // or '}'. - dimElemCounts := make([]int, len(src.Dimensions)) - dimElemCounts[len(src.Dimensions)-1] = int(src.Dimensions[len(src.Dimensions)-1].Length) - for i := len(src.Dimensions) - 2; i > -1; i-- { - dimElemCounts[i] = int(src.Dimensions[i].Length) * dimElemCounts[i+1] - } - - inElemBuf := make([]byte, 0, 32) - for i, elem := range src.Elements { - if i > 0 { - buf = append(buf, ',') - } - - for _, dec := range dimElemCounts { - if i%dec == 0 { - buf = append(buf, '{') - } - } - - elemBuf, err := elem.EncodeText(ci, inElemBuf) - if err != nil { - return nil, err - } - if elemBuf == nil { - buf = append(buf, `NULL`...) - } else { - buf = append(buf, QuoteArrayElementIfNeeded(string(elemBuf))...) 
- } - - for _, dec := range dimElemCounts { - if (i+1)%dec == 0 { - buf = append(buf, '}') - } - } - } - - return buf, nil -} - -func (src Float4Array) EncodeBinary(ci *ConnInfo, buf []byte) ([]byte, error) { - switch src.Status { - case Null: - return nil, nil - case Undefined: - return nil, errUndefined - } - - arrayHeader := ArrayHeader{ - Dimensions: src.Dimensions, - } - - if dt, ok := ci.DataTypeForName("float4"); ok { - arrayHeader.ElementOID = int32(dt.OID) - } else { - return nil, fmt.Errorf("unable to find oid for type name %v", "float4") - } - - for i := range src.Elements { - if src.Elements[i].Status == Null { - arrayHeader.ContainsNull = true - break - } - } - - buf = arrayHeader.EncodeBinary(ci, buf) - - for i := range src.Elements { - sp := len(buf) - buf = pgio.AppendInt32(buf, -1) - - elemBuf, err := src.Elements[i].EncodeBinary(ci, buf) - if err != nil { - return nil, err - } - if elemBuf != nil { - buf = elemBuf - pgio.SetInt32(buf[sp:], int32(len(buf[sp:])-4)) - } - } - - return buf, nil -} - -// Scan implements the database/sql Scanner interface. -func (dst *Float4Array) Scan(src interface{}) error { - if src == nil { - return dst.DecodeText(nil, nil) - } - - switch src := src.(type) { - case string: - return dst.DecodeText(nil, []byte(src)) - case []byte: - srcCopy := make([]byte, len(src)) - copy(srcCopy, src) - return dst.DecodeText(nil, srcCopy) - } - - return fmt.Errorf("cannot scan %T", src) -} - -// Value implements the database/sql/driver Valuer interface. 
-func (src Float4Array) Value() (driver.Value, error) { - buf, err := src.EncodeText(nil, nil) - if err != nil { - return nil, err - } - if buf == nil { - return nil, nil - } - - return string(buf), nil -} diff --git a/vendor/github.com/jackc/pgtype/float8.go b/vendor/github.com/jackc/pgtype/float8.go deleted file mode 100644 index 6297ab5e..00000000 --- a/vendor/github.com/jackc/pgtype/float8.go +++ /dev/null @@ -1,272 +0,0 @@ -package pgtype - -import ( - "database/sql/driver" - "encoding/binary" - "fmt" - "math" - "strconv" - - "github.com/jackc/pgio" -) - -type Float8 struct { - Float float64 - Status Status -} - -func (dst *Float8) Set(src interface{}) error { - if src == nil { - *dst = Float8{Status: Null} - return nil - } - - if value, ok := src.(interface{ Get() interface{} }); ok { - value2 := value.Get() - if value2 != value { - return dst.Set(value2) - } - } - - switch value := src.(type) { - case float32: - *dst = Float8{Float: float64(value), Status: Present} - case float64: - *dst = Float8{Float: value, Status: Present} - case int8: - *dst = Float8{Float: float64(value), Status: Present} - case uint8: - *dst = Float8{Float: float64(value), Status: Present} - case int16: - *dst = Float8{Float: float64(value), Status: Present} - case uint16: - *dst = Float8{Float: float64(value), Status: Present} - case int32: - *dst = Float8{Float: float64(value), Status: Present} - case uint32: - *dst = Float8{Float: float64(value), Status: Present} - case int64: - f64 := float64(value) - if int64(f64) == value { - *dst = Float8{Float: f64, Status: Present} - } else { - return fmt.Errorf("%v cannot be exactly represented as float64", value) - } - case uint64: - f64 := float64(value) - if uint64(f64) == value { - *dst = Float8{Float: f64, Status: Present} - } else { - return fmt.Errorf("%v cannot be exactly represented as float64", value) - } - case int: - f64 := float64(value) - if int(f64) == value { - *dst = Float8{Float: f64, Status: Present} - } else { - return 
fmt.Errorf("%v cannot be exactly represented as float64", value) - } - case uint: - f64 := float64(value) - if uint(f64) == value { - *dst = Float8{Float: f64, Status: Present} - } else { - return fmt.Errorf("%v cannot be exactly represented as float64", value) - } - case string: - num, err := strconv.ParseFloat(value, 64) - if err != nil { - return err - } - *dst = Float8{Float: float64(num), Status: Present} - case *float64: - if value == nil { - *dst = Float8{Status: Null} - } else { - return dst.Set(*value) - } - case *float32: - if value == nil { - *dst = Float8{Status: Null} - } else { - return dst.Set(*value) - } - case *int8: - if value == nil { - *dst = Float8{Status: Null} - } else { - return dst.Set(*value) - } - case *uint8: - if value == nil { - *dst = Float8{Status: Null} - } else { - return dst.Set(*value) - } - case *int16: - if value == nil { - *dst = Float8{Status: Null} - } else { - return dst.Set(*value) - } - case *uint16: - if value == nil { - *dst = Float8{Status: Null} - } else { - return dst.Set(*value) - } - case *int32: - if value == nil { - *dst = Float8{Status: Null} - } else { - return dst.Set(*value) - } - case *uint32: - if value == nil { - *dst = Float8{Status: Null} - } else { - return dst.Set(*value) - } - case *int64: - if value == nil { - *dst = Float8{Status: Null} - } else { - return dst.Set(*value) - } - case *uint64: - if value == nil { - *dst = Float8{Status: Null} - } else { - return dst.Set(*value) - } - case *int: - if value == nil { - *dst = Float8{Status: Null} - } else { - return dst.Set(*value) - } - case *uint: - if value == nil { - *dst = Float8{Status: Null} - } else { - return dst.Set(*value) - } - case *string: - if value == nil { - *dst = Float8{Status: Null} - } else { - return dst.Set(*value) - } - default: - if originalSrc, ok := underlyingNumberType(src); ok { - return dst.Set(originalSrc) - } - return fmt.Errorf("cannot convert %v to Float8", value) - } - - return nil -} - -func (dst Float8) Get() 
interface{} { - switch dst.Status { - case Present: - return dst.Float - case Null: - return nil - default: - return dst.Status - } -} - -func (src *Float8) AssignTo(dst interface{}) error { - return float64AssignTo(src.Float, src.Status, dst) -} - -func (dst *Float8) DecodeText(ci *ConnInfo, src []byte) error { - if src == nil { - *dst = Float8{Status: Null} - return nil - } - - n, err := strconv.ParseFloat(string(src), 64) - if err != nil { - return err - } - - *dst = Float8{Float: n, Status: Present} - return nil -} - -func (dst *Float8) DecodeBinary(ci *ConnInfo, src []byte) error { - if src == nil { - *dst = Float8{Status: Null} - return nil - } - - if len(src) != 8 { - return fmt.Errorf("invalid length for float8: %v", len(src)) - } - - n := int64(binary.BigEndian.Uint64(src)) - - *dst = Float8{Float: math.Float64frombits(uint64(n)), Status: Present} - return nil -} - -func (src Float8) EncodeText(ci *ConnInfo, buf []byte) ([]byte, error) { - switch src.Status { - case Null: - return nil, nil - case Undefined: - return nil, errUndefined - } - - buf = append(buf, strconv.FormatFloat(float64(src.Float), 'f', -1, 64)...) - return buf, nil -} - -func (src Float8) EncodeBinary(ci *ConnInfo, buf []byte) ([]byte, error) { - switch src.Status { - case Null: - return nil, nil - case Undefined: - return nil, errUndefined - } - - buf = pgio.AppendUint64(buf, math.Float64bits(src.Float)) - return buf, nil -} - -// Scan implements the database/sql Scanner interface. -func (dst *Float8) Scan(src interface{}) error { - if src == nil { - *dst = Float8{Status: Null} - return nil - } - - switch src := src.(type) { - case float64: - *dst = Float8{Float: src, Status: Present} - return nil - case string: - return dst.DecodeText(nil, []byte(src)) - case []byte: - srcCopy := make([]byte, len(src)) - copy(srcCopy, src) - return dst.DecodeText(nil, srcCopy) - } - - return fmt.Errorf("cannot scan %T", src) -} - -// Value implements the database/sql/driver Valuer interface. 
-func (src Float8) Value() (driver.Value, error) { - switch src.Status { - case Present: - return src.Float, nil - case Null: - return nil, nil - default: - return nil, errUndefined - } -} diff --git a/vendor/github.com/jackc/pgtype/float8_array.go b/vendor/github.com/jackc/pgtype/float8_array.go deleted file mode 100644 index 836ee19d..00000000 --- a/vendor/github.com/jackc/pgtype/float8_array.go +++ /dev/null @@ -1,517 +0,0 @@ -// Code generated by erb. DO NOT EDIT. - -package pgtype - -import ( - "database/sql/driver" - "encoding/binary" - "fmt" - "reflect" - - "github.com/jackc/pgio" -) - -type Float8Array struct { - Elements []Float8 - Dimensions []ArrayDimension - Status Status -} - -func (dst *Float8Array) Set(src interface{}) error { - // untyped nil and typed nil interfaces are different - if src == nil { - *dst = Float8Array{Status: Null} - return nil - } - - if value, ok := src.(interface{ Get() interface{} }); ok { - value2 := value.Get() - if value2 != value { - return dst.Set(value2) - } - } - - // Attempt to match to select common types: - switch value := src.(type) { - - case []float64: - if value == nil { - *dst = Float8Array{Status: Null} - } else if len(value) == 0 { - *dst = Float8Array{Status: Present} - } else { - elements := make([]Float8, len(value)) - for i := range value { - if err := elements[i].Set(value[i]); err != nil { - return err - } - } - *dst = Float8Array{ - Elements: elements, - Dimensions: []ArrayDimension{{Length: int32(len(elements)), LowerBound: 1}}, - Status: Present, - } - } - - case []*float64: - if value == nil { - *dst = Float8Array{Status: Null} - } else if len(value) == 0 { - *dst = Float8Array{Status: Present} - } else { - elements := make([]Float8, len(value)) - for i := range value { - if err := elements[i].Set(value[i]); err != nil { - return err - } - } - *dst = Float8Array{ - Elements: elements, - Dimensions: []ArrayDimension{{Length: int32(len(elements)), LowerBound: 1}}, - Status: Present, - } - } - - case 
[]Float8: - if value == nil { - *dst = Float8Array{Status: Null} - } else if len(value) == 0 { - *dst = Float8Array{Status: Present} - } else { - *dst = Float8Array{ - Elements: value, - Dimensions: []ArrayDimension{{Length: int32(len(value)), LowerBound: 1}}, - Status: Present, - } - } - default: - // Fallback to reflection if an optimised match was not found. - // The reflection is necessary for arrays and multidimensional slices, - // but it comes with a 20-50% performance penalty for large arrays/slices - reflectedValue := reflect.ValueOf(src) - if !reflectedValue.IsValid() || reflectedValue.IsZero() { - *dst = Float8Array{Status: Null} - return nil - } - - dimensions, elementsLength, ok := findDimensionsFromValue(reflectedValue, nil, 0) - if !ok { - return fmt.Errorf("cannot find dimensions of %v for Float8Array", src) - } - if elementsLength == 0 { - *dst = Float8Array{Status: Present} - return nil - } - if len(dimensions) == 0 { - if originalSrc, ok := underlyingSliceType(src); ok { - return dst.Set(originalSrc) - } - return fmt.Errorf("cannot convert %v to Float8Array", src) - } - - *dst = Float8Array{ - Elements: make([]Float8, elementsLength), - Dimensions: dimensions, - Status: Present, - } - elementCount, err := dst.setRecursive(reflectedValue, 0, 0) - if err != nil { - // Maybe the target was one dimension too far, try again: - if len(dst.Dimensions) > 1 { - dst.Dimensions = dst.Dimensions[:len(dst.Dimensions)-1] - elementsLength = 0 - for _, dim := range dst.Dimensions { - if elementsLength == 0 { - elementsLength = int(dim.Length) - } else { - elementsLength *= int(dim.Length) - } - } - dst.Elements = make([]Float8, elementsLength) - elementCount, err = dst.setRecursive(reflectedValue, 0, 0) - if err != nil { - return err - } - } else { - return err - } - } - if elementCount != len(dst.Elements) { - return fmt.Errorf("cannot convert %v to Float8Array, expected %d dst.Elements, but got %d instead", src, len(dst.Elements), elementCount) - } - } - - 
return nil -} - -func (dst *Float8Array) setRecursive(value reflect.Value, index, dimension int) (int, error) { - switch value.Kind() { - case reflect.Array: - fallthrough - case reflect.Slice: - if len(dst.Dimensions) == dimension { - break - } - - valueLen := value.Len() - if int32(valueLen) != dst.Dimensions[dimension].Length { - return 0, fmt.Errorf("multidimensional arrays must have array expressions with matching dimensions") - } - for i := 0; i < valueLen; i++ { - var err error - index, err = dst.setRecursive(value.Index(i), index, dimension+1) - if err != nil { - return 0, err - } - } - - return index, nil - } - if !value.CanInterface() { - return 0, fmt.Errorf("cannot convert all values to Float8Array") - } - if err := dst.Elements[index].Set(value.Interface()); err != nil { - return 0, fmt.Errorf("%v in Float8Array", err) - } - index++ - - return index, nil -} - -func (dst Float8Array) Get() interface{} { - switch dst.Status { - case Present: - return dst - case Null: - return nil - default: - return dst.Status - } -} - -func (src *Float8Array) AssignTo(dst interface{}) error { - switch src.Status { - case Present: - if len(src.Dimensions) <= 1 { - // Attempt to match to select common types: - switch v := dst.(type) { - - case *[]float64: - *v = make([]float64, len(src.Elements)) - for i := range src.Elements { - if err := src.Elements[i].AssignTo(&((*v)[i])); err != nil { - return err - } - } - return nil - - case *[]*float64: - *v = make([]*float64, len(src.Elements)) - for i := range src.Elements { - if err := src.Elements[i].AssignTo(&((*v)[i])); err != nil { - return err - } - } - return nil - - } - } - - // Try to convert to something AssignTo can use directly. - if nextDst, retry := GetAssignToDstType(dst); retry { - return src.AssignTo(nextDst) - } - - // Fallback to reflection if an optimised match was not found. 
- // The reflection is necessary for arrays and multidimensional slices, - // but it comes with a 20-50% performance penalty for large arrays/slices - value := reflect.ValueOf(dst) - if value.Kind() == reflect.Ptr { - value = value.Elem() - } - - switch value.Kind() { - case reflect.Array, reflect.Slice: - default: - return fmt.Errorf("cannot assign %T to %T", src, dst) - } - - if len(src.Elements) == 0 { - if value.Kind() == reflect.Slice { - value.Set(reflect.MakeSlice(value.Type(), 0, 0)) - return nil - } - } - - elementCount, err := src.assignToRecursive(value, 0, 0) - if err != nil { - return err - } - if elementCount != len(src.Elements) { - return fmt.Errorf("cannot assign %v, needed to assign %d elements, but only assigned %d", dst, len(src.Elements), elementCount) - } - - return nil - case Null: - return NullAssignTo(dst) - } - - return fmt.Errorf("cannot decode %#v into %T", src, dst) -} - -func (src *Float8Array) assignToRecursive(value reflect.Value, index, dimension int) (int, error) { - switch kind := value.Kind(); kind { - case reflect.Array: - fallthrough - case reflect.Slice: - if len(src.Dimensions) == dimension { - break - } - - length := int(src.Dimensions[dimension].Length) - if reflect.Array == kind { - typ := value.Type() - if typ.Len() != length { - return 0, fmt.Errorf("expected size %d array, but %s has size %d array", length, typ, typ.Len()) - } - value.Set(reflect.New(typ).Elem()) - } else { - value.Set(reflect.MakeSlice(value.Type(), length, length)) - } - - var err error - for i := 0; i < length; i++ { - index, err = src.assignToRecursive(value.Index(i), index, dimension+1) - if err != nil { - return 0, err - } - } - - return index, nil - } - if len(src.Dimensions) != dimension { - return 0, fmt.Errorf("incorrect dimensions, expected %d, found %d", len(src.Dimensions), dimension) - } - if !value.CanAddr() { - return 0, fmt.Errorf("cannot assign all values from Float8Array") - } - addr := value.Addr() - if !addr.CanInterface() { - 
return 0, fmt.Errorf("cannot assign all values from Float8Array") - } - if err := src.Elements[index].AssignTo(addr.Interface()); err != nil { - return 0, err - } - index++ - return index, nil -} - -func (dst *Float8Array) DecodeText(ci *ConnInfo, src []byte) error { - if src == nil { - *dst = Float8Array{Status: Null} - return nil - } - - uta, err := ParseUntypedTextArray(string(src)) - if err != nil { - return err - } - - var elements []Float8 - - if len(uta.Elements) > 0 { - elements = make([]Float8, len(uta.Elements)) - - for i, s := range uta.Elements { - var elem Float8 - var elemSrc []byte - if s != "NULL" || uta.Quoted[i] { - elemSrc = []byte(s) - } - err = elem.DecodeText(ci, elemSrc) - if err != nil { - return err - } - - elements[i] = elem - } - } - - *dst = Float8Array{Elements: elements, Dimensions: uta.Dimensions, Status: Present} - - return nil -} - -func (dst *Float8Array) DecodeBinary(ci *ConnInfo, src []byte) error { - if src == nil { - *dst = Float8Array{Status: Null} - return nil - } - - var arrayHeader ArrayHeader - rp, err := arrayHeader.DecodeBinary(ci, src) - if err != nil { - return err - } - - if len(arrayHeader.Dimensions) == 0 { - *dst = Float8Array{Dimensions: arrayHeader.Dimensions, Status: Present} - return nil - } - - elementCount := arrayHeader.Dimensions[0].Length - for _, d := range arrayHeader.Dimensions[1:] { - elementCount *= d.Length - } - - elements := make([]Float8, elementCount) - - for i := range elements { - elemLen := int(int32(binary.BigEndian.Uint32(src[rp:]))) - rp += 4 - var elemSrc []byte - if elemLen >= 0 { - elemSrc = src[rp : rp+elemLen] - rp += elemLen - } - err = elements[i].DecodeBinary(ci, elemSrc) - if err != nil { - return err - } - } - - *dst = Float8Array{Elements: elements, Dimensions: arrayHeader.Dimensions, Status: Present} - return nil -} - -func (src Float8Array) EncodeText(ci *ConnInfo, buf []byte) ([]byte, error) { - switch src.Status { - case Null: - return nil, nil - case Undefined: - return nil, 
errUndefined - } - - if len(src.Dimensions) == 0 { - return append(buf, '{', '}'), nil - } - - buf = EncodeTextArrayDimensions(buf, src.Dimensions) - - // dimElemCounts is the multiples of elements that each array lies on. For - // example, a single dimension array of length 4 would have a dimElemCounts of - // [4]. A multi-dimensional array of lengths [3,5,2] would have a - // dimElemCounts of [30,10,2]. This is used to simplify when to render a '{' - // or '}'. - dimElemCounts := make([]int, len(src.Dimensions)) - dimElemCounts[len(src.Dimensions)-1] = int(src.Dimensions[len(src.Dimensions)-1].Length) - for i := len(src.Dimensions) - 2; i > -1; i-- { - dimElemCounts[i] = int(src.Dimensions[i].Length) * dimElemCounts[i+1] - } - - inElemBuf := make([]byte, 0, 32) - for i, elem := range src.Elements { - if i > 0 { - buf = append(buf, ',') - } - - for _, dec := range dimElemCounts { - if i%dec == 0 { - buf = append(buf, '{') - } - } - - elemBuf, err := elem.EncodeText(ci, inElemBuf) - if err != nil { - return nil, err - } - if elemBuf == nil { - buf = append(buf, `NULL`...) - } else { - buf = append(buf, QuoteArrayElementIfNeeded(string(elemBuf))...) 
- } - - for _, dec := range dimElemCounts { - if (i+1)%dec == 0 { - buf = append(buf, '}') - } - } - } - - return buf, nil -} - -func (src Float8Array) EncodeBinary(ci *ConnInfo, buf []byte) ([]byte, error) { - switch src.Status { - case Null: - return nil, nil - case Undefined: - return nil, errUndefined - } - - arrayHeader := ArrayHeader{ - Dimensions: src.Dimensions, - } - - if dt, ok := ci.DataTypeForName("float8"); ok { - arrayHeader.ElementOID = int32(dt.OID) - } else { - return nil, fmt.Errorf("unable to find oid for type name %v", "float8") - } - - for i := range src.Elements { - if src.Elements[i].Status == Null { - arrayHeader.ContainsNull = true - break - } - } - - buf = arrayHeader.EncodeBinary(ci, buf) - - for i := range src.Elements { - sp := len(buf) - buf = pgio.AppendInt32(buf, -1) - - elemBuf, err := src.Elements[i].EncodeBinary(ci, buf) - if err != nil { - return nil, err - } - if elemBuf != nil { - buf = elemBuf - pgio.SetInt32(buf[sp:], int32(len(buf[sp:])-4)) - } - } - - return buf, nil -} - -// Scan implements the database/sql Scanner interface. -func (dst *Float8Array) Scan(src interface{}) error { - if src == nil { - return dst.DecodeText(nil, nil) - } - - switch src := src.(type) { - case string: - return dst.DecodeText(nil, []byte(src)) - case []byte: - srcCopy := make([]byte, len(src)) - copy(srcCopy, src) - return dst.DecodeText(nil, srcCopy) - } - - return fmt.Errorf("cannot scan %T", src) -} - -// Value implements the database/sql/driver Valuer interface. 
-func (src Float8Array) Value() (driver.Value, error) { - buf, err := src.EncodeText(nil, nil) - if err != nil { - return nil, err - } - if buf == nil { - return nil, nil - } - - return string(buf), nil -} diff --git a/vendor/github.com/jackc/pgtype/generic_binary.go b/vendor/github.com/jackc/pgtype/generic_binary.go deleted file mode 100644 index 76a1d351..00000000 --- a/vendor/github.com/jackc/pgtype/generic_binary.go +++ /dev/null @@ -1,39 +0,0 @@ -package pgtype - -import ( - "database/sql/driver" -) - -// GenericBinary is a placeholder for binary format values that no other type exists -// to handle. -type GenericBinary Bytea - -func (dst *GenericBinary) Set(src interface{}) error { - return (*Bytea)(dst).Set(src) -} - -func (dst GenericBinary) Get() interface{} { - return (Bytea)(dst).Get() -} - -func (src *GenericBinary) AssignTo(dst interface{}) error { - return (*Bytea)(src).AssignTo(dst) -} - -func (dst *GenericBinary) DecodeBinary(ci *ConnInfo, src []byte) error { - return (*Bytea)(dst).DecodeBinary(ci, src) -} - -func (src GenericBinary) EncodeBinary(ci *ConnInfo, buf []byte) ([]byte, error) { - return (Bytea)(src).EncodeBinary(ci, buf) -} - -// Scan implements the database/sql Scanner interface. -func (dst *GenericBinary) Scan(src interface{}) error { - return (*Bytea)(dst).Scan(src) -} - -// Value implements the database/sql/driver Valuer interface. -func (src GenericBinary) Value() (driver.Value, error) { - return (Bytea)(src).Value() -} diff --git a/vendor/github.com/jackc/pgtype/generic_text.go b/vendor/github.com/jackc/pgtype/generic_text.go deleted file mode 100644 index dbf5b47e..00000000 --- a/vendor/github.com/jackc/pgtype/generic_text.go +++ /dev/null @@ -1,39 +0,0 @@ -package pgtype - -import ( - "database/sql/driver" -) - -// GenericText is a placeholder for text format values that no other type exists -// to handle. 
-type GenericText Text - -func (dst *GenericText) Set(src interface{}) error { - return (*Text)(dst).Set(src) -} - -func (dst GenericText) Get() interface{} { - return (Text)(dst).Get() -} - -func (src *GenericText) AssignTo(dst interface{}) error { - return (*Text)(src).AssignTo(dst) -} - -func (dst *GenericText) DecodeText(ci *ConnInfo, src []byte) error { - return (*Text)(dst).DecodeText(ci, src) -} - -func (src GenericText) EncodeText(ci *ConnInfo, buf []byte) ([]byte, error) { - return (Text)(src).EncodeText(ci, buf) -} - -// Scan implements the database/sql Scanner interface. -func (dst *GenericText) Scan(src interface{}) error { - return (*Text)(dst).Scan(src) -} - -// Value implements the database/sql/driver Valuer interface. -func (src GenericText) Value() (driver.Value, error) { - return (Text)(src).Value() -} diff --git a/vendor/github.com/jackc/pgtype/hstore.go b/vendor/github.com/jackc/pgtype/hstore.go deleted file mode 100644 index e42b7551..00000000 --- a/vendor/github.com/jackc/pgtype/hstore.go +++ /dev/null @@ -1,465 +0,0 @@ -package pgtype - -import ( - "bytes" - "database/sql/driver" - "encoding/binary" - "errors" - "fmt" - "strings" - "unicode" - "unicode/utf8" - - "github.com/jackc/pgio" -) - -// Hstore represents an hstore column that can be null or have null values -// associated with its keys. 
-type Hstore struct { - Map map[string]Text - Status Status -} - -func (dst *Hstore) Set(src interface{}) error { - if src == nil { - *dst = Hstore{Status: Null} - return nil - } - - if value, ok := src.(interface{ Get() interface{} }); ok { - value2 := value.Get() - if value2 != value { - return dst.Set(value2) - } - } - - switch value := src.(type) { - case map[string]string: - m := make(map[string]Text, len(value)) - for k, v := range value { - m[k] = Text{String: v, Status: Present} - } - *dst = Hstore{Map: m, Status: Present} - case map[string]*string: - m := make(map[string]Text, len(value)) - for k, v := range value { - if v == nil { - m[k] = Text{Status: Null} - } else { - m[k] = Text{String: *v, Status: Present} - } - } - *dst = Hstore{Map: m, Status: Present} - case map[string]Text: - *dst = Hstore{Map: value, Status: Present} - default: - return fmt.Errorf("cannot convert %v to Hstore", src) - } - - return nil -} - -func (dst Hstore) Get() interface{} { - switch dst.Status { - case Present: - return dst.Map - case Null: - return nil - default: - return dst.Status - } -} - -func (src *Hstore) AssignTo(dst interface{}) error { - switch src.Status { - case Present: - switch v := dst.(type) { - case *map[string]string: - *v = make(map[string]string, len(src.Map)) - for k, val := range src.Map { - if val.Status != Present { - return fmt.Errorf("cannot decode %#v into %T", src, dst) - } - (*v)[k] = val.String - } - return nil - case *map[string]*string: - *v = make(map[string]*string, len(src.Map)) - for k, val := range src.Map { - switch val.Status { - case Null: - (*v)[k] = nil - case Present: - str := val.String - (*v)[k] = &str - default: - return fmt.Errorf("cannot decode %#v into %T", src, dst) - } - } - return nil - default: - if nextDst, retry := GetAssignToDstType(dst); retry { - return src.AssignTo(nextDst) - } - return fmt.Errorf("unable to assign to %T", dst) - } - case Null: - return NullAssignTo(dst) - } - - return fmt.Errorf("cannot decode %#v 
into %T", src, dst) -} - -func (dst *Hstore) DecodeText(ci *ConnInfo, src []byte) error { - if src == nil { - *dst = Hstore{Status: Null} - return nil - } - - keys, values, err := parseHstore(string(src)) - if err != nil { - return err - } - - m := make(map[string]Text, len(keys)) - for i := range keys { - m[keys[i]] = values[i] - } - - *dst = Hstore{Map: m, Status: Present} - return nil -} - -func (dst *Hstore) DecodeBinary(ci *ConnInfo, src []byte) error { - if src == nil { - *dst = Hstore{Status: Null} - return nil - } - - rp := 0 - - if len(src[rp:]) < 4 { - return fmt.Errorf("hstore incomplete %v", src) - } - pairCount := int(int32(binary.BigEndian.Uint32(src[rp:]))) - rp += 4 - - m := make(map[string]Text, pairCount) - - for i := 0; i < pairCount; i++ { - if len(src[rp:]) < 4 { - return fmt.Errorf("hstore incomplete %v", src) - } - keyLen := int(int32(binary.BigEndian.Uint32(src[rp:]))) - rp += 4 - - if len(src[rp:]) < keyLen { - return fmt.Errorf("hstore incomplete %v", src) - } - key := string(src[rp : rp+keyLen]) - rp += keyLen - - if len(src[rp:]) < 4 { - return fmt.Errorf("hstore incomplete %v", src) - } - valueLen := int(int32(binary.BigEndian.Uint32(src[rp:]))) - rp += 4 - - var valueBuf []byte - if valueLen >= 0 { - valueBuf = src[rp : rp+valueLen] - rp += valueLen - } - - var value Text - err := value.DecodeBinary(ci, valueBuf) - if err != nil { - return err - } - m[key] = value - } - - *dst = Hstore{Map: m, Status: Present} - - return nil -} - -func (src Hstore) EncodeText(ci *ConnInfo, buf []byte) ([]byte, error) { - switch src.Status { - case Null: - return nil, nil - case Undefined: - return nil, errUndefined - } - - firstPair := true - - inElemBuf := make([]byte, 0, 32) - for k, v := range src.Map { - if firstPair { - firstPair = false - } else { - buf = append(buf, ',') - } - - buf = append(buf, quoteHstoreElementIfNeeded(k)...) - buf = append(buf, "=>"...) 
- - elemBuf, err := v.EncodeText(ci, inElemBuf) - if err != nil { - return nil, err - } - - if elemBuf == nil { - buf = append(buf, "NULL"...) - } else { - buf = append(buf, quoteHstoreElementIfNeeded(string(elemBuf))...) - } - } - - return buf, nil -} - -func (src Hstore) EncodeBinary(ci *ConnInfo, buf []byte) ([]byte, error) { - switch src.Status { - case Null: - return nil, nil - case Undefined: - return nil, errUndefined - } - - buf = pgio.AppendInt32(buf, int32(len(src.Map))) - - var err error - for k, v := range src.Map { - buf = pgio.AppendInt32(buf, int32(len(k))) - buf = append(buf, k...) - - sp := len(buf) - buf = pgio.AppendInt32(buf, -1) - - elemBuf, err := v.EncodeText(ci, buf) - if err != nil { - return nil, err - } - if elemBuf != nil { - buf = elemBuf - pgio.SetInt32(buf[sp:], int32(len(buf[sp:])-4)) - } - } - - return buf, err -} - -var quoteHstoreReplacer = strings.NewReplacer(`\`, `\\`, `"`, `\"`) - -func quoteHstoreElement(src string) string { - return `"` + quoteArrayReplacer.Replace(src) + `"` -} - -func quoteHstoreElementIfNeeded(src string) string { - if src == "" || (len(src) == 4 && strings.ToLower(src) == "null") || strings.ContainsAny(src, ` {},"\=>`) { - return quoteArrayElement(src) - } - return src -} - -const ( - hsPre = iota - hsKey - hsSep - hsVal - hsNul - hsNext -) - -type hstoreParser struct { - str string - pos int -} - -func newHSP(in string) *hstoreParser { - return &hstoreParser{ - pos: 0, - str: in, - } -} - -func (p *hstoreParser) Consume() (r rune, end bool) { - if p.pos >= len(p.str) { - end = true - return - } - r, w := utf8.DecodeRuneInString(p.str[p.pos:]) - p.pos += w - return -} - -func (p *hstoreParser) Peek() (r rune, end bool) { - if p.pos >= len(p.str) { - end = true - return - } - r, _ = utf8.DecodeRuneInString(p.str[p.pos:]) - return -} - -// parseHstore parses the string representation of an hstore column (the same -// you would get from an ordinary SELECT) into two slices of keys and values. 
it -// is used internally in the default parsing of hstores. -func parseHstore(s string) (k []string, v []Text, err error) { - if s == "" { - return - } - - buf := bytes.Buffer{} - keys := []string{} - values := []Text{} - p := newHSP(s) - - r, end := p.Consume() - state := hsPre - - for !end { - switch state { - case hsPre: - if r == '"' { - state = hsKey - } else { - err = errors.New("String does not begin with \"") - } - case hsKey: - switch r { - case '"': //End of the key - keys = append(keys, buf.String()) - buf = bytes.Buffer{} - state = hsSep - case '\\': //Potential escaped character - n, end := p.Consume() - switch { - case end: - err = errors.New("Found EOS in key, expecting character or \"") - case n == '"', n == '\\': - buf.WriteRune(n) - default: - buf.WriteRune(r) - buf.WriteRune(n) - } - default: //Any other character - buf.WriteRune(r) - } - case hsSep: - if r == '=' { - r, end = p.Consume() - switch { - case end: - err = errors.New("Found EOS after '=', expecting '>'") - case r == '>': - r, end = p.Consume() - switch { - case end: - err = errors.New("Found EOS after '=>', expecting '\"' or 'NULL'") - case r == '"': - state = hsVal - case r == 'N': - state = hsNul - default: - err = fmt.Errorf("Invalid character '%c' after '=>', expecting '\"' or 'NULL'", r) - } - default: - err = fmt.Errorf("Invalid character after '=', expecting '>'") - } - } else { - err = fmt.Errorf("Invalid character '%c' after value, expecting '='", r) - } - case hsVal: - switch r { - case '"': //End of the value - values = append(values, Text{String: buf.String(), Status: Present}) - buf = bytes.Buffer{} - state = hsNext - case '\\': //Potential escaped character - n, end := p.Consume() - switch { - case end: - err = errors.New("Found EOS in key, expecting character or \"") - case n == '"', n == '\\': - buf.WriteRune(n) - default: - buf.WriteRune(r) - buf.WriteRune(n) - } - default: //Any other character - buf.WriteRune(r) - } - case hsNul: - nulBuf := make([]rune, 3) - 
nulBuf[0] = r - for i := 1; i < 3; i++ { - r, end = p.Consume() - if end { - err = errors.New("Found EOS in NULL value") - return - } - nulBuf[i] = r - } - if nulBuf[0] == 'U' && nulBuf[1] == 'L' && nulBuf[2] == 'L' { - values = append(values, Text{Status: Null}) - state = hsNext - } else { - err = fmt.Errorf("Invalid NULL value: 'N%s'", string(nulBuf)) - } - case hsNext: - if r == ',' { - r, end = p.Consume() - switch { - case end: - err = errors.New("Found EOS after ',', expecting space") - case (unicode.IsSpace(r)): - r, end = p.Consume() - state = hsKey - default: - err = fmt.Errorf("Invalid character '%c' after ', ', expecting \"", r) - } - } else { - err = fmt.Errorf("Invalid character '%c' after value, expecting ','", r) - } - } - - if err != nil { - return - } - r, end = p.Consume() - } - if state != hsNext { - err = errors.New("Improperly formatted hstore") - return - } - k = keys - v = values - return -} - -// Scan implements the database/sql Scanner interface. -func (dst *Hstore) Scan(src interface{}) error { - if src == nil { - *dst = Hstore{Status: Null} - return nil - } - - switch src := src.(type) { - case string: - return dst.DecodeText(nil, []byte(src)) - case []byte: - srcCopy := make([]byte, len(src)) - copy(srcCopy, src) - return dst.DecodeText(nil, srcCopy) - } - - return fmt.Errorf("cannot scan %T", src) -} - -// Value implements the database/sql/driver Valuer interface. -func (src Hstore) Value() (driver.Value, error) { - return EncodeValueText(src) -} diff --git a/vendor/github.com/jackc/pgtype/hstore_array.go b/vendor/github.com/jackc/pgtype/hstore_array.go deleted file mode 100644 index 47b4b3ff..00000000 --- a/vendor/github.com/jackc/pgtype/hstore_array.go +++ /dev/null @@ -1,489 +0,0 @@ -// Code generated by erb. DO NOT EDIT. 
- -package pgtype - -import ( - "database/sql/driver" - "encoding/binary" - "fmt" - "reflect" - - "github.com/jackc/pgio" -) - -type HstoreArray struct { - Elements []Hstore - Dimensions []ArrayDimension - Status Status -} - -func (dst *HstoreArray) Set(src interface{}) error { - // untyped nil and typed nil interfaces are different - if src == nil { - *dst = HstoreArray{Status: Null} - return nil - } - - if value, ok := src.(interface{ Get() interface{} }); ok { - value2 := value.Get() - if value2 != value { - return dst.Set(value2) - } - } - - // Attempt to match to select common types: - switch value := src.(type) { - - case []map[string]string: - if value == nil { - *dst = HstoreArray{Status: Null} - } else if len(value) == 0 { - *dst = HstoreArray{Status: Present} - } else { - elements := make([]Hstore, len(value)) - for i := range value { - if err := elements[i].Set(value[i]); err != nil { - return err - } - } - *dst = HstoreArray{ - Elements: elements, - Dimensions: []ArrayDimension{{Length: int32(len(elements)), LowerBound: 1}}, - Status: Present, - } - } - - case []Hstore: - if value == nil { - *dst = HstoreArray{Status: Null} - } else if len(value) == 0 { - *dst = HstoreArray{Status: Present} - } else { - *dst = HstoreArray{ - Elements: value, - Dimensions: []ArrayDimension{{Length: int32(len(value)), LowerBound: 1}}, - Status: Present, - } - } - default: - // Fallback to reflection if an optimised match was not found. 
- // The reflection is necessary for arrays and multidimensional slices, - // but it comes with a 20-50% performance penalty for large arrays/slices - reflectedValue := reflect.ValueOf(src) - if !reflectedValue.IsValid() || reflectedValue.IsZero() { - *dst = HstoreArray{Status: Null} - return nil - } - - dimensions, elementsLength, ok := findDimensionsFromValue(reflectedValue, nil, 0) - if !ok { - return fmt.Errorf("cannot find dimensions of %v for HstoreArray", src) - } - if elementsLength == 0 { - *dst = HstoreArray{Status: Present} - return nil - } - if len(dimensions) == 0 { - if originalSrc, ok := underlyingSliceType(src); ok { - return dst.Set(originalSrc) - } - return fmt.Errorf("cannot convert %v to HstoreArray", src) - } - - *dst = HstoreArray{ - Elements: make([]Hstore, elementsLength), - Dimensions: dimensions, - Status: Present, - } - elementCount, err := dst.setRecursive(reflectedValue, 0, 0) - if err != nil { - // Maybe the target was one dimension too far, try again: - if len(dst.Dimensions) > 1 { - dst.Dimensions = dst.Dimensions[:len(dst.Dimensions)-1] - elementsLength = 0 - for _, dim := range dst.Dimensions { - if elementsLength == 0 { - elementsLength = int(dim.Length) - } else { - elementsLength *= int(dim.Length) - } - } - dst.Elements = make([]Hstore, elementsLength) - elementCount, err = dst.setRecursive(reflectedValue, 0, 0) - if err != nil { - return err - } - } else { - return err - } - } - if elementCount != len(dst.Elements) { - return fmt.Errorf("cannot convert %v to HstoreArray, expected %d dst.Elements, but got %d instead", src, len(dst.Elements), elementCount) - } - } - - return nil -} - -func (dst *HstoreArray) setRecursive(value reflect.Value, index, dimension int) (int, error) { - switch value.Kind() { - case reflect.Array: - fallthrough - case reflect.Slice: - if len(dst.Dimensions) == dimension { - break - } - - valueLen := value.Len() - if int32(valueLen) != dst.Dimensions[dimension].Length { - return 0, 
fmt.Errorf("multidimensional arrays must have array expressions with matching dimensions") - } - for i := 0; i < valueLen; i++ { - var err error - index, err = dst.setRecursive(value.Index(i), index, dimension+1) - if err != nil { - return 0, err - } - } - - return index, nil - } - if !value.CanInterface() { - return 0, fmt.Errorf("cannot convert all values to HstoreArray") - } - if err := dst.Elements[index].Set(value.Interface()); err != nil { - return 0, fmt.Errorf("%v in HstoreArray", err) - } - index++ - - return index, nil -} - -func (dst HstoreArray) Get() interface{} { - switch dst.Status { - case Present: - return dst - case Null: - return nil - default: - return dst.Status - } -} - -func (src *HstoreArray) AssignTo(dst interface{}) error { - switch src.Status { - case Present: - if len(src.Dimensions) <= 1 { - // Attempt to match to select common types: - switch v := dst.(type) { - - case *[]map[string]string: - *v = make([]map[string]string, len(src.Elements)) - for i := range src.Elements { - if err := src.Elements[i].AssignTo(&((*v)[i])); err != nil { - return err - } - } - return nil - - } - } - - // Try to convert to something AssignTo can use directly. - if nextDst, retry := GetAssignToDstType(dst); retry { - return src.AssignTo(nextDst) - } - - // Fallback to reflection if an optimised match was not found. 
- // The reflection is necessary for arrays and multidimensional slices, - // but it comes with a 20-50% performance penalty for large arrays/slices - value := reflect.ValueOf(dst) - if value.Kind() == reflect.Ptr { - value = value.Elem() - } - - switch value.Kind() { - case reflect.Array, reflect.Slice: - default: - return fmt.Errorf("cannot assign %T to %T", src, dst) - } - - if len(src.Elements) == 0 { - if value.Kind() == reflect.Slice { - value.Set(reflect.MakeSlice(value.Type(), 0, 0)) - return nil - } - } - - elementCount, err := src.assignToRecursive(value, 0, 0) - if err != nil { - return err - } - if elementCount != len(src.Elements) { - return fmt.Errorf("cannot assign %v, needed to assign %d elements, but only assigned %d", dst, len(src.Elements), elementCount) - } - - return nil - case Null: - return NullAssignTo(dst) - } - - return fmt.Errorf("cannot decode %#v into %T", src, dst) -} - -func (src *HstoreArray) assignToRecursive(value reflect.Value, index, dimension int) (int, error) { - switch kind := value.Kind(); kind { - case reflect.Array: - fallthrough - case reflect.Slice: - if len(src.Dimensions) == dimension { - break - } - - length := int(src.Dimensions[dimension].Length) - if reflect.Array == kind { - typ := value.Type() - if typ.Len() != length { - return 0, fmt.Errorf("expected size %d array, but %s has size %d array", length, typ, typ.Len()) - } - value.Set(reflect.New(typ).Elem()) - } else { - value.Set(reflect.MakeSlice(value.Type(), length, length)) - } - - var err error - for i := 0; i < length; i++ { - index, err = src.assignToRecursive(value.Index(i), index, dimension+1) - if err != nil { - return 0, err - } - } - - return index, nil - } - if len(src.Dimensions) != dimension { - return 0, fmt.Errorf("incorrect dimensions, expected %d, found %d", len(src.Dimensions), dimension) - } - if !value.CanAddr() { - return 0, fmt.Errorf("cannot assign all values from HstoreArray") - } - addr := value.Addr() - if !addr.CanInterface() { - 
return 0, fmt.Errorf("cannot assign all values from HstoreArray") - } - if err := src.Elements[index].AssignTo(addr.Interface()); err != nil { - return 0, err - } - index++ - return index, nil -} - -func (dst *HstoreArray) DecodeText(ci *ConnInfo, src []byte) error { - if src == nil { - *dst = HstoreArray{Status: Null} - return nil - } - - uta, err := ParseUntypedTextArray(string(src)) - if err != nil { - return err - } - - var elements []Hstore - - if len(uta.Elements) > 0 { - elements = make([]Hstore, len(uta.Elements)) - - for i, s := range uta.Elements { - var elem Hstore - var elemSrc []byte - if s != "NULL" || uta.Quoted[i] { - elemSrc = []byte(s) - } - err = elem.DecodeText(ci, elemSrc) - if err != nil { - return err - } - - elements[i] = elem - } - } - - *dst = HstoreArray{Elements: elements, Dimensions: uta.Dimensions, Status: Present} - - return nil -} - -func (dst *HstoreArray) DecodeBinary(ci *ConnInfo, src []byte) error { - if src == nil { - *dst = HstoreArray{Status: Null} - return nil - } - - var arrayHeader ArrayHeader - rp, err := arrayHeader.DecodeBinary(ci, src) - if err != nil { - return err - } - - if len(arrayHeader.Dimensions) == 0 { - *dst = HstoreArray{Dimensions: arrayHeader.Dimensions, Status: Present} - return nil - } - - elementCount := arrayHeader.Dimensions[0].Length - for _, d := range arrayHeader.Dimensions[1:] { - elementCount *= d.Length - } - - elements := make([]Hstore, elementCount) - - for i := range elements { - elemLen := int(int32(binary.BigEndian.Uint32(src[rp:]))) - rp += 4 - var elemSrc []byte - if elemLen >= 0 { - elemSrc = src[rp : rp+elemLen] - rp += elemLen - } - err = elements[i].DecodeBinary(ci, elemSrc) - if err != nil { - return err - } - } - - *dst = HstoreArray{Elements: elements, Dimensions: arrayHeader.Dimensions, Status: Present} - return nil -} - -func (src HstoreArray) EncodeText(ci *ConnInfo, buf []byte) ([]byte, error) { - switch src.Status { - case Null: - return nil, nil - case Undefined: - return nil, 
errUndefined - } - - if len(src.Dimensions) == 0 { - return append(buf, '{', '}'), nil - } - - buf = EncodeTextArrayDimensions(buf, src.Dimensions) - - // dimElemCounts is the multiples of elements that each array lies on. For - // example, a single dimension array of length 4 would have a dimElemCounts of - // [4]. A multi-dimensional array of lengths [3,5,2] would have a - // dimElemCounts of [30,10,2]. This is used to simplify when to render a '{' - // or '}'. - dimElemCounts := make([]int, len(src.Dimensions)) - dimElemCounts[len(src.Dimensions)-1] = int(src.Dimensions[len(src.Dimensions)-1].Length) - for i := len(src.Dimensions) - 2; i > -1; i-- { - dimElemCounts[i] = int(src.Dimensions[i].Length) * dimElemCounts[i+1] - } - - inElemBuf := make([]byte, 0, 32) - for i, elem := range src.Elements { - if i > 0 { - buf = append(buf, ',') - } - - for _, dec := range dimElemCounts { - if i%dec == 0 { - buf = append(buf, '{') - } - } - - elemBuf, err := elem.EncodeText(ci, inElemBuf) - if err != nil { - return nil, err - } - if elemBuf == nil { - buf = append(buf, `NULL`...) - } else { - buf = append(buf, QuoteArrayElementIfNeeded(string(elemBuf))...) 
- } - - for _, dec := range dimElemCounts { - if (i+1)%dec == 0 { - buf = append(buf, '}') - } - } - } - - return buf, nil -} - -func (src HstoreArray) EncodeBinary(ci *ConnInfo, buf []byte) ([]byte, error) { - switch src.Status { - case Null: - return nil, nil - case Undefined: - return nil, errUndefined - } - - arrayHeader := ArrayHeader{ - Dimensions: src.Dimensions, - } - - if dt, ok := ci.DataTypeForName("hstore"); ok { - arrayHeader.ElementOID = int32(dt.OID) - } else { - return nil, fmt.Errorf("unable to find oid for type name %v", "hstore") - } - - for i := range src.Elements { - if src.Elements[i].Status == Null { - arrayHeader.ContainsNull = true - break - } - } - - buf = arrayHeader.EncodeBinary(ci, buf) - - for i := range src.Elements { - sp := len(buf) - buf = pgio.AppendInt32(buf, -1) - - elemBuf, err := src.Elements[i].EncodeBinary(ci, buf) - if err != nil { - return nil, err - } - if elemBuf != nil { - buf = elemBuf - pgio.SetInt32(buf[sp:], int32(len(buf[sp:])-4)) - } - } - - return buf, nil -} - -// Scan implements the database/sql Scanner interface. -func (dst *HstoreArray) Scan(src interface{}) error { - if src == nil { - return dst.DecodeText(nil, nil) - } - - switch src := src.(type) { - case string: - return dst.DecodeText(nil, []byte(src)) - case []byte: - srcCopy := make([]byte, len(src)) - copy(srcCopy, src) - return dst.DecodeText(nil, srcCopy) - } - - return fmt.Errorf("cannot scan %T", src) -} - -// Value implements the database/sql/driver Valuer interface. 
-func (src HstoreArray) Value() (driver.Value, error) { - buf, err := src.EncodeText(nil, nil) - if err != nil { - return nil, err - } - if buf == nil { - return nil, nil - } - - return string(buf), nil -} diff --git a/vendor/github.com/jackc/pgtype/inet.go b/vendor/github.com/jackc/pgtype/inet.go deleted file mode 100644 index 976f0d7b..00000000 --- a/vendor/github.com/jackc/pgtype/inet.go +++ /dev/null @@ -1,304 +0,0 @@ -package pgtype - -import ( - "database/sql/driver" - "encoding" - "fmt" - "net" - "strings" -) - -// Network address family is dependent on server socket.h value for AF_INET. -// In practice, all platforms appear to have the same value. See -// src/include/utils/inet.h for more information. -const ( - defaultAFInet = 2 - defaultAFInet6 = 3 -) - -// Inet represents both inet and cidr PostgreSQL types. -type Inet struct { - IPNet *net.IPNet - Status Status -} - -func (dst *Inet) Set(src interface{}) error { - if src == nil { - *dst = Inet{Status: Null} - return nil - } - - if value, ok := src.(interface{ Get() interface{} }); ok { - value2 := value.Get() - if value2 != value { - return dst.Set(value2) - } - } - - switch value := src.(type) { - case net.IPNet: - *dst = Inet{IPNet: &value, Status: Present} - case net.IP: - if len(value) == 0 { - *dst = Inet{Status: Null} - } else { - bitCount := len(value) * 8 - mask := net.CIDRMask(bitCount, bitCount) - *dst = Inet{IPNet: &net.IPNet{Mask: mask, IP: value}, Status: Present} - } - case string: - ip, ipnet, err := net.ParseCIDR(value) - if err != nil { - ip := net.ParseIP(value) - if ip == nil { - return fmt.Errorf("unable to parse inet address: %s", value) - } - - if ipv4 := maybeGetIPv4(value, ip); ipv4 != nil { - ipnet = &net.IPNet{IP: ipv4, Mask: net.CIDRMask(32, 32)} - } else { - ipnet = &net.IPNet{IP: ip, Mask: net.CIDRMask(128, 128)} - } - } else { - ipnet.IP = ip - if ipv4 := maybeGetIPv4(value, ipnet.IP); ipv4 != nil { - ipnet.IP = ipv4 - if len(ipnet.Mask) == 16 { - ipnet.Mask = 
ipnet.Mask[12:] // Not sure this is ever needed. - } - } - } - - *dst = Inet{IPNet: ipnet, Status: Present} - case *net.IPNet: - if value == nil { - *dst = Inet{Status: Null} - } else { - return dst.Set(*value) - } - case *net.IP: - if value == nil { - *dst = Inet{Status: Null} - } else { - return dst.Set(*value) - } - case *string: - if value == nil { - *dst = Inet{Status: Null} - } else { - return dst.Set(*value) - } - default: - if tv, ok := src.(encoding.TextMarshaler); ok { - text, err := tv.MarshalText() - if err != nil { - return fmt.Errorf("cannot marshal %v: %w", value, err) - } - return dst.Set(string(text)) - } - if sv, ok := src.(fmt.Stringer); ok { - return dst.Set(sv.String()) - } - if originalSrc, ok := underlyingPtrType(src); ok { - return dst.Set(originalSrc) - } - return fmt.Errorf("cannot convert %v to Inet", value) - } - - return nil -} - -// Convert the net.IP to IPv4, if appropriate. -// -// When parsing a string to a net.IP using net.ParseIP() and the like, we get a -// 16 byte slice for IPv4 addresses as well as IPv6 addresses. This function -// calls To4() to convert them to a 4 byte slice. This is useful as it allows -// users of the net.IP check for IPv4 addresses based on the length and makes -// it clear we are handling IPv4 as opposed to IPv6 or IPv4-mapped IPv6 -// addresses. -func maybeGetIPv4(input string, ip net.IP) net.IP { - // Do not do this if the provided input looks like IPv6. This is because - // To4() on IPv4-mapped IPv6 addresses converts them to IPv4, which behave - // different in some cases. 
- if strings.Contains(input, ":") { - return nil - } - - return ip.To4() -} - -func (dst Inet) Get() interface{} { - switch dst.Status { - case Present: - return dst.IPNet - case Null: - return nil - default: - return dst.Status - } -} - -func (src *Inet) AssignTo(dst interface{}) error { - switch src.Status { - case Present: - switch v := dst.(type) { - case *net.IPNet: - *v = net.IPNet{ - IP: make(net.IP, len(src.IPNet.IP)), - Mask: make(net.IPMask, len(src.IPNet.Mask)), - } - copy(v.IP, src.IPNet.IP) - copy(v.Mask, src.IPNet.Mask) - return nil - case *net.IP: - if oneCount, bitCount := src.IPNet.Mask.Size(); oneCount != bitCount { - return fmt.Errorf("cannot assign %v to %T", src, dst) - } - *v = make(net.IP, len(src.IPNet.IP)) - copy(*v, src.IPNet.IP) - return nil - default: - if tv, ok := dst.(encoding.TextUnmarshaler); ok { - if err := tv.UnmarshalText([]byte(src.IPNet.String())); err != nil { - return fmt.Errorf("cannot unmarshal %v to %T: %w", src, dst, err) - } - return nil - } - if nextDst, retry := GetAssignToDstType(dst); retry { - return src.AssignTo(nextDst) - } - return fmt.Errorf("unable to assign to %T", dst) - } - case Null: - return NullAssignTo(dst) - } - - return fmt.Errorf("cannot decode %#v into %T", src, dst) -} - -func (dst *Inet) DecodeText(ci *ConnInfo, src []byte) error { - if src == nil { - *dst = Inet{Status: Null} - return nil - } - - var ipnet *net.IPNet - var err error - - if ip := net.ParseIP(string(src)); ip != nil { - if ipv4 := ip.To4(); ipv4 != nil { - ip = ipv4 - } - bitCount := len(ip) * 8 - mask := net.CIDRMask(bitCount, bitCount) - ipnet = &net.IPNet{Mask: mask, IP: ip} - } else { - ip, ipnet, err = net.ParseCIDR(string(src)) - if err != nil { - return err - } - if ipv4 := ip.To4(); ipv4 != nil { - ip = ipv4 - } - ones, _ := ipnet.Mask.Size() - *ipnet = net.IPNet{IP: ip, Mask: net.CIDRMask(ones, len(ip)*8)} - } - - *dst = Inet{IPNet: ipnet, Status: Present} - return nil -} - -func (dst *Inet) DecodeBinary(ci *ConnInfo, src 
[]byte) error { - if src == nil { - *dst = Inet{Status: Null} - return nil - } - - if len(src) != 8 && len(src) != 20 { - return fmt.Errorf("Received an invalid size for an inet: %d", len(src)) - } - - // ignore family - bits := src[1] - // ignore is_cidr - addressLength := src[3] - - var ipnet net.IPNet - ipnet.IP = make(net.IP, int(addressLength)) - copy(ipnet.IP, src[4:]) - if ipv4 := ipnet.IP.To4(); ipv4 != nil { - ipnet.IP = ipv4 - } - ipnet.Mask = net.CIDRMask(int(bits), len(ipnet.IP)*8) - - *dst = Inet{IPNet: &ipnet, Status: Present} - - return nil -} - -func (src Inet) EncodeText(ci *ConnInfo, buf []byte) ([]byte, error) { - switch src.Status { - case Null: - return nil, nil - case Undefined: - return nil, errUndefined - } - - return append(buf, src.IPNet.String()...), nil -} - -// EncodeBinary encodes src into w. -func (src Inet) EncodeBinary(ci *ConnInfo, buf []byte) ([]byte, error) { - switch src.Status { - case Null: - return nil, nil - case Undefined: - return nil, errUndefined - } - - var family byte - switch len(src.IPNet.IP) { - case net.IPv4len: - family = defaultAFInet - case net.IPv6len: - family = defaultAFInet6 - default: - return nil, fmt.Errorf("Unexpected IP length: %v", len(src.IPNet.IP)) - } - - buf = append(buf, family) - - ones, _ := src.IPNet.Mask.Size() - buf = append(buf, byte(ones)) - - // is_cidr is ignored on server - buf = append(buf, 0) - - buf = append(buf, byte(len(src.IPNet.IP))) - - return append(buf, src.IPNet.IP...), nil -} - -// Scan implements the database/sql Scanner interface. -func (dst *Inet) Scan(src interface{}) error { - if src == nil { - *dst = Inet{Status: Null} - return nil - } - - switch src := src.(type) { - case string: - return dst.DecodeText(nil, []byte(src)) - case []byte: - srcCopy := make([]byte, len(src)) - copy(srcCopy, src) - return dst.DecodeText(nil, srcCopy) - } - - return fmt.Errorf("cannot scan %T", src) -} - -// Value implements the database/sql/driver Valuer interface. 
-func (src Inet) Value() (driver.Value, error) { - return EncodeValueText(src) -} diff --git a/vendor/github.com/jackc/pgtype/inet_array.go b/vendor/github.com/jackc/pgtype/inet_array.go deleted file mode 100644 index 2460a1c4..00000000 --- a/vendor/github.com/jackc/pgtype/inet_array.go +++ /dev/null @@ -1,546 +0,0 @@ -// Code generated by erb. DO NOT EDIT. - -package pgtype - -import ( - "database/sql/driver" - "encoding/binary" - "fmt" - "net" - "reflect" - - "github.com/jackc/pgio" -) - -type InetArray struct { - Elements []Inet - Dimensions []ArrayDimension - Status Status -} - -func (dst *InetArray) Set(src interface{}) error { - // untyped nil and typed nil interfaces are different - if src == nil { - *dst = InetArray{Status: Null} - return nil - } - - if value, ok := src.(interface{ Get() interface{} }); ok { - value2 := value.Get() - if value2 != value { - return dst.Set(value2) - } - } - - // Attempt to match to select common types: - switch value := src.(type) { - - case []*net.IPNet: - if value == nil { - *dst = InetArray{Status: Null} - } else if len(value) == 0 { - *dst = InetArray{Status: Present} - } else { - elements := make([]Inet, len(value)) - for i := range value { - if err := elements[i].Set(value[i]); err != nil { - return err - } - } - *dst = InetArray{ - Elements: elements, - Dimensions: []ArrayDimension{{Length: int32(len(elements)), LowerBound: 1}}, - Status: Present, - } - } - - case []net.IP: - if value == nil { - *dst = InetArray{Status: Null} - } else if len(value) == 0 { - *dst = InetArray{Status: Present} - } else { - elements := make([]Inet, len(value)) - for i := range value { - if err := elements[i].Set(value[i]); err != nil { - return err - } - } - *dst = InetArray{ - Elements: elements, - Dimensions: []ArrayDimension{{Length: int32(len(elements)), LowerBound: 1}}, - Status: Present, - } - } - - case []*net.IP: - if value == nil { - *dst = InetArray{Status: Null} - } else if len(value) == 0 { - *dst = InetArray{Status: Present} - 
} else { - elements := make([]Inet, len(value)) - for i := range value { - if err := elements[i].Set(value[i]); err != nil { - return err - } - } - *dst = InetArray{ - Elements: elements, - Dimensions: []ArrayDimension{{Length: int32(len(elements)), LowerBound: 1}}, - Status: Present, - } - } - - case []Inet: - if value == nil { - *dst = InetArray{Status: Null} - } else if len(value) == 0 { - *dst = InetArray{Status: Present} - } else { - *dst = InetArray{ - Elements: value, - Dimensions: []ArrayDimension{{Length: int32(len(value)), LowerBound: 1}}, - Status: Present, - } - } - default: - // Fallback to reflection if an optimised match was not found. - // The reflection is necessary for arrays and multidimensional slices, - // but it comes with a 20-50% performance penalty for large arrays/slices - reflectedValue := reflect.ValueOf(src) - if !reflectedValue.IsValid() || reflectedValue.IsZero() { - *dst = InetArray{Status: Null} - return nil - } - - dimensions, elementsLength, ok := findDimensionsFromValue(reflectedValue, nil, 0) - if !ok { - return fmt.Errorf("cannot find dimensions of %v for InetArray", src) - } - if elementsLength == 0 { - *dst = InetArray{Status: Present} - return nil - } - if len(dimensions) == 0 { - if originalSrc, ok := underlyingSliceType(src); ok { - return dst.Set(originalSrc) - } - return fmt.Errorf("cannot convert %v to InetArray", src) - } - - *dst = InetArray{ - Elements: make([]Inet, elementsLength), - Dimensions: dimensions, - Status: Present, - } - elementCount, err := dst.setRecursive(reflectedValue, 0, 0) - if err != nil { - // Maybe the target was one dimension too far, try again: - if len(dst.Dimensions) > 1 { - dst.Dimensions = dst.Dimensions[:len(dst.Dimensions)-1] - elementsLength = 0 - for _, dim := range dst.Dimensions { - if elementsLength == 0 { - elementsLength = int(dim.Length) - } else { - elementsLength *= int(dim.Length) - } - } - dst.Elements = make([]Inet, elementsLength) - elementCount, err = 
dst.setRecursive(reflectedValue, 0, 0) - if err != nil { - return err - } - } else { - return err - } - } - if elementCount != len(dst.Elements) { - return fmt.Errorf("cannot convert %v to InetArray, expected %d dst.Elements, but got %d instead", src, len(dst.Elements), elementCount) - } - } - - return nil -} - -func (dst *InetArray) setRecursive(value reflect.Value, index, dimension int) (int, error) { - switch value.Kind() { - case reflect.Array: - fallthrough - case reflect.Slice: - if len(dst.Dimensions) == dimension { - break - } - - valueLen := value.Len() - if int32(valueLen) != dst.Dimensions[dimension].Length { - return 0, fmt.Errorf("multidimensional arrays must have array expressions with matching dimensions") - } - for i := 0; i < valueLen; i++ { - var err error - index, err = dst.setRecursive(value.Index(i), index, dimension+1) - if err != nil { - return 0, err - } - } - - return index, nil - } - if !value.CanInterface() { - return 0, fmt.Errorf("cannot convert all values to InetArray") - } - if err := dst.Elements[index].Set(value.Interface()); err != nil { - return 0, fmt.Errorf("%v in InetArray", err) - } - index++ - - return index, nil -} - -func (dst InetArray) Get() interface{} { - switch dst.Status { - case Present: - return dst - case Null: - return nil - default: - return dst.Status - } -} - -func (src *InetArray) AssignTo(dst interface{}) error { - switch src.Status { - case Present: - if len(src.Dimensions) <= 1 { - // Attempt to match to select common types: - switch v := dst.(type) { - - case *[]*net.IPNet: - *v = make([]*net.IPNet, len(src.Elements)) - for i := range src.Elements { - if err := src.Elements[i].AssignTo(&((*v)[i])); err != nil { - return err - } - } - return nil - - case *[]net.IP: - *v = make([]net.IP, len(src.Elements)) - for i := range src.Elements { - if err := src.Elements[i].AssignTo(&((*v)[i])); err != nil { - return err - } - } - return nil - - case *[]*net.IP: - *v = make([]*net.IP, len(src.Elements)) - for i := 
range src.Elements { - if err := src.Elements[i].AssignTo(&((*v)[i])); err != nil { - return err - } - } - return nil - - } - } - - // Try to convert to something AssignTo can use directly. - if nextDst, retry := GetAssignToDstType(dst); retry { - return src.AssignTo(nextDst) - } - - // Fallback to reflection if an optimised match was not found. - // The reflection is necessary for arrays and multidimensional slices, - // but it comes with a 20-50% performance penalty for large arrays/slices - value := reflect.ValueOf(dst) - if value.Kind() == reflect.Ptr { - value = value.Elem() - } - - switch value.Kind() { - case reflect.Array, reflect.Slice: - default: - return fmt.Errorf("cannot assign %T to %T", src, dst) - } - - if len(src.Elements) == 0 { - if value.Kind() == reflect.Slice { - value.Set(reflect.MakeSlice(value.Type(), 0, 0)) - return nil - } - } - - elementCount, err := src.assignToRecursive(value, 0, 0) - if err != nil { - return err - } - if elementCount != len(src.Elements) { - return fmt.Errorf("cannot assign %v, needed to assign %d elements, but only assigned %d", dst, len(src.Elements), elementCount) - } - - return nil - case Null: - return NullAssignTo(dst) - } - - return fmt.Errorf("cannot decode %#v into %T", src, dst) -} - -func (src *InetArray) assignToRecursive(value reflect.Value, index, dimension int) (int, error) { - switch kind := value.Kind(); kind { - case reflect.Array: - fallthrough - case reflect.Slice: - if len(src.Dimensions) == dimension { - break - } - - length := int(src.Dimensions[dimension].Length) - if reflect.Array == kind { - typ := value.Type() - if typ.Len() != length { - return 0, fmt.Errorf("expected size %d array, but %s has size %d array", length, typ, typ.Len()) - } - value.Set(reflect.New(typ).Elem()) - } else { - value.Set(reflect.MakeSlice(value.Type(), length, length)) - } - - var err error - for i := 0; i < length; i++ { - index, err = src.assignToRecursive(value.Index(i), index, dimension+1) - if err != nil { - 
return 0, err - } - } - - return index, nil - } - if len(src.Dimensions) != dimension { - return 0, fmt.Errorf("incorrect dimensions, expected %d, found %d", len(src.Dimensions), dimension) - } - if !value.CanAddr() { - return 0, fmt.Errorf("cannot assign all values from InetArray") - } - addr := value.Addr() - if !addr.CanInterface() { - return 0, fmt.Errorf("cannot assign all values from InetArray") - } - if err := src.Elements[index].AssignTo(addr.Interface()); err != nil { - return 0, err - } - index++ - return index, nil -} - -func (dst *InetArray) DecodeText(ci *ConnInfo, src []byte) error { - if src == nil { - *dst = InetArray{Status: Null} - return nil - } - - uta, err := ParseUntypedTextArray(string(src)) - if err != nil { - return err - } - - var elements []Inet - - if len(uta.Elements) > 0 { - elements = make([]Inet, len(uta.Elements)) - - for i, s := range uta.Elements { - var elem Inet - var elemSrc []byte - if s != "NULL" || uta.Quoted[i] { - elemSrc = []byte(s) - } - err = elem.DecodeText(ci, elemSrc) - if err != nil { - return err - } - - elements[i] = elem - } - } - - *dst = InetArray{Elements: elements, Dimensions: uta.Dimensions, Status: Present} - - return nil -} - -func (dst *InetArray) DecodeBinary(ci *ConnInfo, src []byte) error { - if src == nil { - *dst = InetArray{Status: Null} - return nil - } - - var arrayHeader ArrayHeader - rp, err := arrayHeader.DecodeBinary(ci, src) - if err != nil { - return err - } - - if len(arrayHeader.Dimensions) == 0 { - *dst = InetArray{Dimensions: arrayHeader.Dimensions, Status: Present} - return nil - } - - elementCount := arrayHeader.Dimensions[0].Length - for _, d := range arrayHeader.Dimensions[1:] { - elementCount *= d.Length - } - - elements := make([]Inet, elementCount) - - for i := range elements { - elemLen := int(int32(binary.BigEndian.Uint32(src[rp:]))) - rp += 4 - var elemSrc []byte - if elemLen >= 0 { - elemSrc = src[rp : rp+elemLen] - rp += elemLen - } - err = elements[i].DecodeBinary(ci, 
elemSrc) - if err != nil { - return err - } - } - - *dst = InetArray{Elements: elements, Dimensions: arrayHeader.Dimensions, Status: Present} - return nil -} - -func (src InetArray) EncodeText(ci *ConnInfo, buf []byte) ([]byte, error) { - switch src.Status { - case Null: - return nil, nil - case Undefined: - return nil, errUndefined - } - - if len(src.Dimensions) == 0 { - return append(buf, '{', '}'), nil - } - - buf = EncodeTextArrayDimensions(buf, src.Dimensions) - - // dimElemCounts is the multiples of elements that each array lies on. For - // example, a single dimension array of length 4 would have a dimElemCounts of - // [4]. A multi-dimensional array of lengths [3,5,2] would have a - // dimElemCounts of [30,10,2]. This is used to simplify when to render a '{' - // or '}'. - dimElemCounts := make([]int, len(src.Dimensions)) - dimElemCounts[len(src.Dimensions)-1] = int(src.Dimensions[len(src.Dimensions)-1].Length) - for i := len(src.Dimensions) - 2; i > -1; i-- { - dimElemCounts[i] = int(src.Dimensions[i].Length) * dimElemCounts[i+1] - } - - inElemBuf := make([]byte, 0, 32) - for i, elem := range src.Elements { - if i > 0 { - buf = append(buf, ',') - } - - for _, dec := range dimElemCounts { - if i%dec == 0 { - buf = append(buf, '{') - } - } - - elemBuf, err := elem.EncodeText(ci, inElemBuf) - if err != nil { - return nil, err - } - if elemBuf == nil { - buf = append(buf, `NULL`...) - } else { - buf = append(buf, QuoteArrayElementIfNeeded(string(elemBuf))...) 
- } - - for _, dec := range dimElemCounts { - if (i+1)%dec == 0 { - buf = append(buf, '}') - } - } - } - - return buf, nil -} - -func (src InetArray) EncodeBinary(ci *ConnInfo, buf []byte) ([]byte, error) { - switch src.Status { - case Null: - return nil, nil - case Undefined: - return nil, errUndefined - } - - arrayHeader := ArrayHeader{ - Dimensions: src.Dimensions, - } - - if dt, ok := ci.DataTypeForName("inet"); ok { - arrayHeader.ElementOID = int32(dt.OID) - } else { - return nil, fmt.Errorf("unable to find oid for type name %v", "inet") - } - - for i := range src.Elements { - if src.Elements[i].Status == Null { - arrayHeader.ContainsNull = true - break - } - } - - buf = arrayHeader.EncodeBinary(ci, buf) - - for i := range src.Elements { - sp := len(buf) - buf = pgio.AppendInt32(buf, -1) - - elemBuf, err := src.Elements[i].EncodeBinary(ci, buf) - if err != nil { - return nil, err - } - if elemBuf != nil { - buf = elemBuf - pgio.SetInt32(buf[sp:], int32(len(buf[sp:])-4)) - } - } - - return buf, nil -} - -// Scan implements the database/sql Scanner interface. -func (dst *InetArray) Scan(src interface{}) error { - if src == nil { - return dst.DecodeText(nil, nil) - } - - switch src := src.(type) { - case string: - return dst.DecodeText(nil, []byte(src)) - case []byte: - srcCopy := make([]byte, len(src)) - copy(srcCopy, src) - return dst.DecodeText(nil, srcCopy) - } - - return fmt.Errorf("cannot scan %T", src) -} - -// Value implements the database/sql/driver Valuer interface. 
-func (src InetArray) Value() (driver.Value, error) { - buf, err := src.EncodeText(nil, nil) - if err != nil { - return nil, err - } - if buf == nil { - return nil, nil - } - - return string(buf), nil -} diff --git a/vendor/github.com/jackc/pgtype/int2.go b/vendor/github.com/jackc/pgtype/int2.go deleted file mode 100644 index 0775882a..00000000 --- a/vendor/github.com/jackc/pgtype/int2.go +++ /dev/null @@ -1,321 +0,0 @@ -package pgtype - -import ( - "database/sql/driver" - "encoding/binary" - "encoding/json" - "fmt" - "math" - "strconv" - - "github.com/jackc/pgio" -) - -type Int2 struct { - Int int16 - Status Status -} - -func (dst *Int2) Set(src interface{}) error { - if src == nil { - *dst = Int2{Status: Null} - return nil - } - - if value, ok := src.(interface{ Get() interface{} }); ok { - value2 := value.Get() - if value2 != value { - return dst.Set(value2) - } - } - - switch value := src.(type) { - case int8: - *dst = Int2{Int: int16(value), Status: Present} - case uint8: - *dst = Int2{Int: int16(value), Status: Present} - case int16: - *dst = Int2{Int: int16(value), Status: Present} - case uint16: - if value > math.MaxInt16 { - return fmt.Errorf("%d is greater than maximum value for Int2", value) - } - *dst = Int2{Int: int16(value), Status: Present} - case int32: - if value < math.MinInt16 { - return fmt.Errorf("%d is greater than maximum value for Int2", value) - } - if value > math.MaxInt16 { - return fmt.Errorf("%d is greater than maximum value for Int2", value) - } - *dst = Int2{Int: int16(value), Status: Present} - case uint32: - if value > math.MaxInt16 { - return fmt.Errorf("%d is greater than maximum value for Int2", value) - } - *dst = Int2{Int: int16(value), Status: Present} - case int64: - if value < math.MinInt16 { - return fmt.Errorf("%d is greater than maximum value for Int2", value) - } - if value > math.MaxInt16 { - return fmt.Errorf("%d is greater than maximum value for Int2", value) - } - *dst = Int2{Int: int16(value), Status: Present} - 
case uint64: - if value > math.MaxInt16 { - return fmt.Errorf("%d is greater than maximum value for Int2", value) - } - *dst = Int2{Int: int16(value), Status: Present} - case int: - if value < math.MinInt16 { - return fmt.Errorf("%d is greater than maximum value for Int2", value) - } - if value > math.MaxInt16 { - return fmt.Errorf("%d is greater than maximum value for Int2", value) - } - *dst = Int2{Int: int16(value), Status: Present} - case uint: - if value > math.MaxInt16 { - return fmt.Errorf("%d is greater than maximum value for Int2", value) - } - *dst = Int2{Int: int16(value), Status: Present} - case string: - num, err := strconv.ParseInt(value, 10, 16) - if err != nil { - return err - } - *dst = Int2{Int: int16(num), Status: Present} - case float32: - if value > math.MaxInt16 { - return fmt.Errorf("%f is greater than maximum value for Int2", value) - } - *dst = Int2{Int: int16(value), Status: Present} - case float64: - if value > math.MaxInt16 { - return fmt.Errorf("%f is greater than maximum value for Int2", value) - } - *dst = Int2{Int: int16(value), Status: Present} - case *int8: - if value == nil { - *dst = Int2{Status: Null} - } else { - return dst.Set(*value) - } - case *uint8: - if value == nil { - *dst = Int2{Status: Null} - } else { - return dst.Set(*value) - } - case *int16: - if value == nil { - *dst = Int2{Status: Null} - } else { - return dst.Set(*value) - } - case *uint16: - if value == nil { - *dst = Int2{Status: Null} - } else { - return dst.Set(*value) - } - case *int32: - if value == nil { - *dst = Int2{Status: Null} - } else { - return dst.Set(*value) - } - case *uint32: - if value == nil { - *dst = Int2{Status: Null} - } else { - return dst.Set(*value) - } - case *int64: - if value == nil { - *dst = Int2{Status: Null} - } else { - return dst.Set(*value) - } - case *uint64: - if value == nil { - *dst = Int2{Status: Null} - } else { - return dst.Set(*value) - } - case *int: - if value == nil { - *dst = Int2{Status: Null} - } else { - 
return dst.Set(*value) - } - case *uint: - if value == nil { - *dst = Int2{Status: Null} - } else { - return dst.Set(*value) - } - case *string: - if value == nil { - *dst = Int2{Status: Null} - } else { - return dst.Set(*value) - } - case *float32: - if value == nil { - *dst = Int2{Status: Null} - } else { - return dst.Set(*value) - } - case *float64: - if value == nil { - *dst = Int2{Status: Null} - } else { - return dst.Set(*value) - } - default: - if originalSrc, ok := underlyingNumberType(src); ok { - return dst.Set(originalSrc) - } - return fmt.Errorf("cannot convert %v to Int2", value) - } - - return nil -} - -func (dst Int2) Get() interface{} { - switch dst.Status { - case Present: - return dst.Int - case Null: - return nil - default: - return dst.Status - } -} - -func (src *Int2) AssignTo(dst interface{}) error { - return int64AssignTo(int64(src.Int), src.Status, dst) -} - -func (dst *Int2) DecodeText(ci *ConnInfo, src []byte) error { - if src == nil { - *dst = Int2{Status: Null} - return nil - } - - n, err := strconv.ParseInt(string(src), 10, 16) - if err != nil { - return err - } - - *dst = Int2{Int: int16(n), Status: Present} - return nil -} - -func (dst *Int2) DecodeBinary(ci *ConnInfo, src []byte) error { - if src == nil { - *dst = Int2{Status: Null} - return nil - } - - if len(src) != 2 { - return fmt.Errorf("invalid length for int2: %v", len(src)) - } - - n := int16(binary.BigEndian.Uint16(src)) - *dst = Int2{Int: n, Status: Present} - return nil -} - -func (src Int2) EncodeText(ci *ConnInfo, buf []byte) ([]byte, error) { - switch src.Status { - case Null: - return nil, nil - case Undefined: - return nil, errUndefined - } - - return append(buf, strconv.FormatInt(int64(src.Int), 10)...), nil -} - -func (src Int2) EncodeBinary(ci *ConnInfo, buf []byte) ([]byte, error) { - switch src.Status { - case Null: - return nil, nil - case Undefined: - return nil, errUndefined - } - - return pgio.AppendInt16(buf, src.Int), nil -} - -// Scan implements the 
database/sql Scanner interface. -func (dst *Int2) Scan(src interface{}) error { - if src == nil { - *dst = Int2{Status: Null} - return nil - } - - switch src := src.(type) { - case int64: - if src < math.MinInt16 { - return fmt.Errorf("%d is greater than maximum value for Int2", src) - } - if src > math.MaxInt16 { - return fmt.Errorf("%d is greater than maximum value for Int2", src) - } - *dst = Int2{Int: int16(src), Status: Present} - return nil - case string: - return dst.DecodeText(nil, []byte(src)) - case []byte: - srcCopy := make([]byte, len(src)) - copy(srcCopy, src) - return dst.DecodeText(nil, srcCopy) - } - - return fmt.Errorf("cannot scan %T", src) -} - -// Value implements the database/sql/driver Valuer interface. -func (src Int2) Value() (driver.Value, error) { - switch src.Status { - case Present: - return int64(src.Int), nil - case Null: - return nil, nil - default: - return nil, errUndefined - } -} - -func (src Int2) MarshalJSON() ([]byte, error) { - switch src.Status { - case Present: - return []byte(strconv.FormatInt(int64(src.Int), 10)), nil - case Null: - return []byte("null"), nil - case Undefined: - return nil, errUndefined - } - - return nil, errBadStatus -} - -func (dst *Int2) UnmarshalJSON(b []byte) error { - var n *int16 - err := json.Unmarshal(b, &n) - if err != nil { - return err - } - - if n == nil { - *dst = Int2{Status: Null} - } else { - *dst = Int2{Int: *n, Status: Present} - } - - return nil -} diff --git a/vendor/github.com/jackc/pgtype/int2_array.go b/vendor/github.com/jackc/pgtype/int2_array.go deleted file mode 100644 index a5133845..00000000 --- a/vendor/github.com/jackc/pgtype/int2_array.go +++ /dev/null @@ -1,909 +0,0 @@ -// Code generated by erb. DO NOT EDIT. 
- -package pgtype - -import ( - "database/sql/driver" - "encoding/binary" - "fmt" - "reflect" - - "github.com/jackc/pgio" -) - -type Int2Array struct { - Elements []Int2 - Dimensions []ArrayDimension - Status Status -} - -func (dst *Int2Array) Set(src interface{}) error { - // untyped nil and typed nil interfaces are different - if src == nil { - *dst = Int2Array{Status: Null} - return nil - } - - if value, ok := src.(interface{ Get() interface{} }); ok { - value2 := value.Get() - if value2 != value { - return dst.Set(value2) - } - } - - // Attempt to match to select common types: - switch value := src.(type) { - - case []int16: - if value == nil { - *dst = Int2Array{Status: Null} - } else if len(value) == 0 { - *dst = Int2Array{Status: Present} - } else { - elements := make([]Int2, len(value)) - for i := range value { - if err := elements[i].Set(value[i]); err != nil { - return err - } - } - *dst = Int2Array{ - Elements: elements, - Dimensions: []ArrayDimension{{Length: int32(len(elements)), LowerBound: 1}}, - Status: Present, - } - } - - case []*int16: - if value == nil { - *dst = Int2Array{Status: Null} - } else if len(value) == 0 { - *dst = Int2Array{Status: Present} - } else { - elements := make([]Int2, len(value)) - for i := range value { - if err := elements[i].Set(value[i]); err != nil { - return err - } - } - *dst = Int2Array{ - Elements: elements, - Dimensions: []ArrayDimension{{Length: int32(len(elements)), LowerBound: 1}}, - Status: Present, - } - } - - case []uint16: - if value == nil { - *dst = Int2Array{Status: Null} - } else if len(value) == 0 { - *dst = Int2Array{Status: Present} - } else { - elements := make([]Int2, len(value)) - for i := range value { - if err := elements[i].Set(value[i]); err != nil { - return err - } - } - *dst = Int2Array{ - Elements: elements, - Dimensions: []ArrayDimension{{Length: int32(len(elements)), LowerBound: 1}}, - Status: Present, - } - } - - case []*uint16: - if value == nil { - *dst = Int2Array{Status: Null} - } 
else if len(value) == 0 { - *dst = Int2Array{Status: Present} - } else { - elements := make([]Int2, len(value)) - for i := range value { - if err := elements[i].Set(value[i]); err != nil { - return err - } - } - *dst = Int2Array{ - Elements: elements, - Dimensions: []ArrayDimension{{Length: int32(len(elements)), LowerBound: 1}}, - Status: Present, - } - } - - case []int32: - if value == nil { - *dst = Int2Array{Status: Null} - } else if len(value) == 0 { - *dst = Int2Array{Status: Present} - } else { - elements := make([]Int2, len(value)) - for i := range value { - if err := elements[i].Set(value[i]); err != nil { - return err - } - } - *dst = Int2Array{ - Elements: elements, - Dimensions: []ArrayDimension{{Length: int32(len(elements)), LowerBound: 1}}, - Status: Present, - } - } - - case []*int32: - if value == nil { - *dst = Int2Array{Status: Null} - } else if len(value) == 0 { - *dst = Int2Array{Status: Present} - } else { - elements := make([]Int2, len(value)) - for i := range value { - if err := elements[i].Set(value[i]); err != nil { - return err - } - } - *dst = Int2Array{ - Elements: elements, - Dimensions: []ArrayDimension{{Length: int32(len(elements)), LowerBound: 1}}, - Status: Present, - } - } - - case []uint32: - if value == nil { - *dst = Int2Array{Status: Null} - } else if len(value) == 0 { - *dst = Int2Array{Status: Present} - } else { - elements := make([]Int2, len(value)) - for i := range value { - if err := elements[i].Set(value[i]); err != nil { - return err - } - } - *dst = Int2Array{ - Elements: elements, - Dimensions: []ArrayDimension{{Length: int32(len(elements)), LowerBound: 1}}, - Status: Present, - } - } - - case []*uint32: - if value == nil { - *dst = Int2Array{Status: Null} - } else if len(value) == 0 { - *dst = Int2Array{Status: Present} - } else { - elements := make([]Int2, len(value)) - for i := range value { - if err := elements[i].Set(value[i]); err != nil { - return err - } - } - *dst = Int2Array{ - Elements: elements, - 
Dimensions: []ArrayDimension{{Length: int32(len(elements)), LowerBound: 1}}, - Status: Present, - } - } - - case []int64: - if value == nil { - *dst = Int2Array{Status: Null} - } else if len(value) == 0 { - *dst = Int2Array{Status: Present} - } else { - elements := make([]Int2, len(value)) - for i := range value { - if err := elements[i].Set(value[i]); err != nil { - return err - } - } - *dst = Int2Array{ - Elements: elements, - Dimensions: []ArrayDimension{{Length: int32(len(elements)), LowerBound: 1}}, - Status: Present, - } - } - - case []*int64: - if value == nil { - *dst = Int2Array{Status: Null} - } else if len(value) == 0 { - *dst = Int2Array{Status: Present} - } else { - elements := make([]Int2, len(value)) - for i := range value { - if err := elements[i].Set(value[i]); err != nil { - return err - } - } - *dst = Int2Array{ - Elements: elements, - Dimensions: []ArrayDimension{{Length: int32(len(elements)), LowerBound: 1}}, - Status: Present, - } - } - - case []uint64: - if value == nil { - *dst = Int2Array{Status: Null} - } else if len(value) == 0 { - *dst = Int2Array{Status: Present} - } else { - elements := make([]Int2, len(value)) - for i := range value { - if err := elements[i].Set(value[i]); err != nil { - return err - } - } - *dst = Int2Array{ - Elements: elements, - Dimensions: []ArrayDimension{{Length: int32(len(elements)), LowerBound: 1}}, - Status: Present, - } - } - - case []*uint64: - if value == nil { - *dst = Int2Array{Status: Null} - } else if len(value) == 0 { - *dst = Int2Array{Status: Present} - } else { - elements := make([]Int2, len(value)) - for i := range value { - if err := elements[i].Set(value[i]); err != nil { - return err - } - } - *dst = Int2Array{ - Elements: elements, - Dimensions: []ArrayDimension{{Length: int32(len(elements)), LowerBound: 1}}, - Status: Present, - } - } - - case []int: - if value == nil { - *dst = Int2Array{Status: Null} - } else if len(value) == 0 { - *dst = Int2Array{Status: Present} - } else { - elements := 
make([]Int2, len(value)) - for i := range value { - if err := elements[i].Set(value[i]); err != nil { - return err - } - } - *dst = Int2Array{ - Elements: elements, - Dimensions: []ArrayDimension{{Length: int32(len(elements)), LowerBound: 1}}, - Status: Present, - } - } - - case []*int: - if value == nil { - *dst = Int2Array{Status: Null} - } else if len(value) == 0 { - *dst = Int2Array{Status: Present} - } else { - elements := make([]Int2, len(value)) - for i := range value { - if err := elements[i].Set(value[i]); err != nil { - return err - } - } - *dst = Int2Array{ - Elements: elements, - Dimensions: []ArrayDimension{{Length: int32(len(elements)), LowerBound: 1}}, - Status: Present, - } - } - - case []uint: - if value == nil { - *dst = Int2Array{Status: Null} - } else if len(value) == 0 { - *dst = Int2Array{Status: Present} - } else { - elements := make([]Int2, len(value)) - for i := range value { - if err := elements[i].Set(value[i]); err != nil { - return err - } - } - *dst = Int2Array{ - Elements: elements, - Dimensions: []ArrayDimension{{Length: int32(len(elements)), LowerBound: 1}}, - Status: Present, - } - } - - case []*uint: - if value == nil { - *dst = Int2Array{Status: Null} - } else if len(value) == 0 { - *dst = Int2Array{Status: Present} - } else { - elements := make([]Int2, len(value)) - for i := range value { - if err := elements[i].Set(value[i]); err != nil { - return err - } - } - *dst = Int2Array{ - Elements: elements, - Dimensions: []ArrayDimension{{Length: int32(len(elements)), LowerBound: 1}}, - Status: Present, - } - } - - case []Int2: - if value == nil { - *dst = Int2Array{Status: Null} - } else if len(value) == 0 { - *dst = Int2Array{Status: Present} - } else { - *dst = Int2Array{ - Elements: value, - Dimensions: []ArrayDimension{{Length: int32(len(value)), LowerBound: 1}}, - Status: Present, - } - } - default: - // Fallback to reflection if an optimised match was not found. 
- // The reflection is necessary for arrays and multidimensional slices, - // but it comes with a 20-50% performance penalty for large arrays/slices - reflectedValue := reflect.ValueOf(src) - if !reflectedValue.IsValid() || reflectedValue.IsZero() { - *dst = Int2Array{Status: Null} - return nil - } - - dimensions, elementsLength, ok := findDimensionsFromValue(reflectedValue, nil, 0) - if !ok { - return fmt.Errorf("cannot find dimensions of %v for Int2Array", src) - } - if elementsLength == 0 { - *dst = Int2Array{Status: Present} - return nil - } - if len(dimensions) == 0 { - if originalSrc, ok := underlyingSliceType(src); ok { - return dst.Set(originalSrc) - } - return fmt.Errorf("cannot convert %v to Int2Array", src) - } - - *dst = Int2Array{ - Elements: make([]Int2, elementsLength), - Dimensions: dimensions, - Status: Present, - } - elementCount, err := dst.setRecursive(reflectedValue, 0, 0) - if err != nil { - // Maybe the target was one dimension too far, try again: - if len(dst.Dimensions) > 1 { - dst.Dimensions = dst.Dimensions[:len(dst.Dimensions)-1] - elementsLength = 0 - for _, dim := range dst.Dimensions { - if elementsLength == 0 { - elementsLength = int(dim.Length) - } else { - elementsLength *= int(dim.Length) - } - } - dst.Elements = make([]Int2, elementsLength) - elementCount, err = dst.setRecursive(reflectedValue, 0, 0) - if err != nil { - return err - } - } else { - return err - } - } - if elementCount != len(dst.Elements) { - return fmt.Errorf("cannot convert %v to Int2Array, expected %d dst.Elements, but got %d instead", src, len(dst.Elements), elementCount) - } - } - - return nil -} - -func (dst *Int2Array) setRecursive(value reflect.Value, index, dimension int) (int, error) { - switch value.Kind() { - case reflect.Array: - fallthrough - case reflect.Slice: - if len(dst.Dimensions) == dimension { - break - } - - valueLen := value.Len() - if int32(valueLen) != dst.Dimensions[dimension].Length { - return 0, fmt.Errorf("multidimensional arrays must 
have array expressions with matching dimensions") - } - for i := 0; i < valueLen; i++ { - var err error - index, err = dst.setRecursive(value.Index(i), index, dimension+1) - if err != nil { - return 0, err - } - } - - return index, nil - } - if !value.CanInterface() { - return 0, fmt.Errorf("cannot convert all values to Int2Array") - } - if err := dst.Elements[index].Set(value.Interface()); err != nil { - return 0, fmt.Errorf("%v in Int2Array", err) - } - index++ - - return index, nil -} - -func (dst Int2Array) Get() interface{} { - switch dst.Status { - case Present: - return dst - case Null: - return nil - default: - return dst.Status - } -} - -func (src *Int2Array) AssignTo(dst interface{}) error { - switch src.Status { - case Present: - if len(src.Dimensions) <= 1 { - // Attempt to match to select common types: - switch v := dst.(type) { - - case *[]int16: - *v = make([]int16, len(src.Elements)) - for i := range src.Elements { - if err := src.Elements[i].AssignTo(&((*v)[i])); err != nil { - return err - } - } - return nil - - case *[]*int16: - *v = make([]*int16, len(src.Elements)) - for i := range src.Elements { - if err := src.Elements[i].AssignTo(&((*v)[i])); err != nil { - return err - } - } - return nil - - case *[]uint16: - *v = make([]uint16, len(src.Elements)) - for i := range src.Elements { - if err := src.Elements[i].AssignTo(&((*v)[i])); err != nil { - return err - } - } - return nil - - case *[]*uint16: - *v = make([]*uint16, len(src.Elements)) - for i := range src.Elements { - if err := src.Elements[i].AssignTo(&((*v)[i])); err != nil { - return err - } - } - return nil - - case *[]int32: - *v = make([]int32, len(src.Elements)) - for i := range src.Elements { - if err := src.Elements[i].AssignTo(&((*v)[i])); err != nil { - return err - } - } - return nil - - case *[]*int32: - *v = make([]*int32, len(src.Elements)) - for i := range src.Elements { - if err := src.Elements[i].AssignTo(&((*v)[i])); err != nil { - return err - } - } - return nil - - 
case *[]uint32: - *v = make([]uint32, len(src.Elements)) - for i := range src.Elements { - if err := src.Elements[i].AssignTo(&((*v)[i])); err != nil { - return err - } - } - return nil - - case *[]*uint32: - *v = make([]*uint32, len(src.Elements)) - for i := range src.Elements { - if err := src.Elements[i].AssignTo(&((*v)[i])); err != nil { - return err - } - } - return nil - - case *[]int64: - *v = make([]int64, len(src.Elements)) - for i := range src.Elements { - if err := src.Elements[i].AssignTo(&((*v)[i])); err != nil { - return err - } - } - return nil - - case *[]*int64: - *v = make([]*int64, len(src.Elements)) - for i := range src.Elements { - if err := src.Elements[i].AssignTo(&((*v)[i])); err != nil { - return err - } - } - return nil - - case *[]uint64: - *v = make([]uint64, len(src.Elements)) - for i := range src.Elements { - if err := src.Elements[i].AssignTo(&((*v)[i])); err != nil { - return err - } - } - return nil - - case *[]*uint64: - *v = make([]*uint64, len(src.Elements)) - for i := range src.Elements { - if err := src.Elements[i].AssignTo(&((*v)[i])); err != nil { - return err - } - } - return nil - - case *[]int: - *v = make([]int, len(src.Elements)) - for i := range src.Elements { - if err := src.Elements[i].AssignTo(&((*v)[i])); err != nil { - return err - } - } - return nil - - case *[]*int: - *v = make([]*int, len(src.Elements)) - for i := range src.Elements { - if err := src.Elements[i].AssignTo(&((*v)[i])); err != nil { - return err - } - } - return nil - - case *[]uint: - *v = make([]uint, len(src.Elements)) - for i := range src.Elements { - if err := src.Elements[i].AssignTo(&((*v)[i])); err != nil { - return err - } - } - return nil - - case *[]*uint: - *v = make([]*uint, len(src.Elements)) - for i := range src.Elements { - if err := src.Elements[i].AssignTo(&((*v)[i])); err != nil { - return err - } - } - return nil - - } - } - - // Try to convert to something AssignTo can use directly. 
- if nextDst, retry := GetAssignToDstType(dst); retry { - return src.AssignTo(nextDst) - } - - // Fallback to reflection if an optimised match was not found. - // The reflection is necessary for arrays and multidimensional slices, - // but it comes with a 20-50% performance penalty for large arrays/slices - value := reflect.ValueOf(dst) - if value.Kind() == reflect.Ptr { - value = value.Elem() - } - - switch value.Kind() { - case reflect.Array, reflect.Slice: - default: - return fmt.Errorf("cannot assign %T to %T", src, dst) - } - - if len(src.Elements) == 0 { - if value.Kind() == reflect.Slice { - value.Set(reflect.MakeSlice(value.Type(), 0, 0)) - return nil - } - } - - elementCount, err := src.assignToRecursive(value, 0, 0) - if err != nil { - return err - } - if elementCount != len(src.Elements) { - return fmt.Errorf("cannot assign %v, needed to assign %d elements, but only assigned %d", dst, len(src.Elements), elementCount) - } - - return nil - case Null: - return NullAssignTo(dst) - } - - return fmt.Errorf("cannot decode %#v into %T", src, dst) -} - -func (src *Int2Array) assignToRecursive(value reflect.Value, index, dimension int) (int, error) { - switch kind := value.Kind(); kind { - case reflect.Array: - fallthrough - case reflect.Slice: - if len(src.Dimensions) == dimension { - break - } - - length := int(src.Dimensions[dimension].Length) - if reflect.Array == kind { - typ := value.Type() - if typ.Len() != length { - return 0, fmt.Errorf("expected size %d array, but %s has size %d array", length, typ, typ.Len()) - } - value.Set(reflect.New(typ).Elem()) - } else { - value.Set(reflect.MakeSlice(value.Type(), length, length)) - } - - var err error - for i := 0; i < length; i++ { - index, err = src.assignToRecursive(value.Index(i), index, dimension+1) - if err != nil { - return 0, err - } - } - - return index, nil - } - if len(src.Dimensions) != dimension { - return 0, fmt.Errorf("incorrect dimensions, expected %d, found %d", len(src.Dimensions), dimension) - 
} - if !value.CanAddr() { - return 0, fmt.Errorf("cannot assign all values from Int2Array") - } - addr := value.Addr() - if !addr.CanInterface() { - return 0, fmt.Errorf("cannot assign all values from Int2Array") - } - if err := src.Elements[index].AssignTo(addr.Interface()); err != nil { - return 0, err - } - index++ - return index, nil -} - -func (dst *Int2Array) DecodeText(ci *ConnInfo, src []byte) error { - if src == nil { - *dst = Int2Array{Status: Null} - return nil - } - - uta, err := ParseUntypedTextArray(string(src)) - if err != nil { - return err - } - - var elements []Int2 - - if len(uta.Elements) > 0 { - elements = make([]Int2, len(uta.Elements)) - - for i, s := range uta.Elements { - var elem Int2 - var elemSrc []byte - if s != "NULL" || uta.Quoted[i] { - elemSrc = []byte(s) - } - err = elem.DecodeText(ci, elemSrc) - if err != nil { - return err - } - - elements[i] = elem - } - } - - *dst = Int2Array{Elements: elements, Dimensions: uta.Dimensions, Status: Present} - - return nil -} - -func (dst *Int2Array) DecodeBinary(ci *ConnInfo, src []byte) error { - if src == nil { - *dst = Int2Array{Status: Null} - return nil - } - - var arrayHeader ArrayHeader - rp, err := arrayHeader.DecodeBinary(ci, src) - if err != nil { - return err - } - - if len(arrayHeader.Dimensions) == 0 { - *dst = Int2Array{Dimensions: arrayHeader.Dimensions, Status: Present} - return nil - } - - elementCount := arrayHeader.Dimensions[0].Length - for _, d := range arrayHeader.Dimensions[1:] { - elementCount *= d.Length - } - - elements := make([]Int2, elementCount) - - for i := range elements { - elemLen := int(int32(binary.BigEndian.Uint32(src[rp:]))) - rp += 4 - var elemSrc []byte - if elemLen >= 0 { - elemSrc = src[rp : rp+elemLen] - rp += elemLen - } - err = elements[i].DecodeBinary(ci, elemSrc) - if err != nil { - return err - } - } - - *dst = Int2Array{Elements: elements, Dimensions: arrayHeader.Dimensions, Status: Present} - return nil -} - -func (src Int2Array) EncodeText(ci 
*ConnInfo, buf []byte) ([]byte, error) { - switch src.Status { - case Null: - return nil, nil - case Undefined: - return nil, errUndefined - } - - if len(src.Dimensions) == 0 { - return append(buf, '{', '}'), nil - } - - buf = EncodeTextArrayDimensions(buf, src.Dimensions) - - // dimElemCounts is the multiples of elements that each array lies on. For - // example, a single dimension array of length 4 would have a dimElemCounts of - // [4]. A multi-dimensional array of lengths [3,5,2] would have a - // dimElemCounts of [30,10,2]. This is used to simplify when to render a '{' - // or '}'. - dimElemCounts := make([]int, len(src.Dimensions)) - dimElemCounts[len(src.Dimensions)-1] = int(src.Dimensions[len(src.Dimensions)-1].Length) - for i := len(src.Dimensions) - 2; i > -1; i-- { - dimElemCounts[i] = int(src.Dimensions[i].Length) * dimElemCounts[i+1] - } - - inElemBuf := make([]byte, 0, 32) - for i, elem := range src.Elements { - if i > 0 { - buf = append(buf, ',') - } - - for _, dec := range dimElemCounts { - if i%dec == 0 { - buf = append(buf, '{') - } - } - - elemBuf, err := elem.EncodeText(ci, inElemBuf) - if err != nil { - return nil, err - } - if elemBuf == nil { - buf = append(buf, `NULL`...) - } else { - buf = append(buf, QuoteArrayElementIfNeeded(string(elemBuf))...) 
- } - - for _, dec := range dimElemCounts { - if (i+1)%dec == 0 { - buf = append(buf, '}') - } - } - } - - return buf, nil -} - -func (src Int2Array) EncodeBinary(ci *ConnInfo, buf []byte) ([]byte, error) { - switch src.Status { - case Null: - return nil, nil - case Undefined: - return nil, errUndefined - } - - arrayHeader := ArrayHeader{ - Dimensions: src.Dimensions, - } - - if dt, ok := ci.DataTypeForName("int2"); ok { - arrayHeader.ElementOID = int32(dt.OID) - } else { - return nil, fmt.Errorf("unable to find oid for type name %v", "int2") - } - - for i := range src.Elements { - if src.Elements[i].Status == Null { - arrayHeader.ContainsNull = true - break - } - } - - buf = arrayHeader.EncodeBinary(ci, buf) - - for i := range src.Elements { - sp := len(buf) - buf = pgio.AppendInt32(buf, -1) - - elemBuf, err := src.Elements[i].EncodeBinary(ci, buf) - if err != nil { - return nil, err - } - if elemBuf != nil { - buf = elemBuf - pgio.SetInt32(buf[sp:], int32(len(buf[sp:])-4)) - } - } - - return buf, nil -} - -// Scan implements the database/sql Scanner interface. -func (dst *Int2Array) Scan(src interface{}) error { - if src == nil { - return dst.DecodeText(nil, nil) - } - - switch src := src.(type) { - case string: - return dst.DecodeText(nil, []byte(src)) - case []byte: - srcCopy := make([]byte, len(src)) - copy(srcCopy, src) - return dst.DecodeText(nil, srcCopy) - } - - return fmt.Errorf("cannot scan %T", src) -} - -// Value implements the database/sql/driver Valuer interface. 
-func (src Int2Array) Value() (driver.Value, error) { - buf, err := src.EncodeText(nil, nil) - if err != nil { - return nil, err - } - if buf == nil { - return nil, nil - } - - return string(buf), nil -} diff --git a/vendor/github.com/jackc/pgtype/int4.go b/vendor/github.com/jackc/pgtype/int4.go deleted file mode 100644 index 22b48e5e..00000000 --- a/vendor/github.com/jackc/pgtype/int4.go +++ /dev/null @@ -1,312 +0,0 @@ -package pgtype - -import ( - "database/sql/driver" - "encoding/binary" - "encoding/json" - "fmt" - "math" - "strconv" - - "github.com/jackc/pgio" -) - -type Int4 struct { - Int int32 - Status Status -} - -func (dst *Int4) Set(src interface{}) error { - if src == nil { - *dst = Int4{Status: Null} - return nil - } - - if value, ok := src.(interface{ Get() interface{} }); ok { - value2 := value.Get() - if value2 != value { - return dst.Set(value2) - } - } - - switch value := src.(type) { - case int8: - *dst = Int4{Int: int32(value), Status: Present} - case uint8: - *dst = Int4{Int: int32(value), Status: Present} - case int16: - *dst = Int4{Int: int32(value), Status: Present} - case uint16: - *dst = Int4{Int: int32(value), Status: Present} - case int32: - *dst = Int4{Int: int32(value), Status: Present} - case uint32: - if value > math.MaxInt32 { - return fmt.Errorf("%d is greater than maximum value for Int4", value) - } - *dst = Int4{Int: int32(value), Status: Present} - case int64: - if value < math.MinInt32 { - return fmt.Errorf("%d is greater than maximum value for Int4", value) - } - if value > math.MaxInt32 { - return fmt.Errorf("%d is greater than maximum value for Int4", value) - } - *dst = Int4{Int: int32(value), Status: Present} - case uint64: - if value > math.MaxInt32 { - return fmt.Errorf("%d is greater than maximum value for Int4", value) - } - *dst = Int4{Int: int32(value), Status: Present} - case int: - if value < math.MinInt32 { - return fmt.Errorf("%d is greater than maximum value for Int4", value) - } - if value > math.MaxInt32 { - 
return fmt.Errorf("%d is greater than maximum value for Int4", value) - } - *dst = Int4{Int: int32(value), Status: Present} - case uint: - if value > math.MaxInt32 { - return fmt.Errorf("%d is greater than maximum value for Int4", value) - } - *dst = Int4{Int: int32(value), Status: Present} - case string: - num, err := strconv.ParseInt(value, 10, 32) - if err != nil { - return err - } - *dst = Int4{Int: int32(num), Status: Present} - case float32: - if value > math.MaxInt32 { - return fmt.Errorf("%f is greater than maximum value for Int4", value) - } - *dst = Int4{Int: int32(value), Status: Present} - case float64: - if value > math.MaxInt32 { - return fmt.Errorf("%f is greater than maximum value for Int4", value) - } - *dst = Int4{Int: int32(value), Status: Present} - case *int8: - if value == nil { - *dst = Int4{Status: Null} - } else { - return dst.Set(*value) - } - case *uint8: - if value == nil { - *dst = Int4{Status: Null} - } else { - return dst.Set(*value) - } - case *int16: - if value == nil { - *dst = Int4{Status: Null} - } else { - return dst.Set(*value) - } - case *uint16: - if value == nil { - *dst = Int4{Status: Null} - } else { - return dst.Set(*value) - } - case *int32: - if value == nil { - *dst = Int4{Status: Null} - } else { - return dst.Set(*value) - } - case *uint32: - if value == nil { - *dst = Int4{Status: Null} - } else { - return dst.Set(*value) - } - case *int64: - if value == nil { - *dst = Int4{Status: Null} - } else { - return dst.Set(*value) - } - case *uint64: - if value == nil { - *dst = Int4{Status: Null} - } else { - return dst.Set(*value) - } - case *int: - if value == nil { - *dst = Int4{Status: Null} - } else { - return dst.Set(*value) - } - case *uint: - if value == nil { - *dst = Int4{Status: Null} - } else { - return dst.Set(*value) - } - case *string: - if value == nil { - *dst = Int4{Status: Null} - } else { - return dst.Set(*value) - } - case *float32: - if value == nil { - *dst = Int4{Status: Null} - } else { - return 
dst.Set(*value) - } - case *float64: - if value == nil { - *dst = Int4{Status: Null} - } else { - return dst.Set(*value) - } - default: - if originalSrc, ok := underlyingNumberType(src); ok { - return dst.Set(originalSrc) - } - return fmt.Errorf("cannot convert %v to Int4", value) - } - - return nil -} - -func (dst Int4) Get() interface{} { - switch dst.Status { - case Present: - return dst.Int - case Null: - return nil - default: - return dst.Status - } -} - -func (src *Int4) AssignTo(dst interface{}) error { - return int64AssignTo(int64(src.Int), src.Status, dst) -} - -func (dst *Int4) DecodeText(ci *ConnInfo, src []byte) error { - if src == nil { - *dst = Int4{Status: Null} - return nil - } - - n, err := strconv.ParseInt(string(src), 10, 32) - if err != nil { - return err - } - - *dst = Int4{Int: int32(n), Status: Present} - return nil -} - -func (dst *Int4) DecodeBinary(ci *ConnInfo, src []byte) error { - if src == nil { - *dst = Int4{Status: Null} - return nil - } - - if len(src) != 4 { - return fmt.Errorf("invalid length for int4: %v", len(src)) - } - - n := int32(binary.BigEndian.Uint32(src)) - *dst = Int4{Int: n, Status: Present} - return nil -} - -func (src Int4) EncodeText(ci *ConnInfo, buf []byte) ([]byte, error) { - switch src.Status { - case Null: - return nil, nil - case Undefined: - return nil, errUndefined - } - - return append(buf, strconv.FormatInt(int64(src.Int), 10)...), nil -} - -func (src Int4) EncodeBinary(ci *ConnInfo, buf []byte) ([]byte, error) { - switch src.Status { - case Null: - return nil, nil - case Undefined: - return nil, errUndefined - } - - return pgio.AppendInt32(buf, src.Int), nil -} - -// Scan implements the database/sql Scanner interface. 
-func (dst *Int4) Scan(src interface{}) error { - if src == nil { - *dst = Int4{Status: Null} - return nil - } - - switch src := src.(type) { - case int64: - if src < math.MinInt32 { - return fmt.Errorf("%d is greater than maximum value for Int4", src) - } - if src > math.MaxInt32 { - return fmt.Errorf("%d is greater than maximum value for Int4", src) - } - *dst = Int4{Int: int32(src), Status: Present} - return nil - case string: - return dst.DecodeText(nil, []byte(src)) - case []byte: - srcCopy := make([]byte, len(src)) - copy(srcCopy, src) - return dst.DecodeText(nil, srcCopy) - } - - return fmt.Errorf("cannot scan %T", src) -} - -// Value implements the database/sql/driver Valuer interface. -func (src Int4) Value() (driver.Value, error) { - switch src.Status { - case Present: - return int64(src.Int), nil - case Null: - return nil, nil - default: - return nil, errUndefined - } -} - -func (src Int4) MarshalJSON() ([]byte, error) { - switch src.Status { - case Present: - return []byte(strconv.FormatInt(int64(src.Int), 10)), nil - case Null: - return []byte("null"), nil - case Undefined: - return nil, errUndefined - } - - return nil, errBadStatus -} - -func (dst *Int4) UnmarshalJSON(b []byte) error { - var n *int32 - err := json.Unmarshal(b, &n) - if err != nil { - return err - } - - if n == nil { - *dst = Int4{Status: Null} - } else { - *dst = Int4{Int: *n, Status: Present} - } - - return nil -} diff --git a/vendor/github.com/jackc/pgtype/int4_array.go b/vendor/github.com/jackc/pgtype/int4_array.go deleted file mode 100644 index de26236f..00000000 --- a/vendor/github.com/jackc/pgtype/int4_array.go +++ /dev/null @@ -1,909 +0,0 @@ -// Code generated by erb. DO NOT EDIT. 
- -package pgtype - -import ( - "database/sql/driver" - "encoding/binary" - "fmt" - "reflect" - - "github.com/jackc/pgio" -) - -type Int4Array struct { - Elements []Int4 - Dimensions []ArrayDimension - Status Status -} - -func (dst *Int4Array) Set(src interface{}) error { - // untyped nil and typed nil interfaces are different - if src == nil { - *dst = Int4Array{Status: Null} - return nil - } - - if value, ok := src.(interface{ Get() interface{} }); ok { - value2 := value.Get() - if value2 != value { - return dst.Set(value2) - } - } - - // Attempt to match to select common types: - switch value := src.(type) { - - case []int16: - if value == nil { - *dst = Int4Array{Status: Null} - } else if len(value) == 0 { - *dst = Int4Array{Status: Present} - } else { - elements := make([]Int4, len(value)) - for i := range value { - if err := elements[i].Set(value[i]); err != nil { - return err - } - } - *dst = Int4Array{ - Elements: elements, - Dimensions: []ArrayDimension{{Length: int32(len(elements)), LowerBound: 1}}, - Status: Present, - } - } - - case []*int16: - if value == nil { - *dst = Int4Array{Status: Null} - } else if len(value) == 0 { - *dst = Int4Array{Status: Present} - } else { - elements := make([]Int4, len(value)) - for i := range value { - if err := elements[i].Set(value[i]); err != nil { - return err - } - } - *dst = Int4Array{ - Elements: elements, - Dimensions: []ArrayDimension{{Length: int32(len(elements)), LowerBound: 1}}, - Status: Present, - } - } - - case []uint16: - if value == nil { - *dst = Int4Array{Status: Null} - } else if len(value) == 0 { - *dst = Int4Array{Status: Present} - } else { - elements := make([]Int4, len(value)) - for i := range value { - if err := elements[i].Set(value[i]); err != nil { - return err - } - } - *dst = Int4Array{ - Elements: elements, - Dimensions: []ArrayDimension{{Length: int32(len(elements)), LowerBound: 1}}, - Status: Present, - } - } - - case []*uint16: - if value == nil { - *dst = Int4Array{Status: Null} - } 
else if len(value) == 0 { - *dst = Int4Array{Status: Present} - } else { - elements := make([]Int4, len(value)) - for i := range value { - if err := elements[i].Set(value[i]); err != nil { - return err - } - } - *dst = Int4Array{ - Elements: elements, - Dimensions: []ArrayDimension{{Length: int32(len(elements)), LowerBound: 1}}, - Status: Present, - } - } - - case []int32: - if value == nil { - *dst = Int4Array{Status: Null} - } else if len(value) == 0 { - *dst = Int4Array{Status: Present} - } else { - elements := make([]Int4, len(value)) - for i := range value { - if err := elements[i].Set(value[i]); err != nil { - return err - } - } - *dst = Int4Array{ - Elements: elements, - Dimensions: []ArrayDimension{{Length: int32(len(elements)), LowerBound: 1}}, - Status: Present, - } - } - - case []*int32: - if value == nil { - *dst = Int4Array{Status: Null} - } else if len(value) == 0 { - *dst = Int4Array{Status: Present} - } else { - elements := make([]Int4, len(value)) - for i := range value { - if err := elements[i].Set(value[i]); err != nil { - return err - } - } - *dst = Int4Array{ - Elements: elements, - Dimensions: []ArrayDimension{{Length: int32(len(elements)), LowerBound: 1}}, - Status: Present, - } - } - - case []uint32: - if value == nil { - *dst = Int4Array{Status: Null} - } else if len(value) == 0 { - *dst = Int4Array{Status: Present} - } else { - elements := make([]Int4, len(value)) - for i := range value { - if err := elements[i].Set(value[i]); err != nil { - return err - } - } - *dst = Int4Array{ - Elements: elements, - Dimensions: []ArrayDimension{{Length: int32(len(elements)), LowerBound: 1}}, - Status: Present, - } - } - - case []*uint32: - if value == nil { - *dst = Int4Array{Status: Null} - } else if len(value) == 0 { - *dst = Int4Array{Status: Present} - } else { - elements := make([]Int4, len(value)) - for i := range value { - if err := elements[i].Set(value[i]); err != nil { - return err - } - } - *dst = Int4Array{ - Elements: elements, - 
Dimensions: []ArrayDimension{{Length: int32(len(elements)), LowerBound: 1}}, - Status: Present, - } - } - - case []int64: - if value == nil { - *dst = Int4Array{Status: Null} - } else if len(value) == 0 { - *dst = Int4Array{Status: Present} - } else { - elements := make([]Int4, len(value)) - for i := range value { - if err := elements[i].Set(value[i]); err != nil { - return err - } - } - *dst = Int4Array{ - Elements: elements, - Dimensions: []ArrayDimension{{Length: int32(len(elements)), LowerBound: 1}}, - Status: Present, - } - } - - case []*int64: - if value == nil { - *dst = Int4Array{Status: Null} - } else if len(value) == 0 { - *dst = Int4Array{Status: Present} - } else { - elements := make([]Int4, len(value)) - for i := range value { - if err := elements[i].Set(value[i]); err != nil { - return err - } - } - *dst = Int4Array{ - Elements: elements, - Dimensions: []ArrayDimension{{Length: int32(len(elements)), LowerBound: 1}}, - Status: Present, - } - } - - case []uint64: - if value == nil { - *dst = Int4Array{Status: Null} - } else if len(value) == 0 { - *dst = Int4Array{Status: Present} - } else { - elements := make([]Int4, len(value)) - for i := range value { - if err := elements[i].Set(value[i]); err != nil { - return err - } - } - *dst = Int4Array{ - Elements: elements, - Dimensions: []ArrayDimension{{Length: int32(len(elements)), LowerBound: 1}}, - Status: Present, - } - } - - case []*uint64: - if value == nil { - *dst = Int4Array{Status: Null} - } else if len(value) == 0 { - *dst = Int4Array{Status: Present} - } else { - elements := make([]Int4, len(value)) - for i := range value { - if err := elements[i].Set(value[i]); err != nil { - return err - } - } - *dst = Int4Array{ - Elements: elements, - Dimensions: []ArrayDimension{{Length: int32(len(elements)), LowerBound: 1}}, - Status: Present, - } - } - - case []int: - if value == nil { - *dst = Int4Array{Status: Null} - } else if len(value) == 0 { - *dst = Int4Array{Status: Present} - } else { - elements := 
make([]Int4, len(value)) - for i := range value { - if err := elements[i].Set(value[i]); err != nil { - return err - } - } - *dst = Int4Array{ - Elements: elements, - Dimensions: []ArrayDimension{{Length: int32(len(elements)), LowerBound: 1}}, - Status: Present, - } - } - - case []*int: - if value == nil { - *dst = Int4Array{Status: Null} - } else if len(value) == 0 { - *dst = Int4Array{Status: Present} - } else { - elements := make([]Int4, len(value)) - for i := range value { - if err := elements[i].Set(value[i]); err != nil { - return err - } - } - *dst = Int4Array{ - Elements: elements, - Dimensions: []ArrayDimension{{Length: int32(len(elements)), LowerBound: 1}}, - Status: Present, - } - } - - case []uint: - if value == nil { - *dst = Int4Array{Status: Null} - } else if len(value) == 0 { - *dst = Int4Array{Status: Present} - } else { - elements := make([]Int4, len(value)) - for i := range value { - if err := elements[i].Set(value[i]); err != nil { - return err - } - } - *dst = Int4Array{ - Elements: elements, - Dimensions: []ArrayDimension{{Length: int32(len(elements)), LowerBound: 1}}, - Status: Present, - } - } - - case []*uint: - if value == nil { - *dst = Int4Array{Status: Null} - } else if len(value) == 0 { - *dst = Int4Array{Status: Present} - } else { - elements := make([]Int4, len(value)) - for i := range value { - if err := elements[i].Set(value[i]); err != nil { - return err - } - } - *dst = Int4Array{ - Elements: elements, - Dimensions: []ArrayDimension{{Length: int32(len(elements)), LowerBound: 1}}, - Status: Present, - } - } - - case []Int4: - if value == nil { - *dst = Int4Array{Status: Null} - } else if len(value) == 0 { - *dst = Int4Array{Status: Present} - } else { - *dst = Int4Array{ - Elements: value, - Dimensions: []ArrayDimension{{Length: int32(len(value)), LowerBound: 1}}, - Status: Present, - } - } - default: - // Fallback to reflection if an optimised match was not found. 
- // The reflection is necessary for arrays and multidimensional slices, - // but it comes with a 20-50% performance penalty for large arrays/slices - reflectedValue := reflect.ValueOf(src) - if !reflectedValue.IsValid() || reflectedValue.IsZero() { - *dst = Int4Array{Status: Null} - return nil - } - - dimensions, elementsLength, ok := findDimensionsFromValue(reflectedValue, nil, 0) - if !ok { - return fmt.Errorf("cannot find dimensions of %v for Int4Array", src) - } - if elementsLength == 0 { - *dst = Int4Array{Status: Present} - return nil - } - if len(dimensions) == 0 { - if originalSrc, ok := underlyingSliceType(src); ok { - return dst.Set(originalSrc) - } - return fmt.Errorf("cannot convert %v to Int4Array", src) - } - - *dst = Int4Array{ - Elements: make([]Int4, elementsLength), - Dimensions: dimensions, - Status: Present, - } - elementCount, err := dst.setRecursive(reflectedValue, 0, 0) - if err != nil { - // Maybe the target was one dimension too far, try again: - if len(dst.Dimensions) > 1 { - dst.Dimensions = dst.Dimensions[:len(dst.Dimensions)-1] - elementsLength = 0 - for _, dim := range dst.Dimensions { - if elementsLength == 0 { - elementsLength = int(dim.Length) - } else { - elementsLength *= int(dim.Length) - } - } - dst.Elements = make([]Int4, elementsLength) - elementCount, err = dst.setRecursive(reflectedValue, 0, 0) - if err != nil { - return err - } - } else { - return err - } - } - if elementCount != len(dst.Elements) { - return fmt.Errorf("cannot convert %v to Int4Array, expected %d dst.Elements, but got %d instead", src, len(dst.Elements), elementCount) - } - } - - return nil -} - -func (dst *Int4Array) setRecursive(value reflect.Value, index, dimension int) (int, error) { - switch value.Kind() { - case reflect.Array: - fallthrough - case reflect.Slice: - if len(dst.Dimensions) == dimension { - break - } - - valueLen := value.Len() - if int32(valueLen) != dst.Dimensions[dimension].Length { - return 0, fmt.Errorf("multidimensional arrays must 
have array expressions with matching dimensions") - } - for i := 0; i < valueLen; i++ { - var err error - index, err = dst.setRecursive(value.Index(i), index, dimension+1) - if err != nil { - return 0, err - } - } - - return index, nil - } - if !value.CanInterface() { - return 0, fmt.Errorf("cannot convert all values to Int4Array") - } - if err := dst.Elements[index].Set(value.Interface()); err != nil { - return 0, fmt.Errorf("%v in Int4Array", err) - } - index++ - - return index, nil -} - -func (dst Int4Array) Get() interface{} { - switch dst.Status { - case Present: - return dst - case Null: - return nil - default: - return dst.Status - } -} - -func (src *Int4Array) AssignTo(dst interface{}) error { - switch src.Status { - case Present: - if len(src.Dimensions) <= 1 { - // Attempt to match to select common types: - switch v := dst.(type) { - - case *[]int16: - *v = make([]int16, len(src.Elements)) - for i := range src.Elements { - if err := src.Elements[i].AssignTo(&((*v)[i])); err != nil { - return err - } - } - return nil - - case *[]*int16: - *v = make([]*int16, len(src.Elements)) - for i := range src.Elements { - if err := src.Elements[i].AssignTo(&((*v)[i])); err != nil { - return err - } - } - return nil - - case *[]uint16: - *v = make([]uint16, len(src.Elements)) - for i := range src.Elements { - if err := src.Elements[i].AssignTo(&((*v)[i])); err != nil { - return err - } - } - return nil - - case *[]*uint16: - *v = make([]*uint16, len(src.Elements)) - for i := range src.Elements { - if err := src.Elements[i].AssignTo(&((*v)[i])); err != nil { - return err - } - } - return nil - - case *[]int32: - *v = make([]int32, len(src.Elements)) - for i := range src.Elements { - if err := src.Elements[i].AssignTo(&((*v)[i])); err != nil { - return err - } - } - return nil - - case *[]*int32: - *v = make([]*int32, len(src.Elements)) - for i := range src.Elements { - if err := src.Elements[i].AssignTo(&((*v)[i])); err != nil { - return err - } - } - return nil - - 
case *[]uint32: - *v = make([]uint32, len(src.Elements)) - for i := range src.Elements { - if err := src.Elements[i].AssignTo(&((*v)[i])); err != nil { - return err - } - } - return nil - - case *[]*uint32: - *v = make([]*uint32, len(src.Elements)) - for i := range src.Elements { - if err := src.Elements[i].AssignTo(&((*v)[i])); err != nil { - return err - } - } - return nil - - case *[]int64: - *v = make([]int64, len(src.Elements)) - for i := range src.Elements { - if err := src.Elements[i].AssignTo(&((*v)[i])); err != nil { - return err - } - } - return nil - - case *[]*int64: - *v = make([]*int64, len(src.Elements)) - for i := range src.Elements { - if err := src.Elements[i].AssignTo(&((*v)[i])); err != nil { - return err - } - } - return nil - - case *[]uint64: - *v = make([]uint64, len(src.Elements)) - for i := range src.Elements { - if err := src.Elements[i].AssignTo(&((*v)[i])); err != nil { - return err - } - } - return nil - - case *[]*uint64: - *v = make([]*uint64, len(src.Elements)) - for i := range src.Elements { - if err := src.Elements[i].AssignTo(&((*v)[i])); err != nil { - return err - } - } - return nil - - case *[]int: - *v = make([]int, len(src.Elements)) - for i := range src.Elements { - if err := src.Elements[i].AssignTo(&((*v)[i])); err != nil { - return err - } - } - return nil - - case *[]*int: - *v = make([]*int, len(src.Elements)) - for i := range src.Elements { - if err := src.Elements[i].AssignTo(&((*v)[i])); err != nil { - return err - } - } - return nil - - case *[]uint: - *v = make([]uint, len(src.Elements)) - for i := range src.Elements { - if err := src.Elements[i].AssignTo(&((*v)[i])); err != nil { - return err - } - } - return nil - - case *[]*uint: - *v = make([]*uint, len(src.Elements)) - for i := range src.Elements { - if err := src.Elements[i].AssignTo(&((*v)[i])); err != nil { - return err - } - } - return nil - - } - } - - // Try to convert to something AssignTo can use directly. 
- if nextDst, retry := GetAssignToDstType(dst); retry { - return src.AssignTo(nextDst) - } - - // Fallback to reflection if an optimised match was not found. - // The reflection is necessary for arrays and multidimensional slices, - // but it comes with a 20-50% performance penalty for large arrays/slices - value := reflect.ValueOf(dst) - if value.Kind() == reflect.Ptr { - value = value.Elem() - } - - switch value.Kind() { - case reflect.Array, reflect.Slice: - default: - return fmt.Errorf("cannot assign %T to %T", src, dst) - } - - if len(src.Elements) == 0 { - if value.Kind() == reflect.Slice { - value.Set(reflect.MakeSlice(value.Type(), 0, 0)) - return nil - } - } - - elementCount, err := src.assignToRecursive(value, 0, 0) - if err != nil { - return err - } - if elementCount != len(src.Elements) { - return fmt.Errorf("cannot assign %v, needed to assign %d elements, but only assigned %d", dst, len(src.Elements), elementCount) - } - - return nil - case Null: - return NullAssignTo(dst) - } - - return fmt.Errorf("cannot decode %#v into %T", src, dst) -} - -func (src *Int4Array) assignToRecursive(value reflect.Value, index, dimension int) (int, error) { - switch kind := value.Kind(); kind { - case reflect.Array: - fallthrough - case reflect.Slice: - if len(src.Dimensions) == dimension { - break - } - - length := int(src.Dimensions[dimension].Length) - if reflect.Array == kind { - typ := value.Type() - if typ.Len() != length { - return 0, fmt.Errorf("expected size %d array, but %s has size %d array", length, typ, typ.Len()) - } - value.Set(reflect.New(typ).Elem()) - } else { - value.Set(reflect.MakeSlice(value.Type(), length, length)) - } - - var err error - for i := 0; i < length; i++ { - index, err = src.assignToRecursive(value.Index(i), index, dimension+1) - if err != nil { - return 0, err - } - } - - return index, nil - } - if len(src.Dimensions) != dimension { - return 0, fmt.Errorf("incorrect dimensions, expected %d, found %d", len(src.Dimensions), dimension) - 
} - if !value.CanAddr() { - return 0, fmt.Errorf("cannot assign all values from Int4Array") - } - addr := value.Addr() - if !addr.CanInterface() { - return 0, fmt.Errorf("cannot assign all values from Int4Array") - } - if err := src.Elements[index].AssignTo(addr.Interface()); err != nil { - return 0, err - } - index++ - return index, nil -} - -func (dst *Int4Array) DecodeText(ci *ConnInfo, src []byte) error { - if src == nil { - *dst = Int4Array{Status: Null} - return nil - } - - uta, err := ParseUntypedTextArray(string(src)) - if err != nil { - return err - } - - var elements []Int4 - - if len(uta.Elements) > 0 { - elements = make([]Int4, len(uta.Elements)) - - for i, s := range uta.Elements { - var elem Int4 - var elemSrc []byte - if s != "NULL" || uta.Quoted[i] { - elemSrc = []byte(s) - } - err = elem.DecodeText(ci, elemSrc) - if err != nil { - return err - } - - elements[i] = elem - } - } - - *dst = Int4Array{Elements: elements, Dimensions: uta.Dimensions, Status: Present} - - return nil -} - -func (dst *Int4Array) DecodeBinary(ci *ConnInfo, src []byte) error { - if src == nil { - *dst = Int4Array{Status: Null} - return nil - } - - var arrayHeader ArrayHeader - rp, err := arrayHeader.DecodeBinary(ci, src) - if err != nil { - return err - } - - if len(arrayHeader.Dimensions) == 0 { - *dst = Int4Array{Dimensions: arrayHeader.Dimensions, Status: Present} - return nil - } - - elementCount := arrayHeader.Dimensions[0].Length - for _, d := range arrayHeader.Dimensions[1:] { - elementCount *= d.Length - } - - elements := make([]Int4, elementCount) - - for i := range elements { - elemLen := int(int32(binary.BigEndian.Uint32(src[rp:]))) - rp += 4 - var elemSrc []byte - if elemLen >= 0 { - elemSrc = src[rp : rp+elemLen] - rp += elemLen - } - err = elements[i].DecodeBinary(ci, elemSrc) - if err != nil { - return err - } - } - - *dst = Int4Array{Elements: elements, Dimensions: arrayHeader.Dimensions, Status: Present} - return nil -} - -func (src Int4Array) EncodeText(ci 
*ConnInfo, buf []byte) ([]byte, error) { - switch src.Status { - case Null: - return nil, nil - case Undefined: - return nil, errUndefined - } - - if len(src.Dimensions) == 0 { - return append(buf, '{', '}'), nil - } - - buf = EncodeTextArrayDimensions(buf, src.Dimensions) - - // dimElemCounts is the multiples of elements that each array lies on. For - // example, a single dimension array of length 4 would have a dimElemCounts of - // [4]. A multi-dimensional array of lengths [3,5,2] would have a - // dimElemCounts of [30,10,2]. This is used to simplify when to render a '{' - // or '}'. - dimElemCounts := make([]int, len(src.Dimensions)) - dimElemCounts[len(src.Dimensions)-1] = int(src.Dimensions[len(src.Dimensions)-1].Length) - for i := len(src.Dimensions) - 2; i > -1; i-- { - dimElemCounts[i] = int(src.Dimensions[i].Length) * dimElemCounts[i+1] - } - - inElemBuf := make([]byte, 0, 32) - for i, elem := range src.Elements { - if i > 0 { - buf = append(buf, ',') - } - - for _, dec := range dimElemCounts { - if i%dec == 0 { - buf = append(buf, '{') - } - } - - elemBuf, err := elem.EncodeText(ci, inElemBuf) - if err != nil { - return nil, err - } - if elemBuf == nil { - buf = append(buf, `NULL`...) - } else { - buf = append(buf, QuoteArrayElementIfNeeded(string(elemBuf))...) 
- } - - for _, dec := range dimElemCounts { - if (i+1)%dec == 0 { - buf = append(buf, '}') - } - } - } - - return buf, nil -} - -func (src Int4Array) EncodeBinary(ci *ConnInfo, buf []byte) ([]byte, error) { - switch src.Status { - case Null: - return nil, nil - case Undefined: - return nil, errUndefined - } - - arrayHeader := ArrayHeader{ - Dimensions: src.Dimensions, - } - - if dt, ok := ci.DataTypeForName("int4"); ok { - arrayHeader.ElementOID = int32(dt.OID) - } else { - return nil, fmt.Errorf("unable to find oid for type name %v", "int4") - } - - for i := range src.Elements { - if src.Elements[i].Status == Null { - arrayHeader.ContainsNull = true - break - } - } - - buf = arrayHeader.EncodeBinary(ci, buf) - - for i := range src.Elements { - sp := len(buf) - buf = pgio.AppendInt32(buf, -1) - - elemBuf, err := src.Elements[i].EncodeBinary(ci, buf) - if err != nil { - return nil, err - } - if elemBuf != nil { - buf = elemBuf - pgio.SetInt32(buf[sp:], int32(len(buf[sp:])-4)) - } - } - - return buf, nil -} - -// Scan implements the database/sql Scanner interface. -func (dst *Int4Array) Scan(src interface{}) error { - if src == nil { - return dst.DecodeText(nil, nil) - } - - switch src := src.(type) { - case string: - return dst.DecodeText(nil, []byte(src)) - case []byte: - srcCopy := make([]byte, len(src)) - copy(srcCopy, src) - return dst.DecodeText(nil, srcCopy) - } - - return fmt.Errorf("cannot scan %T", src) -} - -// Value implements the database/sql/driver Valuer interface. 
-func (src Int4Array) Value() (driver.Value, error) { - buf, err := src.EncodeText(nil, nil) - if err != nil { - return nil, err - } - if buf == nil { - return nil, nil - } - - return string(buf), nil -} diff --git a/vendor/github.com/jackc/pgtype/int4_multirange.go b/vendor/github.com/jackc/pgtype/int4_multirange.go deleted file mode 100644 index c3432ce6..00000000 --- a/vendor/github.com/jackc/pgtype/int4_multirange.go +++ /dev/null @@ -1,239 +0,0 @@ -package pgtype - -import ( - "database/sql/driver" - "encoding/binary" - "fmt" - - "github.com/jackc/pgio" -) - -type Int4multirange struct { - Ranges []Int4range - Status Status -} - -func (dst *Int4multirange) Set(src interface{}) error { - //untyped nil and typed nil interfaces are different - if src == nil { - *dst = Int4multirange{Status: Null} - return nil - } - - switch value := src.(type) { - case Int4multirange: - *dst = value - case *Int4multirange: - *dst = *value - case string: - return dst.DecodeText(nil, []byte(value)) - case []Int4range: - if value == nil { - *dst = Int4multirange{Status: Null} - } else if len(value) == 0 { - *dst = Int4multirange{Status: Present} - } else { - elements := make([]Int4range, len(value)) - for i := range value { - if err := elements[i].Set(value[i]); err != nil { - return err - } - } - *dst = Int4multirange{ - Ranges: elements, - Status: Present, - } - } - case []*Int4range: - if value == nil { - *dst = Int4multirange{Status: Null} - } else if len(value) == 0 { - *dst = Int4multirange{Status: Present} - } else { - elements := make([]Int4range, len(value)) - for i := range value { - if err := elements[i].Set(value[i]); err != nil { - return err - } - } - *dst = Int4multirange{ - Ranges: elements, - Status: Present, - } - } - default: - return fmt.Errorf("cannot convert %v to Int4multirange", src) - } - - return nil - -} - -func (dst Int4multirange) Get() interface{} { - switch dst.Status { - case Present: - return dst - case Null: - return nil - default: - return 
dst.Status - } -} - -func (src *Int4multirange) AssignTo(dst interface{}) error { - return fmt.Errorf("cannot assign %v to %T", src, dst) -} - -func (dst *Int4multirange) DecodeText(ci *ConnInfo, src []byte) error { - if src == nil { - *dst = Int4multirange{Status: Null} - return nil - } - - utmr, err := ParseUntypedTextMultirange(string(src)) - if err != nil { - return err - } - - var elements []Int4range - - if len(utmr.Elements) > 0 { - elements = make([]Int4range, len(utmr.Elements)) - - for i, s := range utmr.Elements { - var elem Int4range - - elemSrc := []byte(s) - - err = elem.DecodeText(ci, elemSrc) - if err != nil { - return err - } - - elements[i] = elem - } - } - - *dst = Int4multirange{Ranges: elements, Status: Present} - - return nil -} - -func (dst *Int4multirange) DecodeBinary(ci *ConnInfo, src []byte) error { - if src == nil { - *dst = Int4multirange{Status: Null} - return nil - } - - rp := 0 - - numElems := int(binary.BigEndian.Uint32(src[rp:])) - rp += 4 - - if numElems == 0 { - *dst = Int4multirange{Status: Present} - return nil - } - - elements := make([]Int4range, numElems) - - for i := range elements { - elemLen := int(int32(binary.BigEndian.Uint32(src[rp:]))) - rp += 4 - var elemSrc []byte - if elemLen >= 0 { - elemSrc = src[rp : rp+elemLen] - rp += elemLen - } - err := elements[i].DecodeBinary(ci, elemSrc) - if err != nil { - return err - } - } - - *dst = Int4multirange{Ranges: elements, Status: Present} - return nil -} - -func (src Int4multirange) EncodeText(ci *ConnInfo, buf []byte) ([]byte, error) { - switch src.Status { - case Null: - return nil, nil - case Undefined: - return nil, errUndefined - } - - buf = append(buf, '{') - - inElemBuf := make([]byte, 0, 32) - for i, elem := range src.Ranges { - if i > 0 { - buf = append(buf, ',') - } - - elemBuf, err := elem.EncodeText(ci, inElemBuf) - if err != nil { - return nil, err - } - if elemBuf == nil { - return nil, fmt.Errorf("multi-range does not allow null range") - } else { - buf = 
append(buf, string(elemBuf)...) - } - - } - - buf = append(buf, '}') - - return buf, nil -} - -func (src Int4multirange) EncodeBinary(ci *ConnInfo, buf []byte) ([]byte, error) { - switch src.Status { - case Null: - return nil, nil - case Undefined: - return nil, errUndefined - } - - buf = pgio.AppendInt32(buf, int32(len(src.Ranges))) - - for i := range src.Ranges { - sp := len(buf) - buf = pgio.AppendInt32(buf, -1) - - elemBuf, err := src.Ranges[i].EncodeBinary(ci, buf) - if err != nil { - return nil, err - } - if elemBuf != nil { - buf = elemBuf - pgio.SetInt32(buf[sp:], int32(len(buf[sp:])-4)) - } - } - - return buf, nil -} - -// Scan implements the database/sql Scanner interface. -func (dst *Int4multirange) Scan(src interface{}) error { - if src == nil { - return dst.DecodeText(nil, nil) - } - - switch src := src.(type) { - case string: - return dst.DecodeText(nil, []byte(src)) - case []byte: - srcCopy := make([]byte, len(src)) - copy(srcCopy, src) - return dst.DecodeText(nil, srcCopy) - } - - return fmt.Errorf("cannot scan %T", src) -} - -// Value implements the database/sql/driver Valuer interface. 
-func (src Int4multirange) Value() (driver.Value, error) { - return EncodeValueText(src) -} diff --git a/vendor/github.com/jackc/pgtype/int4range.go b/vendor/github.com/jackc/pgtype/int4range.go deleted file mode 100644 index c7f51fa6..00000000 --- a/vendor/github.com/jackc/pgtype/int4range.go +++ /dev/null @@ -1,267 +0,0 @@ -package pgtype - -import ( - "database/sql/driver" - "fmt" - - "github.com/jackc/pgio" -) - -type Int4range struct { - Lower Int4 - Upper Int4 - LowerType BoundType - UpperType BoundType - Status Status -} - -func (dst *Int4range) Set(src interface{}) error { - // untyped nil and typed nil interfaces are different - if src == nil { - *dst = Int4range{Status: Null} - return nil - } - - switch value := src.(type) { - case Int4range: - *dst = value - case *Int4range: - *dst = *value - case string: - return dst.DecodeText(nil, []byte(value)) - default: - return fmt.Errorf("cannot convert %v to Int4range", src) - } - - return nil -} - -func (dst Int4range) Get() interface{} { - switch dst.Status { - case Present: - return dst - case Null: - return nil - default: - return dst.Status - } -} - -func (src *Int4range) AssignTo(dst interface{}) error { - return fmt.Errorf("cannot assign %v to %T", src, dst) -} - -func (dst *Int4range) DecodeText(ci *ConnInfo, src []byte) error { - if src == nil { - *dst = Int4range{Status: Null} - return nil - } - - utr, err := ParseUntypedTextRange(string(src)) - if err != nil { - return err - } - - *dst = Int4range{Status: Present} - - dst.LowerType = utr.LowerType - dst.UpperType = utr.UpperType - - if dst.LowerType == Empty { - return nil - } - - if dst.LowerType == Inclusive || dst.LowerType == Exclusive { - if err := dst.Lower.DecodeText(ci, []byte(utr.Lower)); err != nil { - return err - } - } - - if dst.UpperType == Inclusive || dst.UpperType == Exclusive { - if err := dst.Upper.DecodeText(ci, []byte(utr.Upper)); err != nil { - return err - } - } - - return nil -} - -func (dst *Int4range) DecodeBinary(ci 
*ConnInfo, src []byte) error { - if src == nil { - *dst = Int4range{Status: Null} - return nil - } - - ubr, err := ParseUntypedBinaryRange(src) - if err != nil { - return err - } - - *dst = Int4range{Status: Present} - - dst.LowerType = ubr.LowerType - dst.UpperType = ubr.UpperType - - if dst.LowerType == Empty { - return nil - } - - if dst.LowerType == Inclusive || dst.LowerType == Exclusive { - if err := dst.Lower.DecodeBinary(ci, ubr.Lower); err != nil { - return err - } - } - - if dst.UpperType == Inclusive || dst.UpperType == Exclusive { - if err := dst.Upper.DecodeBinary(ci, ubr.Upper); err != nil { - return err - } - } - - return nil -} - -func (src Int4range) EncodeText(ci *ConnInfo, buf []byte) ([]byte, error) { - switch src.Status { - case Null: - return nil, nil - case Undefined: - return nil, errUndefined - } - - switch src.LowerType { - case Exclusive, Unbounded: - buf = append(buf, '(') - case Inclusive: - buf = append(buf, '[') - case Empty: - return append(buf, "empty"...), nil - default: - return nil, fmt.Errorf("unknown lower bound type %v", src.LowerType) - } - - var err error - - if src.LowerType != Unbounded { - buf, err = src.Lower.EncodeText(ci, buf) - if err != nil { - return nil, err - } else if buf == nil { - return nil, fmt.Errorf("Lower cannot be null unless LowerType is Unbounded") - } - } - - buf = append(buf, ',') - - if src.UpperType != Unbounded { - buf, err = src.Upper.EncodeText(ci, buf) - if err != nil { - return nil, err - } else if buf == nil { - return nil, fmt.Errorf("Upper cannot be null unless UpperType is Unbounded") - } - } - - switch src.UpperType { - case Exclusive, Unbounded: - buf = append(buf, ')') - case Inclusive: - buf = append(buf, ']') - default: - return nil, fmt.Errorf("unknown upper bound type %v", src.UpperType) - } - - return buf, nil -} - -func (src Int4range) EncodeBinary(ci *ConnInfo, buf []byte) ([]byte, error) { - switch src.Status { - case Null: - return nil, nil - case Undefined: - return nil, 
errUndefined - } - - var rangeType byte - switch src.LowerType { - case Inclusive: - rangeType |= lowerInclusiveMask - case Unbounded: - rangeType |= lowerUnboundedMask - case Exclusive: - case Empty: - return append(buf, emptyMask), nil - default: - return nil, fmt.Errorf("unknown LowerType: %v", src.LowerType) - } - - switch src.UpperType { - case Inclusive: - rangeType |= upperInclusiveMask - case Unbounded: - rangeType |= upperUnboundedMask - case Exclusive: - default: - return nil, fmt.Errorf("unknown UpperType: %v", src.UpperType) - } - - buf = append(buf, rangeType) - - var err error - - if src.LowerType != Unbounded { - sp := len(buf) - buf = pgio.AppendInt32(buf, -1) - - buf, err = src.Lower.EncodeBinary(ci, buf) - if err != nil { - return nil, err - } - if buf == nil { - return nil, fmt.Errorf("Lower cannot be null unless LowerType is Unbounded") - } - - pgio.SetInt32(buf[sp:], int32(len(buf[sp:])-4)) - } - - if src.UpperType != Unbounded { - sp := len(buf) - buf = pgio.AppendInt32(buf, -1) - - buf, err = src.Upper.EncodeBinary(ci, buf) - if err != nil { - return nil, err - } - if buf == nil { - return nil, fmt.Errorf("Upper cannot be null unless UpperType is Unbounded") - } - - pgio.SetInt32(buf[sp:], int32(len(buf[sp:])-4)) - } - - return buf, nil -} - -// Scan implements the database/sql Scanner interface. -func (dst *Int4range) Scan(src interface{}) error { - if src == nil { - *dst = Int4range{Status: Null} - return nil - } - - switch src := src.(type) { - case string: - return dst.DecodeText(nil, []byte(src)) - case []byte: - srcCopy := make([]byte, len(src)) - copy(srcCopy, src) - return dst.DecodeText(nil, srcCopy) - } - - return fmt.Errorf("cannot scan %T", src) -} - -// Value implements the database/sql/driver Valuer interface. 
-func (src Int4range) Value() (driver.Value, error) { - return EncodeValueText(src) -} diff --git a/vendor/github.com/jackc/pgtype/int8.go b/vendor/github.com/jackc/pgtype/int8.go deleted file mode 100644 index 0e089979..00000000 --- a/vendor/github.com/jackc/pgtype/int8.go +++ /dev/null @@ -1,298 +0,0 @@ -package pgtype - -import ( - "database/sql/driver" - "encoding/binary" - "encoding/json" - "fmt" - "math" - "strconv" - - "github.com/jackc/pgio" -) - -type Int8 struct { - Int int64 - Status Status -} - -func (dst *Int8) Set(src interface{}) error { - if src == nil { - *dst = Int8{Status: Null} - return nil - } - - if value, ok := src.(interface{ Get() interface{} }); ok { - value2 := value.Get() - if value2 != value { - return dst.Set(value2) - } - } - - switch value := src.(type) { - case int8: - *dst = Int8{Int: int64(value), Status: Present} - case uint8: - *dst = Int8{Int: int64(value), Status: Present} - case int16: - *dst = Int8{Int: int64(value), Status: Present} - case uint16: - *dst = Int8{Int: int64(value), Status: Present} - case int32: - *dst = Int8{Int: int64(value), Status: Present} - case uint32: - *dst = Int8{Int: int64(value), Status: Present} - case int64: - *dst = Int8{Int: int64(value), Status: Present} - case uint64: - if value > math.MaxInt64 { - return fmt.Errorf("%d is greater than maximum value for Int8", value) - } - *dst = Int8{Int: int64(value), Status: Present} - case int: - if int64(value) < math.MinInt64 { - return fmt.Errorf("%d is greater than maximum value for Int8", value) - } - if int64(value) > math.MaxInt64 { - return fmt.Errorf("%d is greater than maximum value for Int8", value) - } - *dst = Int8{Int: int64(value), Status: Present} - case uint: - if uint64(value) > math.MaxInt64 { - return fmt.Errorf("%d is greater than maximum value for Int8", value) - } - *dst = Int8{Int: int64(value), Status: Present} - case string: - num, err := strconv.ParseInt(value, 10, 64) - if err != nil { - return err - } - *dst = Int8{Int: num, 
Status: Present} - case float32: - if value > math.MaxInt64 { - return fmt.Errorf("%f is greater than maximum value for Int8", value) - } - *dst = Int8{Int: int64(value), Status: Present} - case float64: - if value > math.MaxInt64 { - return fmt.Errorf("%f is greater than maximum value for Int8", value) - } - *dst = Int8{Int: int64(value), Status: Present} - case *int8: - if value == nil { - *dst = Int8{Status: Null} - } else { - return dst.Set(*value) - } - case *uint8: - if value == nil { - *dst = Int8{Status: Null} - } else { - return dst.Set(*value) - } - case *int16: - if value == nil { - *dst = Int8{Status: Null} - } else { - return dst.Set(*value) - } - case *uint16: - if value == nil { - *dst = Int8{Status: Null} - } else { - return dst.Set(*value) - } - case *int32: - if value == nil { - *dst = Int8{Status: Null} - } else { - return dst.Set(*value) - } - case *uint32: - if value == nil { - *dst = Int8{Status: Null} - } else { - return dst.Set(*value) - } - case *int64: - if value == nil { - *dst = Int8{Status: Null} - } else { - return dst.Set(*value) - } - case *uint64: - if value == nil { - *dst = Int8{Status: Null} - } else { - return dst.Set(*value) - } - case *int: - if value == nil { - *dst = Int8{Status: Null} - } else { - return dst.Set(*value) - } - case *uint: - if value == nil { - *dst = Int8{Status: Null} - } else { - return dst.Set(*value) - } - case *string: - if value == nil { - *dst = Int8{Status: Null} - } else { - return dst.Set(*value) - } - case *float32: - if value == nil { - *dst = Int8{Status: Null} - } else { - return dst.Set(*value) - } - case *float64: - if value == nil { - *dst = Int8{Status: Null} - } else { - return dst.Set(*value) - } - default: - if originalSrc, ok := underlyingNumberType(src); ok { - return dst.Set(originalSrc) - } - return fmt.Errorf("cannot convert %v to Int8", value) - } - - return nil -} - -func (dst Int8) Get() interface{} { - switch dst.Status { - case Present: - return dst.Int - case Null: - return 
nil - default: - return dst.Status - } -} - -func (src *Int8) AssignTo(dst interface{}) error { - return int64AssignTo(int64(src.Int), src.Status, dst) -} - -func (dst *Int8) DecodeText(ci *ConnInfo, src []byte) error { - if src == nil { - *dst = Int8{Status: Null} - return nil - } - - n, err := strconv.ParseInt(string(src), 10, 64) - if err != nil { - return err - } - - *dst = Int8{Int: n, Status: Present} - return nil -} - -func (dst *Int8) DecodeBinary(ci *ConnInfo, src []byte) error { - if src == nil { - *dst = Int8{Status: Null} - return nil - } - - if len(src) != 8 { - return fmt.Errorf("invalid length for int8: %v", len(src)) - } - - n := int64(binary.BigEndian.Uint64(src)) - - *dst = Int8{Int: n, Status: Present} - return nil -} - -func (src Int8) EncodeText(ci *ConnInfo, buf []byte) ([]byte, error) { - switch src.Status { - case Null: - return nil, nil - case Undefined: - return nil, errUndefined - } - - return append(buf, strconv.FormatInt(src.Int, 10)...), nil -} - -func (src Int8) EncodeBinary(ci *ConnInfo, buf []byte) ([]byte, error) { - switch src.Status { - case Null: - return nil, nil - case Undefined: - return nil, errUndefined - } - - return pgio.AppendInt64(buf, src.Int), nil -} - -// Scan implements the database/sql Scanner interface. -func (dst *Int8) Scan(src interface{}) error { - if src == nil { - *dst = Int8{Status: Null} - return nil - } - - switch src := src.(type) { - case int64: - *dst = Int8{Int: src, Status: Present} - return nil - case string: - return dst.DecodeText(nil, []byte(src)) - case []byte: - srcCopy := make([]byte, len(src)) - copy(srcCopy, src) - return dst.DecodeText(nil, srcCopy) - } - - return fmt.Errorf("cannot scan %T", src) -} - -// Value implements the database/sql/driver Valuer interface. 
-func (src Int8) Value() (driver.Value, error) { - switch src.Status { - case Present: - return int64(src.Int), nil - case Null: - return nil, nil - default: - return nil, errUndefined - } -} - -func (src Int8) MarshalJSON() ([]byte, error) { - switch src.Status { - case Present: - return []byte(strconv.FormatInt(src.Int, 10)), nil - case Null: - return []byte("null"), nil - case Undefined: - return nil, errUndefined - } - - return nil, errBadStatus -} - -func (dst *Int8) UnmarshalJSON(b []byte) error { - var n *int64 - err := json.Unmarshal(b, &n) - if err != nil { - return err - } - - if n == nil { - *dst = Int8{Status: Null} - } else { - *dst = Int8{Int: *n, Status: Present} - } - - return nil -} diff --git a/vendor/github.com/jackc/pgtype/int8_array.go b/vendor/github.com/jackc/pgtype/int8_array.go deleted file mode 100644 index e405b326..00000000 --- a/vendor/github.com/jackc/pgtype/int8_array.go +++ /dev/null @@ -1,909 +0,0 @@ -// Code generated by erb. DO NOT EDIT. - -package pgtype - -import ( - "database/sql/driver" - "encoding/binary" - "fmt" - "reflect" - - "github.com/jackc/pgio" -) - -type Int8Array struct { - Elements []Int8 - Dimensions []ArrayDimension - Status Status -} - -func (dst *Int8Array) Set(src interface{}) error { - // untyped nil and typed nil interfaces are different - if src == nil { - *dst = Int8Array{Status: Null} - return nil - } - - if value, ok := src.(interface{ Get() interface{} }); ok { - value2 := value.Get() - if value2 != value { - return dst.Set(value2) - } - } - - // Attempt to match to select common types: - switch value := src.(type) { - - case []int16: - if value == nil { - *dst = Int8Array{Status: Null} - } else if len(value) == 0 { - *dst = Int8Array{Status: Present} - } else { - elements := make([]Int8, len(value)) - for i := range value { - if err := elements[i].Set(value[i]); err != nil { - return err - } - } - *dst = Int8Array{ - Elements: elements, - Dimensions: []ArrayDimension{{Length: int32(len(elements)), 
LowerBound: 1}}, - Status: Present, - } - } - - case []*int16: - if value == nil { - *dst = Int8Array{Status: Null} - } else if len(value) == 0 { - *dst = Int8Array{Status: Present} - } else { - elements := make([]Int8, len(value)) - for i := range value { - if err := elements[i].Set(value[i]); err != nil { - return err - } - } - *dst = Int8Array{ - Elements: elements, - Dimensions: []ArrayDimension{{Length: int32(len(elements)), LowerBound: 1}}, - Status: Present, - } - } - - case []uint16: - if value == nil { - *dst = Int8Array{Status: Null} - } else if len(value) == 0 { - *dst = Int8Array{Status: Present} - } else { - elements := make([]Int8, len(value)) - for i := range value { - if err := elements[i].Set(value[i]); err != nil { - return err - } - } - *dst = Int8Array{ - Elements: elements, - Dimensions: []ArrayDimension{{Length: int32(len(elements)), LowerBound: 1}}, - Status: Present, - } - } - - case []*uint16: - if value == nil { - *dst = Int8Array{Status: Null} - } else if len(value) == 0 { - *dst = Int8Array{Status: Present} - } else { - elements := make([]Int8, len(value)) - for i := range value { - if err := elements[i].Set(value[i]); err != nil { - return err - } - } - *dst = Int8Array{ - Elements: elements, - Dimensions: []ArrayDimension{{Length: int32(len(elements)), LowerBound: 1}}, - Status: Present, - } - } - - case []int32: - if value == nil { - *dst = Int8Array{Status: Null} - } else if len(value) == 0 { - *dst = Int8Array{Status: Present} - } else { - elements := make([]Int8, len(value)) - for i := range value { - if err := elements[i].Set(value[i]); err != nil { - return err - } - } - *dst = Int8Array{ - Elements: elements, - Dimensions: []ArrayDimension{{Length: int32(len(elements)), LowerBound: 1}}, - Status: Present, - } - } - - case []*int32: - if value == nil { - *dst = Int8Array{Status: Null} - } else if len(value) == 0 { - *dst = Int8Array{Status: Present} - } else { - elements := make([]Int8, len(value)) - for i := range value { - if 
err := elements[i].Set(value[i]); err != nil { - return err - } - } - *dst = Int8Array{ - Elements: elements, - Dimensions: []ArrayDimension{{Length: int32(len(elements)), LowerBound: 1}}, - Status: Present, - } - } - - case []uint32: - if value == nil { - *dst = Int8Array{Status: Null} - } else if len(value) == 0 { - *dst = Int8Array{Status: Present} - } else { - elements := make([]Int8, len(value)) - for i := range value { - if err := elements[i].Set(value[i]); err != nil { - return err - } - } - *dst = Int8Array{ - Elements: elements, - Dimensions: []ArrayDimension{{Length: int32(len(elements)), LowerBound: 1}}, - Status: Present, - } - } - - case []*uint32: - if value == nil { - *dst = Int8Array{Status: Null} - } else if len(value) == 0 { - *dst = Int8Array{Status: Present} - } else { - elements := make([]Int8, len(value)) - for i := range value { - if err := elements[i].Set(value[i]); err != nil { - return err - } - } - *dst = Int8Array{ - Elements: elements, - Dimensions: []ArrayDimension{{Length: int32(len(elements)), LowerBound: 1}}, - Status: Present, - } - } - - case []int64: - if value == nil { - *dst = Int8Array{Status: Null} - } else if len(value) == 0 { - *dst = Int8Array{Status: Present} - } else { - elements := make([]Int8, len(value)) - for i := range value { - if err := elements[i].Set(value[i]); err != nil { - return err - } - } - *dst = Int8Array{ - Elements: elements, - Dimensions: []ArrayDimension{{Length: int32(len(elements)), LowerBound: 1}}, - Status: Present, - } - } - - case []*int64: - if value == nil { - *dst = Int8Array{Status: Null} - } else if len(value) == 0 { - *dst = Int8Array{Status: Present} - } else { - elements := make([]Int8, len(value)) - for i := range value { - if err := elements[i].Set(value[i]); err != nil { - return err - } - } - *dst = Int8Array{ - Elements: elements, - Dimensions: []ArrayDimension{{Length: int32(len(elements)), LowerBound: 1}}, - Status: Present, - } - } - - case []uint64: - if value == nil { - *dst = 
Int8Array{Status: Null} - } else if len(value) == 0 { - *dst = Int8Array{Status: Present} - } else { - elements := make([]Int8, len(value)) - for i := range value { - if err := elements[i].Set(value[i]); err != nil { - return err - } - } - *dst = Int8Array{ - Elements: elements, - Dimensions: []ArrayDimension{{Length: int32(len(elements)), LowerBound: 1}}, - Status: Present, - } - } - - case []*uint64: - if value == nil { - *dst = Int8Array{Status: Null} - } else if len(value) == 0 { - *dst = Int8Array{Status: Present} - } else { - elements := make([]Int8, len(value)) - for i := range value { - if err := elements[i].Set(value[i]); err != nil { - return err - } - } - *dst = Int8Array{ - Elements: elements, - Dimensions: []ArrayDimension{{Length: int32(len(elements)), LowerBound: 1}}, - Status: Present, - } - } - - case []int: - if value == nil { - *dst = Int8Array{Status: Null} - } else if len(value) == 0 { - *dst = Int8Array{Status: Present} - } else { - elements := make([]Int8, len(value)) - for i := range value { - if err := elements[i].Set(value[i]); err != nil { - return err - } - } - *dst = Int8Array{ - Elements: elements, - Dimensions: []ArrayDimension{{Length: int32(len(elements)), LowerBound: 1}}, - Status: Present, - } - } - - case []*int: - if value == nil { - *dst = Int8Array{Status: Null} - } else if len(value) == 0 { - *dst = Int8Array{Status: Present} - } else { - elements := make([]Int8, len(value)) - for i := range value { - if err := elements[i].Set(value[i]); err != nil { - return err - } - } - *dst = Int8Array{ - Elements: elements, - Dimensions: []ArrayDimension{{Length: int32(len(elements)), LowerBound: 1}}, - Status: Present, - } - } - - case []uint: - if value == nil { - *dst = Int8Array{Status: Null} - } else if len(value) == 0 { - *dst = Int8Array{Status: Present} - } else { - elements := make([]Int8, len(value)) - for i := range value { - if err := elements[i].Set(value[i]); err != nil { - return err - } - } - *dst = Int8Array{ - Elements: 
elements, - Dimensions: []ArrayDimension{{Length: int32(len(elements)), LowerBound: 1}}, - Status: Present, - } - } - - case []*uint: - if value == nil { - *dst = Int8Array{Status: Null} - } else if len(value) == 0 { - *dst = Int8Array{Status: Present} - } else { - elements := make([]Int8, len(value)) - for i := range value { - if err := elements[i].Set(value[i]); err != nil { - return err - } - } - *dst = Int8Array{ - Elements: elements, - Dimensions: []ArrayDimension{{Length: int32(len(elements)), LowerBound: 1}}, - Status: Present, - } - } - - case []Int8: - if value == nil { - *dst = Int8Array{Status: Null} - } else if len(value) == 0 { - *dst = Int8Array{Status: Present} - } else { - *dst = Int8Array{ - Elements: value, - Dimensions: []ArrayDimension{{Length: int32(len(value)), LowerBound: 1}}, - Status: Present, - } - } - default: - // Fallback to reflection if an optimised match was not found. - // The reflection is necessary for arrays and multidimensional slices, - // but it comes with a 20-50% performance penalty for large arrays/slices - reflectedValue := reflect.ValueOf(src) - if !reflectedValue.IsValid() || reflectedValue.IsZero() { - *dst = Int8Array{Status: Null} - return nil - } - - dimensions, elementsLength, ok := findDimensionsFromValue(reflectedValue, nil, 0) - if !ok { - return fmt.Errorf("cannot find dimensions of %v for Int8Array", src) - } - if elementsLength == 0 { - *dst = Int8Array{Status: Present} - return nil - } - if len(dimensions) == 0 { - if originalSrc, ok := underlyingSliceType(src); ok { - return dst.Set(originalSrc) - } - return fmt.Errorf("cannot convert %v to Int8Array", src) - } - - *dst = Int8Array{ - Elements: make([]Int8, elementsLength), - Dimensions: dimensions, - Status: Present, - } - elementCount, err := dst.setRecursive(reflectedValue, 0, 0) - if err != nil { - // Maybe the target was one dimension too far, try again: - if len(dst.Dimensions) > 1 { - dst.Dimensions = dst.Dimensions[:len(dst.Dimensions)-1] - 
elementsLength = 0 - for _, dim := range dst.Dimensions { - if elementsLength == 0 { - elementsLength = int(dim.Length) - } else { - elementsLength *= int(dim.Length) - } - } - dst.Elements = make([]Int8, elementsLength) - elementCount, err = dst.setRecursive(reflectedValue, 0, 0) - if err != nil { - return err - } - } else { - return err - } - } - if elementCount != len(dst.Elements) { - return fmt.Errorf("cannot convert %v to Int8Array, expected %d dst.Elements, but got %d instead", src, len(dst.Elements), elementCount) - } - } - - return nil -} - -func (dst *Int8Array) setRecursive(value reflect.Value, index, dimension int) (int, error) { - switch value.Kind() { - case reflect.Array: - fallthrough - case reflect.Slice: - if len(dst.Dimensions) == dimension { - break - } - - valueLen := value.Len() - if int32(valueLen) != dst.Dimensions[dimension].Length { - return 0, fmt.Errorf("multidimensional arrays must have array expressions with matching dimensions") - } - for i := 0; i < valueLen; i++ { - var err error - index, err = dst.setRecursive(value.Index(i), index, dimension+1) - if err != nil { - return 0, err - } - } - - return index, nil - } - if !value.CanInterface() { - return 0, fmt.Errorf("cannot convert all values to Int8Array") - } - if err := dst.Elements[index].Set(value.Interface()); err != nil { - return 0, fmt.Errorf("%v in Int8Array", err) - } - index++ - - return index, nil -} - -func (dst Int8Array) Get() interface{} { - switch dst.Status { - case Present: - return dst - case Null: - return nil - default: - return dst.Status - } -} - -func (src *Int8Array) AssignTo(dst interface{}) error { - switch src.Status { - case Present: - if len(src.Dimensions) <= 1 { - // Attempt to match to select common types: - switch v := dst.(type) { - - case *[]int16: - *v = make([]int16, len(src.Elements)) - for i := range src.Elements { - if err := src.Elements[i].AssignTo(&((*v)[i])); err != nil { - return err - } - } - return nil - - case *[]*int16: - *v = 
make([]*int16, len(src.Elements)) - for i := range src.Elements { - if err := src.Elements[i].AssignTo(&((*v)[i])); err != nil { - return err - } - } - return nil - - case *[]uint16: - *v = make([]uint16, len(src.Elements)) - for i := range src.Elements { - if err := src.Elements[i].AssignTo(&((*v)[i])); err != nil { - return err - } - } - return nil - - case *[]*uint16: - *v = make([]*uint16, len(src.Elements)) - for i := range src.Elements { - if err := src.Elements[i].AssignTo(&((*v)[i])); err != nil { - return err - } - } - return nil - - case *[]int32: - *v = make([]int32, len(src.Elements)) - for i := range src.Elements { - if err := src.Elements[i].AssignTo(&((*v)[i])); err != nil { - return err - } - } - return nil - - case *[]*int32: - *v = make([]*int32, len(src.Elements)) - for i := range src.Elements { - if err := src.Elements[i].AssignTo(&((*v)[i])); err != nil { - return err - } - } - return nil - - case *[]uint32: - *v = make([]uint32, len(src.Elements)) - for i := range src.Elements { - if err := src.Elements[i].AssignTo(&((*v)[i])); err != nil { - return err - } - } - return nil - - case *[]*uint32: - *v = make([]*uint32, len(src.Elements)) - for i := range src.Elements { - if err := src.Elements[i].AssignTo(&((*v)[i])); err != nil { - return err - } - } - return nil - - case *[]int64: - *v = make([]int64, len(src.Elements)) - for i := range src.Elements { - if err := src.Elements[i].AssignTo(&((*v)[i])); err != nil { - return err - } - } - return nil - - case *[]*int64: - *v = make([]*int64, len(src.Elements)) - for i := range src.Elements { - if err := src.Elements[i].AssignTo(&((*v)[i])); err != nil { - return err - } - } - return nil - - case *[]uint64: - *v = make([]uint64, len(src.Elements)) - for i := range src.Elements { - if err := src.Elements[i].AssignTo(&((*v)[i])); err != nil { - return err - } - } - return nil - - case *[]*uint64: - *v = make([]*uint64, len(src.Elements)) - for i := range src.Elements { - if err := 
src.Elements[i].AssignTo(&((*v)[i])); err != nil { - return err - } - } - return nil - - case *[]int: - *v = make([]int, len(src.Elements)) - for i := range src.Elements { - if err := src.Elements[i].AssignTo(&((*v)[i])); err != nil { - return err - } - } - return nil - - case *[]*int: - *v = make([]*int, len(src.Elements)) - for i := range src.Elements { - if err := src.Elements[i].AssignTo(&((*v)[i])); err != nil { - return err - } - } - return nil - - case *[]uint: - *v = make([]uint, len(src.Elements)) - for i := range src.Elements { - if err := src.Elements[i].AssignTo(&((*v)[i])); err != nil { - return err - } - } - return nil - - case *[]*uint: - *v = make([]*uint, len(src.Elements)) - for i := range src.Elements { - if err := src.Elements[i].AssignTo(&((*v)[i])); err != nil { - return err - } - } - return nil - - } - } - - // Try to convert to something AssignTo can use directly. - if nextDst, retry := GetAssignToDstType(dst); retry { - return src.AssignTo(nextDst) - } - - // Fallback to reflection if an optimised match was not found. 
- // The reflection is necessary for arrays and multidimensional slices, - // but it comes with a 20-50% performance penalty for large arrays/slices - value := reflect.ValueOf(dst) - if value.Kind() == reflect.Ptr { - value = value.Elem() - } - - switch value.Kind() { - case reflect.Array, reflect.Slice: - default: - return fmt.Errorf("cannot assign %T to %T", src, dst) - } - - if len(src.Elements) == 0 { - if value.Kind() == reflect.Slice { - value.Set(reflect.MakeSlice(value.Type(), 0, 0)) - return nil - } - } - - elementCount, err := src.assignToRecursive(value, 0, 0) - if err != nil { - return err - } - if elementCount != len(src.Elements) { - return fmt.Errorf("cannot assign %v, needed to assign %d elements, but only assigned %d", dst, len(src.Elements), elementCount) - } - - return nil - case Null: - return NullAssignTo(dst) - } - - return fmt.Errorf("cannot decode %#v into %T", src, dst) -} - -func (src *Int8Array) assignToRecursive(value reflect.Value, index, dimension int) (int, error) { - switch kind := value.Kind(); kind { - case reflect.Array: - fallthrough - case reflect.Slice: - if len(src.Dimensions) == dimension { - break - } - - length := int(src.Dimensions[dimension].Length) - if reflect.Array == kind { - typ := value.Type() - if typ.Len() != length { - return 0, fmt.Errorf("expected size %d array, but %s has size %d array", length, typ, typ.Len()) - } - value.Set(reflect.New(typ).Elem()) - } else { - value.Set(reflect.MakeSlice(value.Type(), length, length)) - } - - var err error - for i := 0; i < length; i++ { - index, err = src.assignToRecursive(value.Index(i), index, dimension+1) - if err != nil { - return 0, err - } - } - - return index, nil - } - if len(src.Dimensions) != dimension { - return 0, fmt.Errorf("incorrect dimensions, expected %d, found %d", len(src.Dimensions), dimension) - } - if !value.CanAddr() { - return 0, fmt.Errorf("cannot assign all values from Int8Array") - } - addr := value.Addr() - if !addr.CanInterface() { - return 0, 
fmt.Errorf("cannot assign all values from Int8Array") - } - if err := src.Elements[index].AssignTo(addr.Interface()); err != nil { - return 0, err - } - index++ - return index, nil -} - -func (dst *Int8Array) DecodeText(ci *ConnInfo, src []byte) error { - if src == nil { - *dst = Int8Array{Status: Null} - return nil - } - - uta, err := ParseUntypedTextArray(string(src)) - if err != nil { - return err - } - - var elements []Int8 - - if len(uta.Elements) > 0 { - elements = make([]Int8, len(uta.Elements)) - - for i, s := range uta.Elements { - var elem Int8 - var elemSrc []byte - if s != "NULL" || uta.Quoted[i] { - elemSrc = []byte(s) - } - err = elem.DecodeText(ci, elemSrc) - if err != nil { - return err - } - - elements[i] = elem - } - } - - *dst = Int8Array{Elements: elements, Dimensions: uta.Dimensions, Status: Present} - - return nil -} - -func (dst *Int8Array) DecodeBinary(ci *ConnInfo, src []byte) error { - if src == nil { - *dst = Int8Array{Status: Null} - return nil - } - - var arrayHeader ArrayHeader - rp, err := arrayHeader.DecodeBinary(ci, src) - if err != nil { - return err - } - - if len(arrayHeader.Dimensions) == 0 { - *dst = Int8Array{Dimensions: arrayHeader.Dimensions, Status: Present} - return nil - } - - elementCount := arrayHeader.Dimensions[0].Length - for _, d := range arrayHeader.Dimensions[1:] { - elementCount *= d.Length - } - - elements := make([]Int8, elementCount) - - for i := range elements { - elemLen := int(int32(binary.BigEndian.Uint32(src[rp:]))) - rp += 4 - var elemSrc []byte - if elemLen >= 0 { - elemSrc = src[rp : rp+elemLen] - rp += elemLen - } - err = elements[i].DecodeBinary(ci, elemSrc) - if err != nil { - return err - } - } - - *dst = Int8Array{Elements: elements, Dimensions: arrayHeader.Dimensions, Status: Present} - return nil -} - -func (src Int8Array) EncodeText(ci *ConnInfo, buf []byte) ([]byte, error) { - switch src.Status { - case Null: - return nil, nil - case Undefined: - return nil, errUndefined - } - - if 
len(src.Dimensions) == 0 { - return append(buf, '{', '}'), nil - } - - buf = EncodeTextArrayDimensions(buf, src.Dimensions) - - // dimElemCounts is the multiples of elements that each array lies on. For - // example, a single dimension array of length 4 would have a dimElemCounts of - // [4]. A multi-dimensional array of lengths [3,5,2] would have a - // dimElemCounts of [30,10,2]. This is used to simplify when to render a '{' - // or '}'. - dimElemCounts := make([]int, len(src.Dimensions)) - dimElemCounts[len(src.Dimensions)-1] = int(src.Dimensions[len(src.Dimensions)-1].Length) - for i := len(src.Dimensions) - 2; i > -1; i-- { - dimElemCounts[i] = int(src.Dimensions[i].Length) * dimElemCounts[i+1] - } - - inElemBuf := make([]byte, 0, 32) - for i, elem := range src.Elements { - if i > 0 { - buf = append(buf, ',') - } - - for _, dec := range dimElemCounts { - if i%dec == 0 { - buf = append(buf, '{') - } - } - - elemBuf, err := elem.EncodeText(ci, inElemBuf) - if err != nil { - return nil, err - } - if elemBuf == nil { - buf = append(buf, `NULL`...) - } else { - buf = append(buf, QuoteArrayElementIfNeeded(string(elemBuf))...) 
- } - - for _, dec := range dimElemCounts { - if (i+1)%dec == 0 { - buf = append(buf, '}') - } - } - } - - return buf, nil -} - -func (src Int8Array) EncodeBinary(ci *ConnInfo, buf []byte) ([]byte, error) { - switch src.Status { - case Null: - return nil, nil - case Undefined: - return nil, errUndefined - } - - arrayHeader := ArrayHeader{ - Dimensions: src.Dimensions, - } - - if dt, ok := ci.DataTypeForName("int8"); ok { - arrayHeader.ElementOID = int32(dt.OID) - } else { - return nil, fmt.Errorf("unable to find oid for type name %v", "int8") - } - - for i := range src.Elements { - if src.Elements[i].Status == Null { - arrayHeader.ContainsNull = true - break - } - } - - buf = arrayHeader.EncodeBinary(ci, buf) - - for i := range src.Elements { - sp := len(buf) - buf = pgio.AppendInt32(buf, -1) - - elemBuf, err := src.Elements[i].EncodeBinary(ci, buf) - if err != nil { - return nil, err - } - if elemBuf != nil { - buf = elemBuf - pgio.SetInt32(buf[sp:], int32(len(buf[sp:])-4)) - } - } - - return buf, nil -} - -// Scan implements the database/sql Scanner interface. -func (dst *Int8Array) Scan(src interface{}) error { - if src == nil { - return dst.DecodeText(nil, nil) - } - - switch src := src.(type) { - case string: - return dst.DecodeText(nil, []byte(src)) - case []byte: - srcCopy := make([]byte, len(src)) - copy(srcCopy, src) - return dst.DecodeText(nil, srcCopy) - } - - return fmt.Errorf("cannot scan %T", src) -} - -// Value implements the database/sql/driver Valuer interface. 
-func (src Int8Array) Value() (driver.Value, error) { - buf, err := src.EncodeText(nil, nil) - if err != nil { - return nil, err - } - if buf == nil { - return nil, nil - } - - return string(buf), nil -} diff --git a/vendor/github.com/jackc/pgtype/int8_multirange.go b/vendor/github.com/jackc/pgtype/int8_multirange.go deleted file mode 100644 index e0976427..00000000 --- a/vendor/github.com/jackc/pgtype/int8_multirange.go +++ /dev/null @@ -1,239 +0,0 @@ -package pgtype - -import ( - "database/sql/driver" - "encoding/binary" - "fmt" - - "github.com/jackc/pgio" -) - -type Int8multirange struct { - Ranges []Int8range - Status Status -} - -func (dst *Int8multirange) Set(src interface{}) error { - //untyped nil and typed nil interfaces are different - if src == nil { - *dst = Int8multirange{Status: Null} - return nil - } - - switch value := src.(type) { - case Int8multirange: - *dst = value - case *Int8multirange: - *dst = *value - case string: - return dst.DecodeText(nil, []byte(value)) - case []Int8range: - if value == nil { - *dst = Int8multirange{Status: Null} - } else if len(value) == 0 { - *dst = Int8multirange{Status: Present} - } else { - elements := make([]Int8range, len(value)) - for i := range value { - if err := elements[i].Set(value[i]); err != nil { - return err - } - } - *dst = Int8multirange{ - Ranges: elements, - Status: Present, - } - } - case []*Int8range: - if value == nil { - *dst = Int8multirange{Status: Null} - } else if len(value) == 0 { - *dst = Int8multirange{Status: Present} - } else { - elements := make([]Int8range, len(value)) - for i := range value { - if err := elements[i].Set(value[i]); err != nil { - return err - } - } - *dst = Int8multirange{ - Ranges: elements, - Status: Present, - } - } - default: - return fmt.Errorf("cannot convert %v to Int8multirange", src) - } - - return nil - -} - -func (dst Int8multirange) Get() interface{} { - switch dst.Status { - case Present: - return dst - case Null: - return nil - default: - return 
dst.Status - } -} - -func (src *Int8multirange) AssignTo(dst interface{}) error { - return fmt.Errorf("cannot assign %v to %T", src, dst) -} - -func (dst *Int8multirange) DecodeText(ci *ConnInfo, src []byte) error { - if src == nil { - *dst = Int8multirange{Status: Null} - return nil - } - - utmr, err := ParseUntypedTextMultirange(string(src)) - if err != nil { - return err - } - - var elements []Int8range - - if len(utmr.Elements) > 0 { - elements = make([]Int8range, len(utmr.Elements)) - - for i, s := range utmr.Elements { - var elem Int8range - - elemSrc := []byte(s) - - err = elem.DecodeText(ci, elemSrc) - if err != nil { - return err - } - - elements[i] = elem - } - } - - *dst = Int8multirange{Ranges: elements, Status: Present} - - return nil -} - -func (dst *Int8multirange) DecodeBinary(ci *ConnInfo, src []byte) error { - if src == nil { - *dst = Int8multirange{Status: Null} - return nil - } - - rp := 0 - - numElems := int(binary.BigEndian.Uint32(src[rp:])) - rp += 4 - - if numElems == 0 { - *dst = Int8multirange{Status: Present} - return nil - } - - elements := make([]Int8range, numElems) - - for i := range elements { - elemLen := int(int32(binary.BigEndian.Uint32(src[rp:]))) - rp += 4 - var elemSrc []byte - if elemLen >= 0 { - elemSrc = src[rp : rp+elemLen] - rp += elemLen - } - err := elements[i].DecodeBinary(ci, elemSrc) - if err != nil { - return err - } - } - - *dst = Int8multirange{Ranges: elements, Status: Present} - return nil -} - -func (src Int8multirange) EncodeText(ci *ConnInfo, buf []byte) ([]byte, error) { - switch src.Status { - case Null: - return nil, nil - case Undefined: - return nil, errUndefined - } - - buf = append(buf, '{') - - inElemBuf := make([]byte, 0, 32) - for i, elem := range src.Ranges { - if i > 0 { - buf = append(buf, ',') - } - - elemBuf, err := elem.EncodeText(ci, inElemBuf) - if err != nil { - return nil, err - } - if elemBuf == nil { - return nil, fmt.Errorf("multi-range does not allow null range") - } else { - buf = 
append(buf, string(elemBuf)...) - } - - } - - buf = append(buf, '}') - - return buf, nil -} - -func (src Int8multirange) EncodeBinary(ci *ConnInfo, buf []byte) ([]byte, error) { - switch src.Status { - case Null: - return nil, nil - case Undefined: - return nil, errUndefined - } - - buf = pgio.AppendInt32(buf, int32(len(src.Ranges))) - - for i := range src.Ranges { - sp := len(buf) - buf = pgio.AppendInt32(buf, -1) - - elemBuf, err := src.Ranges[i].EncodeBinary(ci, buf) - if err != nil { - return nil, err - } - if elemBuf != nil { - buf = elemBuf - pgio.SetInt32(buf[sp:], int32(len(buf[sp:])-4)) - } - } - - return buf, nil -} - -// Scan implements the database/sql Scanner interface. -func (dst *Int8multirange) Scan(src interface{}) error { - if src == nil { - return dst.DecodeText(nil, nil) - } - - switch src := src.(type) { - case string: - return dst.DecodeText(nil, []byte(src)) - case []byte: - srcCopy := make([]byte, len(src)) - copy(srcCopy, src) - return dst.DecodeText(nil, srcCopy) - } - - return fmt.Errorf("cannot scan %T", src) -} - -// Value implements the database/sql/driver Valuer interface. 
-func (src Int8multirange) Value() (driver.Value, error) { - return EncodeValueText(src) -} diff --git a/vendor/github.com/jackc/pgtype/int8range.go b/vendor/github.com/jackc/pgtype/int8range.go deleted file mode 100644 index 71369373..00000000 --- a/vendor/github.com/jackc/pgtype/int8range.go +++ /dev/null @@ -1,267 +0,0 @@ -package pgtype - -import ( - "database/sql/driver" - "fmt" - - "github.com/jackc/pgio" -) - -type Int8range struct { - Lower Int8 - Upper Int8 - LowerType BoundType - UpperType BoundType - Status Status -} - -func (dst *Int8range) Set(src interface{}) error { - // untyped nil and typed nil interfaces are different - if src == nil { - *dst = Int8range{Status: Null} - return nil - } - - switch value := src.(type) { - case Int8range: - *dst = value - case *Int8range: - *dst = *value - case string: - return dst.DecodeText(nil, []byte(value)) - default: - return fmt.Errorf("cannot convert %v to Int8range", src) - } - - return nil -} - -func (dst Int8range) Get() interface{} { - switch dst.Status { - case Present: - return dst - case Null: - return nil - default: - return dst.Status - } -} - -func (src *Int8range) AssignTo(dst interface{}) error { - return fmt.Errorf("cannot assign %v to %T", src, dst) -} - -func (dst *Int8range) DecodeText(ci *ConnInfo, src []byte) error { - if src == nil { - *dst = Int8range{Status: Null} - return nil - } - - utr, err := ParseUntypedTextRange(string(src)) - if err != nil { - return err - } - - *dst = Int8range{Status: Present} - - dst.LowerType = utr.LowerType - dst.UpperType = utr.UpperType - - if dst.LowerType == Empty { - return nil - } - - if dst.LowerType == Inclusive || dst.LowerType == Exclusive { - if err := dst.Lower.DecodeText(ci, []byte(utr.Lower)); err != nil { - return err - } - } - - if dst.UpperType == Inclusive || dst.UpperType == Exclusive { - if err := dst.Upper.DecodeText(ci, []byte(utr.Upper)); err != nil { - return err - } - } - - return nil -} - -func (dst *Int8range) DecodeBinary(ci 
*ConnInfo, src []byte) error { - if src == nil { - *dst = Int8range{Status: Null} - return nil - } - - ubr, err := ParseUntypedBinaryRange(src) - if err != nil { - return err - } - - *dst = Int8range{Status: Present} - - dst.LowerType = ubr.LowerType - dst.UpperType = ubr.UpperType - - if dst.LowerType == Empty { - return nil - } - - if dst.LowerType == Inclusive || dst.LowerType == Exclusive { - if err := dst.Lower.DecodeBinary(ci, ubr.Lower); err != nil { - return err - } - } - - if dst.UpperType == Inclusive || dst.UpperType == Exclusive { - if err := dst.Upper.DecodeBinary(ci, ubr.Upper); err != nil { - return err - } - } - - return nil -} - -func (src Int8range) EncodeText(ci *ConnInfo, buf []byte) ([]byte, error) { - switch src.Status { - case Null: - return nil, nil - case Undefined: - return nil, errUndefined - } - - switch src.LowerType { - case Exclusive, Unbounded: - buf = append(buf, '(') - case Inclusive: - buf = append(buf, '[') - case Empty: - return append(buf, "empty"...), nil - default: - return nil, fmt.Errorf("unknown lower bound type %v", src.LowerType) - } - - var err error - - if src.LowerType != Unbounded { - buf, err = src.Lower.EncodeText(ci, buf) - if err != nil { - return nil, err - } else if buf == nil { - return nil, fmt.Errorf("Lower cannot be null unless LowerType is Unbounded") - } - } - - buf = append(buf, ',') - - if src.UpperType != Unbounded { - buf, err = src.Upper.EncodeText(ci, buf) - if err != nil { - return nil, err - } else if buf == nil { - return nil, fmt.Errorf("Upper cannot be null unless UpperType is Unbounded") - } - } - - switch src.UpperType { - case Exclusive, Unbounded: - buf = append(buf, ')') - case Inclusive: - buf = append(buf, ']') - default: - return nil, fmt.Errorf("unknown upper bound type %v", src.UpperType) - } - - return buf, nil -} - -func (src Int8range) EncodeBinary(ci *ConnInfo, buf []byte) ([]byte, error) { - switch src.Status { - case Null: - return nil, nil - case Undefined: - return nil, 
errUndefined - } - - var rangeType byte - switch src.LowerType { - case Inclusive: - rangeType |= lowerInclusiveMask - case Unbounded: - rangeType |= lowerUnboundedMask - case Exclusive: - case Empty: - return append(buf, emptyMask), nil - default: - return nil, fmt.Errorf("unknown LowerType: %v", src.LowerType) - } - - switch src.UpperType { - case Inclusive: - rangeType |= upperInclusiveMask - case Unbounded: - rangeType |= upperUnboundedMask - case Exclusive: - default: - return nil, fmt.Errorf("unknown UpperType: %v", src.UpperType) - } - - buf = append(buf, rangeType) - - var err error - - if src.LowerType != Unbounded { - sp := len(buf) - buf = pgio.AppendInt32(buf, -1) - - buf, err = src.Lower.EncodeBinary(ci, buf) - if err != nil { - return nil, err - } - if buf == nil { - return nil, fmt.Errorf("Lower cannot be null unless LowerType is Unbounded") - } - - pgio.SetInt32(buf[sp:], int32(len(buf[sp:])-4)) - } - - if src.UpperType != Unbounded { - sp := len(buf) - buf = pgio.AppendInt32(buf, -1) - - buf, err = src.Upper.EncodeBinary(ci, buf) - if err != nil { - return nil, err - } - if buf == nil { - return nil, fmt.Errorf("Upper cannot be null unless UpperType is Unbounded") - } - - pgio.SetInt32(buf[sp:], int32(len(buf[sp:])-4)) - } - - return buf, nil -} - -// Scan implements the database/sql Scanner interface. -func (dst *Int8range) Scan(src interface{}) error { - if src == nil { - *dst = Int8range{Status: Null} - return nil - } - - switch src := src.(type) { - case string: - return dst.DecodeText(nil, []byte(src)) - case []byte: - srcCopy := make([]byte, len(src)) - copy(srcCopy, src) - return dst.DecodeText(nil, srcCopy) - } - - return fmt.Errorf("cannot scan %T", src) -} - -// Value implements the database/sql/driver Valuer interface. 
-func (src Int8range) Value() (driver.Value, error) { - return EncodeValueText(src) -} diff --git a/vendor/github.com/jackc/pgtype/interval.go b/vendor/github.com/jackc/pgtype/interval.go deleted file mode 100644 index 00ec47c5..00000000 --- a/vendor/github.com/jackc/pgtype/interval.go +++ /dev/null @@ -1,257 +0,0 @@ -package pgtype - -import ( - "database/sql/driver" - "encoding/binary" - "fmt" - "strconv" - "strings" - "time" - - "github.com/jackc/pgio" -) - -const ( - microsecondsPerSecond = 1000000 - microsecondsPerMinute = 60 * microsecondsPerSecond - microsecondsPerHour = 60 * microsecondsPerMinute - microsecondsPerDay = 24 * microsecondsPerHour - microsecondsPerMonth = 30 * microsecondsPerDay -) - -type Interval struct { - Microseconds int64 - Days int32 - Months int32 - Status Status -} - -func (dst *Interval) Set(src interface{}) error { - if src == nil { - *dst = Interval{Status: Null} - return nil - } - - if value, ok := src.(interface{ Get() interface{} }); ok { - value2 := value.Get() - if value2 != value { - return dst.Set(value2) - } - } - - switch value := src.(type) { - case time.Duration: - *dst = Interval{Microseconds: int64(value) / 1000, Status: Present} - default: - if originalSrc, ok := underlyingPtrType(src); ok { - return dst.Set(originalSrc) - } - return fmt.Errorf("cannot convert %v to Interval", value) - } - - return nil -} - -func (dst Interval) Get() interface{} { - switch dst.Status { - case Present: - return dst - case Null: - return nil - default: - return dst.Status - } -} - -func (src *Interval) AssignTo(dst interface{}) error { - switch src.Status { - case Present: - switch v := dst.(type) { - case *time.Duration: - us := int64(src.Months)*microsecondsPerMonth + int64(src.Days)*microsecondsPerDay + src.Microseconds - *v = time.Duration(us) * time.Microsecond - return nil - default: - if nextDst, retry := GetAssignToDstType(dst); retry { - return src.AssignTo(nextDst) - } - return fmt.Errorf("unable to assign to %T", dst) - } - 
case Null: - return NullAssignTo(dst) - } - - return fmt.Errorf("cannot decode %#v into %T", src, dst) -} - -func (dst *Interval) DecodeText(ci *ConnInfo, src []byte) error { - if src == nil { - *dst = Interval{Status: Null} - return nil - } - - var microseconds int64 - var days int32 - var months int32 - - parts := strings.Split(string(src), " ") - - for i := 0; i < len(parts)-1; i += 2 { - scalar, err := strconv.ParseInt(parts[i], 10, 64) - if err != nil { - return fmt.Errorf("bad interval format") - } - - switch parts[i+1] { - case "year", "years": - months += int32(scalar * 12) - case "mon", "mons": - months += int32(scalar) - case "day", "days": - days = int32(scalar) - } - } - - if len(parts)%2 == 1 { - timeParts := strings.SplitN(parts[len(parts)-1], ":", 3) - if len(timeParts) != 3 { - return fmt.Errorf("bad interval format") - } - - var negative bool - if timeParts[0][0] == '-' { - negative = true - timeParts[0] = timeParts[0][1:] - } - - hours, err := strconv.ParseInt(timeParts[0], 10, 64) - if err != nil { - return fmt.Errorf("bad interval hour format: %s", timeParts[0]) - } - - minutes, err := strconv.ParseInt(timeParts[1], 10, 64) - if err != nil { - return fmt.Errorf("bad interval minute format: %s", timeParts[1]) - } - - secondParts := strings.SplitN(timeParts[2], ".", 2) - - seconds, err := strconv.ParseInt(secondParts[0], 10, 64) - if err != nil { - return fmt.Errorf("bad interval second format: %s", secondParts[0]) - } - - var uSeconds int64 - if len(secondParts) == 2 { - uSeconds, err = strconv.ParseInt(secondParts[1], 10, 64) - if err != nil { - return fmt.Errorf("bad interval decimal format: %s", secondParts[1]) - } - - for i := 0; i < 6-len(secondParts[1]); i++ { - uSeconds *= 10 - } - } - - microseconds = hours * microsecondsPerHour - microseconds += minutes * microsecondsPerMinute - microseconds += seconds * microsecondsPerSecond - microseconds += uSeconds - - if negative { - microseconds = -microseconds - } - } - - *dst = Interval{Months: 
months, Days: days, Microseconds: microseconds, Status: Present} - return nil -} - -func (dst *Interval) DecodeBinary(ci *ConnInfo, src []byte) error { - if src == nil { - *dst = Interval{Status: Null} - return nil - } - - if len(src) != 16 { - return fmt.Errorf("Received an invalid size for an interval: %d", len(src)) - } - - microseconds := int64(binary.BigEndian.Uint64(src)) - days := int32(binary.BigEndian.Uint32(src[8:])) - months := int32(binary.BigEndian.Uint32(src[12:])) - - *dst = Interval{Microseconds: microseconds, Days: days, Months: months, Status: Present} - return nil -} - -func (src Interval) EncodeText(ci *ConnInfo, buf []byte) ([]byte, error) { - switch src.Status { - case Null: - return nil, nil - case Undefined: - return nil, errUndefined - } - - if src.Months != 0 { - buf = append(buf, strconv.FormatInt(int64(src.Months), 10)...) - buf = append(buf, " mon "...) - } - - if src.Days != 0 { - buf = append(buf, strconv.FormatInt(int64(src.Days), 10)...) - buf = append(buf, " day "...) - } - - absMicroseconds := src.Microseconds - if absMicroseconds < 0 { - absMicroseconds = -absMicroseconds - buf = append(buf, '-') - } - - hours := absMicroseconds / microsecondsPerHour - minutes := (absMicroseconds % microsecondsPerHour) / microsecondsPerMinute - seconds := (absMicroseconds % microsecondsPerMinute) / microsecondsPerSecond - microseconds := absMicroseconds % microsecondsPerSecond - - timeStr := fmt.Sprintf("%02d:%02d:%02d.%06d", hours, minutes, seconds, microseconds) - return append(buf, timeStr...), nil -} - -// EncodeBinary encodes src into w. -func (src Interval) EncodeBinary(ci *ConnInfo, buf []byte) ([]byte, error) { - switch src.Status { - case Null: - return nil, nil - case Undefined: - return nil, errUndefined - } - - buf = pgio.AppendInt64(buf, src.Microseconds) - buf = pgio.AppendInt32(buf, src.Days) - return pgio.AppendInt32(buf, src.Months), nil -} - -// Scan implements the database/sql Scanner interface. 
-func (dst *Interval) Scan(src interface{}) error { - if src == nil { - *dst = Interval{Status: Null} - return nil - } - - switch src := src.(type) { - case string: - return dst.DecodeText(nil, []byte(src)) - case []byte: - srcCopy := make([]byte, len(src)) - copy(srcCopy, src) - return dst.DecodeText(nil, srcCopy) - } - - return fmt.Errorf("cannot scan %T", src) -} - -// Value implements the database/sql/driver Valuer interface. -func (src Interval) Value() (driver.Value, error) { - return EncodeValueText(src) -} diff --git a/vendor/github.com/jackc/pgtype/json.go b/vendor/github.com/jackc/pgtype/json.go deleted file mode 100644 index a9508bdd..00000000 --- a/vendor/github.com/jackc/pgtype/json.go +++ /dev/null @@ -1,209 +0,0 @@ -package pgtype - -import ( - "database/sql/driver" - "encoding/json" - "errors" - "fmt" - "reflect" -) - -type JSON struct { - Bytes []byte - Status Status -} - -func (dst *JSON) Set(src interface{}) error { - if src == nil { - *dst = JSON{Status: Null} - return nil - } - - if value, ok := src.(interface{ Get() interface{} }); ok { - value2 := value.Get() - if value2 != value { - return dst.Set(value2) - } - } - - switch value := src.(type) { - case string: - *dst = JSON{Bytes: []byte(value), Status: Present} - case *string: - if value == nil { - *dst = JSON{Status: Null} - } else { - *dst = JSON{Bytes: []byte(*value), Status: Present} - } - case []byte: - if value == nil { - *dst = JSON{Status: Null} - } else { - *dst = JSON{Bytes: value, Status: Present} - } - // Encode* methods are defined on *JSON. If JSON is passed directly then the - // struct itself would be encoded instead of Bytes. This is clearly a footgun - // so detect and return an error. See https://github.com/jackc/pgx/issues/350. 
- case JSON: - return errors.New("use pointer to pgtype.JSON instead of value") - // Same as above but for JSONB (because they share implementation) - case JSONB: - return errors.New("use pointer to pgtype.JSONB instead of value") - - default: - buf, err := json.Marshal(value) - if err != nil { - return err - } - *dst = JSON{Bytes: buf, Status: Present} - } - - return nil -} - -func (dst JSON) Get() interface{} { - switch dst.Status { - case Present: - var i interface{} - err := json.Unmarshal(dst.Bytes, &i) - if err != nil { - return dst - } - return i - case Null: - return nil - default: - return dst.Status - } -} - -func (src *JSON) AssignTo(dst interface{}) error { - switch v := dst.(type) { - case *string: - if src.Status == Present { - *v = string(src.Bytes) - } else { - return fmt.Errorf("cannot assign non-present status to %T", dst) - } - case **string: - if src.Status == Present { - s := string(src.Bytes) - *v = &s - return nil - } else { - *v = nil - return nil - } - case *[]byte: - if src.Status != Present { - *v = nil - } else { - buf := make([]byte, len(src.Bytes)) - copy(buf, src.Bytes) - *v = buf - } - default: - data := src.Bytes - if data == nil || src.Status != Present { - data = []byte("null") - } - - p := reflect.ValueOf(dst).Elem() - p.Set(reflect.Zero(p.Type())) - - return json.Unmarshal(data, dst) - } - - return nil -} - -func (JSON) PreferredResultFormat() int16 { - return TextFormatCode -} - -func (dst *JSON) DecodeText(ci *ConnInfo, src []byte) error { - if src == nil { - *dst = JSON{Status: Null} - return nil - } - - *dst = JSON{Bytes: src, Status: Present} - return nil -} - -func (dst *JSON) DecodeBinary(ci *ConnInfo, src []byte) error { - return dst.DecodeText(ci, src) -} - -func (JSON) PreferredParamFormat() int16 { - return TextFormatCode -} - -func (src JSON) EncodeText(ci *ConnInfo, buf []byte) ([]byte, error) { - switch src.Status { - case Null: - return nil, nil - case Undefined: - return nil, errUndefined - } - - return 
append(buf, src.Bytes...), nil -} - -func (src JSON) EncodeBinary(ci *ConnInfo, buf []byte) ([]byte, error) { - return src.EncodeText(ci, buf) -} - -// Scan implements the database/sql Scanner interface. -func (dst *JSON) Scan(src interface{}) error { - if src == nil { - *dst = JSON{Status: Null} - return nil - } - - switch src := src.(type) { - case string: - return dst.DecodeText(nil, []byte(src)) - case []byte: - srcCopy := make([]byte, len(src)) - copy(srcCopy, src) - return dst.DecodeText(nil, srcCopy) - } - - return fmt.Errorf("cannot scan %T", src) -} - -// Value implements the database/sql/driver Valuer interface. -func (src JSON) Value() (driver.Value, error) { - switch src.Status { - case Present: - return src.Bytes, nil - case Null: - return nil, nil - default: - return nil, errUndefined - } -} - -func (src JSON) MarshalJSON() ([]byte, error) { - switch src.Status { - case Present: - return src.Bytes, nil - case Null: - return []byte("null"), nil - case Undefined: - return nil, errUndefined - } - - return nil, errBadStatus -} - -func (dst *JSON) UnmarshalJSON(b []byte) error { - if b == nil || string(b) == "null" { - *dst = JSON{Status: Null} - } else { - *dst = JSON{Bytes: b, Status: Present} - } - return nil - -} diff --git a/vendor/github.com/jackc/pgtype/json_array.go b/vendor/github.com/jackc/pgtype/json_array.go deleted file mode 100644 index 8d68882f..00000000 --- a/vendor/github.com/jackc/pgtype/json_array.go +++ /dev/null @@ -1,546 +0,0 @@ -// Code generated by erb. DO NOT EDIT. 
- -package pgtype - -import ( - "database/sql/driver" - "encoding/binary" - "encoding/json" - "fmt" - "reflect" - - "github.com/jackc/pgio" -) - -type JSONArray struct { - Elements []JSON - Dimensions []ArrayDimension - Status Status -} - -func (dst *JSONArray) Set(src interface{}) error { - // untyped nil and typed nil interfaces are different - if src == nil { - *dst = JSONArray{Status: Null} - return nil - } - - if value, ok := src.(interface{ Get() interface{} }); ok { - value2 := value.Get() - if value2 != value { - return dst.Set(value2) - } - } - - // Attempt to match to select common types: - switch value := src.(type) { - - case []string: - if value == nil { - *dst = JSONArray{Status: Null} - } else if len(value) == 0 { - *dst = JSONArray{Status: Present} - } else { - elements := make([]JSON, len(value)) - for i := range value { - if err := elements[i].Set(value[i]); err != nil { - return err - } - } - *dst = JSONArray{ - Elements: elements, - Dimensions: []ArrayDimension{{Length: int32(len(elements)), LowerBound: 1}}, - Status: Present, - } - } - - case [][]byte: - if value == nil { - *dst = JSONArray{Status: Null} - } else if len(value) == 0 { - *dst = JSONArray{Status: Present} - } else { - elements := make([]JSON, len(value)) - for i := range value { - if err := elements[i].Set(value[i]); err != nil { - return err - } - } - *dst = JSONArray{ - Elements: elements, - Dimensions: []ArrayDimension{{Length: int32(len(elements)), LowerBound: 1}}, - Status: Present, - } - } - - case []json.RawMessage: - if value == nil { - *dst = JSONArray{Status: Null} - } else if len(value) == 0 { - *dst = JSONArray{Status: Present} - } else { - elements := make([]JSON, len(value)) - for i := range value { - if err := elements[i].Set(value[i]); err != nil { - return err - } - } - *dst = JSONArray{ - Elements: elements, - Dimensions: []ArrayDimension{{Length: int32(len(elements)), LowerBound: 1}}, - Status: Present, - } - } - - case []JSON: - if value == nil { - *dst = 
JSONArray{Status: Null} - } else if len(value) == 0 { - *dst = JSONArray{Status: Present} - } else { - *dst = JSONArray{ - Elements: value, - Dimensions: []ArrayDimension{{Length: int32(len(value)), LowerBound: 1}}, - Status: Present, - } - } - default: - // Fallback to reflection if an optimised match was not found. - // The reflection is necessary for arrays and multidimensional slices, - // but it comes with a 20-50% performance penalty for large arrays/slices - reflectedValue := reflect.ValueOf(src) - if !reflectedValue.IsValid() || reflectedValue.IsZero() { - *dst = JSONArray{Status: Null} - return nil - } - - dimensions, elementsLength, ok := findDimensionsFromValue(reflectedValue, nil, 0) - if !ok { - return fmt.Errorf("cannot find dimensions of %v for JSONArray", src) - } - if elementsLength == 0 { - *dst = JSONArray{Status: Present} - return nil - } - if len(dimensions) == 0 { - if originalSrc, ok := underlyingSliceType(src); ok { - return dst.Set(originalSrc) - } - return fmt.Errorf("cannot convert %v to JSONArray", src) - } - - *dst = JSONArray{ - Elements: make([]JSON, elementsLength), - Dimensions: dimensions, - Status: Present, - } - elementCount, err := dst.setRecursive(reflectedValue, 0, 0) - if err != nil { - // Maybe the target was one dimension too far, try again: - if len(dst.Dimensions) > 1 { - dst.Dimensions = dst.Dimensions[:len(dst.Dimensions)-1] - elementsLength = 0 - for _, dim := range dst.Dimensions { - if elementsLength == 0 { - elementsLength = int(dim.Length) - } else { - elementsLength *= int(dim.Length) - } - } - dst.Elements = make([]JSON, elementsLength) - elementCount, err = dst.setRecursive(reflectedValue, 0, 0) - if err != nil { - return err - } - } else { - return err - } - } - if elementCount != len(dst.Elements) { - return fmt.Errorf("cannot convert %v to JSONArray, expected %d dst.Elements, but got %d instead", src, len(dst.Elements), elementCount) - } - } - - return nil -} - -func (dst *JSONArray) setRecursive(value 
reflect.Value, index, dimension int) (int, error) { - switch value.Kind() { - case reflect.Array: - fallthrough - case reflect.Slice: - if len(dst.Dimensions) == dimension { - break - } - - valueLen := value.Len() - if int32(valueLen) != dst.Dimensions[dimension].Length { - return 0, fmt.Errorf("multidimensional arrays must have array expressions with matching dimensions") - } - for i := 0; i < valueLen; i++ { - var err error - index, err = dst.setRecursive(value.Index(i), index, dimension+1) - if err != nil { - return 0, err - } - } - - return index, nil - } - if !value.CanInterface() { - return 0, fmt.Errorf("cannot convert all values to JSONArray") - } - if err := dst.Elements[index].Set(value.Interface()); err != nil { - return 0, fmt.Errorf("%v in JSONArray", err) - } - index++ - - return index, nil -} - -func (dst JSONArray) Get() interface{} { - switch dst.Status { - case Present: - return dst - case Null: - return nil - default: - return dst.Status - } -} - -func (src *JSONArray) AssignTo(dst interface{}) error { - switch src.Status { - case Present: - if len(src.Dimensions) <= 1 { - // Attempt to match to select common types: - switch v := dst.(type) { - - case *[]string: - *v = make([]string, len(src.Elements)) - for i := range src.Elements { - if err := src.Elements[i].AssignTo(&((*v)[i])); err != nil { - return err - } - } - return nil - - case *[][]byte: - *v = make([][]byte, len(src.Elements)) - for i := range src.Elements { - if err := src.Elements[i].AssignTo(&((*v)[i])); err != nil { - return err - } - } - return nil - - case *[]json.RawMessage: - *v = make([]json.RawMessage, len(src.Elements)) - for i := range src.Elements { - if err := src.Elements[i].AssignTo(&((*v)[i])); err != nil { - return err - } - } - return nil - - } - } - - // Try to convert to something AssignTo can use directly. - if nextDst, retry := GetAssignToDstType(dst); retry { - return src.AssignTo(nextDst) - } - - // Fallback to reflection if an optimised match was not found. 
- // The reflection is necessary for arrays and multidimensional slices, - // but it comes with a 20-50% performance penalty for large arrays/slices - value := reflect.ValueOf(dst) - if value.Kind() == reflect.Ptr { - value = value.Elem() - } - - switch value.Kind() { - case reflect.Array, reflect.Slice: - default: - return fmt.Errorf("cannot assign %T to %T", src, dst) - } - - if len(src.Elements) == 0 { - if value.Kind() == reflect.Slice { - value.Set(reflect.MakeSlice(value.Type(), 0, 0)) - return nil - } - } - - elementCount, err := src.assignToRecursive(value, 0, 0) - if err != nil { - return err - } - if elementCount != len(src.Elements) { - return fmt.Errorf("cannot assign %v, needed to assign %d elements, but only assigned %d", dst, len(src.Elements), elementCount) - } - - return nil - case Null: - return NullAssignTo(dst) - } - - return fmt.Errorf("cannot decode %#v into %T", src, dst) -} - -func (src *JSONArray) assignToRecursive(value reflect.Value, index, dimension int) (int, error) { - switch kind := value.Kind(); kind { - case reflect.Array: - fallthrough - case reflect.Slice: - if len(src.Dimensions) == dimension { - break - } - - length := int(src.Dimensions[dimension].Length) - if reflect.Array == kind { - typ := value.Type() - if typ.Len() != length { - return 0, fmt.Errorf("expected size %d array, but %s has size %d array", length, typ, typ.Len()) - } - value.Set(reflect.New(typ).Elem()) - } else { - value.Set(reflect.MakeSlice(value.Type(), length, length)) - } - - var err error - for i := 0; i < length; i++ { - index, err = src.assignToRecursive(value.Index(i), index, dimension+1) - if err != nil { - return 0, err - } - } - - return index, nil - } - if len(src.Dimensions) != dimension { - return 0, fmt.Errorf("incorrect dimensions, expected %d, found %d", len(src.Dimensions), dimension) - } - if !value.CanAddr() { - return 0, fmt.Errorf("cannot assign all values from JSONArray") - } - addr := value.Addr() - if !addr.CanInterface() { - return 0, 
fmt.Errorf("cannot assign all values from JSONArray") - } - if err := src.Elements[index].AssignTo(addr.Interface()); err != nil { - return 0, err - } - index++ - return index, nil -} - -func (dst *JSONArray) DecodeText(ci *ConnInfo, src []byte) error { - if src == nil { - *dst = JSONArray{Status: Null} - return nil - } - - uta, err := ParseUntypedTextArray(string(src)) - if err != nil { - return err - } - - var elements []JSON - - if len(uta.Elements) > 0 { - elements = make([]JSON, len(uta.Elements)) - - for i, s := range uta.Elements { - var elem JSON - var elemSrc []byte - if s != "NULL" || uta.Quoted[i] { - elemSrc = []byte(s) - } - err = elem.DecodeText(ci, elemSrc) - if err != nil { - return err - } - - elements[i] = elem - } - } - - *dst = JSONArray{Elements: elements, Dimensions: uta.Dimensions, Status: Present} - - return nil -} - -func (dst *JSONArray) DecodeBinary(ci *ConnInfo, src []byte) error { - if src == nil { - *dst = JSONArray{Status: Null} - return nil - } - - var arrayHeader ArrayHeader - rp, err := arrayHeader.DecodeBinary(ci, src) - if err != nil { - return err - } - - if len(arrayHeader.Dimensions) == 0 { - *dst = JSONArray{Dimensions: arrayHeader.Dimensions, Status: Present} - return nil - } - - elementCount := arrayHeader.Dimensions[0].Length - for _, d := range arrayHeader.Dimensions[1:] { - elementCount *= d.Length - } - - elements := make([]JSON, elementCount) - - for i := range elements { - elemLen := int(int32(binary.BigEndian.Uint32(src[rp:]))) - rp += 4 - var elemSrc []byte - if elemLen >= 0 { - elemSrc = src[rp : rp+elemLen] - rp += elemLen - } - err = elements[i].DecodeBinary(ci, elemSrc) - if err != nil { - return err - } - } - - *dst = JSONArray{Elements: elements, Dimensions: arrayHeader.Dimensions, Status: Present} - return nil -} - -func (src JSONArray) EncodeText(ci *ConnInfo, buf []byte) ([]byte, error) { - switch src.Status { - case Null: - return nil, nil - case Undefined: - return nil, errUndefined - } - - if 
len(src.Dimensions) == 0 { - return append(buf, '{', '}'), nil - } - - buf = EncodeTextArrayDimensions(buf, src.Dimensions) - - // dimElemCounts is the multiples of elements that each array lies on. For - // example, a single dimension array of length 4 would have a dimElemCounts of - // [4]. A multi-dimensional array of lengths [3,5,2] would have a - // dimElemCounts of [30,10,2]. This is used to simplify when to render a '{' - // or '}'. - dimElemCounts := make([]int, len(src.Dimensions)) - dimElemCounts[len(src.Dimensions)-1] = int(src.Dimensions[len(src.Dimensions)-1].Length) - for i := len(src.Dimensions) - 2; i > -1; i-- { - dimElemCounts[i] = int(src.Dimensions[i].Length) * dimElemCounts[i+1] - } - - inElemBuf := make([]byte, 0, 32) - for i, elem := range src.Elements { - if i > 0 { - buf = append(buf, ',') - } - - for _, dec := range dimElemCounts { - if i%dec == 0 { - buf = append(buf, '{') - } - } - - elemBuf, err := elem.EncodeText(ci, inElemBuf) - if err != nil { - return nil, err - } - if elemBuf == nil { - buf = append(buf, `NULL`...) - } else { - buf = append(buf, QuoteArrayElementIfNeeded(string(elemBuf))...) 
- } - - for _, dec := range dimElemCounts { - if (i+1)%dec == 0 { - buf = append(buf, '}') - } - } - } - - return buf, nil -} - -func (src JSONArray) EncodeBinary(ci *ConnInfo, buf []byte) ([]byte, error) { - switch src.Status { - case Null: - return nil, nil - case Undefined: - return nil, errUndefined - } - - arrayHeader := ArrayHeader{ - Dimensions: src.Dimensions, - } - - if dt, ok := ci.DataTypeForName("json"); ok { - arrayHeader.ElementOID = int32(dt.OID) - } else { - return nil, fmt.Errorf("unable to find oid for type name %v", "json") - } - - for i := range src.Elements { - if src.Elements[i].Status == Null { - arrayHeader.ContainsNull = true - break - } - } - - buf = arrayHeader.EncodeBinary(ci, buf) - - for i := range src.Elements { - sp := len(buf) - buf = pgio.AppendInt32(buf, -1) - - elemBuf, err := src.Elements[i].EncodeBinary(ci, buf) - if err != nil { - return nil, err - } - if elemBuf != nil { - buf = elemBuf - pgio.SetInt32(buf[sp:], int32(len(buf[sp:])-4)) - } - } - - return buf, nil -} - -// Scan implements the database/sql Scanner interface. -func (dst *JSONArray) Scan(src interface{}) error { - if src == nil { - return dst.DecodeText(nil, nil) - } - - switch src := src.(type) { - case string: - return dst.DecodeText(nil, []byte(src)) - case []byte: - srcCopy := make([]byte, len(src)) - copy(srcCopy, src) - return dst.DecodeText(nil, srcCopy) - } - - return fmt.Errorf("cannot scan %T", src) -} - -// Value implements the database/sql/driver Valuer interface. 
-func (src JSONArray) Value() (driver.Value, error) { - buf, err := src.EncodeText(nil, nil) - if err != nil { - return nil, err - } - if buf == nil { - return nil, nil - } - - return string(buf), nil -} diff --git a/vendor/github.com/jackc/pgtype/jsonb.go b/vendor/github.com/jackc/pgtype/jsonb.go deleted file mode 100644 index c9dafc93..00000000 --- a/vendor/github.com/jackc/pgtype/jsonb.go +++ /dev/null @@ -1,85 +0,0 @@ -package pgtype - -import ( - "database/sql/driver" - "fmt" -) - -type JSONB JSON - -func (dst *JSONB) Set(src interface{}) error { - return (*JSON)(dst).Set(src) -} - -func (dst JSONB) Get() interface{} { - return (JSON)(dst).Get() -} - -func (src *JSONB) AssignTo(dst interface{}) error { - return (*JSON)(src).AssignTo(dst) -} - -func (JSONB) PreferredResultFormat() int16 { - return TextFormatCode -} - -func (dst *JSONB) DecodeText(ci *ConnInfo, src []byte) error { - return (*JSON)(dst).DecodeText(ci, src) -} - -func (dst *JSONB) DecodeBinary(ci *ConnInfo, src []byte) error { - if src == nil { - *dst = JSONB{Status: Null} - return nil - } - - if len(src) == 0 { - return fmt.Errorf("jsonb too short") - } - - if src[0] != 1 { - return fmt.Errorf("unknown jsonb version number %d", src[0]) - } - - *dst = JSONB{Bytes: src[1:], Status: Present} - return nil - -} - -func (JSONB) PreferredParamFormat() int16 { - return TextFormatCode -} - -func (src JSONB) EncodeText(ci *ConnInfo, buf []byte) ([]byte, error) { - return (JSON)(src).EncodeText(ci, buf) -} - -func (src JSONB) EncodeBinary(ci *ConnInfo, buf []byte) ([]byte, error) { - switch src.Status { - case Null: - return nil, nil - case Undefined: - return nil, errUndefined - } - - buf = append(buf, 1) - return append(buf, src.Bytes...), nil -} - -// Scan implements the database/sql Scanner interface. -func (dst *JSONB) Scan(src interface{}) error { - return (*JSON)(dst).Scan(src) -} - -// Value implements the database/sql/driver Valuer interface. 
-func (src JSONB) Value() (driver.Value, error) { - return (JSON)(src).Value() -} - -func (src JSONB) MarshalJSON() ([]byte, error) { - return (JSON)(src).MarshalJSON() -} - -func (dst *JSONB) UnmarshalJSON(b []byte) error { - return (*JSON)(dst).UnmarshalJSON(b) -} diff --git a/vendor/github.com/jackc/pgtype/jsonb_array.go b/vendor/github.com/jackc/pgtype/jsonb_array.go deleted file mode 100644 index e78ad377..00000000 --- a/vendor/github.com/jackc/pgtype/jsonb_array.go +++ /dev/null @@ -1,546 +0,0 @@ -// Code generated by erb. DO NOT EDIT. - -package pgtype - -import ( - "database/sql/driver" - "encoding/binary" - "encoding/json" - "fmt" - "reflect" - - "github.com/jackc/pgio" -) - -type JSONBArray struct { - Elements []JSONB - Dimensions []ArrayDimension - Status Status -} - -func (dst *JSONBArray) Set(src interface{}) error { - // untyped nil and typed nil interfaces are different - if src == nil { - *dst = JSONBArray{Status: Null} - return nil - } - - if value, ok := src.(interface{ Get() interface{} }); ok { - value2 := value.Get() - if value2 != value { - return dst.Set(value2) - } - } - - // Attempt to match to select common types: - switch value := src.(type) { - - case []string: - if value == nil { - *dst = JSONBArray{Status: Null} - } else if len(value) == 0 { - *dst = JSONBArray{Status: Present} - } else { - elements := make([]JSONB, len(value)) - for i := range value { - if err := elements[i].Set(value[i]); err != nil { - return err - } - } - *dst = JSONBArray{ - Elements: elements, - Dimensions: []ArrayDimension{{Length: int32(len(elements)), LowerBound: 1}}, - Status: Present, - } - } - - case [][]byte: - if value == nil { - *dst = JSONBArray{Status: Null} - } else if len(value) == 0 { - *dst = JSONBArray{Status: Present} - } else { - elements := make([]JSONB, len(value)) - for i := range value { - if err := elements[i].Set(value[i]); err != nil { - return err - } - } - *dst = JSONBArray{ - Elements: elements, - Dimensions: []ArrayDimension{{Length: 
int32(len(elements)), LowerBound: 1}}, - Status: Present, - } - } - - case []json.RawMessage: - if value == nil { - *dst = JSONBArray{Status: Null} - } else if len(value) == 0 { - *dst = JSONBArray{Status: Present} - } else { - elements := make([]JSONB, len(value)) - for i := range value { - if err := elements[i].Set(value[i]); err != nil { - return err - } - } - *dst = JSONBArray{ - Elements: elements, - Dimensions: []ArrayDimension{{Length: int32(len(elements)), LowerBound: 1}}, - Status: Present, - } - } - - case []JSONB: - if value == nil { - *dst = JSONBArray{Status: Null} - } else if len(value) == 0 { - *dst = JSONBArray{Status: Present} - } else { - *dst = JSONBArray{ - Elements: value, - Dimensions: []ArrayDimension{{Length: int32(len(value)), LowerBound: 1}}, - Status: Present, - } - } - default: - // Fallback to reflection if an optimised match was not found. - // The reflection is necessary for arrays and multidimensional slices, - // but it comes with a 20-50% performance penalty for large arrays/slices - reflectedValue := reflect.ValueOf(src) - if !reflectedValue.IsValid() || reflectedValue.IsZero() { - *dst = JSONBArray{Status: Null} - return nil - } - - dimensions, elementsLength, ok := findDimensionsFromValue(reflectedValue, nil, 0) - if !ok { - return fmt.Errorf("cannot find dimensions of %v for JSONBArray", src) - } - if elementsLength == 0 { - *dst = JSONBArray{Status: Present} - return nil - } - if len(dimensions) == 0 { - if originalSrc, ok := underlyingSliceType(src); ok { - return dst.Set(originalSrc) - } - return fmt.Errorf("cannot convert %v to JSONBArray", src) - } - - *dst = JSONBArray{ - Elements: make([]JSONB, elementsLength), - Dimensions: dimensions, - Status: Present, - } - elementCount, err := dst.setRecursive(reflectedValue, 0, 0) - if err != nil { - // Maybe the target was one dimension too far, try again: - if len(dst.Dimensions) > 1 { - dst.Dimensions = dst.Dimensions[:len(dst.Dimensions)-1] - elementsLength = 0 - for _, dim := 
range dst.Dimensions { - if elementsLength == 0 { - elementsLength = int(dim.Length) - } else { - elementsLength *= int(dim.Length) - } - } - dst.Elements = make([]JSONB, elementsLength) - elementCount, err = dst.setRecursive(reflectedValue, 0, 0) - if err != nil { - return err - } - } else { - return err - } - } - if elementCount != len(dst.Elements) { - return fmt.Errorf("cannot convert %v to JSONBArray, expected %d dst.Elements, but got %d instead", src, len(dst.Elements), elementCount) - } - } - - return nil -} - -func (dst *JSONBArray) setRecursive(value reflect.Value, index, dimension int) (int, error) { - switch value.Kind() { - case reflect.Array: - fallthrough - case reflect.Slice: - if len(dst.Dimensions) == dimension { - break - } - - valueLen := value.Len() - if int32(valueLen) != dst.Dimensions[dimension].Length { - return 0, fmt.Errorf("multidimensional arrays must have array expressions with matching dimensions") - } - for i := 0; i < valueLen; i++ { - var err error - index, err = dst.setRecursive(value.Index(i), index, dimension+1) - if err != nil { - return 0, err - } - } - - return index, nil - } - if !value.CanInterface() { - return 0, fmt.Errorf("cannot convert all values to JSONBArray") - } - if err := dst.Elements[index].Set(value.Interface()); err != nil { - return 0, fmt.Errorf("%v in JSONBArray", err) - } - index++ - - return index, nil -} - -func (dst JSONBArray) Get() interface{} { - switch dst.Status { - case Present: - return dst - case Null: - return nil - default: - return dst.Status - } -} - -func (src *JSONBArray) AssignTo(dst interface{}) error { - switch src.Status { - case Present: - if len(src.Dimensions) <= 1 { - // Attempt to match to select common types: - switch v := dst.(type) { - - case *[]string: - *v = make([]string, len(src.Elements)) - for i := range src.Elements { - if err := src.Elements[i].AssignTo(&((*v)[i])); err != nil { - return err - } - } - return nil - - case *[][]byte: - *v = make([][]byte, 
len(src.Elements)) - for i := range src.Elements { - if err := src.Elements[i].AssignTo(&((*v)[i])); err != nil { - return err - } - } - return nil - - case *[]json.RawMessage: - *v = make([]json.RawMessage, len(src.Elements)) - for i := range src.Elements { - if err := src.Elements[i].AssignTo(&((*v)[i])); err != nil { - return err - } - } - return nil - - } - } - - // Try to convert to something AssignTo can use directly. - if nextDst, retry := GetAssignToDstType(dst); retry { - return src.AssignTo(nextDst) - } - - // Fallback to reflection if an optimised match was not found. - // The reflection is necessary for arrays and multidimensional slices, - // but it comes with a 20-50% performance penalty for large arrays/slices - value := reflect.ValueOf(dst) - if value.Kind() == reflect.Ptr { - value = value.Elem() - } - - switch value.Kind() { - case reflect.Array, reflect.Slice: - default: - return fmt.Errorf("cannot assign %T to %T", src, dst) - } - - if len(src.Elements) == 0 { - if value.Kind() == reflect.Slice { - value.Set(reflect.MakeSlice(value.Type(), 0, 0)) - return nil - } - } - - elementCount, err := src.assignToRecursive(value, 0, 0) - if err != nil { - return err - } - if elementCount != len(src.Elements) { - return fmt.Errorf("cannot assign %v, needed to assign %d elements, but only assigned %d", dst, len(src.Elements), elementCount) - } - - return nil - case Null: - return NullAssignTo(dst) - } - - return fmt.Errorf("cannot decode %#v into %T", src, dst) -} - -func (src *JSONBArray) assignToRecursive(value reflect.Value, index, dimension int) (int, error) { - switch kind := value.Kind(); kind { - case reflect.Array: - fallthrough - case reflect.Slice: - if len(src.Dimensions) == dimension { - break - } - - length := int(src.Dimensions[dimension].Length) - if reflect.Array == kind { - typ := value.Type() - if typ.Len() != length { - return 0, fmt.Errorf("expected size %d array, but %s has size %d array", length, typ, typ.Len()) - } - 
value.Set(reflect.New(typ).Elem()) - } else { - value.Set(reflect.MakeSlice(value.Type(), length, length)) - } - - var err error - for i := 0; i < length; i++ { - index, err = src.assignToRecursive(value.Index(i), index, dimension+1) - if err != nil { - return 0, err - } - } - - return index, nil - } - if len(src.Dimensions) != dimension { - return 0, fmt.Errorf("incorrect dimensions, expected %d, found %d", len(src.Dimensions), dimension) - } - if !value.CanAddr() { - return 0, fmt.Errorf("cannot assign all values from JSONBArray") - } - addr := value.Addr() - if !addr.CanInterface() { - return 0, fmt.Errorf("cannot assign all values from JSONBArray") - } - if err := src.Elements[index].AssignTo(addr.Interface()); err != nil { - return 0, err - } - index++ - return index, nil -} - -func (dst *JSONBArray) DecodeText(ci *ConnInfo, src []byte) error { - if src == nil { - *dst = JSONBArray{Status: Null} - return nil - } - - uta, err := ParseUntypedTextArray(string(src)) - if err != nil { - return err - } - - var elements []JSONB - - if len(uta.Elements) > 0 { - elements = make([]JSONB, len(uta.Elements)) - - for i, s := range uta.Elements { - var elem JSONB - var elemSrc []byte - if s != "NULL" || uta.Quoted[i] { - elemSrc = []byte(s) - } - err = elem.DecodeText(ci, elemSrc) - if err != nil { - return err - } - - elements[i] = elem - } - } - - *dst = JSONBArray{Elements: elements, Dimensions: uta.Dimensions, Status: Present} - - return nil -} - -func (dst *JSONBArray) DecodeBinary(ci *ConnInfo, src []byte) error { - if src == nil { - *dst = JSONBArray{Status: Null} - return nil - } - - var arrayHeader ArrayHeader - rp, err := arrayHeader.DecodeBinary(ci, src) - if err != nil { - return err - } - - if len(arrayHeader.Dimensions) == 0 { - *dst = JSONBArray{Dimensions: arrayHeader.Dimensions, Status: Present} - return nil - } - - elementCount := arrayHeader.Dimensions[0].Length - for _, d := range arrayHeader.Dimensions[1:] { - elementCount *= d.Length - } - - elements 
:= make([]JSONB, elementCount) - - for i := range elements { - elemLen := int(int32(binary.BigEndian.Uint32(src[rp:]))) - rp += 4 - var elemSrc []byte - if elemLen >= 0 { - elemSrc = src[rp : rp+elemLen] - rp += elemLen - } - err = elements[i].DecodeBinary(ci, elemSrc) - if err != nil { - return err - } - } - - *dst = JSONBArray{Elements: elements, Dimensions: arrayHeader.Dimensions, Status: Present} - return nil -} - -func (src JSONBArray) EncodeText(ci *ConnInfo, buf []byte) ([]byte, error) { - switch src.Status { - case Null: - return nil, nil - case Undefined: - return nil, errUndefined - } - - if len(src.Dimensions) == 0 { - return append(buf, '{', '}'), nil - } - - buf = EncodeTextArrayDimensions(buf, src.Dimensions) - - // dimElemCounts is the multiples of elements that each array lies on. For - // example, a single dimension array of length 4 would have a dimElemCounts of - // [4]. A multi-dimensional array of lengths [3,5,2] would have a - // dimElemCounts of [30,10,2]. This is used to simplify when to render a '{' - // or '}'. - dimElemCounts := make([]int, len(src.Dimensions)) - dimElemCounts[len(src.Dimensions)-1] = int(src.Dimensions[len(src.Dimensions)-1].Length) - for i := len(src.Dimensions) - 2; i > -1; i-- { - dimElemCounts[i] = int(src.Dimensions[i].Length) * dimElemCounts[i+1] - } - - inElemBuf := make([]byte, 0, 32) - for i, elem := range src.Elements { - if i > 0 { - buf = append(buf, ',') - } - - for _, dec := range dimElemCounts { - if i%dec == 0 { - buf = append(buf, '{') - } - } - - elemBuf, err := elem.EncodeText(ci, inElemBuf) - if err != nil { - return nil, err - } - if elemBuf == nil { - buf = append(buf, `NULL`...) - } else { - buf = append(buf, QuoteArrayElementIfNeeded(string(elemBuf))...) 
- } - - for _, dec := range dimElemCounts { - if (i+1)%dec == 0 { - buf = append(buf, '}') - } - } - } - - return buf, nil -} - -func (src JSONBArray) EncodeBinary(ci *ConnInfo, buf []byte) ([]byte, error) { - switch src.Status { - case Null: - return nil, nil - case Undefined: - return nil, errUndefined - } - - arrayHeader := ArrayHeader{ - Dimensions: src.Dimensions, - } - - if dt, ok := ci.DataTypeForName("jsonb"); ok { - arrayHeader.ElementOID = int32(dt.OID) - } else { - return nil, fmt.Errorf("unable to find oid for type name %v", "jsonb") - } - - for i := range src.Elements { - if src.Elements[i].Status == Null { - arrayHeader.ContainsNull = true - break - } - } - - buf = arrayHeader.EncodeBinary(ci, buf) - - for i := range src.Elements { - sp := len(buf) - buf = pgio.AppendInt32(buf, -1) - - elemBuf, err := src.Elements[i].EncodeBinary(ci, buf) - if err != nil { - return nil, err - } - if elemBuf != nil { - buf = elemBuf - pgio.SetInt32(buf[sp:], int32(len(buf[sp:])-4)) - } - } - - return buf, nil -} - -// Scan implements the database/sql Scanner interface. -func (dst *JSONBArray) Scan(src interface{}) error { - if src == nil { - return dst.DecodeText(nil, nil) - } - - switch src := src.(type) { - case string: - return dst.DecodeText(nil, []byte(src)) - case []byte: - srcCopy := make([]byte, len(src)) - copy(srcCopy, src) - return dst.DecodeText(nil, srcCopy) - } - - return fmt.Errorf("cannot scan %T", src) -} - -// Value implements the database/sql/driver Valuer interface. 
-func (src JSONBArray) Value() (driver.Value, error) { - buf, err := src.EncodeText(nil, nil) - if err != nil { - return nil, err - } - if buf == nil { - return nil, nil - } - - return string(buf), nil -} diff --git a/vendor/github.com/jackc/pgtype/line.go b/vendor/github.com/jackc/pgtype/line.go deleted file mode 100644 index 3564b174..00000000 --- a/vendor/github.com/jackc/pgtype/line.go +++ /dev/null @@ -1,148 +0,0 @@ -package pgtype - -import ( - "database/sql/driver" - "encoding/binary" - "fmt" - "math" - "strconv" - "strings" - - "github.com/jackc/pgio" -) - -type Line struct { - A, B, C float64 - Status Status -} - -func (dst *Line) Set(src interface{}) error { - return fmt.Errorf("cannot convert %v to Line", src) -} - -func (dst Line) Get() interface{} { - switch dst.Status { - case Present: - return dst - case Null: - return nil - default: - return dst.Status - } -} - -func (src *Line) AssignTo(dst interface{}) error { - return fmt.Errorf("cannot assign %v to %T", src, dst) -} - -func (dst *Line) DecodeText(ci *ConnInfo, src []byte) error { - if src == nil { - *dst = Line{Status: Null} - return nil - } - - if len(src) < 7 { - return fmt.Errorf("invalid length for Line: %v", len(src)) - } - - parts := strings.SplitN(string(src[1:len(src)-1]), ",", 3) - if len(parts) < 3 { - return fmt.Errorf("invalid format for line") - } - - a, err := strconv.ParseFloat(parts[0], 64) - if err != nil { - return err - } - - b, err := strconv.ParseFloat(parts[1], 64) - if err != nil { - return err - } - - c, err := strconv.ParseFloat(parts[2], 64) - if err != nil { - return err - } - - *dst = Line{A: a, B: b, C: c, Status: Present} - return nil -} - -func (dst *Line) DecodeBinary(ci *ConnInfo, src []byte) error { - if src == nil { - *dst = Line{Status: Null} - return nil - } - - if len(src) != 24 { - return fmt.Errorf("invalid length for Line: %v", len(src)) - } - - a := binary.BigEndian.Uint64(src) - b := binary.BigEndian.Uint64(src[8:]) - c := 
binary.BigEndian.Uint64(src[16:]) - - *dst = Line{ - A: math.Float64frombits(a), - B: math.Float64frombits(b), - C: math.Float64frombits(c), - Status: Present, - } - return nil -} - -func (src Line) EncodeText(ci *ConnInfo, buf []byte) ([]byte, error) { - switch src.Status { - case Null: - return nil, nil - case Undefined: - return nil, errUndefined - } - - buf = append(buf, fmt.Sprintf(`{%s,%s,%s}`, - strconv.FormatFloat(src.A, 'f', -1, 64), - strconv.FormatFloat(src.B, 'f', -1, 64), - strconv.FormatFloat(src.C, 'f', -1, 64), - )...) - - return buf, nil -} - -func (src Line) EncodeBinary(ci *ConnInfo, buf []byte) ([]byte, error) { - switch src.Status { - case Null: - return nil, nil - case Undefined: - return nil, errUndefined - } - - buf = pgio.AppendUint64(buf, math.Float64bits(src.A)) - buf = pgio.AppendUint64(buf, math.Float64bits(src.B)) - buf = pgio.AppendUint64(buf, math.Float64bits(src.C)) - return buf, nil -} - -// Scan implements the database/sql Scanner interface. -func (dst *Line) Scan(src interface{}) error { - if src == nil { - *dst = Line{Status: Null} - return nil - } - - switch src := src.(type) { - case string: - return dst.DecodeText(nil, []byte(src)) - case []byte: - srcCopy := make([]byte, len(src)) - copy(srcCopy, src) - return dst.DecodeText(nil, srcCopy) - } - - return fmt.Errorf("cannot scan %T", src) -} - -// Value implements the database/sql/driver Valuer interface. 
-func (src Line) Value() (driver.Value, error) { - return EncodeValueText(src) -} diff --git a/vendor/github.com/jackc/pgtype/lseg.go b/vendor/github.com/jackc/pgtype/lseg.go deleted file mode 100644 index 894dae86..00000000 --- a/vendor/github.com/jackc/pgtype/lseg.go +++ /dev/null @@ -1,165 +0,0 @@ -package pgtype - -import ( - "database/sql/driver" - "encoding/binary" - "fmt" - "math" - "strconv" - "strings" - - "github.com/jackc/pgio" -) - -type Lseg struct { - P [2]Vec2 - Status Status -} - -func (dst *Lseg) Set(src interface{}) error { - return fmt.Errorf("cannot convert %v to Lseg", src) -} - -func (dst Lseg) Get() interface{} { - switch dst.Status { - case Present: - return dst - case Null: - return nil - default: - return dst.Status - } -} - -func (src *Lseg) AssignTo(dst interface{}) error { - return fmt.Errorf("cannot assign %v to %T", src, dst) -} - -func (dst *Lseg) DecodeText(ci *ConnInfo, src []byte) error { - if src == nil { - *dst = Lseg{Status: Null} - return nil - } - - if len(src) < 11 { - return fmt.Errorf("invalid length for Lseg: %v", len(src)) - } - - str := string(src[2:]) - - var end int - end = strings.IndexByte(str, ',') - - x1, err := strconv.ParseFloat(str[:end], 64) - if err != nil { - return err - } - - str = str[end+1:] - end = strings.IndexByte(str, ')') - - y1, err := strconv.ParseFloat(str[:end], 64) - if err != nil { - return err - } - - str = str[end+3:] - end = strings.IndexByte(str, ',') - - x2, err := strconv.ParseFloat(str[:end], 64) - if err != nil { - return err - } - - str = str[end+1 : len(str)-2] - - y2, err := strconv.ParseFloat(str, 64) - if err != nil { - return err - } - - *dst = Lseg{P: [2]Vec2{{x1, y1}, {x2, y2}}, Status: Present} - return nil -} - -func (dst *Lseg) DecodeBinary(ci *ConnInfo, src []byte) error { - if src == nil { - *dst = Lseg{Status: Null} - return nil - } - - if len(src) != 32 { - return fmt.Errorf("invalid length for Lseg: %v", len(src)) - } - - x1 := binary.BigEndian.Uint64(src) - y1 := 
binary.BigEndian.Uint64(src[8:]) - x2 := binary.BigEndian.Uint64(src[16:]) - y2 := binary.BigEndian.Uint64(src[24:]) - - *dst = Lseg{ - P: [2]Vec2{ - {math.Float64frombits(x1), math.Float64frombits(y1)}, - {math.Float64frombits(x2), math.Float64frombits(y2)}, - }, - Status: Present, - } - return nil -} - -func (src Lseg) EncodeText(ci *ConnInfo, buf []byte) ([]byte, error) { - switch src.Status { - case Null: - return nil, nil - case Undefined: - return nil, errUndefined - } - - buf = append(buf, fmt.Sprintf(`[(%s,%s),(%s,%s)]`, - strconv.FormatFloat(src.P[0].X, 'f', -1, 64), - strconv.FormatFloat(src.P[0].Y, 'f', -1, 64), - strconv.FormatFloat(src.P[1].X, 'f', -1, 64), - strconv.FormatFloat(src.P[1].Y, 'f', -1, 64), - )...) - - return buf, nil -} - -func (src Lseg) EncodeBinary(ci *ConnInfo, buf []byte) ([]byte, error) { - switch src.Status { - case Null: - return nil, nil - case Undefined: - return nil, errUndefined - } - - buf = pgio.AppendUint64(buf, math.Float64bits(src.P[0].X)) - buf = pgio.AppendUint64(buf, math.Float64bits(src.P[0].Y)) - buf = pgio.AppendUint64(buf, math.Float64bits(src.P[1].X)) - buf = pgio.AppendUint64(buf, math.Float64bits(src.P[1].Y)) - return buf, nil -} - -// Scan implements the database/sql Scanner interface. -func (dst *Lseg) Scan(src interface{}) error { - if src == nil { - *dst = Lseg{Status: Null} - return nil - } - - switch src := src.(type) { - case string: - return dst.DecodeText(nil, []byte(src)) - case []byte: - srcCopy := make([]byte, len(src)) - copy(srcCopy, src) - return dst.DecodeText(nil, srcCopy) - } - - return fmt.Errorf("cannot scan %T", src) -} - -// Value implements the database/sql/driver Valuer interface. 
-func (src Lseg) Value() (driver.Value, error) { - return EncodeValueText(src) -} diff --git a/vendor/github.com/jackc/pgtype/ltree.go b/vendor/github.com/jackc/pgtype/ltree.go deleted file mode 100644 index 8c8d4213..00000000 --- a/vendor/github.com/jackc/pgtype/ltree.go +++ /dev/null @@ -1,72 +0,0 @@ -package pgtype - -import ( - "database/sql/driver" - "fmt" -) - -type Ltree Text - -func (dst *Ltree) Set(src interface{}) error { - return (*Text)(dst).Set(src) -} - -func (dst Ltree) Get() interface{} { - return (Text)(dst).Get() -} - -func (src *Ltree) AssignTo(dst interface{}) error { - return (*Text)(src).AssignTo(dst) -} - -func (src Ltree) EncodeText(ci *ConnInfo, buf []byte) ([]byte, error) { - return (Text)(src).EncodeText(ci, buf) -} - -func (src Ltree) EncodeBinary(ci *ConnInfo, buf []byte) ([]byte, error) { - switch src.Status { - case Null: - return nil, nil - case Undefined: - return nil, errUndefined - } - buf = append(buf, 1) - return append(buf, src.String...), nil -} - -func (Ltree) PreferredResultFormat() int16 { - return TextFormatCode -} - -func (dst *Ltree) DecodeText(ci *ConnInfo, src []byte) error { - return (*Text)(dst).DecodeText(ci, src) -} - -func (dst *Ltree) DecodeBinary(ci *ConnInfo, src []byte) error { - if src == nil { - *dst = Ltree{Status: Null} - return nil - } - - // Get Ltree version, only 1 is allowed - version := src[0] - if version != 1 { - return fmt.Errorf("unsupported ltree version %d", version) - } - - ltreeStr := string(src[1:]) - *dst = Ltree{String: ltreeStr, Status: Present} - return nil -} - -func (Ltree) PreferredParamFormat() int16 { - return TextFormatCode -} - -func (dst *Ltree) Scan(src interface{}) error { - return (*Text)(dst).Scan(src) -} - -func (src Ltree) Value() (driver.Value, error) { - return (Text)(src).Value() -} diff --git a/vendor/github.com/jackc/pgtype/macaddr.go b/vendor/github.com/jackc/pgtype/macaddr.go deleted file mode 100644 index 1d3cfe7b..00000000 --- 
a/vendor/github.com/jackc/pgtype/macaddr.go +++ /dev/null @@ -1,173 +0,0 @@ -package pgtype - -import ( - "database/sql/driver" - "fmt" - "net" -) - -type Macaddr struct { - Addr net.HardwareAddr - Status Status -} - -func (dst *Macaddr) Set(src interface{}) error { - if src == nil { - *dst = Macaddr{Status: Null} - return nil - } - - if value, ok := src.(interface{ Get() interface{} }); ok { - value2 := value.Get() - if value2 != value { - return dst.Set(value2) - } - } - - switch value := src.(type) { - case net.HardwareAddr: - addr := make(net.HardwareAddr, len(value)) - copy(addr, value) - *dst = Macaddr{Addr: addr, Status: Present} - case string: - addr, err := net.ParseMAC(value) - if err != nil { - return err - } - *dst = Macaddr{Addr: addr, Status: Present} - case *net.HardwareAddr: - if value == nil { - *dst = Macaddr{Status: Null} - } else { - return dst.Set(*value) - } - case *string: - if value == nil { - *dst = Macaddr{Status: Null} - } else { - return dst.Set(*value) - } - default: - if originalSrc, ok := underlyingPtrType(src); ok { - return dst.Set(originalSrc) - } - return fmt.Errorf("cannot convert %v to Macaddr", value) - } - - return nil -} - -func (dst Macaddr) Get() interface{} { - switch dst.Status { - case Present: - return dst.Addr - case Null: - return nil - default: - return dst.Status - } -} - -func (src *Macaddr) AssignTo(dst interface{}) error { - switch src.Status { - case Present: - switch v := dst.(type) { - case *net.HardwareAddr: - *v = make(net.HardwareAddr, len(src.Addr)) - copy(*v, src.Addr) - return nil - case *string: - *v = src.Addr.String() - return nil - default: - if nextDst, retry := GetAssignToDstType(dst); retry { - return src.AssignTo(nextDst) - } - return fmt.Errorf("unable to assign to %T", dst) - } - case Null: - return NullAssignTo(dst) - } - - return fmt.Errorf("cannot decode %#v into %T", src, dst) -} - -func (dst *Macaddr) DecodeText(ci *ConnInfo, src []byte) error { - if src == nil { - *dst = Macaddr{Status: 
Null} - return nil - } - - addr, err := net.ParseMAC(string(src)) - if err != nil { - return err - } - - *dst = Macaddr{Addr: addr, Status: Present} - return nil -} - -func (dst *Macaddr) DecodeBinary(ci *ConnInfo, src []byte) error { - if src == nil { - *dst = Macaddr{Status: Null} - return nil - } - - if len(src) != 6 { - return fmt.Errorf("Received an invalid size for a macaddr: %d", len(src)) - } - - addr := make(net.HardwareAddr, 6) - copy(addr, src) - - *dst = Macaddr{Addr: addr, Status: Present} - - return nil -} - -func (src Macaddr) EncodeText(ci *ConnInfo, buf []byte) ([]byte, error) { - switch src.Status { - case Null: - return nil, nil - case Undefined: - return nil, errUndefined - } - - return append(buf, src.Addr.String()...), nil -} - -// EncodeBinary encodes src into w. -func (src Macaddr) EncodeBinary(ci *ConnInfo, buf []byte) ([]byte, error) { - switch src.Status { - case Null: - return nil, nil - case Undefined: - return nil, errUndefined - } - - return append(buf, src.Addr...), nil -} - -// Scan implements the database/sql Scanner interface. -func (dst *Macaddr) Scan(src interface{}) error { - if src == nil { - *dst = Macaddr{Status: Null} - return nil - } - - switch src := src.(type) { - case string: - return dst.DecodeText(nil, []byte(src)) - case []byte: - srcCopy := make([]byte, len(src)) - copy(srcCopy, src) - return dst.DecodeText(nil, srcCopy) - } - - return fmt.Errorf("cannot scan %T", src) -} - -// Value implements the database/sql/driver Valuer interface. -func (src Macaddr) Value() (driver.Value, error) { - return EncodeValueText(src) -} diff --git a/vendor/github.com/jackc/pgtype/macaddr_array.go b/vendor/github.com/jackc/pgtype/macaddr_array.go deleted file mode 100644 index bdb1f203..00000000 --- a/vendor/github.com/jackc/pgtype/macaddr_array.go +++ /dev/null @@ -1,518 +0,0 @@ -// Code generated by erb. DO NOT EDIT. 
- -package pgtype - -import ( - "database/sql/driver" - "encoding/binary" - "fmt" - "net" - "reflect" - - "github.com/jackc/pgio" -) - -type MacaddrArray struct { - Elements []Macaddr - Dimensions []ArrayDimension - Status Status -} - -func (dst *MacaddrArray) Set(src interface{}) error { - // untyped nil and typed nil interfaces are different - if src == nil { - *dst = MacaddrArray{Status: Null} - return nil - } - - if value, ok := src.(interface{ Get() interface{} }); ok { - value2 := value.Get() - if value2 != value { - return dst.Set(value2) - } - } - - // Attempt to match to select common types: - switch value := src.(type) { - - case []net.HardwareAddr: - if value == nil { - *dst = MacaddrArray{Status: Null} - } else if len(value) == 0 { - *dst = MacaddrArray{Status: Present} - } else { - elements := make([]Macaddr, len(value)) - for i := range value { - if err := elements[i].Set(value[i]); err != nil { - return err - } - } - *dst = MacaddrArray{ - Elements: elements, - Dimensions: []ArrayDimension{{Length: int32(len(elements)), LowerBound: 1}}, - Status: Present, - } - } - - case []*net.HardwareAddr: - if value == nil { - *dst = MacaddrArray{Status: Null} - } else if len(value) == 0 { - *dst = MacaddrArray{Status: Present} - } else { - elements := make([]Macaddr, len(value)) - for i := range value { - if err := elements[i].Set(value[i]); err != nil { - return err - } - } - *dst = MacaddrArray{ - Elements: elements, - Dimensions: []ArrayDimension{{Length: int32(len(elements)), LowerBound: 1}}, - Status: Present, - } - } - - case []Macaddr: - if value == nil { - *dst = MacaddrArray{Status: Null} - } else if len(value) == 0 { - *dst = MacaddrArray{Status: Present} - } else { - *dst = MacaddrArray{ - Elements: value, - Dimensions: []ArrayDimension{{Length: int32(len(value)), LowerBound: 1}}, - Status: Present, - } - } - default: - // Fallback to reflection if an optimised match was not found. 
- // The reflection is necessary for arrays and multidimensional slices, - // but it comes with a 20-50% performance penalty for large arrays/slices - reflectedValue := reflect.ValueOf(src) - if !reflectedValue.IsValid() || reflectedValue.IsZero() { - *dst = MacaddrArray{Status: Null} - return nil - } - - dimensions, elementsLength, ok := findDimensionsFromValue(reflectedValue, nil, 0) - if !ok { - return fmt.Errorf("cannot find dimensions of %v for MacaddrArray", src) - } - if elementsLength == 0 { - *dst = MacaddrArray{Status: Present} - return nil - } - if len(dimensions) == 0 { - if originalSrc, ok := underlyingSliceType(src); ok { - return dst.Set(originalSrc) - } - return fmt.Errorf("cannot convert %v to MacaddrArray", src) - } - - *dst = MacaddrArray{ - Elements: make([]Macaddr, elementsLength), - Dimensions: dimensions, - Status: Present, - } - elementCount, err := dst.setRecursive(reflectedValue, 0, 0) - if err != nil { - // Maybe the target was one dimension too far, try again: - if len(dst.Dimensions) > 1 { - dst.Dimensions = dst.Dimensions[:len(dst.Dimensions)-1] - elementsLength = 0 - for _, dim := range dst.Dimensions { - if elementsLength == 0 { - elementsLength = int(dim.Length) - } else { - elementsLength *= int(dim.Length) - } - } - dst.Elements = make([]Macaddr, elementsLength) - elementCount, err = dst.setRecursive(reflectedValue, 0, 0) - if err != nil { - return err - } - } else { - return err - } - } - if elementCount != len(dst.Elements) { - return fmt.Errorf("cannot convert %v to MacaddrArray, expected %d dst.Elements, but got %d instead", src, len(dst.Elements), elementCount) - } - } - - return nil -} - -func (dst *MacaddrArray) setRecursive(value reflect.Value, index, dimension int) (int, error) { - switch value.Kind() { - case reflect.Array: - fallthrough - case reflect.Slice: - if len(dst.Dimensions) == dimension { - break - } - - valueLen := value.Len() - if int32(valueLen) != dst.Dimensions[dimension].Length { - return 0, 
fmt.Errorf("multidimensional arrays must have array expressions with matching dimensions") - } - for i := 0; i < valueLen; i++ { - var err error - index, err = dst.setRecursive(value.Index(i), index, dimension+1) - if err != nil { - return 0, err - } - } - - return index, nil - } - if !value.CanInterface() { - return 0, fmt.Errorf("cannot convert all values to MacaddrArray") - } - if err := dst.Elements[index].Set(value.Interface()); err != nil { - return 0, fmt.Errorf("%v in MacaddrArray", err) - } - index++ - - return index, nil -} - -func (dst MacaddrArray) Get() interface{} { - switch dst.Status { - case Present: - return dst - case Null: - return nil - default: - return dst.Status - } -} - -func (src *MacaddrArray) AssignTo(dst interface{}) error { - switch src.Status { - case Present: - if len(src.Dimensions) <= 1 { - // Attempt to match to select common types: - switch v := dst.(type) { - - case *[]net.HardwareAddr: - *v = make([]net.HardwareAddr, len(src.Elements)) - for i := range src.Elements { - if err := src.Elements[i].AssignTo(&((*v)[i])); err != nil { - return err - } - } - return nil - - case *[]*net.HardwareAddr: - *v = make([]*net.HardwareAddr, len(src.Elements)) - for i := range src.Elements { - if err := src.Elements[i].AssignTo(&((*v)[i])); err != nil { - return err - } - } - return nil - - } - } - - // Try to convert to something AssignTo can use directly. - if nextDst, retry := GetAssignToDstType(dst); retry { - return src.AssignTo(nextDst) - } - - // Fallback to reflection if an optimised match was not found. 
- // The reflection is necessary for arrays and multidimensional slices, - // but it comes with a 20-50% performance penalty for large arrays/slices - value := reflect.ValueOf(dst) - if value.Kind() == reflect.Ptr { - value = value.Elem() - } - - switch value.Kind() { - case reflect.Array, reflect.Slice: - default: - return fmt.Errorf("cannot assign %T to %T", src, dst) - } - - if len(src.Elements) == 0 { - if value.Kind() == reflect.Slice { - value.Set(reflect.MakeSlice(value.Type(), 0, 0)) - return nil - } - } - - elementCount, err := src.assignToRecursive(value, 0, 0) - if err != nil { - return err - } - if elementCount != len(src.Elements) { - return fmt.Errorf("cannot assign %v, needed to assign %d elements, but only assigned %d", dst, len(src.Elements), elementCount) - } - - return nil - case Null: - return NullAssignTo(dst) - } - - return fmt.Errorf("cannot decode %#v into %T", src, dst) -} - -func (src *MacaddrArray) assignToRecursive(value reflect.Value, index, dimension int) (int, error) { - switch kind := value.Kind(); kind { - case reflect.Array: - fallthrough - case reflect.Slice: - if len(src.Dimensions) == dimension { - break - } - - length := int(src.Dimensions[dimension].Length) - if reflect.Array == kind { - typ := value.Type() - if typ.Len() != length { - return 0, fmt.Errorf("expected size %d array, but %s has size %d array", length, typ, typ.Len()) - } - value.Set(reflect.New(typ).Elem()) - } else { - value.Set(reflect.MakeSlice(value.Type(), length, length)) - } - - var err error - for i := 0; i < length; i++ { - index, err = src.assignToRecursive(value.Index(i), index, dimension+1) - if err != nil { - return 0, err - } - } - - return index, nil - } - if len(src.Dimensions) != dimension { - return 0, fmt.Errorf("incorrect dimensions, expected %d, found %d", len(src.Dimensions), dimension) - } - if !value.CanAddr() { - return 0, fmt.Errorf("cannot assign all values from MacaddrArray") - } - addr := value.Addr() - if !addr.CanInterface() { - 
return 0, fmt.Errorf("cannot assign all values from MacaddrArray") - } - if err := src.Elements[index].AssignTo(addr.Interface()); err != nil { - return 0, err - } - index++ - return index, nil -} - -func (dst *MacaddrArray) DecodeText(ci *ConnInfo, src []byte) error { - if src == nil { - *dst = MacaddrArray{Status: Null} - return nil - } - - uta, err := ParseUntypedTextArray(string(src)) - if err != nil { - return err - } - - var elements []Macaddr - - if len(uta.Elements) > 0 { - elements = make([]Macaddr, len(uta.Elements)) - - for i, s := range uta.Elements { - var elem Macaddr - var elemSrc []byte - if s != "NULL" || uta.Quoted[i] { - elemSrc = []byte(s) - } - err = elem.DecodeText(ci, elemSrc) - if err != nil { - return err - } - - elements[i] = elem - } - } - - *dst = MacaddrArray{Elements: elements, Dimensions: uta.Dimensions, Status: Present} - - return nil -} - -func (dst *MacaddrArray) DecodeBinary(ci *ConnInfo, src []byte) error { - if src == nil { - *dst = MacaddrArray{Status: Null} - return nil - } - - var arrayHeader ArrayHeader - rp, err := arrayHeader.DecodeBinary(ci, src) - if err != nil { - return err - } - - if len(arrayHeader.Dimensions) == 0 { - *dst = MacaddrArray{Dimensions: arrayHeader.Dimensions, Status: Present} - return nil - } - - elementCount := arrayHeader.Dimensions[0].Length - for _, d := range arrayHeader.Dimensions[1:] { - elementCount *= d.Length - } - - elements := make([]Macaddr, elementCount) - - for i := range elements { - elemLen := int(int32(binary.BigEndian.Uint32(src[rp:]))) - rp += 4 - var elemSrc []byte - if elemLen >= 0 { - elemSrc = src[rp : rp+elemLen] - rp += elemLen - } - err = elements[i].DecodeBinary(ci, elemSrc) - if err != nil { - return err - } - } - - *dst = MacaddrArray{Elements: elements, Dimensions: arrayHeader.Dimensions, Status: Present} - return nil -} - -func (src MacaddrArray) EncodeText(ci *ConnInfo, buf []byte) ([]byte, error) { - switch src.Status { - case Null: - return nil, nil - case Undefined: 
- return nil, errUndefined - } - - if len(src.Dimensions) == 0 { - return append(buf, '{', '}'), nil - } - - buf = EncodeTextArrayDimensions(buf, src.Dimensions) - - // dimElemCounts is the multiples of elements that each array lies on. For - // example, a single dimension array of length 4 would have a dimElemCounts of - // [4]. A multi-dimensional array of lengths [3,5,2] would have a - // dimElemCounts of [30,10,2]. This is used to simplify when to render a '{' - // or '}'. - dimElemCounts := make([]int, len(src.Dimensions)) - dimElemCounts[len(src.Dimensions)-1] = int(src.Dimensions[len(src.Dimensions)-1].Length) - for i := len(src.Dimensions) - 2; i > -1; i-- { - dimElemCounts[i] = int(src.Dimensions[i].Length) * dimElemCounts[i+1] - } - - inElemBuf := make([]byte, 0, 32) - for i, elem := range src.Elements { - if i > 0 { - buf = append(buf, ',') - } - - for _, dec := range dimElemCounts { - if i%dec == 0 { - buf = append(buf, '{') - } - } - - elemBuf, err := elem.EncodeText(ci, inElemBuf) - if err != nil { - return nil, err - } - if elemBuf == nil { - buf = append(buf, `NULL`...) - } else { - buf = append(buf, QuoteArrayElementIfNeeded(string(elemBuf))...) 
- } - - for _, dec := range dimElemCounts { - if (i+1)%dec == 0 { - buf = append(buf, '}') - } - } - } - - return buf, nil -} - -func (src MacaddrArray) EncodeBinary(ci *ConnInfo, buf []byte) ([]byte, error) { - switch src.Status { - case Null: - return nil, nil - case Undefined: - return nil, errUndefined - } - - arrayHeader := ArrayHeader{ - Dimensions: src.Dimensions, - } - - if dt, ok := ci.DataTypeForName("macaddr"); ok { - arrayHeader.ElementOID = int32(dt.OID) - } else { - return nil, fmt.Errorf("unable to find oid for type name %v", "macaddr") - } - - for i := range src.Elements { - if src.Elements[i].Status == Null { - arrayHeader.ContainsNull = true - break - } - } - - buf = arrayHeader.EncodeBinary(ci, buf) - - for i := range src.Elements { - sp := len(buf) - buf = pgio.AppendInt32(buf, -1) - - elemBuf, err := src.Elements[i].EncodeBinary(ci, buf) - if err != nil { - return nil, err - } - if elemBuf != nil { - buf = elemBuf - pgio.SetInt32(buf[sp:], int32(len(buf[sp:])-4)) - } - } - - return buf, nil -} - -// Scan implements the database/sql Scanner interface. -func (dst *MacaddrArray) Scan(src interface{}) error { - if src == nil { - return dst.DecodeText(nil, nil) - } - - switch src := src.(type) { - case string: - return dst.DecodeText(nil, []byte(src)) - case []byte: - srcCopy := make([]byte, len(src)) - copy(srcCopy, src) - return dst.DecodeText(nil, srcCopy) - } - - return fmt.Errorf("cannot scan %T", src) -} - -// Value implements the database/sql/driver Valuer interface. 
-func (src MacaddrArray) Value() (driver.Value, error) { - buf, err := src.EncodeText(nil, nil) - if err != nil { - return nil, err - } - if buf == nil { - return nil, nil - } - - return string(buf), nil -} diff --git a/vendor/github.com/jackc/pgtype/multirange.go b/vendor/github.com/jackc/pgtype/multirange.go deleted file mode 100644 index beb11f70..00000000 --- a/vendor/github.com/jackc/pgtype/multirange.go +++ /dev/null @@ -1,83 +0,0 @@ -package pgtype - -import ( - "bytes" - "fmt" -) - -type UntypedTextMultirange struct { - Elements []string -} - -func ParseUntypedTextMultirange(src string) (*UntypedTextMultirange, error) { - utmr := &UntypedTextMultirange{} - utmr.Elements = make([]string, 0) - - buf := bytes.NewBufferString(src) - - skipWhitespace(buf) - - r, _, err := buf.ReadRune() - if err != nil { - return nil, fmt.Errorf("invalid array: %v", err) - } - - if r != '{' { - return nil, fmt.Errorf("invalid multirange, expected '{': %v", err) - } - -parseValueLoop: - for { - r, _, err = buf.ReadRune() - if err != nil { - return nil, fmt.Errorf("invalid multirange: %v", err) - } - - switch r { - case ',': // skip range separator - case '}': - break parseValueLoop - default: - buf.UnreadRune() - value, err := parseRange(buf) - if err != nil { - return nil, fmt.Errorf("invalid multirange value: %v", err) - } - utmr.Elements = append(utmr.Elements, value) - } - } - - skipWhitespace(buf) - - if buf.Len() > 0 { - return nil, fmt.Errorf("unexpected trailing data: %v", buf.String()) - } - - return utmr, nil - -} - -func parseRange(buf *bytes.Buffer) (string, error) { - - s := &bytes.Buffer{} - - boundSepRead := false - for { - r, _, err := buf.ReadRune() - if err != nil { - return "", err - } - - switch r { - case ',', '}': - if r == ',' && !boundSepRead { - boundSepRead = true - break - } - buf.UnreadRune() - return s.String(), nil - } - - s.WriteRune(r) - } -} diff --git a/vendor/github.com/jackc/pgtype/name.go b/vendor/github.com/jackc/pgtype/name.go deleted file 
mode 100644 index 7ce8d25e..00000000 --- a/vendor/github.com/jackc/pgtype/name.go +++ /dev/null @@ -1,58 +0,0 @@ -package pgtype - -import ( - "database/sql/driver" -) - -// Name is a type used for PostgreSQL's special 63-byte -// name data type, used for identifiers like table names. -// The pg_class.relname column is a good example of where the -// name data type is used. -// -// Note that the underlying Go data type of pgx.Name is string, -// so there is no way to enforce the 63-byte length. Inputting -// a longer name into PostgreSQL will result in silent truncation -// to 63 bytes. -// -// Also, if you have custom-compiled PostgreSQL and set -// NAMEDATALEN to a different value, obviously that number of -// bytes applies, rather than the default 63. -type Name Text - -func (dst *Name) Set(src interface{}) error { - return (*Text)(dst).Set(src) -} - -func (dst Name) Get() interface{} { - return (Text)(dst).Get() -} - -func (src *Name) AssignTo(dst interface{}) error { - return (*Text)(src).AssignTo(dst) -} - -func (dst *Name) DecodeText(ci *ConnInfo, src []byte) error { - return (*Text)(dst).DecodeText(ci, src) -} - -func (dst *Name) DecodeBinary(ci *ConnInfo, src []byte) error { - return (*Text)(dst).DecodeBinary(ci, src) -} - -func (src Name) EncodeText(ci *ConnInfo, buf []byte) ([]byte, error) { - return (Text)(src).EncodeText(ci, buf) -} - -func (src Name) EncodeBinary(ci *ConnInfo, buf []byte) ([]byte, error) { - return (Text)(src).EncodeBinary(ci, buf) -} - -// Scan implements the database/sql Scanner interface. -func (dst *Name) Scan(src interface{}) error { - return (*Text)(dst).Scan(src) -} - -// Value implements the database/sql/driver Valuer interface. 
-func (src Name) Value() (driver.Value, error) { - return (Text)(src).Value() -} diff --git a/vendor/github.com/jackc/pgtype/num_multirange.go b/vendor/github.com/jackc/pgtype/num_multirange.go deleted file mode 100644 index cbabc8ac..00000000 --- a/vendor/github.com/jackc/pgtype/num_multirange.go +++ /dev/null @@ -1,239 +0,0 @@ -package pgtype - -import ( - "database/sql/driver" - "encoding/binary" - "fmt" - - "github.com/jackc/pgio" -) - -type Nummultirange struct { - Ranges []Numrange - Status Status -} - -func (dst *Nummultirange) Set(src interface{}) error { - //untyped nil and typed nil interfaces are different - if src == nil { - *dst = Nummultirange{Status: Null} - return nil - } - - switch value := src.(type) { - case Nummultirange: - *dst = value - case *Nummultirange: - *dst = *value - case string: - return dst.DecodeText(nil, []byte(value)) - case []Numrange: - if value == nil { - *dst = Nummultirange{Status: Null} - } else if len(value) == 0 { - *dst = Nummultirange{Status: Present} - } else { - elements := make([]Numrange, len(value)) - for i := range value { - if err := elements[i].Set(value[i]); err != nil { - return err - } - } - *dst = Nummultirange{ - Ranges: elements, - Status: Present, - } - } - case []*Numrange: - if value == nil { - *dst = Nummultirange{Status: Null} - } else if len(value) == 0 { - *dst = Nummultirange{Status: Present} - } else { - elements := make([]Numrange, len(value)) - for i := range value { - if err := elements[i].Set(value[i]); err != nil { - return err - } - } - *dst = Nummultirange{ - Ranges: elements, - Status: Present, - } - } - default: - return fmt.Errorf("cannot convert %v to Nummultirange", src) - } - - return nil - -} - -func (dst Nummultirange) Get() interface{} { - switch dst.Status { - case Present: - return dst - case Null: - return nil - default: - return dst.Status - } -} - -func (src *Nummultirange) AssignTo(dst interface{}) error { - return fmt.Errorf("cannot assign %v to %T", src, dst) -} - -func (dst 
*Nummultirange) DecodeText(ci *ConnInfo, src []byte) error { - if src == nil { - *dst = Nummultirange{Status: Null} - return nil - } - - utmr, err := ParseUntypedTextMultirange(string(src)) - if err != nil { - return err - } - - var elements []Numrange - - if len(utmr.Elements) > 0 { - elements = make([]Numrange, len(utmr.Elements)) - - for i, s := range utmr.Elements { - var elem Numrange - - elemSrc := []byte(s) - - err = elem.DecodeText(ci, elemSrc) - if err != nil { - return err - } - - elements[i] = elem - } - } - - *dst = Nummultirange{Ranges: elements, Status: Present} - - return nil -} - -func (dst *Nummultirange) DecodeBinary(ci *ConnInfo, src []byte) error { - if src == nil { - *dst = Nummultirange{Status: Null} - return nil - } - - rp := 0 - - numElems := int(binary.BigEndian.Uint32(src[rp:])) - rp += 4 - - if numElems == 0 { - *dst = Nummultirange{Status: Present} - return nil - } - - elements := make([]Numrange, numElems) - - for i := range elements { - elemLen := int(int32(binary.BigEndian.Uint32(src[rp:]))) - rp += 4 - var elemSrc []byte - if elemLen >= 0 { - elemSrc = src[rp : rp+elemLen] - rp += elemLen - } - err := elements[i].DecodeBinary(ci, elemSrc) - if err != nil { - return err - } - } - - *dst = Nummultirange{Ranges: elements, Status: Present} - return nil -} - -func (src Nummultirange) EncodeText(ci *ConnInfo, buf []byte) ([]byte, error) { - switch src.Status { - case Null: - return nil, nil - case Undefined: - return nil, errUndefined - } - - buf = append(buf, '{') - - inElemBuf := make([]byte, 0, 32) - for i, elem := range src.Ranges { - if i > 0 { - buf = append(buf, ',') - } - - elemBuf, err := elem.EncodeText(ci, inElemBuf) - if err != nil { - return nil, err - } - if elemBuf == nil { - return nil, fmt.Errorf("multi-range does not allow null range") - } else { - buf = append(buf, string(elemBuf)...) 
- } - - } - - buf = append(buf, '}') - - return buf, nil -} - -func (src Nummultirange) EncodeBinary(ci *ConnInfo, buf []byte) ([]byte, error) { - switch src.Status { - case Null: - return nil, nil - case Undefined: - return nil, errUndefined - } - - buf = pgio.AppendInt32(buf, int32(len(src.Ranges))) - - for i := range src.Ranges { - sp := len(buf) - buf = pgio.AppendInt32(buf, -1) - - elemBuf, err := src.Ranges[i].EncodeBinary(ci, buf) - if err != nil { - return nil, err - } - if elemBuf != nil { - buf = elemBuf - pgio.SetInt32(buf[sp:], int32(len(buf[sp:])-4)) - } - } - - return buf, nil -} - -// Scan implements the database/sql Scanner interface. -func (dst *Nummultirange) Scan(src interface{}) error { - if src == nil { - return dst.DecodeText(nil, nil) - } - - switch src := src.(type) { - case string: - return dst.DecodeText(nil, []byte(src)) - case []byte: - srcCopy := make([]byte, len(src)) - copy(srcCopy, src) - return dst.DecodeText(nil, srcCopy) - } - - return fmt.Errorf("cannot scan %T", src) -} - -// Value implements the database/sql/driver Valuer interface. 
-func (src Nummultirange) Value() (driver.Value, error) { - return EncodeValueText(src) -} diff --git a/vendor/github.com/jackc/pgtype/numeric.go b/vendor/github.com/jackc/pgtype/numeric.go deleted file mode 100644 index 1f32b36b..00000000 --- a/vendor/github.com/jackc/pgtype/numeric.go +++ /dev/null @@ -1,853 +0,0 @@ -package pgtype - -import ( - "bytes" - "database/sql/driver" - "encoding/binary" - "fmt" - "math" - "math/big" - "strconv" - "strings" - - "github.com/jackc/pgio" -) - -// PostgreSQL internal numeric storage uses 16-bit "digits" with base of 10,000 -const nbase = 10000 - -const ( - pgNumericNaN = 0x00000000c0000000 - pgNumericNaNSign = 0xc000 - - pgNumericPosInf = 0x00000000d0000000 - pgNumericPosInfSign = 0xd000 - - pgNumericNegInf = 0x00000000f0000000 - pgNumericNegInfSign = 0xf000 -) - -var big0 *big.Int = big.NewInt(0) -var big1 *big.Int = big.NewInt(1) -var big10 *big.Int = big.NewInt(10) -var big100 *big.Int = big.NewInt(100) -var big1000 *big.Int = big.NewInt(1000) - -var bigMaxInt8 *big.Int = big.NewInt(math.MaxInt8) -var bigMinInt8 *big.Int = big.NewInt(math.MinInt8) -var bigMaxInt16 *big.Int = big.NewInt(math.MaxInt16) -var bigMinInt16 *big.Int = big.NewInt(math.MinInt16) -var bigMaxInt32 *big.Int = big.NewInt(math.MaxInt32) -var bigMinInt32 *big.Int = big.NewInt(math.MinInt32) -var bigMaxInt64 *big.Int = big.NewInt(math.MaxInt64) -var bigMinInt64 *big.Int = big.NewInt(math.MinInt64) -var bigMaxInt *big.Int = big.NewInt(int64(maxInt)) -var bigMinInt *big.Int = big.NewInt(int64(minInt)) - -var bigMaxUint8 *big.Int = big.NewInt(math.MaxUint8) -var bigMaxUint16 *big.Int = big.NewInt(math.MaxUint16) -var bigMaxUint32 *big.Int = big.NewInt(math.MaxUint32) -var bigMaxUint64 *big.Int = (&big.Int{}).SetUint64(uint64(math.MaxUint64)) -var bigMaxUint *big.Int = (&big.Int{}).SetUint64(uint64(maxUint)) - -var bigNBase *big.Int = big.NewInt(nbase) -var bigNBaseX2 *big.Int = big.NewInt(nbase * nbase) -var bigNBaseX3 *big.Int = big.NewInt(nbase * nbase * 
nbase) -var bigNBaseX4 *big.Int = big.NewInt(nbase * nbase * nbase * nbase) - -type Numeric struct { - Int *big.Int - Exp int32 - Status Status - NaN bool - InfinityModifier InfinityModifier -} - -func (dst *Numeric) Set(src interface{}) error { - if src == nil { - *dst = Numeric{Status: Null} - return nil - } - - if value, ok := src.(interface{ Get() interface{} }); ok { - value2 := value.Get() - if value2 != value { - return dst.Set(value2) - } - } - - switch value := src.(type) { - case float32: - if math.IsNaN(float64(value)) { - *dst = Numeric{Status: Present, NaN: true} - return nil - } else if math.IsInf(float64(value), 1) { - *dst = Numeric{Status: Present, InfinityModifier: Infinity} - return nil - } else if math.IsInf(float64(value), -1) { - *dst = Numeric{Status: Present, InfinityModifier: NegativeInfinity} - return nil - } - num, exp, err := parseNumericString(strconv.FormatFloat(float64(value), 'f', -1, 64)) - if err != nil { - return err - } - *dst = Numeric{Int: num, Exp: exp, Status: Present} - case float64: - if math.IsNaN(value) { - *dst = Numeric{Status: Present, NaN: true} - return nil - } else if math.IsInf(value, 1) { - *dst = Numeric{Status: Present, InfinityModifier: Infinity} - return nil - } else if math.IsInf(value, -1) { - *dst = Numeric{Status: Present, InfinityModifier: NegativeInfinity} - return nil - } - num, exp, err := parseNumericString(strconv.FormatFloat(value, 'f', -1, 64)) - if err != nil { - return err - } - *dst = Numeric{Int: num, Exp: exp, Status: Present} - case int8: - *dst = Numeric{Int: big.NewInt(int64(value)), Status: Present} - case uint8: - *dst = Numeric{Int: big.NewInt(int64(value)), Status: Present} - case int16: - *dst = Numeric{Int: big.NewInt(int64(value)), Status: Present} - case uint16: - *dst = Numeric{Int: big.NewInt(int64(value)), Status: Present} - case int32: - *dst = Numeric{Int: big.NewInt(int64(value)), Status: Present} - case uint32: - *dst = Numeric{Int: big.NewInt(int64(value)), Status: Present} 
- case int64: - *dst = Numeric{Int: big.NewInt(value), Status: Present} - case uint64: - *dst = Numeric{Int: (&big.Int{}).SetUint64(value), Status: Present} - case int: - *dst = Numeric{Int: big.NewInt(int64(value)), Status: Present} - case uint: - *dst = Numeric{Int: (&big.Int{}).SetUint64(uint64(value)), Status: Present} - case string: - num, exp, err := parseNumericString(value) - if err != nil { - return err - } - *dst = Numeric{Int: num, Exp: exp, Status: Present} - case *float64: - if value == nil { - *dst = Numeric{Status: Null} - } else { - return dst.Set(*value) - } - case *float32: - if value == nil { - *dst = Numeric{Status: Null} - } else { - return dst.Set(*value) - } - case *int8: - if value == nil { - *dst = Numeric{Status: Null} - } else { - return dst.Set(*value) - } - case *uint8: - if value == nil { - *dst = Numeric{Status: Null} - } else { - return dst.Set(*value) - } - case *int16: - if value == nil { - *dst = Numeric{Status: Null} - } else { - return dst.Set(*value) - } - case *uint16: - if value == nil { - *dst = Numeric{Status: Null} - } else { - return dst.Set(*value) - } - case *int32: - if value == nil { - *dst = Numeric{Status: Null} - } else { - return dst.Set(*value) - } - case *uint32: - if value == nil { - *dst = Numeric{Status: Null} - } else { - return dst.Set(*value) - } - case *int64: - if value == nil { - *dst = Numeric{Status: Null} - } else { - return dst.Set(*value) - } - case *uint64: - if value == nil { - *dst = Numeric{Status: Null} - } else { - return dst.Set(*value) - } - case *int: - if value == nil { - *dst = Numeric{Status: Null} - } else { - return dst.Set(*value) - } - case *uint: - if value == nil { - *dst = Numeric{Status: Null} - } else { - return dst.Set(*value) - } - case *string: - if value == nil { - *dst = Numeric{Status: Null} - } else { - return dst.Set(*value) - } - case InfinityModifier: - *dst = Numeric{InfinityModifier: value, Status: Present} - default: - if originalSrc, ok := 
underlyingNumberType(src); ok { - return dst.Set(originalSrc) - } - return fmt.Errorf("cannot convert %v to Numeric", value) - } - - return nil -} - -func (dst Numeric) Get() interface{} { - switch dst.Status { - case Present: - if dst.InfinityModifier != None { - return dst.InfinityModifier - } - return dst - case Null: - return nil - default: - return dst.Status - } -} - -func (src *Numeric) AssignTo(dst interface{}) error { - switch src.Status { - case Present: - switch v := dst.(type) { - case *float32: - f, err := src.toFloat64() - if err != nil { - return err - } - return float64AssignTo(f, src.Status, dst) - case *float64: - f, err := src.toFloat64() - if err != nil { - return err - } - return float64AssignTo(f, src.Status, dst) - case *int: - normalizedInt, err := src.toBigInt() - if err != nil { - return err - } - if normalizedInt.Cmp(bigMaxInt) > 0 { - return fmt.Errorf("%v is greater than maximum value for %T", normalizedInt, *v) - } - if normalizedInt.Cmp(bigMinInt) < 0 { - return fmt.Errorf("%v is less than minimum value for %T", normalizedInt, *v) - } - *v = int(normalizedInt.Int64()) - case *int8: - normalizedInt, err := src.toBigInt() - if err != nil { - return err - } - if normalizedInt.Cmp(bigMaxInt8) > 0 { - return fmt.Errorf("%v is greater than maximum value for %T", normalizedInt, *v) - } - if normalizedInt.Cmp(bigMinInt8) < 0 { - return fmt.Errorf("%v is less than minimum value for %T", normalizedInt, *v) - } - *v = int8(normalizedInt.Int64()) - case *int16: - normalizedInt, err := src.toBigInt() - if err != nil { - return err - } - if normalizedInt.Cmp(bigMaxInt16) > 0 { - return fmt.Errorf("%v is greater than maximum value for %T", normalizedInt, *v) - } - if normalizedInt.Cmp(bigMinInt16) < 0 { - return fmt.Errorf("%v is less than minimum value for %T", normalizedInt, *v) - } - *v = int16(normalizedInt.Int64()) - case *int32: - normalizedInt, err := src.toBigInt() - if err != nil { - return err - } - if normalizedInt.Cmp(bigMaxInt32) > 0 { 
- return fmt.Errorf("%v is greater than maximum value for %T", normalizedInt, *v) - } - if normalizedInt.Cmp(bigMinInt32) < 0 { - return fmt.Errorf("%v is less than minimum value for %T", normalizedInt, *v) - } - *v = int32(normalizedInt.Int64()) - case *int64: - normalizedInt, err := src.toBigInt() - if err != nil { - return err - } - if normalizedInt.Cmp(bigMaxInt64) > 0 { - return fmt.Errorf("%v is greater than maximum value for %T", normalizedInt, *v) - } - if normalizedInt.Cmp(bigMinInt64) < 0 { - return fmt.Errorf("%v is less than minimum value for %T", normalizedInt, *v) - } - *v = normalizedInt.Int64() - case *uint: - normalizedInt, err := src.toBigInt() - if err != nil { - return err - } - if normalizedInt.Cmp(big0) < 0 { - return fmt.Errorf("%d is less than zero for %T", normalizedInt, *v) - } else if normalizedInt.Cmp(bigMaxUint) > 0 { - return fmt.Errorf("%d is greater than maximum value for %T", normalizedInt, *v) - } - *v = uint(normalizedInt.Uint64()) - case *uint8: - normalizedInt, err := src.toBigInt() - if err != nil { - return err - } - if normalizedInt.Cmp(big0) < 0 { - return fmt.Errorf("%d is less than zero for %T", normalizedInt, *v) - } else if normalizedInt.Cmp(bigMaxUint8) > 0 { - return fmt.Errorf("%d is greater than maximum value for %T", normalizedInt, *v) - } - *v = uint8(normalizedInt.Uint64()) - case *uint16: - normalizedInt, err := src.toBigInt() - if err != nil { - return err - } - if normalizedInt.Cmp(big0) < 0 { - return fmt.Errorf("%d is less than zero for %T", normalizedInt, *v) - } else if normalizedInt.Cmp(bigMaxUint16) > 0 { - return fmt.Errorf("%d is greater than maximum value for %T", normalizedInt, *v) - } - *v = uint16(normalizedInt.Uint64()) - case *uint32: - normalizedInt, err := src.toBigInt() - if err != nil { - return err - } - if normalizedInt.Cmp(big0) < 0 { - return fmt.Errorf("%d is less than zero for %T", normalizedInt, *v) - } else if normalizedInt.Cmp(bigMaxUint32) > 0 { - return fmt.Errorf("%d is greater 
than maximum value for %T", normalizedInt, *v) - } - *v = uint32(normalizedInt.Uint64()) - case *uint64: - normalizedInt, err := src.toBigInt() - if err != nil { - return err - } - if normalizedInt.Cmp(big0) < 0 { - return fmt.Errorf("%d is less than zero for %T", normalizedInt, *v) - } else if normalizedInt.Cmp(bigMaxUint64) > 0 { - return fmt.Errorf("%d is greater than maximum value for %T", normalizedInt, *v) - } - *v = normalizedInt.Uint64() - case *big.Rat: - rat, err := src.toBigRat() - if err != nil { - return err - } - v.Set(rat) - case *string: - buf, err := encodeNumericText(*src, nil) - if err != nil { - return err - } - *v = string(buf) - default: - if nextDst, retry := GetAssignToDstType(dst); retry { - return src.AssignTo(nextDst) - } - return fmt.Errorf("unable to assign to %T", dst) - } - case Null: - return NullAssignTo(dst) - } - - return nil -} - -func (dst *Numeric) toBigInt() (*big.Int, error) { - if dst.Exp == 0 { - return dst.Int, nil - } - - num := &big.Int{} - num.Set(dst.Int) - if dst.Exp > 0 { - mul := &big.Int{} - mul.Exp(big10, big.NewInt(int64(dst.Exp)), nil) - num.Mul(num, mul) - return num, nil - } - - div := &big.Int{} - div.Exp(big10, big.NewInt(int64(-dst.Exp)), nil) - remainder := &big.Int{} - num.DivMod(num, div, remainder) - if remainder.Cmp(big0) != 0 { - return nil, fmt.Errorf("cannot convert %v to integer", dst) - } - return num, nil -} - -func (dst *Numeric) toBigRat() (*big.Rat, error) { - if dst.NaN { - return nil, fmt.Errorf("%v is not a number", dst) - } else if dst.InfinityModifier == Infinity { - return nil, fmt.Errorf("%v is infinity", dst) - } else if dst.InfinityModifier == NegativeInfinity { - return nil, fmt.Errorf("%v is -infinity", dst) - } - - num := new(big.Rat).SetInt(dst.Int) - if dst.Exp > 0 { - mul := new(big.Int).Exp(big10, big.NewInt(int64(dst.Exp)), nil) - num.Mul(num, new(big.Rat).SetInt(mul)) - } else if dst.Exp < 0 { - mul := new(big.Int).Exp(big10, big.NewInt(int64(-dst.Exp)), nil) - num.Quo(num, 
new(big.Rat).SetInt(mul)) - } - return num, nil -} - -func (src *Numeric) toFloat64() (float64, error) { - if src.NaN { - return math.NaN(), nil - } else if src.InfinityModifier == Infinity { - return math.Inf(1), nil - } else if src.InfinityModifier == NegativeInfinity { - return math.Inf(-1), nil - } - - buf := make([]byte, 0, 32) - - buf = append(buf, src.Int.String()...) - buf = append(buf, 'e') - buf = append(buf, strconv.FormatInt(int64(src.Exp), 10)...) - - f, err := strconv.ParseFloat(string(buf), 64) - if err != nil { - return 0, err - } - return f, nil -} - -func (dst *Numeric) DecodeText(ci *ConnInfo, src []byte) error { - if src == nil { - *dst = Numeric{Status: Null} - return nil - } - - if string(src) == "NaN" { - *dst = Numeric{Status: Present, NaN: true} - return nil - } else if string(src) == "Infinity" { - *dst = Numeric{Status: Present, InfinityModifier: Infinity} - return nil - } else if string(src) == "-Infinity" { - *dst = Numeric{Status: Present, InfinityModifier: NegativeInfinity} - return nil - } - - num, exp, err := parseNumericString(string(src)) - if err != nil { - return err - } - - *dst = Numeric{Int: num, Exp: exp, Status: Present} - return nil -} - -func parseNumericString(str string) (n *big.Int, exp int32, err error) { - parts := strings.SplitN(str, ".", 2) - digits := strings.Join(parts, "") - - if len(parts) > 1 { - exp = int32(-len(parts[1])) - } else { - for len(digits) > 1 && digits[len(digits)-1] == '0' && digits[len(digits)-2] != '-' { - digits = digits[:len(digits)-1] - exp++ - } - } - - accum := &big.Int{} - if _, ok := accum.SetString(digits, 10); !ok { - return nil, 0, fmt.Errorf("%s is not a number", str) - } - - return accum, exp, nil -} - -func (dst *Numeric) DecodeBinary(ci *ConnInfo, src []byte) error { - if src == nil { - *dst = Numeric{Status: Null} - return nil - } - - if len(src) < 8 { - return fmt.Errorf("numeric incomplete %v", src) - } - - rp := 0 - ndigits := binary.BigEndian.Uint16(src[rp:]) - rp += 2 - 
weight := int16(binary.BigEndian.Uint16(src[rp:])) - rp += 2 - sign := binary.BigEndian.Uint16(src[rp:]) - rp += 2 - dscale := int16(binary.BigEndian.Uint16(src[rp:])) - rp += 2 - - if sign == pgNumericNaNSign { - *dst = Numeric{Status: Present, NaN: true} - return nil - } else if sign == pgNumericPosInfSign { - *dst = Numeric{Status: Present, InfinityModifier: Infinity} - return nil - } else if sign == pgNumericNegInfSign { - *dst = Numeric{Status: Present, InfinityModifier: NegativeInfinity} - return nil - } - - if ndigits == 0 { - *dst = Numeric{Int: big.NewInt(0), Status: Present} - return nil - } - - if len(src[rp:]) < int(ndigits)*2 { - return fmt.Errorf("numeric incomplete %v", src) - } - - accum := &big.Int{} - - for i := 0; i < int(ndigits+3)/4; i++ { - int64accum, bytesRead, digitsRead := nbaseDigitsToInt64(src[rp:]) - rp += bytesRead - - if i > 0 { - var mul *big.Int - switch digitsRead { - case 1: - mul = bigNBase - case 2: - mul = bigNBaseX2 - case 3: - mul = bigNBaseX3 - case 4: - mul = bigNBaseX4 - default: - return fmt.Errorf("invalid digitsRead: %d (this can't happen)", digitsRead) - } - accum.Mul(accum, mul) - } - - accum.Add(accum, big.NewInt(int64accum)) - } - - exp := (int32(weight) - int32(ndigits) + 1) * 4 - - if dscale > 0 { - fracNBaseDigits := int16(int32(ndigits) - int32(weight) - 1) - fracDecimalDigits := fracNBaseDigits * 4 - - if dscale > fracDecimalDigits { - multCount := int(dscale - fracDecimalDigits) - for i := 0; i < multCount; i++ { - accum.Mul(accum, big10) - exp-- - } - } else if dscale < fracDecimalDigits { - divCount := int(fracDecimalDigits - dscale) - for i := 0; i < divCount; i++ { - accum.Div(accum, big10) - exp++ - } - } - } - - reduced := &big.Int{} - remainder := &big.Int{} - if exp >= 0 { - for { - reduced.DivMod(accum, big10, remainder) - if remainder.Cmp(big0) != 0 { - break - } - accum.Set(reduced) - exp++ - } - } - - if sign != 0 { - accum.Neg(accum) - } - - *dst = Numeric{Int: accum, Exp: exp, Status: Present} - 
- return nil - -} - -func nbaseDigitsToInt64(src []byte) (accum int64, bytesRead, digitsRead int) { - digits := len(src) / 2 - if digits > 4 { - digits = 4 - } - - rp := 0 - - for i := 0; i < digits; i++ { - if i > 0 { - accum *= nbase - } - accum += int64(binary.BigEndian.Uint16(src[rp:])) - rp += 2 - } - - return accum, rp, digits -} - -func (src Numeric) EncodeText(ci *ConnInfo, buf []byte) ([]byte, error) { - switch src.Status { - case Null: - return nil, nil - case Undefined: - return nil, errUndefined - } - - if src.NaN { - buf = append(buf, "NaN"...) - return buf, nil - } else if src.InfinityModifier == Infinity { - buf = append(buf, "Infinity"...) - return buf, nil - } else if src.InfinityModifier == NegativeInfinity { - buf = append(buf, "-Infinity"...) - return buf, nil - } - - buf = append(buf, src.Int.String()...) - buf = append(buf, 'e') - buf = append(buf, strconv.FormatInt(int64(src.Exp), 10)...) - return buf, nil -} - -func (src Numeric) EncodeBinary(ci *ConnInfo, buf []byte) ([]byte, error) { - switch src.Status { - case Null: - return nil, nil - case Undefined: - return nil, errUndefined - } - - if src.NaN { - buf = pgio.AppendUint64(buf, pgNumericNaN) - return buf, nil - } else if src.InfinityModifier == Infinity { - buf = pgio.AppendUint64(buf, pgNumericPosInf) - return buf, nil - } else if src.InfinityModifier == NegativeInfinity { - buf = pgio.AppendUint64(buf, pgNumericNegInf) - return buf, nil - } - - var sign int16 - if src.Int.Cmp(big0) < 0 { - sign = 16384 - } - - absInt := &big.Int{} - wholePart := &big.Int{} - fracPart := &big.Int{} - remainder := &big.Int{} - absInt.Abs(src.Int) - - // Normalize absInt and exp to where exp is always a multiple of 4. This makes - // converting to 16-bit base 10,000 digits easier. 
- var exp int32 - switch src.Exp % 4 { - case 1, -3: - exp = src.Exp - 1 - absInt.Mul(absInt, big10) - case 2, -2: - exp = src.Exp - 2 - absInt.Mul(absInt, big100) - case 3, -1: - exp = src.Exp - 3 - absInt.Mul(absInt, big1000) - default: - exp = src.Exp - } - - if exp < 0 { - divisor := &big.Int{} - divisor.Exp(big10, big.NewInt(int64(-exp)), nil) - wholePart.DivMod(absInt, divisor, fracPart) - fracPart.Add(fracPart, divisor) - } else { - wholePart = absInt - } - - var wholeDigits, fracDigits []int16 - - for wholePart.Cmp(big0) != 0 { - wholePart.DivMod(wholePart, bigNBase, remainder) - wholeDigits = append(wholeDigits, int16(remainder.Int64())) - } - - if fracPart.Cmp(big0) != 0 { - for fracPart.Cmp(big1) != 0 { - fracPart.DivMod(fracPart, bigNBase, remainder) - fracDigits = append(fracDigits, int16(remainder.Int64())) - } - } - - buf = pgio.AppendInt16(buf, int16(len(wholeDigits)+len(fracDigits))) - - var weight int16 - if len(wholeDigits) > 0 { - weight = int16(len(wholeDigits) - 1) - if exp > 0 { - weight += int16(exp / 4) - } - } else { - weight = int16(exp/4) - 1 + int16(len(fracDigits)) - } - buf = pgio.AppendInt16(buf, weight) - - buf = pgio.AppendInt16(buf, sign) - - var dscale int16 - if src.Exp < 0 { - dscale = int16(-src.Exp) - } - buf = pgio.AppendInt16(buf, dscale) - - for i := len(wholeDigits) - 1; i >= 0; i-- { - buf = pgio.AppendInt16(buf, wholeDigits[i]) - } - - for i := len(fracDigits) - 1; i >= 0; i-- { - buf = pgio.AppendInt16(buf, fracDigits[i]) - } - - return buf, nil -} - -// Scan implements the database/sql Scanner interface. 
-func (dst *Numeric) Scan(src interface{}) error { - if src == nil { - *dst = Numeric{Status: Null} - return nil - } - - switch src := src.(type) { - case string: - return dst.DecodeText(nil, []byte(src)) - case []byte: - srcCopy := make([]byte, len(src)) - copy(srcCopy, src) - return dst.DecodeText(nil, srcCopy) - } - - return fmt.Errorf("cannot scan %T", src) -} - -// Value implements the database/sql/driver Valuer interface. -func (src Numeric) Value() (driver.Value, error) { - switch src.Status { - case Present: - buf, err := src.EncodeText(nil, nil) - if err != nil { - return nil, err - } - - return string(buf), nil - case Null: - return nil, nil - default: - return nil, errUndefined - } -} - -func encodeNumericText(n Numeric, buf []byte) (newBuf []byte, err error) { - // if !n.Valid { - // return nil, nil - // } - - if n.NaN { - buf = append(buf, "NaN"...) - return buf, nil - } else if n.InfinityModifier == Infinity { - buf = append(buf, "Infinity"...) - return buf, nil - } else if n.InfinityModifier == NegativeInfinity { - buf = append(buf, "-Infinity"...) - return buf, nil - } - - buf = append(buf, n.numberTextBytes()...) - - return buf, nil -} - -// numberString returns a string of the number. 
undefined if NaN, infinite, or NULL -func (n Numeric) numberTextBytes() []byte { - intStr := n.Int.String() - buf := &bytes.Buffer{} - exp := int(n.Exp) - if exp > 0 { - buf.WriteString(intStr) - for i := 0; i < exp; i++ { - buf.WriteByte('0') - } - } else if exp < 0 { - if len(intStr) <= -exp { - buf.WriteString("0.") - leadingZeros := -exp - len(intStr) - for i := 0; i < leadingZeros; i++ { - buf.WriteByte('0') - } - buf.WriteString(intStr) - } else if len(intStr) > -exp { - dpPos := len(intStr) + exp - buf.WriteString(intStr[:dpPos]) - buf.WriteByte('.') - buf.WriteString(intStr[dpPos:]) - } - } else { - buf.WriteString(intStr) - } - - return buf.Bytes() -} diff --git a/vendor/github.com/jackc/pgtype/numeric_array.go b/vendor/github.com/jackc/pgtype/numeric_array.go deleted file mode 100644 index 31899dec..00000000 --- a/vendor/github.com/jackc/pgtype/numeric_array.go +++ /dev/null @@ -1,685 +0,0 @@ -// Code generated by erb. DO NOT EDIT. - -package pgtype - -import ( - "database/sql/driver" - "encoding/binary" - "fmt" - "reflect" - - "github.com/jackc/pgio" -) - -type NumericArray struct { - Elements []Numeric - Dimensions []ArrayDimension - Status Status -} - -func (dst *NumericArray) Set(src interface{}) error { - // untyped nil and typed nil interfaces are different - if src == nil { - *dst = NumericArray{Status: Null} - return nil - } - - if value, ok := src.(interface{ Get() interface{} }); ok { - value2 := value.Get() - if value2 != value { - return dst.Set(value2) - } - } - - // Attempt to match to select common types: - switch value := src.(type) { - - case []float32: - if value == nil { - *dst = NumericArray{Status: Null} - } else if len(value) == 0 { - *dst = NumericArray{Status: Present} - } else { - elements := make([]Numeric, len(value)) - for i := range value { - if err := elements[i].Set(value[i]); err != nil { - return err - } - } - *dst = NumericArray{ - Elements: elements, - Dimensions: []ArrayDimension{{Length: int32(len(elements)), 
LowerBound: 1}}, - Status: Present, - } - } - - case []*float32: - if value == nil { - *dst = NumericArray{Status: Null} - } else if len(value) == 0 { - *dst = NumericArray{Status: Present} - } else { - elements := make([]Numeric, len(value)) - for i := range value { - if err := elements[i].Set(value[i]); err != nil { - return err - } - } - *dst = NumericArray{ - Elements: elements, - Dimensions: []ArrayDimension{{Length: int32(len(elements)), LowerBound: 1}}, - Status: Present, - } - } - - case []float64: - if value == nil { - *dst = NumericArray{Status: Null} - } else if len(value) == 0 { - *dst = NumericArray{Status: Present} - } else { - elements := make([]Numeric, len(value)) - for i := range value { - if err := elements[i].Set(value[i]); err != nil { - return err - } - } - *dst = NumericArray{ - Elements: elements, - Dimensions: []ArrayDimension{{Length: int32(len(elements)), LowerBound: 1}}, - Status: Present, - } - } - - case []*float64: - if value == nil { - *dst = NumericArray{Status: Null} - } else if len(value) == 0 { - *dst = NumericArray{Status: Present} - } else { - elements := make([]Numeric, len(value)) - for i := range value { - if err := elements[i].Set(value[i]); err != nil { - return err - } - } - *dst = NumericArray{ - Elements: elements, - Dimensions: []ArrayDimension{{Length: int32(len(elements)), LowerBound: 1}}, - Status: Present, - } - } - - case []int64: - if value == nil { - *dst = NumericArray{Status: Null} - } else if len(value) == 0 { - *dst = NumericArray{Status: Present} - } else { - elements := make([]Numeric, len(value)) - for i := range value { - if err := elements[i].Set(value[i]); err != nil { - return err - } - } - *dst = NumericArray{ - Elements: elements, - Dimensions: []ArrayDimension{{Length: int32(len(elements)), LowerBound: 1}}, - Status: Present, - } - } - - case []*int64: - if value == nil { - *dst = NumericArray{Status: Null} - } else if len(value) == 0 { - *dst = NumericArray{Status: Present} - } else { - elements 
:= make([]Numeric, len(value)) - for i := range value { - if err := elements[i].Set(value[i]); err != nil { - return err - } - } - *dst = NumericArray{ - Elements: elements, - Dimensions: []ArrayDimension{{Length: int32(len(elements)), LowerBound: 1}}, - Status: Present, - } - } - - case []uint64: - if value == nil { - *dst = NumericArray{Status: Null} - } else if len(value) == 0 { - *dst = NumericArray{Status: Present} - } else { - elements := make([]Numeric, len(value)) - for i := range value { - if err := elements[i].Set(value[i]); err != nil { - return err - } - } - *dst = NumericArray{ - Elements: elements, - Dimensions: []ArrayDimension{{Length: int32(len(elements)), LowerBound: 1}}, - Status: Present, - } - } - - case []*uint64: - if value == nil { - *dst = NumericArray{Status: Null} - } else if len(value) == 0 { - *dst = NumericArray{Status: Present} - } else { - elements := make([]Numeric, len(value)) - for i := range value { - if err := elements[i].Set(value[i]); err != nil { - return err - } - } - *dst = NumericArray{ - Elements: elements, - Dimensions: []ArrayDimension{{Length: int32(len(elements)), LowerBound: 1}}, - Status: Present, - } - } - - case []Numeric: - if value == nil { - *dst = NumericArray{Status: Null} - } else if len(value) == 0 { - *dst = NumericArray{Status: Present} - } else { - *dst = NumericArray{ - Elements: value, - Dimensions: []ArrayDimension{{Length: int32(len(value)), LowerBound: 1}}, - Status: Present, - } - } - default: - // Fallback to reflection if an optimised match was not found. 
- // The reflection is necessary for arrays and multidimensional slices, - // but it comes with a 20-50% performance penalty for large arrays/slices - reflectedValue := reflect.ValueOf(src) - if !reflectedValue.IsValid() || reflectedValue.IsZero() { - *dst = NumericArray{Status: Null} - return nil - } - - dimensions, elementsLength, ok := findDimensionsFromValue(reflectedValue, nil, 0) - if !ok { - return fmt.Errorf("cannot find dimensions of %v for NumericArray", src) - } - if elementsLength == 0 { - *dst = NumericArray{Status: Present} - return nil - } - if len(dimensions) == 0 { - if originalSrc, ok := underlyingSliceType(src); ok { - return dst.Set(originalSrc) - } - return fmt.Errorf("cannot convert %v to NumericArray", src) - } - - *dst = NumericArray{ - Elements: make([]Numeric, elementsLength), - Dimensions: dimensions, - Status: Present, - } - elementCount, err := dst.setRecursive(reflectedValue, 0, 0) - if err != nil { - // Maybe the target was one dimension too far, try again: - if len(dst.Dimensions) > 1 { - dst.Dimensions = dst.Dimensions[:len(dst.Dimensions)-1] - elementsLength = 0 - for _, dim := range dst.Dimensions { - if elementsLength == 0 { - elementsLength = int(dim.Length) - } else { - elementsLength *= int(dim.Length) - } - } - dst.Elements = make([]Numeric, elementsLength) - elementCount, err = dst.setRecursive(reflectedValue, 0, 0) - if err != nil { - return err - } - } else { - return err - } - } - if elementCount != len(dst.Elements) { - return fmt.Errorf("cannot convert %v to NumericArray, expected %d dst.Elements, but got %d instead", src, len(dst.Elements), elementCount) - } - } - - return nil -} - -func (dst *NumericArray) setRecursive(value reflect.Value, index, dimension int) (int, error) { - switch value.Kind() { - case reflect.Array: - fallthrough - case reflect.Slice: - if len(dst.Dimensions) == dimension { - break - } - - valueLen := value.Len() - if int32(valueLen) != dst.Dimensions[dimension].Length { - return 0, 
fmt.Errorf("multidimensional arrays must have array expressions with matching dimensions") - } - for i := 0; i < valueLen; i++ { - var err error - index, err = dst.setRecursive(value.Index(i), index, dimension+1) - if err != nil { - return 0, err - } - } - - return index, nil - } - if !value.CanInterface() { - return 0, fmt.Errorf("cannot convert all values to NumericArray") - } - if err := dst.Elements[index].Set(value.Interface()); err != nil { - return 0, fmt.Errorf("%v in NumericArray", err) - } - index++ - - return index, nil -} - -func (dst NumericArray) Get() interface{} { - switch dst.Status { - case Present: - return dst - case Null: - return nil - default: - return dst.Status - } -} - -func (src *NumericArray) AssignTo(dst interface{}) error { - switch src.Status { - case Present: - if len(src.Dimensions) <= 1 { - // Attempt to match to select common types: - switch v := dst.(type) { - - case *[]float32: - *v = make([]float32, len(src.Elements)) - for i := range src.Elements { - if err := src.Elements[i].AssignTo(&((*v)[i])); err != nil { - return err - } - } - return nil - - case *[]*float32: - *v = make([]*float32, len(src.Elements)) - for i := range src.Elements { - if err := src.Elements[i].AssignTo(&((*v)[i])); err != nil { - return err - } - } - return nil - - case *[]float64: - *v = make([]float64, len(src.Elements)) - for i := range src.Elements { - if err := src.Elements[i].AssignTo(&((*v)[i])); err != nil { - return err - } - } - return nil - - case *[]*float64: - *v = make([]*float64, len(src.Elements)) - for i := range src.Elements { - if err := src.Elements[i].AssignTo(&((*v)[i])); err != nil { - return err - } - } - return nil - - case *[]int64: - *v = make([]int64, len(src.Elements)) - for i := range src.Elements { - if err := src.Elements[i].AssignTo(&((*v)[i])); err != nil { - return err - } - } - return nil - - case *[]*int64: - *v = make([]*int64, len(src.Elements)) - for i := range src.Elements { - if err := 
src.Elements[i].AssignTo(&((*v)[i])); err != nil { - return err - } - } - return nil - - case *[]uint64: - *v = make([]uint64, len(src.Elements)) - for i := range src.Elements { - if err := src.Elements[i].AssignTo(&((*v)[i])); err != nil { - return err - } - } - return nil - - case *[]*uint64: - *v = make([]*uint64, len(src.Elements)) - for i := range src.Elements { - if err := src.Elements[i].AssignTo(&((*v)[i])); err != nil { - return err - } - } - return nil - - } - } - - // Try to convert to something AssignTo can use directly. - if nextDst, retry := GetAssignToDstType(dst); retry { - return src.AssignTo(nextDst) - } - - // Fallback to reflection if an optimised match was not found. - // The reflection is necessary for arrays and multidimensional slices, - // but it comes with a 20-50% performance penalty for large arrays/slices - value := reflect.ValueOf(dst) - if value.Kind() == reflect.Ptr { - value = value.Elem() - } - - switch value.Kind() { - case reflect.Array, reflect.Slice: - default: - return fmt.Errorf("cannot assign %T to %T", src, dst) - } - - if len(src.Elements) == 0 { - if value.Kind() == reflect.Slice { - value.Set(reflect.MakeSlice(value.Type(), 0, 0)) - return nil - } - } - - elementCount, err := src.assignToRecursive(value, 0, 0) - if err != nil { - return err - } - if elementCount != len(src.Elements) { - return fmt.Errorf("cannot assign %v, needed to assign %d elements, but only assigned %d", dst, len(src.Elements), elementCount) - } - - return nil - case Null: - return NullAssignTo(dst) - } - - return fmt.Errorf("cannot decode %#v into %T", src, dst) -} - -func (src *NumericArray) assignToRecursive(value reflect.Value, index, dimension int) (int, error) { - switch kind := value.Kind(); kind { - case reflect.Array: - fallthrough - case reflect.Slice: - if len(src.Dimensions) == dimension { - break - } - - length := int(src.Dimensions[dimension].Length) - if reflect.Array == kind { - typ := value.Type() - if typ.Len() != length { - return 
0, fmt.Errorf("expected size %d array, but %s has size %d array", length, typ, typ.Len()) - } - value.Set(reflect.New(typ).Elem()) - } else { - value.Set(reflect.MakeSlice(value.Type(), length, length)) - } - - var err error - for i := 0; i < length; i++ { - index, err = src.assignToRecursive(value.Index(i), index, dimension+1) - if err != nil { - return 0, err - } - } - - return index, nil - } - if len(src.Dimensions) != dimension { - return 0, fmt.Errorf("incorrect dimensions, expected %d, found %d", len(src.Dimensions), dimension) - } - if !value.CanAddr() { - return 0, fmt.Errorf("cannot assign all values from NumericArray") - } - addr := value.Addr() - if !addr.CanInterface() { - return 0, fmt.Errorf("cannot assign all values from NumericArray") - } - if err := src.Elements[index].AssignTo(addr.Interface()); err != nil { - return 0, err - } - index++ - return index, nil -} - -func (dst *NumericArray) DecodeText(ci *ConnInfo, src []byte) error { - if src == nil { - *dst = NumericArray{Status: Null} - return nil - } - - uta, err := ParseUntypedTextArray(string(src)) - if err != nil { - return err - } - - var elements []Numeric - - if len(uta.Elements) > 0 { - elements = make([]Numeric, len(uta.Elements)) - - for i, s := range uta.Elements { - var elem Numeric - var elemSrc []byte - if s != "NULL" || uta.Quoted[i] { - elemSrc = []byte(s) - } - err = elem.DecodeText(ci, elemSrc) - if err != nil { - return err - } - - elements[i] = elem - } - } - - *dst = NumericArray{Elements: elements, Dimensions: uta.Dimensions, Status: Present} - - return nil -} - -func (dst *NumericArray) DecodeBinary(ci *ConnInfo, src []byte) error { - if src == nil { - *dst = NumericArray{Status: Null} - return nil - } - - var arrayHeader ArrayHeader - rp, err := arrayHeader.DecodeBinary(ci, src) - if err != nil { - return err - } - - if len(arrayHeader.Dimensions) == 0 { - *dst = NumericArray{Dimensions: arrayHeader.Dimensions, Status: Present} - return nil - } - - elementCount := 
arrayHeader.Dimensions[0].Length - for _, d := range arrayHeader.Dimensions[1:] { - elementCount *= d.Length - } - - elements := make([]Numeric, elementCount) - - for i := range elements { - elemLen := int(int32(binary.BigEndian.Uint32(src[rp:]))) - rp += 4 - var elemSrc []byte - if elemLen >= 0 { - elemSrc = src[rp : rp+elemLen] - rp += elemLen - } - err = elements[i].DecodeBinary(ci, elemSrc) - if err != nil { - return err - } - } - - *dst = NumericArray{Elements: elements, Dimensions: arrayHeader.Dimensions, Status: Present} - return nil -} - -func (src NumericArray) EncodeText(ci *ConnInfo, buf []byte) ([]byte, error) { - switch src.Status { - case Null: - return nil, nil - case Undefined: - return nil, errUndefined - } - - if len(src.Dimensions) == 0 { - return append(buf, '{', '}'), nil - } - - buf = EncodeTextArrayDimensions(buf, src.Dimensions) - - // dimElemCounts is the multiples of elements that each array lies on. For - // example, a single dimension array of length 4 would have a dimElemCounts of - // [4]. A multi-dimensional array of lengths [3,5,2] would have a - // dimElemCounts of [30,10,2]. This is used to simplify when to render a '{' - // or '}'. - dimElemCounts := make([]int, len(src.Dimensions)) - dimElemCounts[len(src.Dimensions)-1] = int(src.Dimensions[len(src.Dimensions)-1].Length) - for i := len(src.Dimensions) - 2; i > -1; i-- { - dimElemCounts[i] = int(src.Dimensions[i].Length) * dimElemCounts[i+1] - } - - inElemBuf := make([]byte, 0, 32) - for i, elem := range src.Elements { - if i > 0 { - buf = append(buf, ',') - } - - for _, dec := range dimElemCounts { - if i%dec == 0 { - buf = append(buf, '{') - } - } - - elemBuf, err := elem.EncodeText(ci, inElemBuf) - if err != nil { - return nil, err - } - if elemBuf == nil { - buf = append(buf, `NULL`...) - } else { - buf = append(buf, QuoteArrayElementIfNeeded(string(elemBuf))...) 
- } - - for _, dec := range dimElemCounts { - if (i+1)%dec == 0 { - buf = append(buf, '}') - } - } - } - - return buf, nil -} - -func (src NumericArray) EncodeBinary(ci *ConnInfo, buf []byte) ([]byte, error) { - switch src.Status { - case Null: - return nil, nil - case Undefined: - return nil, errUndefined - } - - arrayHeader := ArrayHeader{ - Dimensions: src.Dimensions, - } - - if dt, ok := ci.DataTypeForName("numeric"); ok { - arrayHeader.ElementOID = int32(dt.OID) - } else { - return nil, fmt.Errorf("unable to find oid for type name %v", "numeric") - } - - for i := range src.Elements { - if src.Elements[i].Status == Null { - arrayHeader.ContainsNull = true - break - } - } - - buf = arrayHeader.EncodeBinary(ci, buf) - - for i := range src.Elements { - sp := len(buf) - buf = pgio.AppendInt32(buf, -1) - - elemBuf, err := src.Elements[i].EncodeBinary(ci, buf) - if err != nil { - return nil, err - } - if elemBuf != nil { - buf = elemBuf - pgio.SetInt32(buf[sp:], int32(len(buf[sp:])-4)) - } - } - - return buf, nil -} - -// Scan implements the database/sql Scanner interface. -func (dst *NumericArray) Scan(src interface{}) error { - if src == nil { - return dst.DecodeText(nil, nil) - } - - switch src := src.(type) { - case string: - return dst.DecodeText(nil, []byte(src)) - case []byte: - srcCopy := make([]byte, len(src)) - copy(srcCopy, src) - return dst.DecodeText(nil, srcCopy) - } - - return fmt.Errorf("cannot scan %T", src) -} - -// Value implements the database/sql/driver Valuer interface. 
-func (src NumericArray) Value() (driver.Value, error) { - buf, err := src.EncodeText(nil, nil) - if err != nil { - return nil, err - } - if buf == nil { - return nil, nil - } - - return string(buf), nil -} diff --git a/vendor/github.com/jackc/pgtype/numrange.go b/vendor/github.com/jackc/pgtype/numrange.go deleted file mode 100644 index 3d5951a2..00000000 --- a/vendor/github.com/jackc/pgtype/numrange.go +++ /dev/null @@ -1,267 +0,0 @@ -package pgtype - -import ( - "database/sql/driver" - "fmt" - - "github.com/jackc/pgio" -) - -type Numrange struct { - Lower Numeric - Upper Numeric - LowerType BoundType - UpperType BoundType - Status Status -} - -func (dst *Numrange) Set(src interface{}) error { - // untyped nil and typed nil interfaces are different - if src == nil { - *dst = Numrange{Status: Null} - return nil - } - - switch value := src.(type) { - case Numrange: - *dst = value - case *Numrange: - *dst = *value - case string: - return dst.DecodeText(nil, []byte(value)) - default: - return fmt.Errorf("cannot convert %v to Numrange", src) - } - - return nil -} - -func (dst Numrange) Get() interface{} { - switch dst.Status { - case Present: - return dst - case Null: - return nil - default: - return dst.Status - } -} - -func (src *Numrange) AssignTo(dst interface{}) error { - return fmt.Errorf("cannot assign %v to %T", src, dst) -} - -func (dst *Numrange) DecodeText(ci *ConnInfo, src []byte) error { - if src == nil { - *dst = Numrange{Status: Null} - return nil - } - - utr, err := ParseUntypedTextRange(string(src)) - if err != nil { - return err - } - - *dst = Numrange{Status: Present} - - dst.LowerType = utr.LowerType - dst.UpperType = utr.UpperType - - if dst.LowerType == Empty { - return nil - } - - if dst.LowerType == Inclusive || dst.LowerType == Exclusive { - if err := dst.Lower.DecodeText(ci, []byte(utr.Lower)); err != nil { - return err - } - } - - if dst.UpperType == Inclusive || dst.UpperType == Exclusive { - if err := dst.Upper.DecodeText(ci, 
[]byte(utr.Upper)); err != nil { - return err - } - } - - return nil -} - -func (dst *Numrange) DecodeBinary(ci *ConnInfo, src []byte) error { - if src == nil { - *dst = Numrange{Status: Null} - return nil - } - - ubr, err := ParseUntypedBinaryRange(src) - if err != nil { - return err - } - - *dst = Numrange{Status: Present} - - dst.LowerType = ubr.LowerType - dst.UpperType = ubr.UpperType - - if dst.LowerType == Empty { - return nil - } - - if dst.LowerType == Inclusive || dst.LowerType == Exclusive { - if err := dst.Lower.DecodeBinary(ci, ubr.Lower); err != nil { - return err - } - } - - if dst.UpperType == Inclusive || dst.UpperType == Exclusive { - if err := dst.Upper.DecodeBinary(ci, ubr.Upper); err != nil { - return err - } - } - - return nil -} - -func (src Numrange) EncodeText(ci *ConnInfo, buf []byte) ([]byte, error) { - switch src.Status { - case Null: - return nil, nil - case Undefined: - return nil, errUndefined - } - - switch src.LowerType { - case Exclusive, Unbounded: - buf = append(buf, '(') - case Inclusive: - buf = append(buf, '[') - case Empty: - return append(buf, "empty"...), nil - default: - return nil, fmt.Errorf("unknown lower bound type %v", src.LowerType) - } - - var err error - - if src.LowerType != Unbounded { - buf, err = src.Lower.EncodeText(ci, buf) - if err != nil { - return nil, err - } else if buf == nil { - return nil, fmt.Errorf("Lower cannot be null unless LowerType is Unbounded") - } - } - - buf = append(buf, ',') - - if src.UpperType != Unbounded { - buf, err = src.Upper.EncodeText(ci, buf) - if err != nil { - return nil, err - } else if buf == nil { - return nil, fmt.Errorf("Upper cannot be null unless UpperType is Unbounded") - } - } - - switch src.UpperType { - case Exclusive, Unbounded: - buf = append(buf, ')') - case Inclusive: - buf = append(buf, ']') - default: - return nil, fmt.Errorf("unknown upper bound type %v", src.UpperType) - } - - return buf, nil -} - -func (src Numrange) EncodeBinary(ci *ConnInfo, buf []byte) 
([]byte, error) { - switch src.Status { - case Null: - return nil, nil - case Undefined: - return nil, errUndefined - } - - var rangeType byte - switch src.LowerType { - case Inclusive: - rangeType |= lowerInclusiveMask - case Unbounded: - rangeType |= lowerUnboundedMask - case Exclusive: - case Empty: - return append(buf, emptyMask), nil - default: - return nil, fmt.Errorf("unknown LowerType: %v", src.LowerType) - } - - switch src.UpperType { - case Inclusive: - rangeType |= upperInclusiveMask - case Unbounded: - rangeType |= upperUnboundedMask - case Exclusive: - default: - return nil, fmt.Errorf("unknown UpperType: %v", src.UpperType) - } - - buf = append(buf, rangeType) - - var err error - - if src.LowerType != Unbounded { - sp := len(buf) - buf = pgio.AppendInt32(buf, -1) - - buf, err = src.Lower.EncodeBinary(ci, buf) - if err != nil { - return nil, err - } - if buf == nil { - return nil, fmt.Errorf("Lower cannot be null unless LowerType is Unbounded") - } - - pgio.SetInt32(buf[sp:], int32(len(buf[sp:])-4)) - } - - if src.UpperType != Unbounded { - sp := len(buf) - buf = pgio.AppendInt32(buf, -1) - - buf, err = src.Upper.EncodeBinary(ci, buf) - if err != nil { - return nil, err - } - if buf == nil { - return nil, fmt.Errorf("Upper cannot be null unless UpperType is Unbounded") - } - - pgio.SetInt32(buf[sp:], int32(len(buf[sp:])-4)) - } - - return buf, nil -} - -// Scan implements the database/sql Scanner interface. -func (dst *Numrange) Scan(src interface{}) error { - if src == nil { - *dst = Numrange{Status: Null} - return nil - } - - switch src := src.(type) { - case string: - return dst.DecodeText(nil, []byte(src)) - case []byte: - srcCopy := make([]byte, len(src)) - copy(srcCopy, src) - return dst.DecodeText(nil, srcCopy) - } - - return fmt.Errorf("cannot scan %T", src) -} - -// Value implements the database/sql/driver Valuer interface. 
-func (src Numrange) Value() (driver.Value, error) { - return EncodeValueText(src) -} diff --git a/vendor/github.com/jackc/pgtype/oid.go b/vendor/github.com/jackc/pgtype/oid.go deleted file mode 100644 index 31677e89..00000000 --- a/vendor/github.com/jackc/pgtype/oid.go +++ /dev/null @@ -1,81 +0,0 @@ -package pgtype - -import ( - "database/sql/driver" - "encoding/binary" - "fmt" - "strconv" - - "github.com/jackc/pgio" -) - -// OID (Object Identifier Type) is, according to -// https://www.postgresql.org/docs/current/static/datatype-oid.html, used -// internally by PostgreSQL as a primary key for various system tables. It is -// currently implemented as an unsigned four-byte integer. Its definition can be -// found in src/include/postgres_ext.h in the PostgreSQL sources. Because it is -// so frequently required to be in a NOT NULL condition OID cannot be NULL. To -// allow for NULL OIDs use OIDValue. -type OID uint32 - -func (dst *OID) DecodeText(ci *ConnInfo, src []byte) error { - if src == nil { - return fmt.Errorf("cannot decode nil into OID") - } - - n, err := strconv.ParseUint(string(src), 10, 32) - if err != nil { - return err - } - - *dst = OID(n) - return nil -} - -func (dst *OID) DecodeBinary(ci *ConnInfo, src []byte) error { - if src == nil { - return fmt.Errorf("cannot decode nil into OID") - } - - if len(src) != 4 { - return fmt.Errorf("invalid length: %v", len(src)) - } - - n := binary.BigEndian.Uint32(src) - *dst = OID(n) - return nil -} - -func (src OID) EncodeText(ci *ConnInfo, buf []byte) ([]byte, error) { - return append(buf, strconv.FormatUint(uint64(src), 10)...), nil -} - -func (src OID) EncodeBinary(ci *ConnInfo, buf []byte) ([]byte, error) { - return pgio.AppendUint32(buf, uint32(src)), nil -} - -// Scan implements the database/sql Scanner interface. 
-func (dst *OID) Scan(src interface{}) error { - if src == nil { - return fmt.Errorf("cannot scan NULL into %T", src) - } - - switch src := src.(type) { - case int64: - *dst = OID(src) - return nil - case string: - return dst.DecodeText(nil, []byte(src)) - case []byte: - srcCopy := make([]byte, len(src)) - copy(srcCopy, src) - return dst.DecodeText(nil, srcCopy) - } - - return fmt.Errorf("cannot scan %T", src) -} - -// Value implements the database/sql/driver Valuer interface. -func (src OID) Value() (driver.Value, error) { - return int64(src), nil -} diff --git a/vendor/github.com/jackc/pgtype/oid_value.go b/vendor/github.com/jackc/pgtype/oid_value.go deleted file mode 100644 index 5dc9136c..00000000 --- a/vendor/github.com/jackc/pgtype/oid_value.go +++ /dev/null @@ -1,55 +0,0 @@ -package pgtype - -import ( - "database/sql/driver" -) - -// OIDValue (Object Identifier Type) is, according to -// https://www.postgresql.org/docs/current/static/datatype-OIDValue.html, used -// internally by PostgreSQL as a primary key for various system tables. It is -// currently implemented as an unsigned four-byte integer. Its definition can be -// found in src/include/postgres_ext.h in the PostgreSQL sources. -type OIDValue pguint32 - -// Set converts from src to dst. Note that as OIDValue is not a general -// number type Set does not do automatic type conversion as other number -// types do. -func (dst *OIDValue) Set(src interface{}) error { - return (*pguint32)(dst).Set(src) -} - -func (dst OIDValue) Get() interface{} { - return (pguint32)(dst).Get() -} - -// AssignTo assigns from src to dst. Note that as OIDValue is not a general number -// type AssignTo does not do automatic type conversion as other number types do. 
-func (src *OIDValue) AssignTo(dst interface{}) error { - return (*pguint32)(src).AssignTo(dst) -} - -func (dst *OIDValue) DecodeText(ci *ConnInfo, src []byte) error { - return (*pguint32)(dst).DecodeText(ci, src) -} - -func (dst *OIDValue) DecodeBinary(ci *ConnInfo, src []byte) error { - return (*pguint32)(dst).DecodeBinary(ci, src) -} - -func (src OIDValue) EncodeText(ci *ConnInfo, buf []byte) ([]byte, error) { - return (pguint32)(src).EncodeText(ci, buf) -} - -func (src OIDValue) EncodeBinary(ci *ConnInfo, buf []byte) ([]byte, error) { - return (pguint32)(src).EncodeBinary(ci, buf) -} - -// Scan implements the database/sql Scanner interface. -func (dst *OIDValue) Scan(src interface{}) error { - return (*pguint32)(dst).Scan(src) -} - -// Value implements the database/sql/driver Valuer interface. -func (src OIDValue) Value() (driver.Value, error) { - return (pguint32)(src).Value() -} diff --git a/vendor/github.com/jackc/pgtype/path.go b/vendor/github.com/jackc/pgtype/path.go deleted file mode 100644 index 9f89969e..00000000 --- a/vendor/github.com/jackc/pgtype/path.go +++ /dev/null @@ -1,195 +0,0 @@ -package pgtype - -import ( - "database/sql/driver" - "encoding/binary" - "fmt" - "math" - "strconv" - "strings" - - "github.com/jackc/pgio" -) - -type Path struct { - P []Vec2 - Closed bool - Status Status -} - -func (dst *Path) Set(src interface{}) error { - return fmt.Errorf("cannot convert %v to Path", src) -} - -func (dst Path) Get() interface{} { - switch dst.Status { - case Present: - return dst - case Null: - return nil - default: - return dst.Status - } -} - -func (src *Path) AssignTo(dst interface{}) error { - return fmt.Errorf("cannot assign %v to %T", src, dst) -} - -func (dst *Path) DecodeText(ci *ConnInfo, src []byte) error { - if src == nil { - *dst = Path{Status: Null} - return nil - } - - if len(src) < 7 { - return fmt.Errorf("invalid length for Path: %v", len(src)) - } - - closed := src[0] == '(' - points := make([]Vec2, 0) - - str := string(src[2:]) 
- - for { - end := strings.IndexByte(str, ',') - x, err := strconv.ParseFloat(str[:end], 64) - if err != nil { - return err - } - - str = str[end+1:] - end = strings.IndexByte(str, ')') - - y, err := strconv.ParseFloat(str[:end], 64) - if err != nil { - return err - } - - points = append(points, Vec2{x, y}) - - if end+3 < len(str) { - str = str[end+3:] - } else { - break - } - } - - *dst = Path{P: points, Closed: closed, Status: Present} - return nil -} - -func (dst *Path) DecodeBinary(ci *ConnInfo, src []byte) error { - if src == nil { - *dst = Path{Status: Null} - return nil - } - - if len(src) < 5 { - return fmt.Errorf("invalid length for Path: %v", len(src)) - } - - closed := src[0] == 1 - pointCount := int(binary.BigEndian.Uint32(src[1:])) - - rp := 5 - - if 5+pointCount*16 != len(src) { - return fmt.Errorf("invalid length for Path with %d points: %v", pointCount, len(src)) - } - - points := make([]Vec2, pointCount) - for i := 0; i < len(points); i++ { - x := binary.BigEndian.Uint64(src[rp:]) - rp += 8 - y := binary.BigEndian.Uint64(src[rp:]) - rp += 8 - points[i] = Vec2{math.Float64frombits(x), math.Float64frombits(y)} - } - - *dst = Path{ - P: points, - Closed: closed, - Status: Present, - } - return nil -} - -func (src Path) EncodeText(ci *ConnInfo, buf []byte) ([]byte, error) { - switch src.Status { - case Null: - return nil, nil - case Undefined: - return nil, errUndefined - } - - var startByte, endByte byte - if src.Closed { - startByte = '(' - endByte = ')' - } else { - startByte = '[' - endByte = ']' - } - buf = append(buf, startByte) - - for i, p := range src.P { - if i > 0 { - buf = append(buf, ',') - } - buf = append(buf, fmt.Sprintf(`(%s,%s)`, - strconv.FormatFloat(p.X, 'f', -1, 64), - strconv.FormatFloat(p.Y, 'f', -1, 64), - )...) 
- } - - return append(buf, endByte), nil -} - -func (src Path) EncodeBinary(ci *ConnInfo, buf []byte) ([]byte, error) { - switch src.Status { - case Null: - return nil, nil - case Undefined: - return nil, errUndefined - } - - var closeByte byte - if src.Closed { - closeByte = 1 - } - buf = append(buf, closeByte) - - buf = pgio.AppendInt32(buf, int32(len(src.P))) - - for _, p := range src.P { - buf = pgio.AppendUint64(buf, math.Float64bits(p.X)) - buf = pgio.AppendUint64(buf, math.Float64bits(p.Y)) - } - - return buf, nil -} - -// Scan implements the database/sql Scanner interface. -func (dst *Path) Scan(src interface{}) error { - if src == nil { - *dst = Path{Status: Null} - return nil - } - - switch src := src.(type) { - case string: - return dst.DecodeText(nil, []byte(src)) - case []byte: - srcCopy := make([]byte, len(src)) - copy(srcCopy, src) - return dst.DecodeText(nil, srcCopy) - } - - return fmt.Errorf("cannot scan %T", src) -} - -// Value implements the database/sql/driver Valuer interface. 
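The binary wire layout used by `Path.EncodeBinary`/`DecodeBinary` above (a 1-byte closed flag, a big-endian int32 point count, then each point's x and y as big-endian IEEE-754 bits) can be round-tripped with only the standard library. A stdlib-only sketch, with `vec2` standing in for pgtype's `Vec2` and `pgio`'s append helpers replaced by `encoding/binary` equivalents:

```go
package main

import (
	"encoding/binary"
	"fmt"
	"math"
)

type vec2 struct{ X, Y float64 }

// encodePathBinary writes the layout described above: closed flag,
// point count, then 16 bytes per point.
func encodePathBinary(points []vec2, closed bool, buf []byte) []byte {
	if closed {
		buf = append(buf, 1)
	} else {
		buf = append(buf, 0)
	}
	buf = binary.BigEndian.AppendUint32(buf, uint32(len(points)))
	for _, p := range points {
		buf = binary.BigEndian.AppendUint64(buf, math.Float64bits(p.X))
		buf = binary.BigEndian.AppendUint64(buf, math.Float64bits(p.Y))
	}
	return buf
}

// decodePathBinary validates the total length against the point count
// before reading, exactly as the deleted DecodeBinary does.
func decodePathBinary(src []byte) ([]vec2, bool, error) {
	if len(src) < 5 {
		return nil, false, fmt.Errorf("invalid length for Path: %v", len(src))
	}
	closed := src[0] == 1
	n := int(binary.BigEndian.Uint32(src[1:]))
	if 5+n*16 != len(src) {
		return nil, false, fmt.Errorf("invalid length for Path with %d points: %v", n, len(src))
	}
	points := make([]vec2, n)
	rp := 5
	for i := range points {
		points[i].X = math.Float64frombits(binary.BigEndian.Uint64(src[rp:]))
		points[i].Y = math.Float64frombits(binary.BigEndian.Uint64(src[rp+8:]))
		rp += 16
	}
	return points, closed, nil
}

func main() {
	buf := encodePathBinary([]vec2{{1, 2}, {3, 4}}, true, nil)
	pts, closed, _ := decodePathBinary(buf)
	fmt.Println(pts, closed)
}
```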
-func (src Path) Value() (driver.Value, error) { - return EncodeValueText(src) -} diff --git a/vendor/github.com/jackc/pgtype/pgtype.go b/vendor/github.com/jackc/pgtype/pgtype.go deleted file mode 100644 index a52740e7..00000000 --- a/vendor/github.com/jackc/pgtype/pgtype.go +++ /dev/null @@ -1,1001 +0,0 @@ -package pgtype - -import ( - "database/sql" - "encoding/binary" - "errors" - "fmt" - "math" - "net" - "reflect" - "time" -) - -// PostgreSQL oids for common types -const ( - BoolOID = 16 - ByteaOID = 17 - QCharOID = 18 - NameOID = 19 - Int8OID = 20 - Int2OID = 21 - Int4OID = 23 - TextOID = 25 - OIDOID = 26 - TIDOID = 27 - XIDOID = 28 - CIDOID = 29 - JSONOID = 114 - JSONArrayOID = 199 - PointOID = 600 - LsegOID = 601 - PathOID = 602 - BoxOID = 603 - PolygonOID = 604 - LineOID = 628 - CIDROID = 650 - CIDRArrayOID = 651 - Float4OID = 700 - Float8OID = 701 - CircleOID = 718 - UnknownOID = 705 - MacaddrOID = 829 - InetOID = 869 - BoolArrayOID = 1000 - Int2ArrayOID = 1005 - Int4ArrayOID = 1007 - TextArrayOID = 1009 - ByteaArrayOID = 1001 - BPCharArrayOID = 1014 - VarcharArrayOID = 1015 - Int8ArrayOID = 1016 - Float4ArrayOID = 1021 - Float8ArrayOID = 1022 - ACLItemOID = 1033 - ACLItemArrayOID = 1034 - InetArrayOID = 1041 - BPCharOID = 1042 - VarcharOID = 1043 - DateOID = 1082 - TimeOID = 1083 - TimestampOID = 1114 - TimestampArrayOID = 1115 - DateArrayOID = 1182 - TimestamptzOID = 1184 - TimestamptzArrayOID = 1185 - IntervalOID = 1186 - NumericArrayOID = 1231 - BitOID = 1560 - VarbitOID = 1562 - NumericOID = 1700 - RecordOID = 2249 - UUIDOID = 2950 - UUIDArrayOID = 2951 - JSONBOID = 3802 - JSONBArrayOID = 3807 - DaterangeOID = 3912 - Int4rangeOID = 3904 - Int4multirangeOID = 4451 - NumrangeOID = 3906 - NummultirangeOID = 4532 - TsrangeOID = 3908 - TsrangeArrayOID = 3909 - TstzrangeOID = 3910 - TstzrangeArrayOID = 3911 - Int8rangeOID = 3926 - Int8multirangeOID = 4536 -) - -type Status byte - -const ( - Undefined Status = iota - Null - Present -) - -type 
InfinityModifier int8 - -const ( - Infinity InfinityModifier = 1 - None InfinityModifier = 0 - NegativeInfinity InfinityModifier = -Infinity -) - -func (im InfinityModifier) String() string { - switch im { - case None: - return "none" - case Infinity: - return "infinity" - case NegativeInfinity: - return "-infinity" - default: - return "invalid" - } -} - -// PostgreSQL format codes -const ( - TextFormatCode = 0 - BinaryFormatCode = 1 -) - -// Value translates values to and from an internal canonical representation for the type. To actually be usable a type -// that implements Value should also implement some combination of BinaryDecoder, BinaryEncoder, TextDecoder, -// and TextEncoder. -// -// Operations that update a Value (e.g. Set, DecodeText, DecodeBinary) should entirely replace the value. e.g. Internal -// slices should be replaced not resized and reused. This allows Get and AssignTo to return a slice directly rather -// than incur a usually unnecessary copy. -type Value interface { - // Set converts and assigns src to itself. Value takes ownership of src. - Set(src interface{}) error - - // Get returns the simplest representation of Value. Get may return a pointer to an internal value but it must never - // mutate that value. e.g. If Get returns a []byte Value must never change the contents of the []byte. - Get() interface{} - - // AssignTo converts and assigns the Value to dst. AssignTo may a pointer to an internal value but it must never - // mutate that value. e.g. If Get returns a []byte Value must never change the contents of the []byte. - AssignTo(dst interface{}) error -} - -// TypeValue is a Value where instances can represent different PostgreSQL types. This can be useful for -// representing types such as enums, composites, and arrays. -// -// In general, instances of TypeValue should not be used to directly represent a value. It should only be used as an -// encoder and decoder internal to ConnInfo. 
-type TypeValue interface { - Value - - // NewTypeValue creates a TypeValue including references to internal type information. e.g. the list of members - // in an EnumType. - NewTypeValue() Value - - // TypeName returns the PostgreSQL name of this type. - TypeName() string -} - -// ValueTranscoder is a value that implements the text and binary encoding and decoding interfaces. -type ValueTranscoder interface { - Value - TextEncoder - BinaryEncoder - TextDecoder - BinaryDecoder -} - -// ResultFormatPreferrer allows a type to specify its preferred result format instead of it being inferred from -// whether it is also a BinaryDecoder. -type ResultFormatPreferrer interface { - PreferredResultFormat() int16 -} - -// ParamFormatPreferrer allows a type to specify its preferred param format instead of it being inferred from -// whether it is also a BinaryEncoder. -type ParamFormatPreferrer interface { - PreferredParamFormat() int16 -} - -type BinaryDecoder interface { - // DecodeBinary decodes src into BinaryDecoder. If src is nil then the - // original SQL value is NULL. BinaryDecoder takes ownership of src. The - // caller MUST not use it again. - DecodeBinary(ci *ConnInfo, src []byte) error -} - -type TextDecoder interface { - // DecodeText decodes src into TextDecoder. If src is nil then the original - // SQL value is NULL. TextDecoder takes ownership of src. The caller MUST not - // use it again. - DecodeText(ci *ConnInfo, src []byte) error -} - -// BinaryEncoder is implemented by types that can encode themselves into the -// PostgreSQL binary wire format. -type BinaryEncoder interface { - // EncodeBinary should append the binary format of self to buf. If self is the - // SQL value NULL then append nothing and return (nil, nil). The caller of - // EncodeBinary is responsible for writing the correct NULL value or the - // length of the data written. 
- EncodeBinary(ci *ConnInfo, buf []byte) (newBuf []byte, err error) -} - -// TextEncoder is implemented by types that can encode themselves into the -// PostgreSQL text wire format. -type TextEncoder interface { - // EncodeText should append the text format of self to buf. If self is the - // SQL value NULL then append nothing and return (nil, nil). The caller of - // EncodeText is responsible for writing the correct NULL value or the - // length of the data written. - EncodeText(ci *ConnInfo, buf []byte) (newBuf []byte, err error) -} - -var errUndefined = errors.New("cannot encode status undefined") -var errBadStatus = errors.New("invalid status") - -type nullAssignmentError struct { - dst interface{} -} - -func (e *nullAssignmentError) Error() string { - return fmt.Sprintf("cannot assign NULL to %T", e.dst) -} - -type DataType struct { - Value Value - - textDecoder TextDecoder - binaryDecoder BinaryDecoder - - Name string - OID uint32 -} - -type ConnInfo struct { - oidToDataType map[uint32]*DataType - nameToDataType map[string]*DataType - reflectTypeToName map[reflect.Type]string - oidToParamFormatCode map[uint32]int16 - oidToResultFormatCode map[uint32]int16 - - reflectTypeToDataType map[reflect.Type]*DataType -} - -func newConnInfo() *ConnInfo { - return &ConnInfo{ - oidToDataType: make(map[uint32]*DataType), - nameToDataType: make(map[string]*DataType), - reflectTypeToName: make(map[reflect.Type]string), - oidToParamFormatCode: make(map[uint32]int16), - oidToResultFormatCode: make(map[uint32]int16), - } -} - -func NewConnInfo() *ConnInfo { - ci := newConnInfo() - - ci.RegisterDataType(DataType{Value: &ACLItemArray{}, Name: "_aclitem", OID: ACLItemArrayOID}) - ci.RegisterDataType(DataType{Value: &BoolArray{}, Name: "_bool", OID: BoolArrayOID}) - ci.RegisterDataType(DataType{Value: &BPCharArray{}, Name: "_bpchar", OID: BPCharArrayOID}) - ci.RegisterDataType(DataType{Value: &ByteaArray{}, Name: "_bytea", OID: ByteaArrayOID}) - ci.RegisterDataType(DataType{Value: 
&CIDRArray{}, Name: "_cidr", OID: CIDRArrayOID}) - ci.RegisterDataType(DataType{Value: &DateArray{}, Name: "_date", OID: DateArrayOID}) - ci.RegisterDataType(DataType{Value: &Float4Array{}, Name: "_float4", OID: Float4ArrayOID}) - ci.RegisterDataType(DataType{Value: &Float8Array{}, Name: "_float8", OID: Float8ArrayOID}) - ci.RegisterDataType(DataType{Value: &InetArray{}, Name: "_inet", OID: InetArrayOID}) - ci.RegisterDataType(DataType{Value: &Int2Array{}, Name: "_int2", OID: Int2ArrayOID}) - ci.RegisterDataType(DataType{Value: &Int4Array{}, Name: "_int4", OID: Int4ArrayOID}) - ci.RegisterDataType(DataType{Value: &Int8Array{}, Name: "_int8", OID: Int8ArrayOID}) - ci.RegisterDataType(DataType{Value: &NumericArray{}, Name: "_numeric", OID: NumericArrayOID}) - ci.RegisterDataType(DataType{Value: &TextArray{}, Name: "_text", OID: TextArrayOID}) - ci.RegisterDataType(DataType{Value: &TimestampArray{}, Name: "_timestamp", OID: TimestampArrayOID}) - ci.RegisterDataType(DataType{Value: &TimestamptzArray{}, Name: "_timestamptz", OID: TimestamptzArrayOID}) - ci.RegisterDataType(DataType{Value: &UUIDArray{}, Name: "_uuid", OID: UUIDArrayOID}) - ci.RegisterDataType(DataType{Value: &VarcharArray{}, Name: "_varchar", OID: VarcharArrayOID}) - ci.RegisterDataType(DataType{Value: &ACLItem{}, Name: "aclitem", OID: ACLItemOID}) - ci.RegisterDataType(DataType{Value: &Bit{}, Name: "bit", OID: BitOID}) - ci.RegisterDataType(DataType{Value: &Bool{}, Name: "bool", OID: BoolOID}) - ci.RegisterDataType(DataType{Value: &Box{}, Name: "box", OID: BoxOID}) - ci.RegisterDataType(DataType{Value: &BPChar{}, Name: "bpchar", OID: BPCharOID}) - ci.RegisterDataType(DataType{Value: &Bytea{}, Name: "bytea", OID: ByteaOID}) - ci.RegisterDataType(DataType{Value: &QChar{}, Name: "char", OID: QCharOID}) - ci.RegisterDataType(DataType{Value: &CID{}, Name: "cid", OID: CIDOID}) - ci.RegisterDataType(DataType{Value: &CIDR{}, Name: "cidr", OID: CIDROID}) - ci.RegisterDataType(DataType{Value: &Circle{}, Name: 
"circle", OID: CircleOID}) - ci.RegisterDataType(DataType{Value: &Date{}, Name: "date", OID: DateOID}) - ci.RegisterDataType(DataType{Value: &Daterange{}, Name: "daterange", OID: DaterangeOID}) - ci.RegisterDataType(DataType{Value: &Float4{}, Name: "float4", OID: Float4OID}) - ci.RegisterDataType(DataType{Value: &Float8{}, Name: "float8", OID: Float8OID}) - ci.RegisterDataType(DataType{Value: &Inet{}, Name: "inet", OID: InetOID}) - ci.RegisterDataType(DataType{Value: &Int2{}, Name: "int2", OID: Int2OID}) - ci.RegisterDataType(DataType{Value: &Int4{}, Name: "int4", OID: Int4OID}) - ci.RegisterDataType(DataType{Value: &Int4range{}, Name: "int4range", OID: Int4rangeOID}) - ci.RegisterDataType(DataType{Value: &Int4multirange{}, Name: "int4multirange", OID: Int4multirangeOID}) - ci.RegisterDataType(DataType{Value: &Int8{}, Name: "int8", OID: Int8OID}) - ci.RegisterDataType(DataType{Value: &Int8range{}, Name: "int8range", OID: Int8rangeOID}) - ci.RegisterDataType(DataType{Value: &Int8multirange{}, Name: "int8multirange", OID: Int8multirangeOID}) - ci.RegisterDataType(DataType{Value: &Interval{}, Name: "interval", OID: IntervalOID}) - ci.RegisterDataType(DataType{Value: &JSON{}, Name: "json", OID: JSONOID}) - ci.RegisterDataType(DataType{Value: &JSONArray{}, Name: "_json", OID: JSONArrayOID}) - ci.RegisterDataType(DataType{Value: &JSONB{}, Name: "jsonb", OID: JSONBOID}) - ci.RegisterDataType(DataType{Value: &JSONBArray{}, Name: "_jsonb", OID: JSONBArrayOID}) - ci.RegisterDataType(DataType{Value: &Line{}, Name: "line", OID: LineOID}) - ci.RegisterDataType(DataType{Value: &Lseg{}, Name: "lseg", OID: LsegOID}) - ci.RegisterDataType(DataType{Value: &Macaddr{}, Name: "macaddr", OID: MacaddrOID}) - ci.RegisterDataType(DataType{Value: &Name{}, Name: "name", OID: NameOID}) - ci.RegisterDataType(DataType{Value: &Numeric{}, Name: "numeric", OID: NumericOID}) - ci.RegisterDataType(DataType{Value: &Numrange{}, Name: "numrange", OID: NumrangeOID}) - ci.RegisterDataType(DataType{Value: 
&Nummultirange{}, Name: "nummultirange", OID: NummultirangeOID}) - ci.RegisterDataType(DataType{Value: &OIDValue{}, Name: "oid", OID: OIDOID}) - ci.RegisterDataType(DataType{Value: &Path{}, Name: "path", OID: PathOID}) - ci.RegisterDataType(DataType{Value: &Point{}, Name: "point", OID: PointOID}) - ci.RegisterDataType(DataType{Value: &Polygon{}, Name: "polygon", OID: PolygonOID}) - ci.RegisterDataType(DataType{Value: &Record{}, Name: "record", OID: RecordOID}) - ci.RegisterDataType(DataType{Value: &Text{}, Name: "text", OID: TextOID}) - ci.RegisterDataType(DataType{Value: &TID{}, Name: "tid", OID: TIDOID}) - ci.RegisterDataType(DataType{Value: &Time{}, Name: "time", OID: TimeOID}) - ci.RegisterDataType(DataType{Value: &Timestamp{}, Name: "timestamp", OID: TimestampOID}) - ci.RegisterDataType(DataType{Value: &Timestamptz{}, Name: "timestamptz", OID: TimestamptzOID}) - ci.RegisterDataType(DataType{Value: &Tsrange{}, Name: "tsrange", OID: TsrangeOID}) - ci.RegisterDataType(DataType{Value: &TsrangeArray{}, Name: "_tsrange", OID: TsrangeArrayOID}) - ci.RegisterDataType(DataType{Value: &Tstzrange{}, Name: "tstzrange", OID: TstzrangeOID}) - ci.RegisterDataType(DataType{Value: &TstzrangeArray{}, Name: "_tstzrange", OID: TstzrangeArrayOID}) - ci.RegisterDataType(DataType{Value: &Unknown{}, Name: "unknown", OID: UnknownOID}) - ci.RegisterDataType(DataType{Value: &UUID{}, Name: "uuid", OID: UUIDOID}) - ci.RegisterDataType(DataType{Value: &Varbit{}, Name: "varbit", OID: VarbitOID}) - ci.RegisterDataType(DataType{Value: &Varchar{}, Name: "varchar", OID: VarcharOID}) - ci.RegisterDataType(DataType{Value: &XID{}, Name: "xid", OID: XIDOID}) - - registerDefaultPgTypeVariants := func(name, arrayName string, value interface{}) { - ci.RegisterDefaultPgType(value, name) - valueType := reflect.TypeOf(value) - - ci.RegisterDefaultPgType(reflect.New(valueType).Interface(), name) - - sliceType := reflect.SliceOf(valueType) - ci.RegisterDefaultPgType(reflect.MakeSlice(sliceType, 0, 
0).Interface(), arrayName) - - ci.RegisterDefaultPgType(reflect.New(sliceType).Interface(), arrayName) - } - - // Integer types that directly map to a PostgreSQL type - registerDefaultPgTypeVariants("int2", "_int2", int16(0)) - registerDefaultPgTypeVariants("int4", "_int4", int32(0)) - registerDefaultPgTypeVariants("int8", "_int8", int64(0)) - - // Integer types that do not have a direct match to a PostgreSQL type - registerDefaultPgTypeVariants("int8", "_int8", uint16(0)) - registerDefaultPgTypeVariants("int8", "_int8", uint32(0)) - registerDefaultPgTypeVariants("int8", "_int8", uint64(0)) - registerDefaultPgTypeVariants("int8", "_int8", int(0)) - registerDefaultPgTypeVariants("int8", "_int8", uint(0)) - - registerDefaultPgTypeVariants("float4", "_float4", float32(0)) - registerDefaultPgTypeVariants("float8", "_float8", float64(0)) - - registerDefaultPgTypeVariants("bool", "_bool", false) - registerDefaultPgTypeVariants("timestamptz", "_timestamptz", time.Time{}) - registerDefaultPgTypeVariants("text", "_text", "") - registerDefaultPgTypeVariants("bytea", "_bytea", []byte(nil)) - - registerDefaultPgTypeVariants("inet", "_inet", net.IP{}) - ci.RegisterDefaultPgType((*net.IPNet)(nil), "cidr") - ci.RegisterDefaultPgType([]*net.IPNet(nil), "_cidr") - - return ci -} - -func (ci *ConnInfo) InitializeDataTypes(nameOIDs map[string]uint32) { - for name, oid := range nameOIDs { - var value Value - if t, ok := nameValues[name]; ok { - value = reflect.New(reflect.ValueOf(t).Elem().Type()).Interface().(Value) - } else { - value = &GenericText{} - } - ci.RegisterDataType(DataType{Value: value, Name: name, OID: oid}) - } -} - -func (ci *ConnInfo) RegisterDataType(t DataType) { - t.Value = NewValue(t.Value) - - ci.oidToDataType[t.OID] = &t - ci.nameToDataType[t.Name] = &t - - { - var formatCode int16 - if pfp, ok := t.Value.(ParamFormatPreferrer); ok { - formatCode = pfp.PreferredParamFormat() - } else if _, ok := t.Value.(BinaryEncoder); ok { - formatCode = BinaryFormatCode - } 
- ci.oidToParamFormatCode[t.OID] = formatCode - } - - { - var formatCode int16 - if rfp, ok := t.Value.(ResultFormatPreferrer); ok { - formatCode = rfp.PreferredResultFormat() - } else if _, ok := t.Value.(BinaryDecoder); ok { - formatCode = BinaryFormatCode - } - ci.oidToResultFormatCode[t.OID] = formatCode - } - - if d, ok := t.Value.(TextDecoder); ok { - t.textDecoder = d - } - - if d, ok := t.Value.(BinaryDecoder); ok { - t.binaryDecoder = d - } - - ci.reflectTypeToDataType = nil // Invalidated by type registration -} - -// RegisterDefaultPgType registers a mapping of a Go type to a PostgreSQL type name. Typically the data type to be -// encoded or decoded is determined by the PostgreSQL OID. But if the OID of a value to be encoded or decoded is -// unknown, this additional mapping will be used by DataTypeForValue to determine a suitable data type. -func (ci *ConnInfo) RegisterDefaultPgType(value interface{}, name string) { - ci.reflectTypeToName[reflect.TypeOf(value)] = name - ci.reflectTypeToDataType = nil // Invalidated by registering a default type -} - -func (ci *ConnInfo) DataTypeForOID(oid uint32) (*DataType, bool) { - dt, ok := ci.oidToDataType[oid] - return dt, ok -} - -func (ci *ConnInfo) DataTypeForName(name string) (*DataType, bool) { - dt, ok := ci.nameToDataType[name] - return dt, ok -} - -func (ci *ConnInfo) buildReflectTypeToDataType() { - ci.reflectTypeToDataType = make(map[reflect.Type]*DataType) - - for _, dt := range ci.oidToDataType { - if _, is := dt.Value.(TypeValue); !is { - ci.reflectTypeToDataType[reflect.ValueOf(dt.Value).Type()] = dt - } - } - - for reflectType, name := range ci.reflectTypeToName { - if dt, ok := ci.nameToDataType[name]; ok { - ci.reflectTypeToDataType[reflectType] = dt - } - } -} - -// DataTypeForValue finds a data type suitable for v. Use RegisterDataType to register types that can encode and decode -// themselves. Use RegisterDefaultPgType to register that can be handled by a registered data type. 
-func (ci *ConnInfo) DataTypeForValue(v interface{}) (*DataType, bool) { - if ci.reflectTypeToDataType == nil { - ci.buildReflectTypeToDataType() - } - - if tv, ok := v.(TypeValue); ok { - dt, ok := ci.nameToDataType[tv.TypeName()] - return dt, ok - } - - dt, ok := ci.reflectTypeToDataType[reflect.TypeOf(v)] - return dt, ok -} - -func (ci *ConnInfo) ParamFormatCodeForOID(oid uint32) int16 { - fc, ok := ci.oidToParamFormatCode[oid] - if ok { - return fc - } - return TextFormatCode -} - -func (ci *ConnInfo) ResultFormatCodeForOID(oid uint32) int16 { - fc, ok := ci.oidToResultFormatCode[oid] - if ok { - return fc - } - return TextFormatCode -} - -// DeepCopy makes a deep copy of the ConnInfo. -func (ci *ConnInfo) DeepCopy() *ConnInfo { - ci2 := newConnInfo() - - for _, dt := range ci.oidToDataType { - ci2.RegisterDataType(DataType{ - Value: NewValue(dt.Value), - Name: dt.Name, - OID: dt.OID, - }) - } - - for t, n := range ci.reflectTypeToName { - ci2.reflectTypeToName[t] = n - } - - return ci2 -} - -// ScanPlan is a precompiled plan to scan into a type of destination. -type ScanPlan interface { - // Scan scans src into dst. If the dst type has changed in an incompatible way a ScanPlan should automatically - // replan and scan. 
- Scan(ci *ConnInfo, oid uint32, formatCode int16, src []byte, dst interface{}) error -} - -type scanPlanDstBinaryDecoder struct{} - -func (scanPlanDstBinaryDecoder) Scan(ci *ConnInfo, oid uint32, formatCode int16, src []byte, dst interface{}) error { - if d, ok := (dst).(BinaryDecoder); ok { - return d.DecodeBinary(ci, src) - } - - newPlan := ci.PlanScan(oid, formatCode, dst) - return newPlan.Scan(ci, oid, formatCode, src, dst) -} - -type scanPlanDstTextDecoder struct{} - -func (plan scanPlanDstTextDecoder) Scan(ci *ConnInfo, oid uint32, formatCode int16, src []byte, dst interface{}) error { - if d, ok := (dst).(TextDecoder); ok { - return d.DecodeText(ci, src) - } - - newPlan := ci.PlanScan(oid, formatCode, dst) - return newPlan.Scan(ci, oid, formatCode, src, dst) -} - -type scanPlanDataTypeSQLScanner DataType - -func (plan *scanPlanDataTypeSQLScanner) Scan(ci *ConnInfo, oid uint32, formatCode int16, src []byte, dst interface{}) error { - scanner, ok := dst.(sql.Scanner) - if !ok { - dv := reflect.ValueOf(dst) - if dv.Kind() != reflect.Ptr || !dv.Type().Elem().Implements(scannerType) { - newPlan := ci.PlanScan(oid, formatCode, dst) - return newPlan.Scan(ci, oid, formatCode, src, dst) - } - if src == nil { - // Ensure the pointer points to a zero version of the value - dv.Elem().Set(reflect.Zero(dv.Type().Elem())) - return nil - } - dv = dv.Elem() - // If the pointer is to a nil pointer then set that before scanning - if dv.Kind() == reflect.Ptr && dv.IsNil() { - dv.Set(reflect.New(dv.Type().Elem())) - } - scanner = dv.Interface().(sql.Scanner) - } - - dt := (*DataType)(plan) - var err error - switch formatCode { - case BinaryFormatCode: - err = dt.binaryDecoder.DecodeBinary(ci, src) - case TextFormatCode: - err = dt.textDecoder.DecodeText(ci, src) - } - if err != nil { - return err - } - - sqlSrc, err := DatabaseSQLValue(ci, dt.Value) - if err != nil { - return err - } - return scanner.Scan(sqlSrc) -} - -type scanPlanDataTypeAssignTo DataType - -func (plan 
*scanPlanDataTypeAssignTo) Scan(ci *ConnInfo, oid uint32, formatCode int16, src []byte, dst interface{}) error { - dt := (*DataType)(plan) - var err error - switch formatCode { - case BinaryFormatCode: - err = dt.binaryDecoder.DecodeBinary(ci, src) - case TextFormatCode: - err = dt.textDecoder.DecodeText(ci, src) - } - if err != nil { - return err - } - - assignToErr := dt.Value.AssignTo(dst) - if assignToErr == nil { - return nil - } - - if dstPtr, ok := dst.(*interface{}); ok { - *dstPtr = dt.Value.Get() - return nil - } - - // assignToErr might have failed because the type of destination has changed - newPlan := ci.PlanScan(oid, formatCode, dst) - if newPlan, sameType := newPlan.(*scanPlanDataTypeAssignTo); !sameType { - return newPlan.Scan(ci, oid, formatCode, src, dst) - } - - return assignToErr -} - -type scanPlanSQLScanner struct{} - -func (scanPlanSQLScanner) Scan(ci *ConnInfo, oid uint32, formatCode int16, src []byte, dst interface{}) error { - scanner, ok := dst.(sql.Scanner) - if !ok { - dv := reflect.ValueOf(dst) - if dv.Kind() != reflect.Ptr || !dv.Type().Elem().Implements(scannerType) { - newPlan := ci.PlanScan(oid, formatCode, dst) - return newPlan.Scan(ci, oid, formatCode, src, dst) - } - if src == nil { - // Ensure the pointer points to a zero version of the value - dv.Elem().Set(reflect.Zero(dv.Elem().Type())) - return nil - } - dv = dv.Elem() - // If the pointer is to a nil pointer then set that before scanning - if dv.Kind() == reflect.Ptr && dv.IsNil() { - dv.Set(reflect.New(dv.Type().Elem())) - } - scanner = dv.Interface().(sql.Scanner) - } - if src == nil { - // This is necessary because interface value []byte:nil does not equal nil:nil for the binary format path and the - // text format path would be converted to empty string. 
- return scanner.Scan(nil) - } else if formatCode == BinaryFormatCode { - return scanner.Scan(src) - } else { - return scanner.Scan(string(src)) - } -} - -type scanPlanReflection struct{} - -func (scanPlanReflection) Scan(ci *ConnInfo, oid uint32, formatCode int16, src []byte, dst interface{}) error { - // We might be given a pointer to something that implements the decoder interface(s), - // even though the pointer itself doesn't. - refVal := reflect.ValueOf(dst) - if refVal.Kind() == reflect.Ptr && refVal.Type().Elem().Kind() == reflect.Ptr { - // If the database returned NULL, then we set dest as nil to indicate that. - if src == nil { - nilPtr := reflect.Zero(refVal.Type().Elem()) - refVal.Elem().Set(nilPtr) - return nil - } - - // We need to allocate an element, and set the destination to it - // Then we can retry as that element. - elemPtr := reflect.New(refVal.Type().Elem().Elem()) - refVal.Elem().Set(elemPtr) - - plan := ci.PlanScan(oid, formatCode, elemPtr.Interface()) - return plan.Scan(ci, oid, formatCode, src, elemPtr.Interface()) - } - - return scanUnknownType(oid, formatCode, src, dst) -} - -type scanPlanBinaryInt16 struct{} - -func (scanPlanBinaryInt16) Scan(ci *ConnInfo, oid uint32, formatCode int16, src []byte, dst interface{}) error { - if src == nil { - return fmt.Errorf("cannot scan null into %T", dst) - } - - if len(src) != 2 { - return fmt.Errorf("invalid length for int2: %v", len(src)) - } - - if p, ok := (dst).(*int16); ok { - *p = int16(binary.BigEndian.Uint16(src)) - return nil - } - - newPlan := ci.PlanScan(oid, formatCode, dst) - return newPlan.Scan(ci, oid, formatCode, src, dst) -} - -type scanPlanBinaryInt32 struct{} - -func (scanPlanBinaryInt32) Scan(ci *ConnInfo, oid uint32, formatCode int16, src []byte, dst interface{}) error { - if src == nil { - return fmt.Errorf("cannot scan null into %T", dst) - } - - if len(src) != 4 { - return fmt.Errorf("invalid length for int4: %v", len(src)) - } - - if p, ok := (dst).(*int32); ok { - *p = 
int32(binary.BigEndian.Uint32(src)) - return nil - } - - newPlan := ci.PlanScan(oid, formatCode, dst) - return newPlan.Scan(ci, oid, formatCode, src, dst) -} - -type scanPlanBinaryInt64 struct{} - -func (scanPlanBinaryInt64) Scan(ci *ConnInfo, oid uint32, formatCode int16, src []byte, dst interface{}) error { - if src == nil { - return fmt.Errorf("cannot scan null into %T", dst) - } - - if len(src) != 8 { - return fmt.Errorf("invalid length for int8: %v", len(src)) - } - - if p, ok := (dst).(*int64); ok { - *p = int64(binary.BigEndian.Uint64(src)) - return nil - } - - newPlan := ci.PlanScan(oid, formatCode, dst) - return newPlan.Scan(ci, oid, formatCode, src, dst) -} - -type scanPlanBinaryFloat32 struct{} - -func (scanPlanBinaryFloat32) Scan(ci *ConnInfo, oid uint32, formatCode int16, src []byte, dst interface{}) error { - if src == nil { - return fmt.Errorf("cannot scan null into %T", dst) - } - - if len(src) != 4 { - return fmt.Errorf("invalid length for int4: %v", len(src)) - } - - if p, ok := (dst).(*float32); ok { - n := int32(binary.BigEndian.Uint32(src)) - *p = float32(math.Float32frombits(uint32(n))) - return nil - } - - newPlan := ci.PlanScan(oid, formatCode, dst) - return newPlan.Scan(ci, oid, formatCode, src, dst) -} - -type scanPlanBinaryFloat64 struct{} - -func (scanPlanBinaryFloat64) Scan(ci *ConnInfo, oid uint32, formatCode int16, src []byte, dst interface{}) error { - if src == nil { - return fmt.Errorf("cannot scan null into %T", dst) - } - - if len(src) != 8 { - return fmt.Errorf("invalid length for int8: %v", len(src)) - } - - if p, ok := (dst).(*float64); ok { - n := int64(binary.BigEndian.Uint64(src)) - *p = float64(math.Float64frombits(uint64(n))) - return nil - } - - newPlan := ci.PlanScan(oid, formatCode, dst) - return newPlan.Scan(ci, oid, formatCode, src, dst) -} - -type scanPlanBinaryBytes struct{} - -func (scanPlanBinaryBytes) Scan(ci *ConnInfo, oid uint32, formatCode int16, src []byte, dst interface{}) error { - if p, ok := 
(dst).(*[]byte); ok { - *p = src - return nil - } - - newPlan := ci.PlanScan(oid, formatCode, dst) - return newPlan.Scan(ci, oid, formatCode, src, dst) -} - -type scanPlanString struct{} - -func (scanPlanString) Scan(ci *ConnInfo, oid uint32, formatCode int16, src []byte, dst interface{}) error { - if src == nil { - return fmt.Errorf("cannot scan null into %T", dst) - } - - if p, ok := (dst).(*string); ok { - *p = string(src) - return nil - } - - newPlan := ci.PlanScan(oid, formatCode, dst) - return newPlan.Scan(ci, oid, formatCode, src, dst) -} - -var scannerType = reflect.TypeOf((*sql.Scanner)(nil)).Elem() - -func isScanner(dst interface{}) bool { - if _, ok := dst.(sql.Scanner); ok { - return true - } - if t := reflect.TypeOf(dst); t != nil && t.Kind() == reflect.Ptr && t.Elem().Implements(scannerType) { - return true - } - return false -} - -// PlanScan prepares a plan to scan a value into dst. -func (ci *ConnInfo) PlanScan(oid uint32, formatCode int16, dst interface{}) ScanPlan { - switch formatCode { - case BinaryFormatCode: - switch dst.(type) { - case *string: - switch oid { - case TextOID, VarcharOID: - return scanPlanString{} - } - case *int16: - if oid == Int2OID { - return scanPlanBinaryInt16{} - } - case *int32: - if oid == Int4OID { - return scanPlanBinaryInt32{} - } - case *int64: - if oid == Int8OID { - return scanPlanBinaryInt64{} - } - case *float32: - if oid == Float4OID { - return scanPlanBinaryFloat32{} - } - case *float64: - if oid == Float8OID { - return scanPlanBinaryFloat64{} - } - case *[]byte: - switch oid { - case ByteaOID, TextOID, VarcharOID, JSONOID: - return scanPlanBinaryBytes{} - } - case BinaryDecoder: - return scanPlanDstBinaryDecoder{} - } - case TextFormatCode: - switch dst.(type) { - case *string: - return scanPlanString{} - case *[]byte: - if oid != ByteaOID { - return scanPlanBinaryBytes{} - } - case TextDecoder: - return scanPlanDstTextDecoder{} - } - } - - var dt *DataType - - if oid == 0 { - if dataType, ok := 
ci.DataTypeForValue(dst); ok { - dt = dataType - } - } else { - if dataType, ok := ci.DataTypeForOID(oid); ok { - dt = dataType - } - } - - if dt != nil { - if isScanner(dst) { - return (*scanPlanDataTypeSQLScanner)(dt) - } - return (*scanPlanDataTypeAssignTo)(dt) - } - - if isScanner(dst) { - return scanPlanSQLScanner{} - } - - return scanPlanReflection{} -} - -func (ci *ConnInfo) Scan(oid uint32, formatCode int16, src []byte, dst interface{}) error { - if dst == nil { - return nil - } - - plan := ci.PlanScan(oid, formatCode, dst) - return plan.Scan(ci, oid, formatCode, src, dst) -} - -func scanUnknownType(oid uint32, formatCode int16, buf []byte, dest interface{}) error { - switch dest := dest.(type) { - case *string: - if formatCode == BinaryFormatCode { - return fmt.Errorf("unknown oid %d in binary format cannot be scanned into %T", oid, dest) - } - *dest = string(buf) - return nil - case *[]byte: - *dest = buf - return nil - default: - if nextDst, retry := GetAssignToDstType(dest); retry { - return scanUnknownType(oid, formatCode, buf, nextDst) - } - return fmt.Errorf("unknown oid %d cannot be scanned into %T", oid, dest) - } -} - -// NewValue returns a new instance of the same type as v. 
-func NewValue(v Value) Value { - if tv, ok := v.(TypeValue); ok { - return tv.NewTypeValue() - } else { - return reflect.New(reflect.ValueOf(v).Elem().Type()).Interface().(Value) - } -} - -var nameValues map[string]Value - -func init() { - nameValues = map[string]Value{ - "_aclitem": &ACLItemArray{}, - "_bool": &BoolArray{}, - "_bpchar": &BPCharArray{}, - "_bytea": &ByteaArray{}, - "_cidr": &CIDRArray{}, - "_date": &DateArray{}, - "_float4": &Float4Array{}, - "_float8": &Float8Array{}, - "_inet": &InetArray{}, - "_int2": &Int2Array{}, - "_int4": &Int4Array{}, - "_int8": &Int8Array{}, - "_numeric": &NumericArray{}, - "_text": &TextArray{}, - "_timestamp": &TimestampArray{}, - "_timestamptz": &TimestamptzArray{}, - "_uuid": &UUIDArray{}, - "_varchar": &VarcharArray{}, - "_json": &JSONArray{}, - "_jsonb": &JSONBArray{}, - "aclitem": &ACLItem{}, - "bit": &Bit{}, - "bool": &Bool{}, - "box": &Box{}, - "bpchar": &BPChar{}, - "bytea": &Bytea{}, - "char": &QChar{}, - "cid": &CID{}, - "cidr": &CIDR{}, - "circle": &Circle{}, - "date": &Date{}, - "daterange": &Daterange{}, - "float4": &Float4{}, - "float8": &Float8{}, - "hstore": &Hstore{}, - "inet": &Inet{}, - "int2": &Int2{}, - "int4": &Int4{}, - "int4range": &Int4range{}, - "int4multirange": &Int4multirange{}, - "int8": &Int8{}, - "int8range": &Int8range{}, - "int8multirange": &Int8multirange{}, - "interval": &Interval{}, - "json": &JSON{}, - "jsonb": &JSONB{}, - "line": &Line{}, - "lseg": &Lseg{}, - "ltree": &Ltree{}, - "macaddr": &Macaddr{}, - "name": &Name{}, - "numeric": &Numeric{}, - "numrange": &Numrange{}, - "nummultirange": &Nummultirange{}, - "oid": &OIDValue{}, - "path": &Path{}, - "point": &Point{}, - "polygon": &Polygon{}, - "record": &Record{}, - "text": &Text{}, - "tid": &TID{}, - "timestamp": &Timestamp{}, - "timestamptz": &Timestamptz{}, - "tsrange": &Tsrange{}, - "_tsrange": &TsrangeArray{}, - "tstzrange": &Tstzrange{}, - "_tstzrange": &TstzrangeArray{}, - "unknown": &Unknown{}, - "uuid": &UUID{}, - 
"varbit": &Varbit{}, - "varchar": &Varchar{}, - "xid": &XID{}, - } -} diff --git a/vendor/github.com/jackc/pgtype/pguint32.go b/vendor/github.com/jackc/pgtype/pguint32.go deleted file mode 100644 index a0e88ca2..00000000 --- a/vendor/github.com/jackc/pgtype/pguint32.go +++ /dev/null @@ -1,162 +0,0 @@ -package pgtype - -import ( - "database/sql/driver" - "encoding/binary" - "fmt" - "math" - "strconv" - - "github.com/jackc/pgio" -) - -// pguint32 is the core type that is used to implement PostgreSQL types such as -// CID and XID. -type pguint32 struct { - Uint uint32 - Status Status -} - -// Set converts from src to dst. Note that as pguint32 is not a general -// number type Set does not do automatic type conversion as other number -// types do. -func (dst *pguint32) Set(src interface{}) error { - switch value := src.(type) { - case int64: - if value < 0 { - return fmt.Errorf("%d is less than minimum value for pguint32", value) - } - if value > math.MaxUint32 { - return fmt.Errorf("%d is greater than maximum value for pguint32", value) - } - *dst = pguint32{Uint: uint32(value), Status: Present} - case uint32: - *dst = pguint32{Uint: value, Status: Present} - default: - return fmt.Errorf("cannot convert %v to pguint32", value) - } - - return nil -} - -func (dst pguint32) Get() interface{} { - switch dst.Status { - case Present: - return dst.Uint - case Null: - return nil - default: - return dst.Status - } -} - -// AssignTo assigns from src to dst. Note that as pguint32 is not a general number -// type AssignTo does not do automatic type conversion as other number types do. 
-func (src *pguint32) AssignTo(dst interface{}) error { - switch v := dst.(type) { - case *uint32: - if src.Status == Present { - *v = src.Uint - } else { - return fmt.Errorf("cannot assign %v into %T", src, dst) - } - case **uint32: - if src.Status == Present { - n := src.Uint - *v = &n - } else { - *v = nil - } - } - - return nil -} - -func (dst *pguint32) DecodeText(ci *ConnInfo, src []byte) error { - if src == nil { - *dst = pguint32{Status: Null} - return nil - } - - n, err := strconv.ParseUint(string(src), 10, 32) - if err != nil { - return err - } - - *dst = pguint32{Uint: uint32(n), Status: Present} - return nil -} - -func (dst *pguint32) DecodeBinary(ci *ConnInfo, src []byte) error { - if src == nil { - *dst = pguint32{Status: Null} - return nil - } - - if len(src) != 4 { - return fmt.Errorf("invalid length: %v", len(src)) - } - - n := binary.BigEndian.Uint32(src) - *dst = pguint32{Uint: n, Status: Present} - return nil -} - -func (src pguint32) EncodeText(ci *ConnInfo, buf []byte) ([]byte, error) { - switch src.Status { - case Null: - return nil, nil - case Undefined: - return nil, errUndefined - } - - return append(buf, strconv.FormatUint(uint64(src.Uint), 10)...), nil -} - -func (src pguint32) EncodeBinary(ci *ConnInfo, buf []byte) ([]byte, error) { - switch src.Status { - case Null: - return nil, nil - case Undefined: - return nil, errUndefined - } - - return pgio.AppendUint32(buf, src.Uint), nil -} - -// Scan implements the database/sql Scanner interface. 
-func (dst *pguint32) Scan(src interface{}) error { - if src == nil { - *dst = pguint32{Status: Null} - return nil - } - - switch src := src.(type) { - case uint32: - *dst = pguint32{Uint: src, Status: Present} - return nil - case int64: - *dst = pguint32{Uint: uint32(src), Status: Present} - return nil - case string: - return dst.DecodeText(nil, []byte(src)) - case []byte: - srcCopy := make([]byte, len(src)) - copy(srcCopy, src) - return dst.DecodeText(nil, srcCopy) - } - - return fmt.Errorf("cannot scan %T", src) -} - -// Value implements the database/sql/driver Valuer interface. -func (src pguint32) Value() (driver.Value, error) { - switch src.Status { - case Present: - return int64(src.Uint), nil - case Null: - return nil, nil - default: - return nil, errUndefined - } -} diff --git a/vendor/github.com/jackc/pgtype/point.go b/vendor/github.com/jackc/pgtype/point.go deleted file mode 100644 index 0c799106..00000000 --- a/vendor/github.com/jackc/pgtype/point.go +++ /dev/null @@ -1,214 +0,0 @@ -package pgtype - -import ( - "bytes" - "database/sql/driver" - "encoding/binary" - "fmt" - "math" - "strconv" - "strings" - - "github.com/jackc/pgio" -) - -type Vec2 struct { - X float64 - Y float64 -} - -type Point struct { - P Vec2 - Status Status -} - -func (dst *Point) Set(src interface{}) error { - if src == nil { - dst.Status = Null - return nil - } - err := fmt.Errorf("cannot convert %v to Point", src) - var p *Point - switch value := src.(type) { - case string: - p, err = parsePoint([]byte(value)) - case []byte: - p, err = parsePoint(value) - default: - return err - } - if err != nil { - return err - } - *dst = *p - return nil -} - -func parsePoint(src []byte) (*Point, error) { - if src == nil || bytes.Compare(src, []byte("null")) == 0 { - return &Point{Status: Null}, nil - } - - if len(src) < 5 { - return nil, fmt.Errorf("invalid length for point: %v", len(src)) - } - if src[0] == '"' && src[len(src)-1] == '"' { - src = src[1 : len(src)-1] - } - parts := 
strings.SplitN(string(src[1:len(src)-1]), ",", 2) - if len(parts) < 2 { - return nil, fmt.Errorf("invalid format for point") - } - - x, err := strconv.ParseFloat(parts[0], 64) - if err != nil { - return nil, err - } - - y, err := strconv.ParseFloat(parts[1], 64) - if err != nil { - return nil, err - } - - return &Point{P: Vec2{x, y}, Status: Present}, nil -} - -func (dst Point) Get() interface{} { - switch dst.Status { - case Present: - return dst - case Null: - return nil - default: - return dst.Status - } -} - -func (src *Point) AssignTo(dst interface{}) error { - return fmt.Errorf("cannot assign %v to %T", src, dst) -} - -func (dst *Point) DecodeText(ci *ConnInfo, src []byte) error { - if src == nil { - *dst = Point{Status: Null} - return nil - } - - if len(src) < 5 { - return fmt.Errorf("invalid length for point: %v", len(src)) - } - - parts := strings.SplitN(string(src[1:len(src)-1]), ",", 2) - if len(parts) < 2 { - return fmt.Errorf("invalid format for point") - } - - x, err := strconv.ParseFloat(parts[0], 64) - if err != nil { - return err - } - - y, err := strconv.ParseFloat(parts[1], 64) - if err != nil { - return err - } - - *dst = Point{P: Vec2{x, y}, Status: Present} - return nil -} - -func (dst *Point) DecodeBinary(ci *ConnInfo, src []byte) error { - if src == nil { - *dst = Point{Status: Null} - return nil - } - - if len(src) != 16 { - return fmt.Errorf("invalid length for point: %v", len(src)) - } - - x := binary.BigEndian.Uint64(src) - y := binary.BigEndian.Uint64(src[8:]) - - *dst = Point{ - P: Vec2{math.Float64frombits(x), math.Float64frombits(y)}, - Status: Present, - } - return nil -} - -func (src Point) EncodeText(ci *ConnInfo, buf []byte) ([]byte, error) { - switch src.Status { - case Null: - return nil, nil - case Undefined: - return nil, errUndefined - } - - return append(buf, fmt.Sprintf(`(%s,%s)`, - strconv.FormatFloat(src.P.X, 'f', -1, 64), - strconv.FormatFloat(src.P.Y, 'f', -1, 64), - )...), nil -} - -func (src Point) EncodeBinary(ci 
*ConnInfo, buf []byte) ([]byte, error) { - switch src.Status { - case Null: - return nil, nil - case Undefined: - return nil, errUndefined - } - - buf = pgio.AppendUint64(buf, math.Float64bits(src.P.X)) - buf = pgio.AppendUint64(buf, math.Float64bits(src.P.Y)) - return buf, nil -} - -// Scan implements the database/sql Scanner interface. -func (dst *Point) Scan(src interface{}) error { - if src == nil { - *dst = Point{Status: Null} - return nil - } - - switch src := src.(type) { - case string: - return dst.DecodeText(nil, []byte(src)) - case []byte: - srcCopy := make([]byte, len(src)) - copy(srcCopy, src) - return dst.DecodeText(nil, srcCopy) - } - - return fmt.Errorf("cannot scan %T", src) -} - -// Value implements the database/sql/driver Valuer interface. -func (src Point) Value() (driver.Value, error) { - return EncodeValueText(src) -} - -func (src Point) MarshalJSON() ([]byte, error) { - switch src.Status { - case Present: - var buff bytes.Buffer - buff.WriteByte('"') - buff.WriteString(fmt.Sprintf("(%g,%g)", src.P.X, src.P.Y)) - buff.WriteByte('"') - return buff.Bytes(), nil - case Null: - return []byte("null"), nil - case Undefined: - return nil, errUndefined - } - return nil, errBadStatus -} - -func (dst *Point) UnmarshalJSON(point []byte) error { - p, err := parsePoint(point) - if err != nil { - return err - } - *dst = *p - return nil -} diff --git a/vendor/github.com/jackc/pgtype/polygon.go b/vendor/github.com/jackc/pgtype/polygon.go deleted file mode 100644 index 207cadc0..00000000 --- a/vendor/github.com/jackc/pgtype/polygon.go +++ /dev/null @@ -1,226 +0,0 @@ -package pgtype - -import ( - "database/sql/driver" - "encoding/binary" - "fmt" - "math" - "strconv" - "strings" - - "github.com/jackc/pgio" -) - -type Polygon struct { - P []Vec2 - Status Status -} - -// Set converts src to dest. -// -// src can be nil, string, []float64, and []pgtype.Vec2. -// -// If src is string the format must be ((x1,y1),(x2,y2),...,(xn,yn)). 
-// Important that there are no spaces in it. -func (dst *Polygon) Set(src interface{}) error { - if src == nil { - dst.Status = Null - return nil - } - err := fmt.Errorf("cannot convert %v to Polygon", src) - var p *Polygon - switch value := src.(type) { - case string: - p, err = stringToPolygon(value) - case []Vec2: - p = &Polygon{Status: Present, P: value} - err = nil - case []float64: - p, err = float64ToPolygon(value) - default: - return err - } - if err != nil { - return err - } - *dst = *p - return nil -} - -func stringToPolygon(src string) (*Polygon, error) { - p := &Polygon{} - err := p.DecodeText(nil, []byte(src)) - return p, err -} - -func float64ToPolygon(src []float64) (*Polygon, error) { - p := &Polygon{Status: Null} - if len(src) == 0 { - return p, nil - } - if len(src)%2 != 0 { - p.Status = Undefined - return p, fmt.Errorf("invalid length for polygon: %v", len(src)) - } - p.Status = Present - p.P = make([]Vec2, 0) - for i := 0; i < len(src); i += 2 { - p.P = append(p.P, Vec2{X: src[i], Y: src[i+1]}) - } - return p, nil -} - -func (dst Polygon) Get() interface{} { - switch dst.Status { - case Present: - return dst - case Null: - return nil - default: - return dst.Status - } -} - -func (src *Polygon) AssignTo(dst interface{}) error { - return fmt.Errorf("cannot assign %v to %T", src, dst) -} - -func (dst *Polygon) DecodeText(ci *ConnInfo, src []byte) error { - if src == nil { - *dst = Polygon{Status: Null} - return nil - } - - if len(src) < 7 { - return fmt.Errorf("invalid length for Polygon: %v", len(src)) - } - - points := make([]Vec2, 0) - - str := string(src[2:]) - - for { - end := strings.IndexByte(str, ',') - x, err := strconv.ParseFloat(str[:end], 64) - if err != nil { - return err - } - - str = str[end+1:] - end = strings.IndexByte(str, ')') - - y, err := strconv.ParseFloat(str[:end], 64) - if err != nil { - return err - } - - points = append(points, Vec2{x, y}) - - if end+3 < len(str) { - str = str[end+3:] - } else { - break - } - } - - *dst 
= Polygon{P: points, Status: Present} - return nil -} - -func (dst *Polygon) DecodeBinary(ci *ConnInfo, src []byte) error { - if src == nil { - *dst = Polygon{Status: Null} - return nil - } - - if len(src) < 5 { - return fmt.Errorf("invalid length for Polygon: %v", len(src)) - } - - pointCount := int(binary.BigEndian.Uint32(src)) - rp := 4 - - if 4+pointCount*16 != len(src) { - return fmt.Errorf("invalid length for Polygon with %d points: %v", pointCount, len(src)) - } - - points := make([]Vec2, pointCount) - for i := 0; i < len(points); i++ { - x := binary.BigEndian.Uint64(src[rp:]) - rp += 8 - y := binary.BigEndian.Uint64(src[rp:]) - rp += 8 - points[i] = Vec2{math.Float64frombits(x), math.Float64frombits(y)} - } - - *dst = Polygon{ - P: points, - Status: Present, - } - return nil -} - -func (src Polygon) EncodeText(ci *ConnInfo, buf []byte) ([]byte, error) { - switch src.Status { - case Null: - return nil, nil - case Undefined: - return nil, errUndefined - } - - buf = append(buf, '(') - - for i, p := range src.P { - if i > 0 { - buf = append(buf, ',') - } - buf = append(buf, fmt.Sprintf(`(%s,%s)`, - strconv.FormatFloat(p.X, 'f', -1, 64), - strconv.FormatFloat(p.Y, 'f', -1, 64), - )...) - } - - return append(buf, ')'), nil -} - -func (src Polygon) EncodeBinary(ci *ConnInfo, buf []byte) ([]byte, error) { - switch src.Status { - case Null: - return nil, nil - case Undefined: - return nil, errUndefined - } - - buf = pgio.AppendInt32(buf, int32(len(src.P))) - - for _, p := range src.P { - buf = pgio.AppendUint64(buf, math.Float64bits(p.X)) - buf = pgio.AppendUint64(buf, math.Float64bits(p.Y)) - } - - return buf, nil -} - -// Scan implements the database/sql Scanner interface. 
-func (dst *Polygon) Scan(src interface{}) error { - if src == nil { - *dst = Polygon{Status: Null} - return nil - } - - switch src := src.(type) { - case string: - return dst.DecodeText(nil, []byte(src)) - case []byte: - srcCopy := make([]byte, len(src)) - copy(srcCopy, src) - return dst.DecodeText(nil, srcCopy) - } - - return fmt.Errorf("cannot scan %T", src) -} - -// Value implements the database/sql/driver Valuer interface. -func (src Polygon) Value() (driver.Value, error) { - return EncodeValueText(src) -} diff --git a/vendor/github.com/jackc/pgtype/qchar.go b/vendor/github.com/jackc/pgtype/qchar.go deleted file mode 100644 index 574f6066..00000000 --- a/vendor/github.com/jackc/pgtype/qchar.go +++ /dev/null @@ -1,152 +0,0 @@ -package pgtype - -import ( - "fmt" - "math" - "strconv" -) - -// QChar is for PostgreSQL's special 8-bit-only "char" type more akin to the C -// language's char type, or Go's byte type. (Note that the name in PostgreSQL -// itself is "char", in double-quotes, and not char.) It gets used a lot in -// PostgreSQL's system tables to hold a single ASCII character value (eg -// pg_class.relkind). It is named Qchar for quoted char to disambiguate from SQL -// standard type char. -// -// Not all possible values of QChar are representable in the text format. -// Therefore, QChar does not implement TextEncoder and TextDecoder. In -// addition, database/sql Scanner and database/sql/driver Value are not -// implemented. 
-type QChar struct { - Int int8 - Status Status -} - -func (dst *QChar) Set(src interface{}) error { - if src == nil { - *dst = QChar{Status: Null} - return nil - } - - if value, ok := src.(interface{ Get() interface{} }); ok { - value2 := value.Get() - if value2 != value { - return dst.Set(value2) - } - } - - switch value := src.(type) { - case int8: - *dst = QChar{Int: value, Status: Present} - case uint8: - if value > math.MaxInt8 { - return fmt.Errorf("%d is greater than maximum value for QChar", value) - } - *dst = QChar{Int: int8(value), Status: Present} - case int16: - if value < math.MinInt8 { - return fmt.Errorf("%d is greater than maximum value for QChar", value) - } - if value > math.MaxInt8 { - return fmt.Errorf("%d is greater than maximum value for QChar", value) - } - *dst = QChar{Int: int8(value), Status: Present} - case uint16: - if value > math.MaxInt8 { - return fmt.Errorf("%d is greater than maximum value for QChar", value) - } - *dst = QChar{Int: int8(value), Status: Present} - case int32: - if value < math.MinInt8 { - return fmt.Errorf("%d is greater than maximum value for QChar", value) - } - if value > math.MaxInt8 { - return fmt.Errorf("%d is greater than maximum value for QChar", value) - } - *dst = QChar{Int: int8(value), Status: Present} - case uint32: - if value > math.MaxInt8 { - return fmt.Errorf("%d is greater than maximum value for QChar", value) - } - *dst = QChar{Int: int8(value), Status: Present} - case int64: - if value < math.MinInt8 { - return fmt.Errorf("%d is greater than maximum value for QChar", value) - } - if value > math.MaxInt8 { - return fmt.Errorf("%d is greater than maximum value for QChar", value) - } - *dst = QChar{Int: int8(value), Status: Present} - case uint64: - if value > math.MaxInt8 { - return fmt.Errorf("%d is greater than maximum value for QChar", value) - } - *dst = QChar{Int: int8(value), Status: Present} - case int: - if value < math.MinInt8 { - return fmt.Errorf("%d is greater than maximum value for 
QChar", value) - } - if value > math.MaxInt8 { - return fmt.Errorf("%d is greater than maximum value for QChar", value) - } - *dst = QChar{Int: int8(value), Status: Present} - case uint: - if value > math.MaxInt8 { - return fmt.Errorf("%d is greater than maximum value for QChar", value) - } - *dst = QChar{Int: int8(value), Status: Present} - case string: - num, err := strconv.ParseInt(value, 10, 8) - if err != nil { - return err - } - *dst = QChar{Int: int8(num), Status: Present} - default: - if originalSrc, ok := underlyingNumberType(src); ok { - return dst.Set(originalSrc) - } - return fmt.Errorf("cannot convert %v to QChar", value) - } - - return nil -} - -func (dst QChar) Get() interface{} { - switch dst.Status { - case Present: - return dst.Int - case Null: - return nil - default: - return dst.Status - } -} - -func (src *QChar) AssignTo(dst interface{}) error { - return int64AssignTo(int64(src.Int), src.Status, dst) -} - -func (dst *QChar) DecodeBinary(ci *ConnInfo, src []byte) error { - if src == nil { - *dst = QChar{Status: Null} - return nil - } - - if len(src) != 1 { - return fmt.Errorf(`invalid length for "char": %v`, len(src)) - } - - *dst = QChar{Int: int8(src[0]), Status: Present} - return nil -} - -func (src QChar) EncodeBinary(ci *ConnInfo, buf []byte) ([]byte, error) { - switch src.Status { - case Null: - return nil, nil - case Undefined: - return nil, errUndefined - } - - return append(buf, byte(src.Int)), nil -} diff --git a/vendor/github.com/jackc/pgtype/range.go b/vendor/github.com/jackc/pgtype/range.go deleted file mode 100644 index e999f6a9..00000000 --- a/vendor/github.com/jackc/pgtype/range.go +++ /dev/null @@ -1,277 +0,0 @@ -package pgtype - -import ( - "bytes" - "encoding/binary" - "fmt" -) - -type BoundType byte - -const ( - Inclusive = BoundType('i') - Exclusive = BoundType('e') - Unbounded = BoundType('U') - Empty = BoundType('E') -) - -func (bt BoundType) String() string { - return string(bt) -} - -type UntypedTextRange struct { - 
Lower string - Upper string - LowerType BoundType - UpperType BoundType -} - -func ParseUntypedTextRange(src string) (*UntypedTextRange, error) { - utr := &UntypedTextRange{} - if src == "empty" { - utr.LowerType = Empty - utr.UpperType = Empty - return utr, nil - } - - buf := bytes.NewBufferString(src) - - skipWhitespace(buf) - - r, _, err := buf.ReadRune() - if err != nil { - return nil, fmt.Errorf("invalid lower bound: %v", err) - } - switch r { - case '(': - utr.LowerType = Exclusive - case '[': - utr.LowerType = Inclusive - default: - return nil, fmt.Errorf("missing lower bound, instead got: %v", string(r)) - } - - r, _, err = buf.ReadRune() - if err != nil { - return nil, fmt.Errorf("invalid lower value: %v", err) - } - buf.UnreadRune() - - if r == ',' { - utr.LowerType = Unbounded - } else { - utr.Lower, err = rangeParseValue(buf) - if err != nil { - return nil, fmt.Errorf("invalid lower value: %v", err) - } - } - - r, _, err = buf.ReadRune() - if err != nil { - return nil, fmt.Errorf("missing range separator: %v", err) - } - if r != ',' { - return nil, fmt.Errorf("missing range separator: %v", r) - } - - r, _, err = buf.ReadRune() - if err != nil { - return nil, fmt.Errorf("invalid upper value: %v", err) - } - - if r == ')' || r == ']' { - utr.UpperType = Unbounded - } else { - buf.UnreadRune() - utr.Upper, err = rangeParseValue(buf) - if err != nil { - return nil, fmt.Errorf("invalid upper value: %v", err) - } - - r, _, err = buf.ReadRune() - if err != nil { - return nil, fmt.Errorf("missing upper bound: %v", err) - } - switch r { - case ')': - utr.UpperType = Exclusive - case ']': - utr.UpperType = Inclusive - default: - return nil, fmt.Errorf("missing upper bound, instead got: %v", string(r)) - } - } - - skipWhitespace(buf) - - if buf.Len() > 0 { - return nil, fmt.Errorf("unexpected trailing data: %v", buf.String()) - } - - return utr, nil -} - -func rangeParseValue(buf *bytes.Buffer) (string, error) { - r, _, err := buf.ReadRune() - if err != nil { - 
return "", err - } - if r == '"' { - return rangeParseQuotedValue(buf) - } - buf.UnreadRune() - - s := &bytes.Buffer{} - - for { - r, _, err := buf.ReadRune() - if err != nil { - return "", err - } - - switch r { - case '\\': - r, _, err = buf.ReadRune() - if err != nil { - return "", err - } - case ',', '[', ']', '(', ')': - buf.UnreadRune() - return s.String(), nil - } - - s.WriteRune(r) - } -} - -func rangeParseQuotedValue(buf *bytes.Buffer) (string, error) { - s := &bytes.Buffer{} - - for { - r, _, err := buf.ReadRune() - if err != nil { - return "", err - } - - switch r { - case '\\': - r, _, err = buf.ReadRune() - if err != nil { - return "", err - } - case '"': - r, _, err = buf.ReadRune() - if err != nil { - return "", err - } - if r != '"' { - buf.UnreadRune() - return s.String(), nil - } - } - s.WriteRune(r) - } -} - -type UntypedBinaryRange struct { - Lower []byte - Upper []byte - LowerType BoundType - UpperType BoundType -} - -// 0 = () = 00000 -// 1 = empty = 00001 -// 2 = [) = 00010 -// 4 = (] = 00100 -// 6 = [] = 00110 -// 8 = ) = 01000 -// 12 = ] = 01100 -// 16 = ( = 10000 -// 18 = [ = 10010 -// 24 = = 11000 - -const emptyMask = 1 -const lowerInclusiveMask = 2 -const upperInclusiveMask = 4 -const lowerUnboundedMask = 8 -const upperUnboundedMask = 16 - -func ParseUntypedBinaryRange(src []byte) (*UntypedBinaryRange, error) { - ubr := &UntypedBinaryRange{} - - if len(src) == 0 { - return nil, fmt.Errorf("range too short: %v", len(src)) - } - - rangeType := src[0] - rp := 1 - - if rangeType&emptyMask > 0 { - if len(src[rp:]) > 0 { - return nil, fmt.Errorf("unexpected trailing bytes parsing empty range: %v", len(src[rp:])) - } - ubr.LowerType = Empty - ubr.UpperType = Empty - return ubr, nil - } - - if rangeType&lowerInclusiveMask > 0 { - ubr.LowerType = Inclusive - } else if rangeType&lowerUnboundedMask > 0 { - ubr.LowerType = Unbounded - } else { - ubr.LowerType = Exclusive - } - - if rangeType&upperInclusiveMask > 0 { - ubr.UpperType = Inclusive - } 
else if rangeType&upperUnboundedMask > 0 { - ubr.UpperType = Unbounded - } else { - ubr.UpperType = Exclusive - } - - if ubr.LowerType == Unbounded && ubr.UpperType == Unbounded { - if len(src[rp:]) > 0 { - return nil, fmt.Errorf("unexpected trailing bytes parsing unbounded range: %v", len(src[rp:])) - } - return ubr, nil - } - - if len(src[rp:]) < 4 { - return nil, fmt.Errorf("too few bytes for size: %v", src[rp:]) - } - valueLen := int(binary.BigEndian.Uint32(src[rp:])) - rp += 4 - - val := src[rp : rp+valueLen] - rp += valueLen - - if ubr.LowerType != Unbounded { - ubr.Lower = val - } else { - ubr.Upper = val - if len(src[rp:]) > 0 { - return nil, fmt.Errorf("unexpected trailing bytes parsing range: %v", len(src[rp:])) - } - return ubr, nil - } - - if ubr.UpperType != Unbounded { - if len(src[rp:]) < 4 { - return nil, fmt.Errorf("too few bytes for size: %v", src[rp:]) - } - valueLen := int(binary.BigEndian.Uint32(src[rp:])) - rp += 4 - ubr.Upper = src[rp : rp+valueLen] - rp += valueLen - } - - if len(src[rp:]) > 0 { - return nil, fmt.Errorf("unexpected trailing bytes parsing range: %v", len(src[rp:])) - } - - return ubr, nil - -} diff --git a/vendor/github.com/jackc/pgtype/record.go b/vendor/github.com/jackc/pgtype/record.go deleted file mode 100644 index 5cf2c93a..00000000 --- a/vendor/github.com/jackc/pgtype/record.go +++ /dev/null @@ -1,126 +0,0 @@ -package pgtype - -import ( - "fmt" - "reflect" -) - -// Record is the generic PostgreSQL record type such as is created with the -// "row" function. Record only implements BinaryDecoder and Value. The text -// format output format from PostgreSQL does not include type information and is -// therefore impossible to decode. No encoders are implemented because -// PostgreSQL does not support input of generic records. 
-type Record struct { - Fields []Value - Status Status -} - -func (dst *Record) Set(src interface{}) error { - if src == nil { - *dst = Record{Status: Null} - return nil - } - - if value, ok := src.(interface{ Get() interface{} }); ok { - value2 := value.Get() - if value2 != value { - return dst.Set(value2) - } - } - - switch value := src.(type) { - case []Value: - *dst = Record{Fields: value, Status: Present} - default: - return fmt.Errorf("cannot convert %v to Record", src) - } - - return nil -} - -func (dst Record) Get() interface{} { - switch dst.Status { - case Present: - return dst.Fields - case Null: - return nil - default: - return dst.Status - } -} - -func (src *Record) AssignTo(dst interface{}) error { - switch src.Status { - case Present: - switch v := dst.(type) { - case *[]Value: - *v = make([]Value, len(src.Fields)) - copy(*v, src.Fields) - return nil - case *[]interface{}: - *v = make([]interface{}, len(src.Fields)) - for i := range *v { - (*v)[i] = src.Fields[i].Get() - } - return nil - default: - if nextDst, retry := GetAssignToDstType(dst); retry { - return src.AssignTo(nextDst) - } - return fmt.Errorf("unable to assign to %T", dst) - } - case Null: - return NullAssignTo(dst) - } - - return fmt.Errorf("cannot decode %#v into %T", src, dst) -} - -func prepareNewBinaryDecoder(ci *ConnInfo, fieldOID uint32, v *Value) (BinaryDecoder, error) { - var binaryDecoder BinaryDecoder - - if dt, ok := ci.DataTypeForOID(fieldOID); ok { - binaryDecoder, _ = dt.Value.(BinaryDecoder) - } else { - return nil, fmt.Errorf("unknown oid while decoding record: %v", fieldOID) - } - - if binaryDecoder == nil { - return nil, fmt.Errorf("no binary decoder registered for: %v", fieldOID) - } - - // Duplicate struct to scan into - binaryDecoder = reflect.New(reflect.ValueOf(binaryDecoder).Elem().Type()).Interface().(BinaryDecoder) - *v = binaryDecoder.(Value) - return binaryDecoder, nil -} - -func (dst *Record) DecodeBinary(ci *ConnInfo, src []byte) error { - if src == nil { - 
*dst = Record{Status: Null} - return nil - } - - scanner := NewCompositeBinaryScanner(ci, src) - - fields := make([]Value, scanner.FieldCount()) - - for i := 0; scanner.Next(); i++ { - binaryDecoder, err := prepareNewBinaryDecoder(ci, scanner.OID(), &fields[i]) - if err != nil { - return err - } - - if err = binaryDecoder.DecodeBinary(ci, scanner.Bytes()); err != nil { - return err - } - } - - if scanner.Err() != nil { - return scanner.Err() - } - - *dst = Record{Fields: fields, Status: Present} - - return nil -} diff --git a/vendor/github.com/jackc/pgtype/record_array.go b/vendor/github.com/jackc/pgtype/record_array.go deleted file mode 100644 index 2271717a..00000000 --- a/vendor/github.com/jackc/pgtype/record_array.go +++ /dev/null @@ -1,318 +0,0 @@ -// Code generated by erb. DO NOT EDIT. - -package pgtype - -import ( - "encoding/binary" - "fmt" - "reflect" -) - -type RecordArray struct { - Elements []Record - Dimensions []ArrayDimension - Status Status -} - -func (dst *RecordArray) Set(src interface{}) error { - // untyped nil and typed nil interfaces are different - if src == nil { - *dst = RecordArray{Status: Null} - return nil - } - - if value, ok := src.(interface{ Get() interface{} }); ok { - value2 := value.Get() - if value2 != value { - return dst.Set(value2) - } - } - - // Attempt to match to select common types: - switch value := src.(type) { - - case [][]Value: - if value == nil { - *dst = RecordArray{Status: Null} - } else if len(value) == 0 { - *dst = RecordArray{Status: Present} - } else { - elements := make([]Record, len(value)) - for i := range value { - if err := elements[i].Set(value[i]); err != nil { - return err - } - } - *dst = RecordArray{ - Elements: elements, - Dimensions: []ArrayDimension{{Length: int32(len(elements)), LowerBound: 1}}, - Status: Present, - } - } - - case []Record: - if value == nil { - *dst = RecordArray{Status: Null} - } else if len(value) == 0 { - *dst = RecordArray{Status: Present} - } else { - *dst = RecordArray{ - 
Elements: value, - Dimensions: []ArrayDimension{{Length: int32(len(value)), LowerBound: 1}}, - Status: Present, - } - } - default: - // Fallback to reflection if an optimised match was not found. - // The reflection is necessary for arrays and multidimensional slices, - // but it comes with a 20-50% performance penalty for large arrays/slices - reflectedValue := reflect.ValueOf(src) - if !reflectedValue.IsValid() || reflectedValue.IsZero() { - *dst = RecordArray{Status: Null} - return nil - } - - dimensions, elementsLength, ok := findDimensionsFromValue(reflectedValue, nil, 0) - if !ok { - return fmt.Errorf("cannot find dimensions of %v for RecordArray", src) - } - if elementsLength == 0 { - *dst = RecordArray{Status: Present} - return nil - } - if len(dimensions) == 0 { - if originalSrc, ok := underlyingSliceType(src); ok { - return dst.Set(originalSrc) - } - return fmt.Errorf("cannot convert %v to RecordArray", src) - } - - *dst = RecordArray{ - Elements: make([]Record, elementsLength), - Dimensions: dimensions, - Status: Present, - } - elementCount, err := dst.setRecursive(reflectedValue, 0, 0) - if err != nil { - // Maybe the target was one dimension too far, try again: - if len(dst.Dimensions) > 1 { - dst.Dimensions = dst.Dimensions[:len(dst.Dimensions)-1] - elementsLength = 0 - for _, dim := range dst.Dimensions { - if elementsLength == 0 { - elementsLength = int(dim.Length) - } else { - elementsLength *= int(dim.Length) - } - } - dst.Elements = make([]Record, elementsLength) - elementCount, err = dst.setRecursive(reflectedValue, 0, 0) - if err != nil { - return err - } - } else { - return err - } - } - if elementCount != len(dst.Elements) { - return fmt.Errorf("cannot convert %v to RecordArray, expected %d dst.Elements, but got %d instead", src, len(dst.Elements), elementCount) - } - } - - return nil -} - -func (dst *RecordArray) setRecursive(value reflect.Value, index, dimension int) (int, error) { - switch value.Kind() { - case reflect.Array: - fallthrough 
- case reflect.Slice: - if len(dst.Dimensions) == dimension { - break - } - - valueLen := value.Len() - if int32(valueLen) != dst.Dimensions[dimension].Length { - return 0, fmt.Errorf("multidimensional arrays must have array expressions with matching dimensions") - } - for i := 0; i < valueLen; i++ { - var err error - index, err = dst.setRecursive(value.Index(i), index, dimension+1) - if err != nil { - return 0, err - } - } - - return index, nil - } - if !value.CanInterface() { - return 0, fmt.Errorf("cannot convert all values to RecordArray") - } - if err := dst.Elements[index].Set(value.Interface()); err != nil { - return 0, fmt.Errorf("%v in RecordArray", err) - } - index++ - - return index, nil -} - -func (dst RecordArray) Get() interface{} { - switch dst.Status { - case Present: - return dst - case Null: - return nil - default: - return dst.Status - } -} - -func (src *RecordArray) AssignTo(dst interface{}) error { - switch src.Status { - case Present: - if len(src.Dimensions) <= 1 { - // Attempt to match to select common types: - switch v := dst.(type) { - - case *[][]Value: - *v = make([][]Value, len(src.Elements)) - for i := range src.Elements { - if err := src.Elements[i].AssignTo(&((*v)[i])); err != nil { - return err - } - } - return nil - - } - } - - // Try to convert to something AssignTo can use directly. - if nextDst, retry := GetAssignToDstType(dst); retry { - return src.AssignTo(nextDst) - } - - // Fallback to reflection if an optimised match was not found. 
- // The reflection is necessary for arrays and multidimensional slices, - // but it comes with a 20-50% performance penalty for large arrays/slices - value := reflect.ValueOf(dst) - if value.Kind() == reflect.Ptr { - value = value.Elem() - } - - switch value.Kind() { - case reflect.Array, reflect.Slice: - default: - return fmt.Errorf("cannot assign %T to %T", src, dst) - } - - if len(src.Elements) == 0 { - if value.Kind() == reflect.Slice { - value.Set(reflect.MakeSlice(value.Type(), 0, 0)) - return nil - } - } - - elementCount, err := src.assignToRecursive(value, 0, 0) - if err != nil { - return err - } - if elementCount != len(src.Elements) { - return fmt.Errorf("cannot assign %v, needed to assign %d elements, but only assigned %d", dst, len(src.Elements), elementCount) - } - - return nil - case Null: - return NullAssignTo(dst) - } - - return fmt.Errorf("cannot decode %#v into %T", src, dst) -} - -func (src *RecordArray) assignToRecursive(value reflect.Value, index, dimension int) (int, error) { - switch kind := value.Kind(); kind { - case reflect.Array: - fallthrough - case reflect.Slice: - if len(src.Dimensions) == dimension { - break - } - - length := int(src.Dimensions[dimension].Length) - if reflect.Array == kind { - typ := value.Type() - if typ.Len() != length { - return 0, fmt.Errorf("expected size %d array, but %s has size %d array", length, typ, typ.Len()) - } - value.Set(reflect.New(typ).Elem()) - } else { - value.Set(reflect.MakeSlice(value.Type(), length, length)) - } - - var err error - for i := 0; i < length; i++ { - index, err = src.assignToRecursive(value.Index(i), index, dimension+1) - if err != nil { - return 0, err - } - } - - return index, nil - } - if len(src.Dimensions) != dimension { - return 0, fmt.Errorf("incorrect dimensions, expected %d, found %d", len(src.Dimensions), dimension) - } - if !value.CanAddr() { - return 0, fmt.Errorf("cannot assign all values from RecordArray") - } - addr := value.Addr() - if !addr.CanInterface() { - 
return 0, fmt.Errorf("cannot assign all values from RecordArray") - } - if err := src.Elements[index].AssignTo(addr.Interface()); err != nil { - return 0, err - } - index++ - return index, nil -} - -func (dst *RecordArray) DecodeBinary(ci *ConnInfo, src []byte) error { - if src == nil { - *dst = RecordArray{Status: Null} - return nil - } - - var arrayHeader ArrayHeader - rp, err := arrayHeader.DecodeBinary(ci, src) - if err != nil { - return err - } - - if len(arrayHeader.Dimensions) == 0 { - *dst = RecordArray{Dimensions: arrayHeader.Dimensions, Status: Present} - return nil - } - - elementCount := arrayHeader.Dimensions[0].Length - for _, d := range arrayHeader.Dimensions[1:] { - elementCount *= d.Length - } - - elements := make([]Record, elementCount) - - for i := range elements { - elemLen := int(int32(binary.BigEndian.Uint32(src[rp:]))) - rp += 4 - var elemSrc []byte - if elemLen >= 0 { - elemSrc = src[rp : rp+elemLen] - rp += elemLen - } - err = elements[i].DecodeBinary(ci, elemSrc) - if err != nil { - return err - } - } - - *dst = RecordArray{Elements: elements, Dimensions: arrayHeader.Dimensions, Status: Present} - return nil -} diff --git a/vendor/github.com/jackc/pgtype/text.go b/vendor/github.com/jackc/pgtype/text.go deleted file mode 100644 index a01815d9..00000000 --- a/vendor/github.com/jackc/pgtype/text.go +++ /dev/null @@ -1,212 +0,0 @@ -package pgtype - -import ( - "database/sql/driver" - "encoding/json" - "fmt" -) - -type Text struct { - String string - Status Status -} - -func (dst *Text) Set(src interface{}) error { - if src == nil { - *dst = Text{Status: Null} - return nil - } - - if value, ok := src.(interface{ Get() interface{} }); ok { - value2 := value.Get() - if value2 != value { - return dst.Set(value2) - } - } - - switch value := src.(type) { - case string: - *dst = Text{String: value, Status: Present} - case *string: - if value == nil { - *dst = Text{Status: Null} - } else { - *dst = Text{String: *value, Status: Present} - } - case 
[]byte: - if value == nil { - *dst = Text{Status: Null} - } else { - *dst = Text{String: string(value), Status: Present} - } - case fmt.Stringer: - if value == fmt.Stringer(nil) { - *dst = Text{Status: Null} - } else { - *dst = Text{String: value.String(), Status: Present} - } - default: - // Cannot be part of the switch: If Value() returns nil on - // non-string, we should still try to check the underlying type - // using reflection. - // - // For example, the struct might implement driver.Valuer with - // pointer receiver and fmt.Stringer with value receiver. - if value, ok := src.(driver.Valuer); ok { - if value == driver.Valuer(nil) { - *dst = Text{Status: Null} - return nil - } else { - v, err := value.Value() - if err != nil { - return fmt.Errorf("driver.Valuer Value() method failed: %w", err) - } - - // This also handles the v == nil case. - if s, ok := v.(string); ok { - *dst = Text{String: s, Status: Present} - return nil - } - } - } - - if originalSrc, ok := underlyingStringType(src); ok { - return dst.Set(originalSrc) - } - return fmt.Errorf("cannot convert %v to Text", value) - } - - return nil -} - -func (dst Text) Get() interface{} { - switch dst.Status { - case Present: - return dst.String - case Null: - return nil - default: - return dst.Status - } -} - -func (src *Text) AssignTo(dst interface{}) error { - switch src.Status { - case Present: - switch v := dst.(type) { - case *string: - *v = src.String - return nil - case *[]byte: - *v = make([]byte, len(src.String)) - copy(*v, src.String) - return nil - default: - if nextDst, retry := GetAssignToDstType(dst); retry { - return src.AssignTo(nextDst) - } - return fmt.Errorf("unable to assign to %T", dst) - } - case Null: - return NullAssignTo(dst) - } - - return fmt.Errorf("cannot decode %#v into %T", src, dst) -} - -func (Text) PreferredResultFormat() int16 { - return TextFormatCode -} - -func (dst *Text) DecodeText(ci *ConnInfo, src []byte) error { - if src == nil { - *dst = Text{Status: Null} - return nil -
} - - *dst = Text{String: string(src), Status: Present} - return nil -} - -func (dst *Text) DecodeBinary(ci *ConnInfo, src []byte) error { - return dst.DecodeText(ci, src) -} - -func (Text) PreferredParamFormat() int16 { - return TextFormatCode -} - -func (src Text) EncodeText(ci *ConnInfo, buf []byte) ([]byte, error) { - switch src.Status { - case Null: - return nil, nil - case Undefined: - return nil, errUndefined - } - - return append(buf, src.String...), nil -} - -func (src Text) EncodeBinary(ci *ConnInfo, buf []byte) ([]byte, error) { - return src.EncodeText(ci, buf) -} - -// Scan implements the database/sql Scanner interface. -func (dst *Text) Scan(src interface{}) error { - if src == nil { - *dst = Text{Status: Null} - return nil - } - - switch src := src.(type) { - case string: - return dst.DecodeText(nil, []byte(src)) - case []byte: - srcCopy := make([]byte, len(src)) - copy(srcCopy, src) - return dst.DecodeText(nil, srcCopy) - } - - return fmt.Errorf("cannot scan %T", src) -} - -// Value implements the database/sql/driver Valuer interface. 
-func (src Text) Value() (driver.Value, error) { - switch src.Status { - case Present: - return src.String, nil - case Null: - return nil, nil - default: - return nil, errUndefined - } -} - -func (src Text) MarshalJSON() ([]byte, error) { - switch src.Status { - case Present: - return json.Marshal(src.String) - case Null: - return []byte("null"), nil - case Undefined: - return nil, errUndefined - } - - return nil, errBadStatus -} - -func (dst *Text) UnmarshalJSON(b []byte) error { - var s *string - err := json.Unmarshal(b, &s) - if err != nil { - return err - } - - if s == nil { - *dst = Text{Status: Null} - } else { - *dst = Text{String: *s, Status: Present} - } - - return nil -} diff --git a/vendor/github.com/jackc/pgtype/text_array.go b/vendor/github.com/jackc/pgtype/text_array.go deleted file mode 100644 index 2461966b..00000000 --- a/vendor/github.com/jackc/pgtype/text_array.go +++ /dev/null @@ -1,517 +0,0 @@ -// Code generated by erb. DO NOT EDIT. - -package pgtype - -import ( - "database/sql/driver" - "encoding/binary" - "fmt" - "reflect" - - "github.com/jackc/pgio" -) - -type TextArray struct { - Elements []Text - Dimensions []ArrayDimension - Status Status -} - -func (dst *TextArray) Set(src interface{}) error { - // untyped nil and typed nil interfaces are different - if src == nil { - *dst = TextArray{Status: Null} - return nil - } - - if value, ok := src.(interface{ Get() interface{} }); ok { - value2 := value.Get() - if value2 != value { - return dst.Set(value2) - } - } - - // Attempt to match to select common types: - switch value := src.(type) { - - case []string: - if value == nil { - *dst = TextArray{Status: Null} - } else if len(value) == 0 { - *dst = TextArray{Status: Present} - } else { - elements := make([]Text, len(value)) - for i := range value { - if err := elements[i].Set(value[i]); err != nil { - return err - } - } - *dst = TextArray{ - Elements: elements, - Dimensions: []ArrayDimension{{Length: int32(len(elements)), LowerBound: 1}}, - 
Status: Present, - } - } - - case []*string: - if value == nil { - *dst = TextArray{Status: Null} - } else if len(value) == 0 { - *dst = TextArray{Status: Present} - } else { - elements := make([]Text, len(value)) - for i := range value { - if err := elements[i].Set(value[i]); err != nil { - return err - } - } - *dst = TextArray{ - Elements: elements, - Dimensions: []ArrayDimension{{Length: int32(len(elements)), LowerBound: 1}}, - Status: Present, - } - } - - case []Text: - if value == nil { - *dst = TextArray{Status: Null} - } else if len(value) == 0 { - *dst = TextArray{Status: Present} - } else { - *dst = TextArray{ - Elements: value, - Dimensions: []ArrayDimension{{Length: int32(len(value)), LowerBound: 1}}, - Status: Present, - } - } - default: - // Fallback to reflection if an optimised match was not found. - // The reflection is necessary for arrays and multidimensional slices, - // but it comes with a 20-50% performance penalty for large arrays/slices - reflectedValue := reflect.ValueOf(src) - if !reflectedValue.IsValid() || reflectedValue.IsZero() { - *dst = TextArray{Status: Null} - return nil - } - - dimensions, elementsLength, ok := findDimensionsFromValue(reflectedValue, nil, 0) - if !ok { - return fmt.Errorf("cannot find dimensions of %v for TextArray", src) - } - if elementsLength == 0 { - *dst = TextArray{Status: Present} - return nil - } - if len(dimensions) == 0 { - if originalSrc, ok := underlyingSliceType(src); ok { - return dst.Set(originalSrc) - } - return fmt.Errorf("cannot convert %v to TextArray", src) - } - - *dst = TextArray{ - Elements: make([]Text, elementsLength), - Dimensions: dimensions, - Status: Present, - } - elementCount, err := dst.setRecursive(reflectedValue, 0, 0) - if err != nil { - // Maybe the target was one dimension too far, try again: - if len(dst.Dimensions) > 1 { - dst.Dimensions = dst.Dimensions[:len(dst.Dimensions)-1] - elementsLength = 0 - for _, dim := range dst.Dimensions { - if elementsLength == 0 { - 
elementsLength = int(dim.Length) - } else { - elementsLength *= int(dim.Length) - } - } - dst.Elements = make([]Text, elementsLength) - elementCount, err = dst.setRecursive(reflectedValue, 0, 0) - if err != nil { - return err - } - } else { - return err - } - } - if elementCount != len(dst.Elements) { - return fmt.Errorf("cannot convert %v to TextArray, expected %d dst.Elements, but got %d instead", src, len(dst.Elements), elementCount) - } - } - - return nil -} - -func (dst *TextArray) setRecursive(value reflect.Value, index, dimension int) (int, error) { - switch value.Kind() { - case reflect.Array: - fallthrough - case reflect.Slice: - if len(dst.Dimensions) == dimension { - break - } - - valueLen := value.Len() - if int32(valueLen) != dst.Dimensions[dimension].Length { - return 0, fmt.Errorf("multidimensional arrays must have array expressions with matching dimensions") - } - for i := 0; i < valueLen; i++ { - var err error - index, err = dst.setRecursive(value.Index(i), index, dimension+1) - if err != nil { - return 0, err - } - } - - return index, nil - } - if !value.CanInterface() { - return 0, fmt.Errorf("cannot convert all values to TextArray") - } - if err := dst.Elements[index].Set(value.Interface()); err != nil { - return 0, fmt.Errorf("%v in TextArray", err) - } - index++ - - return index, nil -} - -func (dst TextArray) Get() interface{} { - switch dst.Status { - case Present: - return dst - case Null: - return nil - default: - return dst.Status - } -} - -func (src *TextArray) AssignTo(dst interface{}) error { - switch src.Status { - case Present: - if len(src.Dimensions) <= 1 { - // Attempt to match to select common types: - switch v := dst.(type) { - - case *[]string: - *v = make([]string, len(src.Elements)) - for i := range src.Elements { - if err := src.Elements[i].AssignTo(&((*v)[i])); err != nil { - return err - } - } - return nil - - case *[]*string: - *v = make([]*string, len(src.Elements)) - for i := range src.Elements { - if err := 
src.Elements[i].AssignTo(&((*v)[i])); err != nil { - return err - } - } - return nil - - } - } - - // Try to convert to something AssignTo can use directly. - if nextDst, retry := GetAssignToDstType(dst); retry { - return src.AssignTo(nextDst) - } - - // Fallback to reflection if an optimised match was not found. - // The reflection is necessary for arrays and multidimensional slices, - // but it comes with a 20-50% performance penalty for large arrays/slices - value := reflect.ValueOf(dst) - if value.Kind() == reflect.Ptr { - value = value.Elem() - } - - switch value.Kind() { - case reflect.Array, reflect.Slice: - default: - return fmt.Errorf("cannot assign %T to %T", src, dst) - } - - if len(src.Elements) == 0 { - if value.Kind() == reflect.Slice { - value.Set(reflect.MakeSlice(value.Type(), 0, 0)) - return nil - } - } - - elementCount, err := src.assignToRecursive(value, 0, 0) - if err != nil { - return err - } - if elementCount != len(src.Elements) { - return fmt.Errorf("cannot assign %v, needed to assign %d elements, but only assigned %d", dst, len(src.Elements), elementCount) - } - - return nil - case Null: - return NullAssignTo(dst) - } - - return fmt.Errorf("cannot decode %#v into %T", src, dst) -} - -func (src *TextArray) assignToRecursive(value reflect.Value, index, dimension int) (int, error) { - switch kind := value.Kind(); kind { - case reflect.Array: - fallthrough - case reflect.Slice: - if len(src.Dimensions) == dimension { - break - } - - length := int(src.Dimensions[dimension].Length) - if reflect.Array == kind { - typ := value.Type() - if typ.Len() != length { - return 0, fmt.Errorf("expected size %d array, but %s has size %d array", length, typ, typ.Len()) - } - value.Set(reflect.New(typ).Elem()) - } else { - value.Set(reflect.MakeSlice(value.Type(), length, length)) - } - - var err error - for i := 0; i < length; i++ { - index, err = src.assignToRecursive(value.Index(i), index, dimension+1) - if err != nil { - return 0, err - } - } - - return 
index, nil - } - if len(src.Dimensions) != dimension { - return 0, fmt.Errorf("incorrect dimensions, expected %d, found %d", len(src.Dimensions), dimension) - } - if !value.CanAddr() { - return 0, fmt.Errorf("cannot assign all values from TextArray") - } - addr := value.Addr() - if !addr.CanInterface() { - return 0, fmt.Errorf("cannot assign all values from TextArray") - } - if err := src.Elements[index].AssignTo(addr.Interface()); err != nil { - return 0, err - } - index++ - return index, nil -} - -func (dst *TextArray) DecodeText(ci *ConnInfo, src []byte) error { - if src == nil { - *dst = TextArray{Status: Null} - return nil - } - - uta, err := ParseUntypedTextArray(string(src)) - if err != nil { - return err - } - - var elements []Text - - if len(uta.Elements) > 0 { - elements = make([]Text, len(uta.Elements)) - - for i, s := range uta.Elements { - var elem Text - var elemSrc []byte - if s != "NULL" || uta.Quoted[i] { - elemSrc = []byte(s) - } - err = elem.DecodeText(ci, elemSrc) - if err != nil { - return err - } - - elements[i] = elem - } - } - - *dst = TextArray{Elements: elements, Dimensions: uta.Dimensions, Status: Present} - - return nil -} - -func (dst *TextArray) DecodeBinary(ci *ConnInfo, src []byte) error { - if src == nil { - *dst = TextArray{Status: Null} - return nil - } - - var arrayHeader ArrayHeader - rp, err := arrayHeader.DecodeBinary(ci, src) - if err != nil { - return err - } - - if len(arrayHeader.Dimensions) == 0 { - *dst = TextArray{Dimensions: arrayHeader.Dimensions, Status: Present} - return nil - } - - elementCount := arrayHeader.Dimensions[0].Length - for _, d := range arrayHeader.Dimensions[1:] { - elementCount *= d.Length - } - - elements := make([]Text, elementCount) - - for i := range elements { - elemLen := int(int32(binary.BigEndian.Uint32(src[rp:]))) - rp += 4 - var elemSrc []byte - if elemLen >= 0 { - elemSrc = src[rp : rp+elemLen] - rp += elemLen - } - err = elements[i].DecodeBinary(ci, elemSrc) - if err != nil { - return err 
- } - } - - *dst = TextArray{Elements: elements, Dimensions: arrayHeader.Dimensions, Status: Present} - return nil -} - -func (src TextArray) EncodeText(ci *ConnInfo, buf []byte) ([]byte, error) { - switch src.Status { - case Null: - return nil, nil - case Undefined: - return nil, errUndefined - } - - if len(src.Dimensions) == 0 { - return append(buf, '{', '}'), nil - } - - buf = EncodeTextArrayDimensions(buf, src.Dimensions) - - // dimElemCounts is the multiples of elements that each array lies on. For - // example, a single dimension array of length 4 would have a dimElemCounts of - // [4]. A multi-dimensional array of lengths [3,5,2] would have a - // dimElemCounts of [30,10,2]. This is used to simplify when to render a '{' - // or '}'. - dimElemCounts := make([]int, len(src.Dimensions)) - dimElemCounts[len(src.Dimensions)-1] = int(src.Dimensions[len(src.Dimensions)-1].Length) - for i := len(src.Dimensions) - 2; i > -1; i-- { - dimElemCounts[i] = int(src.Dimensions[i].Length) * dimElemCounts[i+1] - } - - inElemBuf := make([]byte, 0, 32) - for i, elem := range src.Elements { - if i > 0 { - buf = append(buf, ',') - } - - for _, dec := range dimElemCounts { - if i%dec == 0 { - buf = append(buf, '{') - } - } - - elemBuf, err := elem.EncodeText(ci, inElemBuf) - if err != nil { - return nil, err - } - if elemBuf == nil { - buf = append(buf, `NULL`...) - } else { - buf = append(buf, QuoteArrayElementIfNeeded(string(elemBuf))...) 
- } - - for _, dec := range dimElemCounts { - if (i+1)%dec == 0 { - buf = append(buf, '}') - } - } - } - - return buf, nil -} - -func (src TextArray) EncodeBinary(ci *ConnInfo, buf []byte) ([]byte, error) { - switch src.Status { - case Null: - return nil, nil - case Undefined: - return nil, errUndefined - } - - arrayHeader := ArrayHeader{ - Dimensions: src.Dimensions, - } - - if dt, ok := ci.DataTypeForName("text"); ok { - arrayHeader.ElementOID = int32(dt.OID) - } else { - return nil, fmt.Errorf("unable to find oid for type name %v", "text") - } - - for i := range src.Elements { - if src.Elements[i].Status == Null { - arrayHeader.ContainsNull = true - break - } - } - - buf = arrayHeader.EncodeBinary(ci, buf) - - for i := range src.Elements { - sp := len(buf) - buf = pgio.AppendInt32(buf, -1) - - elemBuf, err := src.Elements[i].EncodeBinary(ci, buf) - if err != nil { - return nil, err - } - if elemBuf != nil { - buf = elemBuf - pgio.SetInt32(buf[sp:], int32(len(buf[sp:])-4)) - } - } - - return buf, nil -} - -// Scan implements the database/sql Scanner interface. -func (dst *TextArray) Scan(src interface{}) error { - if src == nil { - return dst.DecodeText(nil, nil) - } - - switch src := src.(type) { - case string: - return dst.DecodeText(nil, []byte(src)) - case []byte: - srcCopy := make([]byte, len(src)) - copy(srcCopy, src) - return dst.DecodeText(nil, srcCopy) - } - - return fmt.Errorf("cannot scan %T", src) -} - -// Value implements the database/sql/driver Valuer interface. 
-func (src TextArray) Value() (driver.Value, error) { - buf, err := src.EncodeText(nil, nil) - if err != nil { - return nil, err - } - if buf == nil { - return nil, nil - } - - return string(buf), nil -} diff --git a/vendor/github.com/jackc/pgtype/tid.go b/vendor/github.com/jackc/pgtype/tid.go deleted file mode 100644 index 4bb57f64..00000000 --- a/vendor/github.com/jackc/pgtype/tid.go +++ /dev/null @@ -1,156 +0,0 @@ -package pgtype - -import ( - "database/sql/driver" - "encoding/binary" - "fmt" - "strconv" - "strings" - - "github.com/jackc/pgio" ) - -// TID is PostgreSQL's Tuple Identifier type. - -// -// When one does -// -// select ctid, * from some_table; -// -// it is the data type of the ctid hidden system column. -// -// It is currently implemented as a pair of unsigned integers (a 32-bit -// block number and a 16-bit offset number). -// Its conversion functions can be found in src/backend/utils/adt/tid.c -// in the PostgreSQL sources. -type TID struct { - BlockNumber uint32 - OffsetNumber uint16 - Status Status -} - -func (dst *TID) Set(src interface{}) error { - return fmt.Errorf("cannot convert %v to TID", src) -} - -func (dst TID) Get() interface{} { - switch dst.Status { - case Present: - return dst - case Null: - return nil - default: - return dst.Status - } -} - -func (src *TID) AssignTo(dst interface{}) error { - if src.Status == Present { - switch v := dst.(type) { - case *string: - *v = fmt.Sprintf(`(%d,%d)`, src.BlockNumber, src.OffsetNumber) - return nil - default: - if nextDst, retry := GetAssignToDstType(dst); retry { - return src.AssignTo(nextDst) - } - return fmt.Errorf("unable to assign to %T", dst) - } - } - - return fmt.Errorf("cannot assign %v to %T", src, dst) -} - -func (dst *TID) DecodeText(ci *ConnInfo, src []byte) error { - if src == nil { - *dst = TID{Status: Null} - return nil - } - - if len(src) < 5 { - return fmt.Errorf("invalid length for tid: %v", len(src)) - } - - parts := strings.SplitN(string(src[1:len(src)-1]), ",", 2) - if len(parts) < 2 { - return fmt.Errorf("invalid 
format for tid") - } - - blockNumber, err := strconv.ParseUint(parts[0], 10, 32) - if err != nil { - return err - } - - offsetNumber, err := strconv.ParseUint(parts[1], 10, 16) - if err != nil { - return err - } - - *dst = TID{BlockNumber: uint32(blockNumber), OffsetNumber: uint16(offsetNumber), Status: Present} - return nil -} - -func (dst *TID) DecodeBinary(ci *ConnInfo, src []byte) error { - if src == nil { - *dst = TID{Status: Null} - return nil - } - - if len(src) != 6 { - return fmt.Errorf("invalid length for tid: %v", len(src)) - } - - *dst = TID{ - BlockNumber: binary.BigEndian.Uint32(src), - OffsetNumber: binary.BigEndian.Uint16(src[4:]), - Status: Present, - } - return nil -} - -func (src TID) EncodeText(ci *ConnInfo, buf []byte) ([]byte, error) { - switch src.Status { - case Null: - return nil, nil - case Undefined: - return nil, errUndefined - } - - buf = append(buf, fmt.Sprintf(`(%d,%d)`, src.BlockNumber, src.OffsetNumber)...) - return buf, nil -} - -func (src TID) EncodeBinary(ci *ConnInfo, buf []byte) ([]byte, error) { - switch src.Status { - case Null: - return nil, nil - case Undefined: - return nil, errUndefined - } - - buf = pgio.AppendUint32(buf, src.BlockNumber) - buf = pgio.AppendUint16(buf, src.OffsetNumber) - return buf, nil -} - -// Scan implements the database/sql Scanner interface. -func (dst *TID) Scan(src interface{}) error { - if src == nil { - *dst = TID{Status: Null} - return nil - } - - switch src := src.(type) { - case string: - return dst.DecodeText(nil, []byte(src)) - case []byte: - srcCopy := make([]byte, len(src)) - copy(srcCopy, src) - return dst.DecodeText(nil, srcCopy) - } - - return fmt.Errorf("cannot scan %T", src) -} - -// Value implements the database/sql/driver Valuer interface. 
-func (src TID) Value() (driver.Value, error) { - return EncodeValueText(src) -} diff --git a/vendor/github.com/jackc/pgtype/time.go b/vendor/github.com/jackc/pgtype/time.go deleted file mode 100644 index f7a28870..00000000 --- a/vendor/github.com/jackc/pgtype/time.go +++ /dev/null @@ -1,231 +0,0 @@ -package pgtype - -import ( - "database/sql/driver" - "encoding/binary" - "fmt" - "strconv" - "time" - - "github.com/jackc/pgio" -) - -// Time represents the PostgreSQL time type. The PostgreSQL time is a time of day without time zone. -// -// Time is represented as the number of microseconds since midnight in the same way that PostgreSQL does. Other time -// and date types in pgtype can use time.Time as the underlying representation. However, pgtype.Time type cannot due -// to needing to handle 24:00:00. time.Time converts that to 00:00:00 on the following day. -type Time struct { - Microseconds int64 // Number of microseconds since midnight - Status Status -} - -// Set converts src into a Time and stores in dst. 
-func (dst *Time) Set(src interface{}) error { - if src == nil { - *dst = Time{Status: Null} - return nil - } - - if value, ok := src.(interface{ Get() interface{} }); ok { - value2 := value.Get() - if value2 != value { - return dst.Set(value2) - } - } - - switch value := src.(type) { - case time.Time: - usec := int64(value.Hour())*microsecondsPerHour + - int64(value.Minute())*microsecondsPerMinute + - int64(value.Second())*microsecondsPerSecond + - int64(value.Nanosecond())/1000 - *dst = Time{Microseconds: usec, Status: Present} - case *time.Time: - if value == nil { - *dst = Time{Status: Null} - } else { - return dst.Set(*value) - } - default: - if originalSrc, ok := underlyingTimeType(src); ok { - return dst.Set(originalSrc) - } - return fmt.Errorf("cannot convert %v to Time", value) - } - - return nil -} - -func (dst Time) Get() interface{} { - switch dst.Status { - case Present: - return dst.Microseconds - case Null: - return nil - default: - return dst.Status - } -} - -func (src *Time) AssignTo(dst interface{}) error { - switch src.Status { - case Present: - switch v := dst.(type) { - case *time.Time: - // 24:00:00 is max allowed time in PostgreSQL, but time.Time will normalize that to 00:00:00 the next day. 
- var maxRepresentableByTime int64 = 24*60*60*1000000 - 1 - if src.Microseconds > maxRepresentableByTime { - return fmt.Errorf("%d microseconds cannot be represented as time.Time", src.Microseconds) - } - - usec := src.Microseconds - hours := usec / microsecondsPerHour - usec -= hours * microsecondsPerHour - minutes := usec / microsecondsPerMinute - usec -= minutes * microsecondsPerMinute - seconds := usec / microsecondsPerSecond - usec -= seconds * microsecondsPerSecond - ns := usec * 1000 - *v = time.Date(2000, 1, 1, int(hours), int(minutes), int(seconds), int(ns), time.UTC) - return nil - default: - if nextDst, retry := GetAssignToDstType(dst); retry { - return src.AssignTo(nextDst) - } - return fmt.Errorf("unable to assign to %T", dst) - } - case Null: - return NullAssignTo(dst) - } - - return fmt.Errorf("cannot decode %#v into %T", src, dst) -} - -// DecodeText decodes from src into dst. -func (dst *Time) DecodeText(ci *ConnInfo, src []byte) error { - if src == nil { - *dst = Time{Status: Null} - return nil - } - - s := string(src) - - if len(s) < 8 { - return fmt.Errorf("cannot decode %v into Time", s) - } - - hours, err := strconv.ParseInt(s[0:2], 10, 64) - if err != nil { - return fmt.Errorf("cannot decode %v into Time", s) - } - usec := hours * microsecondsPerHour - - minutes, err := strconv.ParseInt(s[3:5], 10, 64) - if err != nil { - return fmt.Errorf("cannot decode %v into Time", s) - } - usec += minutes * microsecondsPerMinute - - seconds, err := strconv.ParseInt(s[6:8], 10, 64) - if err != nil { - return fmt.Errorf("cannot decode %v into Time", s) - } - usec += seconds * microsecondsPerSecond - - if len(s) > 9 { - fraction := s[9:] - n, err := strconv.ParseInt(fraction, 10, 64) - if err != nil { - return fmt.Errorf("cannot decode %v into Time", s) - } - - for i := len(fraction); i < 6; i++ { - n *= 10 - } - - usec += n - } - - *dst = Time{Microseconds: usec, Status: Present} - - return nil -} - -// DecodeBinary decodes from src into dst. 
-func (dst *Time) DecodeBinary(ci *ConnInfo, src []byte) error { - if src == nil { - *dst = Time{Status: Null} - return nil - } - - if len(src) != 8 { - return fmt.Errorf("invalid length for time: %v", len(src)) - } - - usec := int64(binary.BigEndian.Uint64(src)) - *dst = Time{Microseconds: usec, Status: Present} - - return nil -} - -// EncodeText writes the text encoding of src into w. -func (src Time) EncodeText(ci *ConnInfo, buf []byte) ([]byte, error) { - switch src.Status { - case Null: - return nil, nil - case Undefined: - return nil, errUndefined - } - - usec := src.Microseconds - hours := usec / microsecondsPerHour - usec -= hours * microsecondsPerHour - minutes := usec / microsecondsPerMinute - usec -= minutes * microsecondsPerMinute - seconds := usec / microsecondsPerSecond - usec -= seconds * microsecondsPerSecond - - s := fmt.Sprintf("%02d:%02d:%02d.%06d", hours, minutes, seconds, usec) - - return append(buf, s...), nil -} - -// EncodeBinary writes the binary encoding of src into w. If src.Time is not in -// the UTC time zone it returns an error. -func (src Time) EncodeBinary(ci *ConnInfo, buf []byte) ([]byte, error) { - switch src.Status { - case Null: - return nil, nil - case Undefined: - return nil, errUndefined - } - - return pgio.AppendInt64(buf, src.Microseconds), nil -} - -// Scan implements the database/sql Scanner interface. -func (dst *Time) Scan(src interface{}) error { - if src == nil { - *dst = Time{Status: Null} - return nil - } - - switch src := src.(type) { - case string: - return dst.DecodeText(nil, []byte(src)) - case []byte: - srcCopy := make([]byte, len(src)) - copy(srcCopy, src) - return dst.DecodeText(nil, srcCopy) - case time.Time: - return dst.Set(src) - } - - return fmt.Errorf("cannot scan %T", src) -} - -// Value implements the database/sql/driver Valuer interface. 
-func (src Time) Value() (driver.Value, error) { - return EncodeValueText(src) -} diff --git a/vendor/github.com/jackc/pgtype/timestamp.go b/vendor/github.com/jackc/pgtype/timestamp.go deleted file mode 100644 index fce490c8..00000000 --- a/vendor/github.com/jackc/pgtype/timestamp.go +++ /dev/null @@ -1,261 +0,0 @@ -package pgtype - -import ( - "database/sql/driver" - "encoding/binary" - "fmt" - "strings" - "time" - - "github.com/jackc/pgio" -) - -const pgTimestampFormat = "2006-01-02 15:04:05.999999999" - -// Timestamp represents the PostgreSQL timestamp type. The PostgreSQL -// timestamp does not have a time zone. This presents a problem when -// translating to and from time.Time which requires a time zone. It is highly -// recommended to use timestamptz whenever possible. Timestamp methods either -// convert to UTC or return an error on non-UTC times. -type Timestamp struct { - Time time.Time // Time must always be in UTC. - Status Status - InfinityModifier InfinityModifier -} - -// Set converts src into a Timestamp and stores in dst. If src is a -// time.Time in a non-UTC time zone, the time zone is discarded. 
-func (dst *Timestamp) Set(src interface{}) error { - if src == nil { - *dst = Timestamp{Status: Null} - return nil - } - - if value, ok := src.(interface{ Get() interface{} }); ok { - value2 := value.Get() - if value2 != value { - return dst.Set(value2) - } - } - - switch value := src.(type) { - case time.Time: - *dst = Timestamp{Time: time.Date(value.Year(), value.Month(), value.Day(), value.Hour(), value.Minute(), value.Second(), value.Nanosecond(), time.UTC), Status: Present} - case *time.Time: - if value == nil { - *dst = Timestamp{Status: Null} - } else { - return dst.Set(*value) - } - case string: - return dst.DecodeText(nil, []byte(value)) - case *string: - if value == nil { - *dst = Timestamp{Status: Null} - } else { - return dst.Set(*value) - } - case InfinityModifier: - *dst = Timestamp{InfinityModifier: value, Status: Present} - default: - if originalSrc, ok := underlyingTimeType(src); ok { - return dst.Set(originalSrc) - } - return fmt.Errorf("cannot convert %v to Timestamp", value) - } - - return nil -} - -func (dst Timestamp) Get() interface{} { - switch dst.Status { - case Present: - if dst.InfinityModifier != None { - return dst.InfinityModifier - } - return dst.Time - case Null: - return nil - default: - return dst.Status - } -} - -func (src *Timestamp) AssignTo(dst interface{}) error { - switch src.Status { - case Present: - switch v := dst.(type) { - case *time.Time: - if src.InfinityModifier != None { - return fmt.Errorf("cannot assign %v to %T", src, dst) - } - *v = src.Time - return nil - default: - if nextDst, retry := GetAssignToDstType(dst); retry { - return src.AssignTo(nextDst) - } - return fmt.Errorf("unable to assign to %T", dst) - } - case Null: - return NullAssignTo(dst) - } - - return fmt.Errorf("cannot decode %#v into %T", src, dst) -} - -// DecodeText decodes from src into dst. The decoded time is considered to -// be in UTC. 
-func (dst *Timestamp) DecodeText(ci *ConnInfo, src []byte) error { - if src == nil { - *dst = Timestamp{Status: Null} - return nil - } - - sbuf := string(src) - switch sbuf { - case "infinity": - *dst = Timestamp{Status: Present, InfinityModifier: Infinity} - case "-infinity": - *dst = Timestamp{Status: Present, InfinityModifier: -Infinity} - default: - if strings.HasSuffix(sbuf, " BC") { - t, err := time.Parse(pgTimestampFormat, strings.TrimRight(sbuf, " BC")) - t2 := time.Date(1-t.Year(), t.Month(), t.Day(), t.Hour(), t.Minute(), t.Second(), t.Nanosecond(), t.Location()) - if err != nil { - return err - } - *dst = Timestamp{Time: t2, Status: Present} - return nil - } - tim, err := time.Parse(pgTimestampFormat, sbuf) - if err != nil { - return err - } - - *dst = Timestamp{Time: tim, Status: Present} - } - - return nil -} - -// DecodeBinary decodes from src into dst. The decoded time is considered to -// be in UTC. -func (dst *Timestamp) DecodeBinary(ci *ConnInfo, src []byte) error { - if src == nil { - *dst = Timestamp{Status: Null} - return nil - } - - if len(src) != 8 { - return fmt.Errorf("invalid length for timestamp: %v", len(src)) - } - - microsecSinceY2K := int64(binary.BigEndian.Uint64(src)) - - switch microsecSinceY2K { - case infinityMicrosecondOffset: - *dst = Timestamp{Status: Present, InfinityModifier: Infinity} - case negativeInfinityMicrosecondOffset: - *dst = Timestamp{Status: Present, InfinityModifier: -Infinity} - default: - tim := time.Unix( - microsecFromUnixEpochToY2K/1000000+microsecSinceY2K/1000000, - (microsecFromUnixEpochToY2K%1000000*1000)+(microsecSinceY2K%1000000*1000), - ).UTC() - *dst = Timestamp{Time: tim, Status: Present} - } - - return nil -} - -// EncodeText writes the text encoding of src into w. If src.Time is not in -// the UTC time zone it returns an error. 
-func (src Timestamp) EncodeText(ci *ConnInfo, buf []byte) ([]byte, error) { - switch src.Status { - case Null: - return nil, nil - case Undefined: - return nil, errUndefined - } - if src.Time.Location() != time.UTC { - return nil, fmt.Errorf("cannot encode non-UTC time into timestamp") - } - - var s string - - switch src.InfinityModifier { - case None: - s = src.Time.Truncate(time.Microsecond).Format(pgTimestampFormat) - case Infinity: - s = "infinity" - case NegativeInfinity: - s = "-infinity" - } - - return append(buf, s...), nil -} - -// EncodeBinary writes the binary encoding of src into w. If src.Time is not in -// the UTC time zone it returns an error. -func (src Timestamp) EncodeBinary(ci *ConnInfo, buf []byte) ([]byte, error) { - switch src.Status { - case Null: - return nil, nil - case Undefined: - return nil, errUndefined - } - if src.Time.Location() != time.UTC { - return nil, fmt.Errorf("cannot encode non-UTC time into timestamp") - } - - var microsecSinceY2K int64 - switch src.InfinityModifier { - case None: - microsecSinceUnixEpoch := src.Time.Unix()*1000000 + int64(src.Time.Nanosecond())/1000 - microsecSinceY2K = microsecSinceUnixEpoch - microsecFromUnixEpochToY2K - case Infinity: - microsecSinceY2K = infinityMicrosecondOffset - case NegativeInfinity: - microsecSinceY2K = negativeInfinityMicrosecondOffset - } - - return pgio.AppendInt64(buf, microsecSinceY2K), nil -} - -// Scan implements the database/sql Scanner interface. -func (dst *Timestamp) Scan(src interface{}) error { - if src == nil { - *dst = Timestamp{Status: Null} - return nil - } - - switch src := src.(type) { - case string: - return dst.DecodeText(nil, []byte(src)) - case []byte: - srcCopy := make([]byte, len(src)) - copy(srcCopy, src) - return dst.DecodeText(nil, srcCopy) - case time.Time: - *dst = Timestamp{Time: src, Status: Present} - return nil - } - - return fmt.Errorf("cannot scan %T", src) -} - -// Value implements the database/sql/driver Valuer interface. 
-func (src Timestamp) Value() (driver.Value, error) { - switch src.Status { - case Present: - if src.InfinityModifier != None { - return src.InfinityModifier.String(), nil - } - return src.Time, nil - case Null: - return nil, nil - default: - return nil, errUndefined - } -} diff --git a/vendor/github.com/jackc/pgtype/timestamp_array.go b/vendor/github.com/jackc/pgtype/timestamp_array.go deleted file mode 100644 index e12481e3..00000000 --- a/vendor/github.com/jackc/pgtype/timestamp_array.go +++ /dev/null @@ -1,518 +0,0 @@ -// Code generated by erb. DO NOT EDIT. - -package pgtype - -import ( - "database/sql/driver" - "encoding/binary" - "fmt" - "reflect" - "time" - - "github.com/jackc/pgio" -) - -type TimestampArray struct { - Elements []Timestamp - Dimensions []ArrayDimension - Status Status -} - -func (dst *TimestampArray) Set(src interface{}) error { - // untyped nil and typed nil interfaces are different - if src == nil { - *dst = TimestampArray{Status: Null} - return nil - } - - if value, ok := src.(interface{ Get() interface{} }); ok { - value2 := value.Get() - if value2 != value { - return dst.Set(value2) - } - } - - // Attempt to match to select common types: - switch value := src.(type) { - - case []time.Time: - if value == nil { - *dst = TimestampArray{Status: Null} - } else if len(value) == 0 { - *dst = TimestampArray{Status: Present} - } else { - elements := make([]Timestamp, len(value)) - for i := range value { - if err := elements[i].Set(value[i]); err != nil { - return err - } - } - *dst = TimestampArray{ - Elements: elements, - Dimensions: []ArrayDimension{{Length: int32(len(elements)), LowerBound: 1}}, - Status: Present, - } - } - - case []*time.Time: - if value == nil { - *dst = TimestampArray{Status: Null} - } else if len(value) == 0 { - *dst = TimestampArray{Status: Present} - } else { - elements := make([]Timestamp, len(value)) - for i := range value { - if err := elements[i].Set(value[i]); err != nil { - return err - } - } - *dst = 
TimestampArray{ - Elements: elements, - Dimensions: []ArrayDimension{{Length: int32(len(elements)), LowerBound: 1}}, - Status: Present, - } - } - - case []Timestamp: - if value == nil { - *dst = TimestampArray{Status: Null} - } else if len(value) == 0 { - *dst = TimestampArray{Status: Present} - } else { - *dst = TimestampArray{ - Elements: value, - Dimensions: []ArrayDimension{{Length: int32(len(value)), LowerBound: 1}}, - Status: Present, - } - } - default: - // Fallback to reflection if an optimised match was not found. - // The reflection is necessary for arrays and multidimensional slices, - // but it comes with a 20-50% performance penalty for large arrays/slices - reflectedValue := reflect.ValueOf(src) - if !reflectedValue.IsValid() || reflectedValue.IsZero() { - *dst = TimestampArray{Status: Null} - return nil - } - - dimensions, elementsLength, ok := findDimensionsFromValue(reflectedValue, nil, 0) - if !ok { - return fmt.Errorf("cannot find dimensions of %v for TimestampArray", src) - } - if elementsLength == 0 { - *dst = TimestampArray{Status: Present} - return nil - } - if len(dimensions) == 0 { - if originalSrc, ok := underlyingSliceType(src); ok { - return dst.Set(originalSrc) - } - return fmt.Errorf("cannot convert %v to TimestampArray", src) - } - - *dst = TimestampArray{ - Elements: make([]Timestamp, elementsLength), - Dimensions: dimensions, - Status: Present, - } - elementCount, err := dst.setRecursive(reflectedValue, 0, 0) - if err != nil { - // Maybe the target was one dimension too far, try again: - if len(dst.Dimensions) > 1 { - dst.Dimensions = dst.Dimensions[:len(dst.Dimensions)-1] - elementsLength = 0 - for _, dim := range dst.Dimensions { - if elementsLength == 0 { - elementsLength = int(dim.Length) - } else { - elementsLength *= int(dim.Length) - } - } - dst.Elements = make([]Timestamp, elementsLength) - elementCount, err = dst.setRecursive(reflectedValue, 0, 0) - if err != nil { - return err - } - } else { - return err - } - } - if 
elementCount != len(dst.Elements) { - return fmt.Errorf("cannot convert %v to TimestampArray, expected %d dst.Elements, but got %d instead", src, len(dst.Elements), elementCount) - } - } - - return nil -} - -func (dst *TimestampArray) setRecursive(value reflect.Value, index, dimension int) (int, error) { - switch value.Kind() { - case reflect.Array: - fallthrough - case reflect.Slice: - if len(dst.Dimensions) == dimension { - break - } - - valueLen := value.Len() - if int32(valueLen) != dst.Dimensions[dimension].Length { - return 0, fmt.Errorf("multidimensional arrays must have array expressions with matching dimensions") - } - for i := 0; i < valueLen; i++ { - var err error - index, err = dst.setRecursive(value.Index(i), index, dimension+1) - if err != nil { - return 0, err - } - } - - return index, nil - } - if !value.CanInterface() { - return 0, fmt.Errorf("cannot convert all values to TimestampArray") - } - if err := dst.Elements[index].Set(value.Interface()); err != nil { - return 0, fmt.Errorf("%v in TimestampArray", err) - } - index++ - - return index, nil -} - -func (dst TimestampArray) Get() interface{} { - switch dst.Status { - case Present: - return dst - case Null: - return nil - default: - return dst.Status - } -} - -func (src *TimestampArray) AssignTo(dst interface{}) error { - switch src.Status { - case Present: - if len(src.Dimensions) <= 1 { - // Attempt to match to select common types: - switch v := dst.(type) { - - case *[]time.Time: - *v = make([]time.Time, len(src.Elements)) - for i := range src.Elements { - if err := src.Elements[i].AssignTo(&((*v)[i])); err != nil { - return err - } - } - return nil - - case *[]*time.Time: - *v = make([]*time.Time, len(src.Elements)) - for i := range src.Elements { - if err := src.Elements[i].AssignTo(&((*v)[i])); err != nil { - return err - } - } - return nil - - } - } - - // Try to convert to something AssignTo can use directly. 
- if nextDst, retry := GetAssignToDstType(dst); retry { - return src.AssignTo(nextDst) - } - - // Fallback to reflection if an optimised match was not found. - // The reflection is necessary for arrays and multidimensional slices, - // but it comes with a 20-50% performance penalty for large arrays/slices - value := reflect.ValueOf(dst) - if value.Kind() == reflect.Ptr { - value = value.Elem() - } - - switch value.Kind() { - case reflect.Array, reflect.Slice: - default: - return fmt.Errorf("cannot assign %T to %T", src, dst) - } - - if len(src.Elements) == 0 { - if value.Kind() == reflect.Slice { - value.Set(reflect.MakeSlice(value.Type(), 0, 0)) - return nil - } - } - - elementCount, err := src.assignToRecursive(value, 0, 0) - if err != nil { - return err - } - if elementCount != len(src.Elements) { - return fmt.Errorf("cannot assign %v, needed to assign %d elements, but only assigned %d", dst, len(src.Elements), elementCount) - } - - return nil - case Null: - return NullAssignTo(dst) - } - - return fmt.Errorf("cannot decode %#v into %T", src, dst) -} - -func (src *TimestampArray) assignToRecursive(value reflect.Value, index, dimension int) (int, error) { - switch kind := value.Kind(); kind { - case reflect.Array: - fallthrough - case reflect.Slice: - if len(src.Dimensions) == dimension { - break - } - - length := int(src.Dimensions[dimension].Length) - if reflect.Array == kind { - typ := value.Type() - if typ.Len() != length { - return 0, fmt.Errorf("expected size %d array, but %s has size %d array", length, typ, typ.Len()) - } - value.Set(reflect.New(typ).Elem()) - } else { - value.Set(reflect.MakeSlice(value.Type(), length, length)) - } - - var err error - for i := 0; i < length; i++ { - index, err = src.assignToRecursive(value.Index(i), index, dimension+1) - if err != nil { - return 0, err - } - } - - return index, nil - } - if len(src.Dimensions) != dimension { - return 0, fmt.Errorf("incorrect dimensions, expected %d, found %d", len(src.Dimensions), 
dimension) - } - if !value.CanAddr() { - return 0, fmt.Errorf("cannot assign all values from TimestampArray") - } - addr := value.Addr() - if !addr.CanInterface() { - return 0, fmt.Errorf("cannot assign all values from TimestampArray") - } - if err := src.Elements[index].AssignTo(addr.Interface()); err != nil { - return 0, err - } - index++ - return index, nil -} - -func (dst *TimestampArray) DecodeText(ci *ConnInfo, src []byte) error { - if src == nil { - *dst = TimestampArray{Status: Null} - return nil - } - - uta, err := ParseUntypedTextArray(string(src)) - if err != nil { - return err - } - - var elements []Timestamp - - if len(uta.Elements) > 0 { - elements = make([]Timestamp, len(uta.Elements)) - - for i, s := range uta.Elements { - var elem Timestamp - var elemSrc []byte - if s != "NULL" || uta.Quoted[i] { - elemSrc = []byte(s) - } - err = elem.DecodeText(ci, elemSrc) - if err != nil { - return err - } - - elements[i] = elem - } - } - - *dst = TimestampArray{Elements: elements, Dimensions: uta.Dimensions, Status: Present} - - return nil -} - -func (dst *TimestampArray) DecodeBinary(ci *ConnInfo, src []byte) error { - if src == nil { - *dst = TimestampArray{Status: Null} - return nil - } - - var arrayHeader ArrayHeader - rp, err := arrayHeader.DecodeBinary(ci, src) - if err != nil { - return err - } - - if len(arrayHeader.Dimensions) == 0 { - *dst = TimestampArray{Dimensions: arrayHeader.Dimensions, Status: Present} - return nil - } - - elementCount := arrayHeader.Dimensions[0].Length - for _, d := range arrayHeader.Dimensions[1:] { - elementCount *= d.Length - } - - elements := make([]Timestamp, elementCount) - - for i := range elements { - elemLen := int(int32(binary.BigEndian.Uint32(src[rp:]))) - rp += 4 - var elemSrc []byte - if elemLen >= 0 { - elemSrc = src[rp : rp+elemLen] - rp += elemLen - } - err = elements[i].DecodeBinary(ci, elemSrc) - if err != nil { - return err - } - } - - *dst = TimestampArray{Elements: elements, Dimensions: 
arrayHeader.Dimensions, Status: Present} - return nil -} - -func (src TimestampArray) EncodeText(ci *ConnInfo, buf []byte) ([]byte, error) { - switch src.Status { - case Null: - return nil, nil - case Undefined: - return nil, errUndefined - } - - if len(src.Dimensions) == 0 { - return append(buf, '{', '}'), nil - } - - buf = EncodeTextArrayDimensions(buf, src.Dimensions) - - // dimElemCounts is the multiples of elements that each array lies on. For - // example, a single dimension array of length 4 would have a dimElemCounts of - // [4]. A multi-dimensional array of lengths [3,5,2] would have a - // dimElemCounts of [30,10,2]. This is used to simplify when to render a '{' - // or '}'. - dimElemCounts := make([]int, len(src.Dimensions)) - dimElemCounts[len(src.Dimensions)-1] = int(src.Dimensions[len(src.Dimensions)-1].Length) - for i := len(src.Dimensions) - 2; i > -1; i-- { - dimElemCounts[i] = int(src.Dimensions[i].Length) * dimElemCounts[i+1] - } - - inElemBuf := make([]byte, 0, 32) - for i, elem := range src.Elements { - if i > 0 { - buf = append(buf, ',') - } - - for _, dec := range dimElemCounts { - if i%dec == 0 { - buf = append(buf, '{') - } - } - - elemBuf, err := elem.EncodeText(ci, inElemBuf) - if err != nil { - return nil, err - } - if elemBuf == nil { - buf = append(buf, `NULL`...) - } else { - buf = append(buf, QuoteArrayElementIfNeeded(string(elemBuf))...) 
- } - - for _, dec := range dimElemCounts { - if (i+1)%dec == 0 { - buf = append(buf, '}') - } - } - } - - return buf, nil -} - -func (src TimestampArray) EncodeBinary(ci *ConnInfo, buf []byte) ([]byte, error) { - switch src.Status { - case Null: - return nil, nil - case Undefined: - return nil, errUndefined - } - - arrayHeader := ArrayHeader{ - Dimensions: src.Dimensions, - } - - if dt, ok := ci.DataTypeForName("timestamp"); ok { - arrayHeader.ElementOID = int32(dt.OID) - } else { - return nil, fmt.Errorf("unable to find oid for type name %v", "timestamp") - } - - for i := range src.Elements { - if src.Elements[i].Status == Null { - arrayHeader.ContainsNull = true - break - } - } - - buf = arrayHeader.EncodeBinary(ci, buf) - - for i := range src.Elements { - sp := len(buf) - buf = pgio.AppendInt32(buf, -1) - - elemBuf, err := src.Elements[i].EncodeBinary(ci, buf) - if err != nil { - return nil, err - } - if elemBuf != nil { - buf = elemBuf - pgio.SetInt32(buf[sp:], int32(len(buf[sp:])-4)) - } - } - - return buf, nil -} - -// Scan implements the database/sql Scanner interface. -func (dst *TimestampArray) Scan(src interface{}) error { - if src == nil { - return dst.DecodeText(nil, nil) - } - - switch src := src.(type) { - case string: - return dst.DecodeText(nil, []byte(src)) - case []byte: - srcCopy := make([]byte, len(src)) - copy(srcCopy, src) - return dst.DecodeText(nil, srcCopy) - } - - return fmt.Errorf("cannot scan %T", src) -} - -// Value implements the database/sql/driver Valuer interface. 
-func (src TimestampArray) Value() (driver.Value, error) { - buf, err := src.EncodeText(nil, nil) - if err != nil { - return nil, err - } - if buf == nil { - return nil, nil - } - - return string(buf), nil -} diff --git a/vendor/github.com/jackc/pgtype/timestamptz.go b/vendor/github.com/jackc/pgtype/timestamptz.go deleted file mode 100644 index 72ae4991..00000000 --- a/vendor/github.com/jackc/pgtype/timestamptz.go +++ /dev/null @@ -1,322 +0,0 @@ -package pgtype - -import ( - "database/sql/driver" - "encoding/binary" - "encoding/json" - "fmt" - "time" - - "github.com/jackc/pgio" -) - -const pgTimestamptzHourFormat = "2006-01-02 15:04:05.999999999Z07" -const pgTimestamptzMinuteFormat = "2006-01-02 15:04:05.999999999Z07:00" -const pgTimestamptzSecondFormat = "2006-01-02 15:04:05.999999999Z07:00:00" -const microsecFromUnixEpochToY2K = 946684800 * 1000000 - -const ( - negativeInfinityMicrosecondOffset = -9223372036854775808 - infinityMicrosecondOffset = 9223372036854775807 -) - -type Timestamptz struct { - Time time.Time - Status Status - InfinityModifier InfinityModifier -} - -func (dst *Timestamptz) Set(src interface{}) error { - if src == nil { - *dst = Timestamptz{Status: Null} - return nil - } - - if value, ok := src.(interface{ Get() interface{} }); ok { - value2 := value.Get() - if value2 != value { - return dst.Set(value2) - } - } - - switch value := src.(type) { - case time.Time: - *dst = Timestamptz{Time: value, Status: Present} - case *time.Time: - if value == nil { - *dst = Timestamptz{Status: Null} - } else { - return dst.Set(*value) - } - case string: - return dst.DecodeText(nil, []byte(value)) - case *string: - if value == nil { - *dst = Timestamptz{Status: Null} - } else { - return dst.Set(*value) - } - case InfinityModifier: - *dst = Timestamptz{InfinityModifier: value, Status: Present} - default: - if originalSrc, ok := underlyingTimeType(src); ok { - return dst.Set(originalSrc) - } - return fmt.Errorf("cannot convert %v to Timestamptz", value) - } - - 
return nil -} - -func (dst Timestamptz) Get() interface{} { - switch dst.Status { - case Present: - if dst.InfinityModifier != None { - return dst.InfinityModifier - } - return dst.Time - case Null: - return nil - default: - return dst.Status - } -} - -func (src *Timestamptz) AssignTo(dst interface{}) error { - switch src.Status { - case Present: - switch v := dst.(type) { - case *time.Time: - if src.InfinityModifier != None { - return fmt.Errorf("cannot assign %v to %T", src, dst) - } - *v = src.Time - return nil - default: - if nextDst, retry := GetAssignToDstType(dst); retry { - return src.AssignTo(nextDst) - } - return fmt.Errorf("unable to assign to %T", dst) - } - case Null: - return NullAssignTo(dst) - } - - return fmt.Errorf("cannot decode %#v into %T", src, dst) -} - -func (dst *Timestamptz) DecodeText(ci *ConnInfo, src []byte) error { - if src == nil { - *dst = Timestamptz{Status: Null} - return nil - } - - sbuf := string(src) - switch sbuf { - case "infinity": - *dst = Timestamptz{Status: Present, InfinityModifier: Infinity} - case "-infinity": - *dst = Timestamptz{Status: Present, InfinityModifier: -Infinity} - default: - var format string - if len(sbuf) >= 9 && (sbuf[len(sbuf)-9] == '-' || sbuf[len(sbuf)-9] == '+') { - format = pgTimestamptzSecondFormat - } else if len(sbuf) >= 6 && (sbuf[len(sbuf)-6] == '-' || sbuf[len(sbuf)-6] == '+') { - format = pgTimestamptzMinuteFormat - } else { - format = pgTimestamptzHourFormat - } - - tim, err := time.Parse(format, sbuf) - if err != nil { - return err - } - - *dst = Timestamptz{Time: normalizePotentialUTC(tim), Status: Present} - } - - return nil -} - -func (dst *Timestamptz) DecodeBinary(ci *ConnInfo, src []byte) error { - if src == nil { - *dst = Timestamptz{Status: Null} - return nil - } - - if len(src) != 8 { - return fmt.Errorf("invalid length for timestamptz: %v", len(src)) - } - - microsecSinceY2K := int64(binary.BigEndian.Uint64(src)) - - switch microsecSinceY2K { - case infinityMicrosecondOffset: - 
*dst = Timestamptz{Status: Present, InfinityModifier: Infinity} - case negativeInfinityMicrosecondOffset: - *dst = Timestamptz{Status: Present, InfinityModifier: -Infinity} - default: - tim := time.Unix( - microsecFromUnixEpochToY2K/1000000+microsecSinceY2K/1000000, - (microsecFromUnixEpochToY2K%1000000*1000)+(microsecSinceY2K%1000000*1000), - ) - *dst = Timestamptz{Time: tim, Status: Present} - } - - return nil -} - -func (src Timestamptz) EncodeText(ci *ConnInfo, buf []byte) ([]byte, error) { - switch src.Status { - case Null: - return nil, nil - case Undefined: - return nil, errUndefined - } - - var s string - - switch src.InfinityModifier { - case None: - s = src.Time.UTC().Truncate(time.Microsecond).Format(pgTimestamptzSecondFormat) - case Infinity: - s = "infinity" - case NegativeInfinity: - s = "-infinity" - } - - return append(buf, s...), nil -} - -func (src Timestamptz) EncodeBinary(ci *ConnInfo, buf []byte) ([]byte, error) { - switch src.Status { - case Null: - return nil, nil - case Undefined: - return nil, errUndefined - } - - var microsecSinceY2K int64 - switch src.InfinityModifier { - case None: - microsecSinceUnixEpoch := src.Time.Unix()*1000000 + int64(src.Time.Nanosecond())/1000 - microsecSinceY2K = microsecSinceUnixEpoch - microsecFromUnixEpochToY2K - case Infinity: - microsecSinceY2K = infinityMicrosecondOffset - case NegativeInfinity: - microsecSinceY2K = negativeInfinityMicrosecondOffset - } - - return pgio.AppendInt64(buf, microsecSinceY2K), nil -} - -// Scan implements the database/sql Scanner interface. 
-func (dst *Timestamptz) Scan(src interface{}) error { - if src == nil { - *dst = Timestamptz{Status: Null} - return nil - } - - switch src := src.(type) { - case string: - return dst.DecodeText(nil, []byte(src)) - case []byte: - srcCopy := make([]byte, len(src)) - copy(srcCopy, src) - return dst.DecodeText(nil, srcCopy) - case time.Time: - *dst = Timestamptz{Time: src, Status: Present} - return nil - } - - return fmt.Errorf("cannot scan %T", src) -} - -// Value implements the database/sql/driver Valuer interface. -func (src Timestamptz) Value() (driver.Value, error) { - switch src.Status { - case Present: - if src.InfinityModifier != None { - return src.InfinityModifier.String(), nil - } - if src.Time.Location().String() == time.UTC.String() { - return src.Time.UTC(), nil - } - return src.Time, nil - case Null: - return nil, nil - default: - return nil, errUndefined - } -} - -func (src Timestamptz) MarshalJSON() ([]byte, error) { - switch src.Status { - case Null: - return []byte("null"), nil - case Undefined: - return nil, errUndefined - } - - if src.Status != Present { - return nil, errBadStatus - } - - var s string - - switch src.InfinityModifier { - case None: - s = src.Time.Format(time.RFC3339Nano) - case Infinity: - s = "infinity" - case NegativeInfinity: - s = "-infinity" - } - - return json.Marshal(s) -} - -func (dst *Timestamptz) UnmarshalJSON(b []byte) error { - var s *string - err := json.Unmarshal(b, &s) - if err != nil { - return err - } - - if s == nil { - *dst = Timestamptz{Status: Null} - return nil - } - - switch *s { - case "infinity": - *dst = Timestamptz{Status: Present, InfinityModifier: Infinity} - case "-infinity": - *dst = Timestamptz{Status: Present, InfinityModifier: -Infinity} - default: - // PostgreSQL uses ISO 8601 for to_json function and casting from a string to timestamptz - tim, err := time.Parse(time.RFC3339Nano, *s) - if err != nil { - return err - } - - *dst = Timestamptz{Time: normalizePotentialUTC(tim), Status: Present} - } - 
- return nil -} - -// Normalize timestamps in UTC location to behave similarly to how the Golang -// standard library does it: UTC timestamps lack a .loc value. -// -// Reason for this: when comparing two timestamps with reflect.DeepEqual (generally -// speaking not a good idea, but several testing libraries (for example testify) -// does this), their location data needs to be equal for them to be considered -// equal. -func normalizePotentialUTC(timestamp time.Time) time.Time { - if timestamp.Location().String() != time.UTC.String() { - return timestamp - } - - return timestamp.UTC() -} diff --git a/vendor/github.com/jackc/pgtype/timestamptz_array.go b/vendor/github.com/jackc/pgtype/timestamptz_array.go deleted file mode 100644 index a3b4b263..00000000 --- a/vendor/github.com/jackc/pgtype/timestamptz_array.go +++ /dev/null @@ -1,518 +0,0 @@ -// Code generated by erb. DO NOT EDIT. - -package pgtype - -import ( - "database/sql/driver" - "encoding/binary" - "fmt" - "reflect" - "time" - - "github.com/jackc/pgio" -) - -type TimestamptzArray struct { - Elements []Timestamptz - Dimensions []ArrayDimension - Status Status -} - -func (dst *TimestamptzArray) Set(src interface{}) error { - // untyped nil and typed nil interfaces are different - if src == nil { - *dst = TimestamptzArray{Status: Null} - return nil - } - - if value, ok := src.(interface{ Get() interface{} }); ok { - value2 := value.Get() - if value2 != value { - return dst.Set(value2) - } - } - - // Attempt to match to select common types: - switch value := src.(type) { - - case []time.Time: - if value == nil { - *dst = TimestamptzArray{Status: Null} - } else if len(value) == 0 { - *dst = TimestamptzArray{Status: Present} - } else { - elements := make([]Timestamptz, len(value)) - for i := range value { - if err := elements[i].Set(value[i]); err != nil { - return err - } - } - *dst = TimestamptzArray{ - Elements: elements, - Dimensions: []ArrayDimension{{Length: int32(len(elements)), LowerBound: 1}}, - Status: 
Present, - } - } - - case []*time.Time: - if value == nil { - *dst = TimestamptzArray{Status: Null} - } else if len(value) == 0 { - *dst = TimestamptzArray{Status: Present} - } else { - elements := make([]Timestamptz, len(value)) - for i := range value { - if err := elements[i].Set(value[i]); err != nil { - return err - } - } - *dst = TimestamptzArray{ - Elements: elements, - Dimensions: []ArrayDimension{{Length: int32(len(elements)), LowerBound: 1}}, - Status: Present, - } - } - - case []Timestamptz: - if value == nil { - *dst = TimestamptzArray{Status: Null} - } else if len(value) == 0 { - *dst = TimestamptzArray{Status: Present} - } else { - *dst = TimestamptzArray{ - Elements: value, - Dimensions: []ArrayDimension{{Length: int32(len(value)), LowerBound: 1}}, - Status: Present, - } - } - default: - // Fallback to reflection if an optimised match was not found. - // The reflection is necessary for arrays and multidimensional slices, - // but it comes with a 20-50% performance penalty for large arrays/slices - reflectedValue := reflect.ValueOf(src) - if !reflectedValue.IsValid() || reflectedValue.IsZero() { - *dst = TimestamptzArray{Status: Null} - return nil - } - - dimensions, elementsLength, ok := findDimensionsFromValue(reflectedValue, nil, 0) - if !ok { - return fmt.Errorf("cannot find dimensions of %v for TimestamptzArray", src) - } - if elementsLength == 0 { - *dst = TimestamptzArray{Status: Present} - return nil - } - if len(dimensions) == 0 { - if originalSrc, ok := underlyingSliceType(src); ok { - return dst.Set(originalSrc) - } - return fmt.Errorf("cannot convert %v to TimestamptzArray", src) - } - - *dst = TimestamptzArray{ - Elements: make([]Timestamptz, elementsLength), - Dimensions: dimensions, - Status: Present, - } - elementCount, err := dst.setRecursive(reflectedValue, 0, 0) - if err != nil { - // Maybe the target was one dimension too far, try again: - if len(dst.Dimensions) > 1 { - dst.Dimensions = dst.Dimensions[:len(dst.Dimensions)-1] - 
elementsLength = 0 - for _, dim := range dst.Dimensions { - if elementsLength == 0 { - elementsLength = int(dim.Length) - } else { - elementsLength *= int(dim.Length) - } - } - dst.Elements = make([]Timestamptz, elementsLength) - elementCount, err = dst.setRecursive(reflectedValue, 0, 0) - if err != nil { - return err - } - } else { - return err - } - } - if elementCount != len(dst.Elements) { - return fmt.Errorf("cannot convert %v to TimestamptzArray, expected %d dst.Elements, but got %d instead", src, len(dst.Elements), elementCount) - } - } - - return nil -} - -func (dst *TimestamptzArray) setRecursive(value reflect.Value, index, dimension int) (int, error) { - switch value.Kind() { - case reflect.Array: - fallthrough - case reflect.Slice: - if len(dst.Dimensions) == dimension { - break - } - - valueLen := value.Len() - if int32(valueLen) != dst.Dimensions[dimension].Length { - return 0, fmt.Errorf("multidimensional arrays must have array expressions with matching dimensions") - } - for i := 0; i < valueLen; i++ { - var err error - index, err = dst.setRecursive(value.Index(i), index, dimension+1) - if err != nil { - return 0, err - } - } - - return index, nil - } - if !value.CanInterface() { - return 0, fmt.Errorf("cannot convert all values to TimestamptzArray") - } - if err := dst.Elements[index].Set(value.Interface()); err != nil { - return 0, fmt.Errorf("%v in TimestamptzArray", err) - } - index++ - - return index, nil -} - -func (dst TimestamptzArray) Get() interface{} { - switch dst.Status { - case Present: - return dst - case Null: - return nil - default: - return dst.Status - } -} - -func (src *TimestamptzArray) AssignTo(dst interface{}) error { - switch src.Status { - case Present: - if len(src.Dimensions) <= 1 { - // Attempt to match to select common types: - switch v := dst.(type) { - - case *[]time.Time: - *v = make([]time.Time, len(src.Elements)) - for i := range src.Elements { - if err := src.Elements[i].AssignTo(&((*v)[i])); err != nil { - return 
err - } - } - return nil - - case *[]*time.Time: - *v = make([]*time.Time, len(src.Elements)) - for i := range src.Elements { - if err := src.Elements[i].AssignTo(&((*v)[i])); err != nil { - return err - } - } - return nil - - } - } - - // Try to convert to something AssignTo can use directly. - if nextDst, retry := GetAssignToDstType(dst); retry { - return src.AssignTo(nextDst) - } - - // Fallback to reflection if an optimised match was not found. - // The reflection is necessary for arrays and multidimensional slices, - // but it comes with a 20-50% performance penalty for large arrays/slices - value := reflect.ValueOf(dst) - if value.Kind() == reflect.Ptr { - value = value.Elem() - } - - switch value.Kind() { - case reflect.Array, reflect.Slice: - default: - return fmt.Errorf("cannot assign %T to %T", src, dst) - } - - if len(src.Elements) == 0 { - if value.Kind() == reflect.Slice { - value.Set(reflect.MakeSlice(value.Type(), 0, 0)) - return nil - } - } - - elementCount, err := src.assignToRecursive(value, 0, 0) - if err != nil { - return err - } - if elementCount != len(src.Elements) { - return fmt.Errorf("cannot assign %v, needed to assign %d elements, but only assigned %d", dst, len(src.Elements), elementCount) - } - - return nil - case Null: - return NullAssignTo(dst) - } - - return fmt.Errorf("cannot decode %#v into %T", src, dst) -} - -func (src *TimestamptzArray) assignToRecursive(value reflect.Value, index, dimension int) (int, error) { - switch kind := value.Kind(); kind { - case reflect.Array: - fallthrough - case reflect.Slice: - if len(src.Dimensions) == dimension { - break - } - - length := int(src.Dimensions[dimension].Length) - if reflect.Array == kind { - typ := value.Type() - if typ.Len() != length { - return 0, fmt.Errorf("expected size %d array, but %s has size %d array", length, typ, typ.Len()) - } - value.Set(reflect.New(typ).Elem()) - } else { - value.Set(reflect.MakeSlice(value.Type(), length, length)) - } - - var err error - for i := 0; i 
< length; i++ { - index, err = src.assignToRecursive(value.Index(i), index, dimension+1) - if err != nil { - return 0, err - } - } - - return index, nil - } - if len(src.Dimensions) != dimension { - return 0, fmt.Errorf("incorrect dimensions, expected %d, found %d", len(src.Dimensions), dimension) - } - if !value.CanAddr() { - return 0, fmt.Errorf("cannot assign all values from TimestamptzArray") - } - addr := value.Addr() - if !addr.CanInterface() { - return 0, fmt.Errorf("cannot assign all values from TimestamptzArray") - } - if err := src.Elements[index].AssignTo(addr.Interface()); err != nil { - return 0, err - } - index++ - return index, nil -} - -func (dst *TimestamptzArray) DecodeText(ci *ConnInfo, src []byte) error { - if src == nil { - *dst = TimestamptzArray{Status: Null} - return nil - } - - uta, err := ParseUntypedTextArray(string(src)) - if err != nil { - return err - } - - var elements []Timestamptz - - if len(uta.Elements) > 0 { - elements = make([]Timestamptz, len(uta.Elements)) - - for i, s := range uta.Elements { - var elem Timestamptz - var elemSrc []byte - if s != "NULL" || uta.Quoted[i] { - elemSrc = []byte(s) - } - err = elem.DecodeText(ci, elemSrc) - if err != nil { - return err - } - - elements[i] = elem - } - } - - *dst = TimestamptzArray{Elements: elements, Dimensions: uta.Dimensions, Status: Present} - - return nil -} - -func (dst *TimestamptzArray) DecodeBinary(ci *ConnInfo, src []byte) error { - if src == nil { - *dst = TimestamptzArray{Status: Null} - return nil - } - - var arrayHeader ArrayHeader - rp, err := arrayHeader.DecodeBinary(ci, src) - if err != nil { - return err - } - - if len(arrayHeader.Dimensions) == 0 { - *dst = TimestamptzArray{Dimensions: arrayHeader.Dimensions, Status: Present} - return nil - } - - elementCount := arrayHeader.Dimensions[0].Length - for _, d := range arrayHeader.Dimensions[1:] { - elementCount *= d.Length - } - - elements := make([]Timestamptz, elementCount) - - for i := range elements { - elemLen := 
int(int32(binary.BigEndian.Uint32(src[rp:]))) - rp += 4 - var elemSrc []byte - if elemLen >= 0 { - elemSrc = src[rp : rp+elemLen] - rp += elemLen - } - err = elements[i].DecodeBinary(ci, elemSrc) - if err != nil { - return err - } - } - - *dst = TimestamptzArray{Elements: elements, Dimensions: arrayHeader.Dimensions, Status: Present} - return nil -} - -func (src TimestamptzArray) EncodeText(ci *ConnInfo, buf []byte) ([]byte, error) { - switch src.Status { - case Null: - return nil, nil - case Undefined: - return nil, errUndefined - } - - if len(src.Dimensions) == 0 { - return append(buf, '{', '}'), nil - } - - buf = EncodeTextArrayDimensions(buf, src.Dimensions) - - // dimElemCounts is the multiples of elements that each array lies on. For - // example, a single dimension array of length 4 would have a dimElemCounts of - // [4]. A multi-dimensional array of lengths [3,5,2] would have a - // dimElemCounts of [30,10,2]. This is used to simplify when to render a '{' - // or '}'. - dimElemCounts := make([]int, len(src.Dimensions)) - dimElemCounts[len(src.Dimensions)-1] = int(src.Dimensions[len(src.Dimensions)-1].Length) - for i := len(src.Dimensions) - 2; i > -1; i-- { - dimElemCounts[i] = int(src.Dimensions[i].Length) * dimElemCounts[i+1] - } - - inElemBuf := make([]byte, 0, 32) - for i, elem := range src.Elements { - if i > 0 { - buf = append(buf, ',') - } - - for _, dec := range dimElemCounts { - if i%dec == 0 { - buf = append(buf, '{') - } - } - - elemBuf, err := elem.EncodeText(ci, inElemBuf) - if err != nil { - return nil, err - } - if elemBuf == nil { - buf = append(buf, `NULL`...) - } else { - buf = append(buf, QuoteArrayElementIfNeeded(string(elemBuf))...) 
- } - - for _, dec := range dimElemCounts { - if (i+1)%dec == 0 { - buf = append(buf, '}') - } - } - } - - return buf, nil -} - -func (src TimestamptzArray) EncodeBinary(ci *ConnInfo, buf []byte) ([]byte, error) { - switch src.Status { - case Null: - return nil, nil - case Undefined: - return nil, errUndefined - } - - arrayHeader := ArrayHeader{ - Dimensions: src.Dimensions, - } - - if dt, ok := ci.DataTypeForName("timestamptz"); ok { - arrayHeader.ElementOID = int32(dt.OID) - } else { - return nil, fmt.Errorf("unable to find oid for type name %v", "timestamptz") - } - - for i := range src.Elements { - if src.Elements[i].Status == Null { - arrayHeader.ContainsNull = true - break - } - } - - buf = arrayHeader.EncodeBinary(ci, buf) - - for i := range src.Elements { - sp := len(buf) - buf = pgio.AppendInt32(buf, -1) - - elemBuf, err := src.Elements[i].EncodeBinary(ci, buf) - if err != nil { - return nil, err - } - if elemBuf != nil { - buf = elemBuf - pgio.SetInt32(buf[sp:], int32(len(buf[sp:])-4)) - } - } - - return buf, nil -} - -// Scan implements the database/sql Scanner interface. -func (dst *TimestamptzArray) Scan(src interface{}) error { - if src == nil { - return dst.DecodeText(nil, nil) - } - - switch src := src.(type) { - case string: - return dst.DecodeText(nil, []byte(src)) - case []byte: - srcCopy := make([]byte, len(src)) - copy(srcCopy, src) - return dst.DecodeText(nil, srcCopy) - } - - return fmt.Errorf("cannot scan %T", src) -} - -// Value implements the database/sql/driver Valuer interface. 
-func (src TimestamptzArray) Value() (driver.Value, error) { - buf, err := src.EncodeText(nil, nil) - if err != nil { - return nil, err - } - if buf == nil { - return nil, nil - } - - return string(buf), nil -} diff --git a/vendor/github.com/jackc/pgtype/tsrange.go b/vendor/github.com/jackc/pgtype/tsrange.go deleted file mode 100644 index 19ecf446..00000000 --- a/vendor/github.com/jackc/pgtype/tsrange.go +++ /dev/null @@ -1,267 +0,0 @@ -package pgtype - -import ( - "database/sql/driver" - "fmt" - - "github.com/jackc/pgio" -) - -type Tsrange struct { - Lower Timestamp - Upper Timestamp - LowerType BoundType - UpperType BoundType - Status Status -} - -func (dst *Tsrange) Set(src interface{}) error { - // untyped nil and typed nil interfaces are different - if src == nil { - *dst = Tsrange{Status: Null} - return nil - } - - switch value := src.(type) { - case Tsrange: - *dst = value - case *Tsrange: - *dst = *value - case string: - return dst.DecodeText(nil, []byte(value)) - default: - return fmt.Errorf("cannot convert %v to Tsrange", src) - } - - return nil -} - -func (dst Tsrange) Get() interface{} { - switch dst.Status { - case Present: - return dst - case Null: - return nil - default: - return dst.Status - } -} - -func (src *Tsrange) AssignTo(dst interface{}) error { - return fmt.Errorf("cannot assign %v to %T", src, dst) -} - -func (dst *Tsrange) DecodeText(ci *ConnInfo, src []byte) error { - if src == nil { - *dst = Tsrange{Status: Null} - return nil - } - - utr, err := ParseUntypedTextRange(string(src)) - if err != nil { - return err - } - - *dst = Tsrange{Status: Present} - - dst.LowerType = utr.LowerType - dst.UpperType = utr.UpperType - - if dst.LowerType == Empty { - return nil - } - - if dst.LowerType == Inclusive || dst.LowerType == Exclusive { - if err := dst.Lower.DecodeText(ci, []byte(utr.Lower)); err != nil { - return err - } - } - - if dst.UpperType == Inclusive || dst.UpperType == Exclusive { - if err := dst.Upper.DecodeText(ci, []byte(utr.Upper)); 
err != nil { - return err - } - } - - return nil -} - -func (dst *Tsrange) DecodeBinary(ci *ConnInfo, src []byte) error { - if src == nil { - *dst = Tsrange{Status: Null} - return nil - } - - ubr, err := ParseUntypedBinaryRange(src) - if err != nil { - return err - } - - *dst = Tsrange{Status: Present} - - dst.LowerType = ubr.LowerType - dst.UpperType = ubr.UpperType - - if dst.LowerType == Empty { - return nil - } - - if dst.LowerType == Inclusive || dst.LowerType == Exclusive { - if err := dst.Lower.DecodeBinary(ci, ubr.Lower); err != nil { - return err - } - } - - if dst.UpperType == Inclusive || dst.UpperType == Exclusive { - if err := dst.Upper.DecodeBinary(ci, ubr.Upper); err != nil { - return err - } - } - - return nil -} - -func (src Tsrange) EncodeText(ci *ConnInfo, buf []byte) ([]byte, error) { - switch src.Status { - case Null: - return nil, nil - case Undefined: - return nil, errUndefined - } - - switch src.LowerType { - case Exclusive, Unbounded: - buf = append(buf, '(') - case Inclusive: - buf = append(buf, '[') - case Empty: - return append(buf, "empty"...), nil - default: - return nil, fmt.Errorf("unknown lower bound type %v", src.LowerType) - } - - var err error - - if src.LowerType != Unbounded { - buf, err = src.Lower.EncodeText(ci, buf) - if err != nil { - return nil, err - } else if buf == nil { - return nil, fmt.Errorf("Lower cannot be null unless LowerType is Unbounded") - } - } - - buf = append(buf, ',') - - if src.UpperType != Unbounded { - buf, err = src.Upper.EncodeText(ci, buf) - if err != nil { - return nil, err - } else if buf == nil { - return nil, fmt.Errorf("Upper cannot be null unless UpperType is Unbounded") - } - } - - switch src.UpperType { - case Exclusive, Unbounded: - buf = append(buf, ')') - case Inclusive: - buf = append(buf, ']') - default: - return nil, fmt.Errorf("unknown upper bound type %v", src.UpperType) - } - - return buf, nil -} - -func (src Tsrange) EncodeBinary(ci *ConnInfo, buf []byte) ([]byte, error) { - switch 
src.Status { - case Null: - return nil, nil - case Undefined: - return nil, errUndefined - } - - var rangeType byte - switch src.LowerType { - case Inclusive: - rangeType |= lowerInclusiveMask - case Unbounded: - rangeType |= lowerUnboundedMask - case Exclusive: - case Empty: - return append(buf, emptyMask), nil - default: - return nil, fmt.Errorf("unknown LowerType: %v", src.LowerType) - } - - switch src.UpperType { - case Inclusive: - rangeType |= upperInclusiveMask - case Unbounded: - rangeType |= upperUnboundedMask - case Exclusive: - default: - return nil, fmt.Errorf("unknown UpperType: %v", src.UpperType) - } - - buf = append(buf, rangeType) - - var err error - - if src.LowerType != Unbounded { - sp := len(buf) - buf = pgio.AppendInt32(buf, -1) - - buf, err = src.Lower.EncodeBinary(ci, buf) - if err != nil { - return nil, err - } - if buf == nil { - return nil, fmt.Errorf("Lower cannot be null unless LowerType is Unbounded") - } - - pgio.SetInt32(buf[sp:], int32(len(buf[sp:])-4)) - } - - if src.UpperType != Unbounded { - sp := len(buf) - buf = pgio.AppendInt32(buf, -1) - - buf, err = src.Upper.EncodeBinary(ci, buf) - if err != nil { - return nil, err - } - if buf == nil { - return nil, fmt.Errorf("Upper cannot be null unless UpperType is Unbounded") - } - - pgio.SetInt32(buf[sp:], int32(len(buf[sp:])-4)) - } - - return buf, nil -} - -// Scan implements the database/sql Scanner interface. -func (dst *Tsrange) Scan(src interface{}) error { - if src == nil { - *dst = Tsrange{Status: Null} - return nil - } - - switch src := src.(type) { - case string: - return dst.DecodeText(nil, []byte(src)) - case []byte: - srcCopy := make([]byte, len(src)) - copy(srcCopy, src) - return dst.DecodeText(nil, srcCopy) - } - - return fmt.Errorf("cannot scan %T", src) -} - -// Value implements the database/sql/driver Valuer interface. 
-func (src Tsrange) Value() (driver.Value, error) { - return EncodeValueText(src) -} diff --git a/vendor/github.com/jackc/pgtype/tsrange_array.go b/vendor/github.com/jackc/pgtype/tsrange_array.go deleted file mode 100644 index c64048eb..00000000 --- a/vendor/github.com/jackc/pgtype/tsrange_array.go +++ /dev/null @@ -1,470 +0,0 @@ -// Code generated by erb. DO NOT EDIT. - -package pgtype - -import ( - "database/sql/driver" - "encoding/binary" - "fmt" - "reflect" - - "github.com/jackc/pgio" -) - -type TsrangeArray struct { - Elements []Tsrange - Dimensions []ArrayDimension - Status Status -} - -func (dst *TsrangeArray) Set(src interface{}) error { - // untyped nil and typed nil interfaces are different - if src == nil { - *dst = TsrangeArray{Status: Null} - return nil - } - - if value, ok := src.(interface{ Get() interface{} }); ok { - value2 := value.Get() - if value2 != value { - return dst.Set(value2) - } - } - - // Attempt to match to select common types: - switch value := src.(type) { - - case []Tsrange: - if value == nil { - *dst = TsrangeArray{Status: Null} - } else if len(value) == 0 { - *dst = TsrangeArray{Status: Present} - } else { - *dst = TsrangeArray{ - Elements: value, - Dimensions: []ArrayDimension{{Length: int32(len(value)), LowerBound: 1}}, - Status: Present, - } - } - default: - // Fallback to reflection if an optimised match was not found. 
- // The reflection is necessary for arrays and multidimensional slices, - // but it comes with a 20-50% performance penalty for large arrays/slices - reflectedValue := reflect.ValueOf(src) - if !reflectedValue.IsValid() || reflectedValue.IsZero() { - *dst = TsrangeArray{Status: Null} - return nil - } - - dimensions, elementsLength, ok := findDimensionsFromValue(reflectedValue, nil, 0) - if !ok { - return fmt.Errorf("cannot find dimensions of %v for TsrangeArray", src) - } - if elementsLength == 0 { - *dst = TsrangeArray{Status: Present} - return nil - } - if len(dimensions) == 0 { - if originalSrc, ok := underlyingSliceType(src); ok { - return dst.Set(originalSrc) - } - return fmt.Errorf("cannot convert %v to TsrangeArray", src) - } - - *dst = TsrangeArray{ - Elements: make([]Tsrange, elementsLength), - Dimensions: dimensions, - Status: Present, - } - elementCount, err := dst.setRecursive(reflectedValue, 0, 0) - if err != nil { - // Maybe the target was one dimension too far, try again: - if len(dst.Dimensions) > 1 { - dst.Dimensions = dst.Dimensions[:len(dst.Dimensions)-1] - elementsLength = 0 - for _, dim := range dst.Dimensions { - if elementsLength == 0 { - elementsLength = int(dim.Length) - } else { - elementsLength *= int(dim.Length) - } - } - dst.Elements = make([]Tsrange, elementsLength) - elementCount, err = dst.setRecursive(reflectedValue, 0, 0) - if err != nil { - return err - } - } else { - return err - } - } - if elementCount != len(dst.Elements) { - return fmt.Errorf("cannot convert %v to TsrangeArray, expected %d dst.Elements, but got %d instead", src, len(dst.Elements), elementCount) - } - } - - return nil -} - -func (dst *TsrangeArray) setRecursive(value reflect.Value, index, dimension int) (int, error) { - switch value.Kind() { - case reflect.Array: - fallthrough - case reflect.Slice: - if len(dst.Dimensions) == dimension { - break - } - - valueLen := value.Len() - if int32(valueLen) != dst.Dimensions[dimension].Length { - return 0, 
fmt.Errorf("multidimensional arrays must have array expressions with matching dimensions") - } - for i := 0; i < valueLen; i++ { - var err error - index, err = dst.setRecursive(value.Index(i), index, dimension+1) - if err != nil { - return 0, err - } - } - - return index, nil - } - if !value.CanInterface() { - return 0, fmt.Errorf("cannot convert all values to TsrangeArray") - } - if err := dst.Elements[index].Set(value.Interface()); err != nil { - return 0, fmt.Errorf("%v in TsrangeArray", err) - } - index++ - - return index, nil -} - -func (dst TsrangeArray) Get() interface{} { - switch dst.Status { - case Present: - return dst - case Null: - return nil - default: - return dst.Status - } -} - -func (src *TsrangeArray) AssignTo(dst interface{}) error { - switch src.Status { - case Present: - if len(src.Dimensions) <= 1 { - // Attempt to match to select common types: - switch v := dst.(type) { - - case *[]Tsrange: - *v = make([]Tsrange, len(src.Elements)) - for i := range src.Elements { - if err := src.Elements[i].AssignTo(&((*v)[i])); err != nil { - return err - } - } - return nil - - } - } - - // Try to convert to something AssignTo can use directly. - if nextDst, retry := GetAssignToDstType(dst); retry { - return src.AssignTo(nextDst) - } - - // Fallback to reflection if an optimised match was not found. 
- // The reflection is necessary for arrays and multidimensional slices, - // but it comes with a 20-50% performance penalty for large arrays/slices - value := reflect.ValueOf(dst) - if value.Kind() == reflect.Ptr { - value = value.Elem() - } - - switch value.Kind() { - case reflect.Array, reflect.Slice: - default: - return fmt.Errorf("cannot assign %T to %T", src, dst) - } - - if len(src.Elements) == 0 { - if value.Kind() == reflect.Slice { - value.Set(reflect.MakeSlice(value.Type(), 0, 0)) - return nil - } - } - - elementCount, err := src.assignToRecursive(value, 0, 0) - if err != nil { - return err - } - if elementCount != len(src.Elements) { - return fmt.Errorf("cannot assign %v, needed to assign %d elements, but only assigned %d", dst, len(src.Elements), elementCount) - } - - return nil - case Null: - return NullAssignTo(dst) - } - - return fmt.Errorf("cannot decode %#v into %T", src, dst) -} - -func (src *TsrangeArray) assignToRecursive(value reflect.Value, index, dimension int) (int, error) { - switch kind := value.Kind(); kind { - case reflect.Array: - fallthrough - case reflect.Slice: - if len(src.Dimensions) == dimension { - break - } - - length := int(src.Dimensions[dimension].Length) - if reflect.Array == kind { - typ := value.Type() - if typ.Len() != length { - return 0, fmt.Errorf("expected size %d array, but %s has size %d array", length, typ, typ.Len()) - } - value.Set(reflect.New(typ).Elem()) - } else { - value.Set(reflect.MakeSlice(value.Type(), length, length)) - } - - var err error - for i := 0; i < length; i++ { - index, err = src.assignToRecursive(value.Index(i), index, dimension+1) - if err != nil { - return 0, err - } - } - - return index, nil - } - if len(src.Dimensions) != dimension { - return 0, fmt.Errorf("incorrect dimensions, expected %d, found %d", len(src.Dimensions), dimension) - } - if !value.CanAddr() { - return 0, fmt.Errorf("cannot assign all values from TsrangeArray") - } - addr := value.Addr() - if !addr.CanInterface() { - 
return 0, fmt.Errorf("cannot assign all values from TsrangeArray") - } - if err := src.Elements[index].AssignTo(addr.Interface()); err != nil { - return 0, err - } - index++ - return index, nil -} - -func (dst *TsrangeArray) DecodeText(ci *ConnInfo, src []byte) error { - if src == nil { - *dst = TsrangeArray{Status: Null} - return nil - } - - uta, err := ParseUntypedTextArray(string(src)) - if err != nil { - return err - } - - var elements []Tsrange - - if len(uta.Elements) > 0 { - elements = make([]Tsrange, len(uta.Elements)) - - for i, s := range uta.Elements { - var elem Tsrange - var elemSrc []byte - if s != "NULL" || uta.Quoted[i] { - elemSrc = []byte(s) - } - err = elem.DecodeText(ci, elemSrc) - if err != nil { - return err - } - - elements[i] = elem - } - } - - *dst = TsrangeArray{Elements: elements, Dimensions: uta.Dimensions, Status: Present} - - return nil -} - -func (dst *TsrangeArray) DecodeBinary(ci *ConnInfo, src []byte) error { - if src == nil { - *dst = TsrangeArray{Status: Null} - return nil - } - - var arrayHeader ArrayHeader - rp, err := arrayHeader.DecodeBinary(ci, src) - if err != nil { - return err - } - - if len(arrayHeader.Dimensions) == 0 { - *dst = TsrangeArray{Dimensions: arrayHeader.Dimensions, Status: Present} - return nil - } - - elementCount := arrayHeader.Dimensions[0].Length - for _, d := range arrayHeader.Dimensions[1:] { - elementCount *= d.Length - } - - elements := make([]Tsrange, elementCount) - - for i := range elements { - elemLen := int(int32(binary.BigEndian.Uint32(src[rp:]))) - rp += 4 - var elemSrc []byte - if elemLen >= 0 { - elemSrc = src[rp : rp+elemLen] - rp += elemLen - } - err = elements[i].DecodeBinary(ci, elemSrc) - if err != nil { - return err - } - } - - *dst = TsrangeArray{Elements: elements, Dimensions: arrayHeader.Dimensions, Status: Present} - return nil -} - -func (src TsrangeArray) EncodeText(ci *ConnInfo, buf []byte) ([]byte, error) { - switch src.Status { - case Null: - return nil, nil - case Undefined: 
- return nil, errUndefined - } - - if len(src.Dimensions) == 0 { - return append(buf, '{', '}'), nil - } - - buf = EncodeTextArrayDimensions(buf, src.Dimensions) - - // dimElemCounts is the multiples of elements that each array lies on. For - // example, a single dimension array of length 4 would have a dimElemCounts of - // [4]. A multi-dimensional array of lengths [3,5,2] would have a - // dimElemCounts of [30,10,2]. This is used to simplify when to render a '{' - // or '}'. - dimElemCounts := make([]int, len(src.Dimensions)) - dimElemCounts[len(src.Dimensions)-1] = int(src.Dimensions[len(src.Dimensions)-1].Length) - for i := len(src.Dimensions) - 2; i > -1; i-- { - dimElemCounts[i] = int(src.Dimensions[i].Length) * dimElemCounts[i+1] - } - - inElemBuf := make([]byte, 0, 32) - for i, elem := range src.Elements { - if i > 0 { - buf = append(buf, ',') - } - - for _, dec := range dimElemCounts { - if i%dec == 0 { - buf = append(buf, '{') - } - } - - elemBuf, err := elem.EncodeText(ci, inElemBuf) - if err != nil { - return nil, err - } - if elemBuf == nil { - buf = append(buf, `NULL`...) - } else { - buf = append(buf, QuoteArrayElementIfNeeded(string(elemBuf))...) 
- } - - for _, dec := range dimElemCounts { - if (i+1)%dec == 0 { - buf = append(buf, '}') - } - } - } - - return buf, nil -} - -func (src TsrangeArray) EncodeBinary(ci *ConnInfo, buf []byte) ([]byte, error) { - switch src.Status { - case Null: - return nil, nil - case Undefined: - return nil, errUndefined - } - - arrayHeader := ArrayHeader{ - Dimensions: src.Dimensions, - } - - if dt, ok := ci.DataTypeForName("tsrange"); ok { - arrayHeader.ElementOID = int32(dt.OID) - } else { - return nil, fmt.Errorf("unable to find oid for type name %v", "tsrange") - } - - for i := range src.Elements { - if src.Elements[i].Status == Null { - arrayHeader.ContainsNull = true - break - } - } - - buf = arrayHeader.EncodeBinary(ci, buf) - - for i := range src.Elements { - sp := len(buf) - buf = pgio.AppendInt32(buf, -1) - - elemBuf, err := src.Elements[i].EncodeBinary(ci, buf) - if err != nil { - return nil, err - } - if elemBuf != nil { - buf = elemBuf - pgio.SetInt32(buf[sp:], int32(len(buf[sp:])-4)) - } - } - - return buf, nil -} - -// Scan implements the database/sql Scanner interface. -func (dst *TsrangeArray) Scan(src interface{}) error { - if src == nil { - return dst.DecodeText(nil, nil) - } - - switch src := src.(type) { - case string: - return dst.DecodeText(nil, []byte(src)) - case []byte: - srcCopy := make([]byte, len(src)) - copy(srcCopy, src) - return dst.DecodeText(nil, srcCopy) - } - - return fmt.Errorf("cannot scan %T", src) -} - -// Value implements the database/sql/driver Valuer interface. 
-func (src TsrangeArray) Value() (driver.Value, error) { - buf, err := src.EncodeText(nil, nil) - if err != nil { - return nil, err - } - if buf == nil { - return nil, nil - } - - return string(buf), nil -} diff --git a/vendor/github.com/jackc/pgtype/tstzrange.go b/vendor/github.com/jackc/pgtype/tstzrange.go deleted file mode 100644 index 25576308..00000000 --- a/vendor/github.com/jackc/pgtype/tstzrange.go +++ /dev/null @@ -1,267 +0,0 @@ -package pgtype - -import ( - "database/sql/driver" - "fmt" - - "github.com/jackc/pgio" -) - -type Tstzrange struct { - Lower Timestamptz - Upper Timestamptz - LowerType BoundType - UpperType BoundType - Status Status -} - -func (dst *Tstzrange) Set(src interface{}) error { - // untyped nil and typed nil interfaces are different - if src == nil { - *dst = Tstzrange{Status: Null} - return nil - } - - switch value := src.(type) { - case Tstzrange: - *dst = value - case *Tstzrange: - *dst = *value - case string: - return dst.DecodeText(nil, []byte(value)) - default: - return fmt.Errorf("cannot convert %v to Tstzrange", src) - } - - return nil -} - -func (dst Tstzrange) Get() interface{} { - switch dst.Status { - case Present: - return dst - case Null: - return nil - default: - return dst.Status - } -} - -func (src *Tstzrange) AssignTo(dst interface{}) error { - return fmt.Errorf("cannot assign %v to %T", src, dst) -} - -func (dst *Tstzrange) DecodeText(ci *ConnInfo, src []byte) error { - if src == nil { - *dst = Tstzrange{Status: Null} - return nil - } - - utr, err := ParseUntypedTextRange(string(src)) - if err != nil { - return err - } - - *dst = Tstzrange{Status: Present} - - dst.LowerType = utr.LowerType - dst.UpperType = utr.UpperType - - if dst.LowerType == Empty { - return nil - } - - if dst.LowerType == Inclusive || dst.LowerType == Exclusive { - if err := dst.Lower.DecodeText(ci, []byte(utr.Lower)); err != nil { - return err - } - } - - if dst.UpperType == Inclusive || dst.UpperType == Exclusive { - if err := 
dst.Upper.DecodeText(ci, []byte(utr.Upper)); err != nil { - return err - } - } - - return nil -} - -func (dst *Tstzrange) DecodeBinary(ci *ConnInfo, src []byte) error { - if src == nil { - *dst = Tstzrange{Status: Null} - return nil - } - - ubr, err := ParseUntypedBinaryRange(src) - if err != nil { - return err - } - - *dst = Tstzrange{Status: Present} - - dst.LowerType = ubr.LowerType - dst.UpperType = ubr.UpperType - - if dst.LowerType == Empty { - return nil - } - - if dst.LowerType == Inclusive || dst.LowerType == Exclusive { - if err := dst.Lower.DecodeBinary(ci, ubr.Lower); err != nil { - return err - } - } - - if dst.UpperType == Inclusive || dst.UpperType == Exclusive { - if err := dst.Upper.DecodeBinary(ci, ubr.Upper); err != nil { - return err - } - } - - return nil -} - -func (src Tstzrange) EncodeText(ci *ConnInfo, buf []byte) ([]byte, error) { - switch src.Status { - case Null: - return nil, nil - case Undefined: - return nil, errUndefined - } - - switch src.LowerType { - case Exclusive, Unbounded: - buf = append(buf, '(') - case Inclusive: - buf = append(buf, '[') - case Empty: - return append(buf, "empty"...), nil - default: - return nil, fmt.Errorf("unknown lower bound type %v", src.LowerType) - } - - var err error - - if src.LowerType != Unbounded { - buf, err = src.Lower.EncodeText(ci, buf) - if err != nil { - return nil, err - } else if buf == nil { - return nil, fmt.Errorf("Lower cannot be null unless LowerType is Unbounded") - } - } - - buf = append(buf, ',') - - if src.UpperType != Unbounded { - buf, err = src.Upper.EncodeText(ci, buf) - if err != nil { - return nil, err - } else if buf == nil { - return nil, fmt.Errorf("Upper cannot be null unless UpperType is Unbounded") - } - } - - switch src.UpperType { - case Exclusive, Unbounded: - buf = append(buf, ')') - case Inclusive: - buf = append(buf, ']') - default: - return nil, fmt.Errorf("unknown upper bound type %v", src.UpperType) - } - - return buf, nil -} - -func (src Tstzrange) 
EncodeBinary(ci *ConnInfo, buf []byte) ([]byte, error) { - switch src.Status { - case Null: - return nil, nil - case Undefined: - return nil, errUndefined - } - - var rangeType byte - switch src.LowerType { - case Inclusive: - rangeType |= lowerInclusiveMask - case Unbounded: - rangeType |= lowerUnboundedMask - case Exclusive: - case Empty: - return append(buf, emptyMask), nil - default: - return nil, fmt.Errorf("unknown LowerType: %v", src.LowerType) - } - - switch src.UpperType { - case Inclusive: - rangeType |= upperInclusiveMask - case Unbounded: - rangeType |= upperUnboundedMask - case Exclusive: - default: - return nil, fmt.Errorf("unknown UpperType: %v", src.UpperType) - } - - buf = append(buf, rangeType) - - var err error - - if src.LowerType != Unbounded { - sp := len(buf) - buf = pgio.AppendInt32(buf, -1) - - buf, err = src.Lower.EncodeBinary(ci, buf) - if err != nil { - return nil, err - } - if buf == nil { - return nil, fmt.Errorf("Lower cannot be null unless LowerType is Unbounded") - } - - pgio.SetInt32(buf[sp:], int32(len(buf[sp:])-4)) - } - - if src.UpperType != Unbounded { - sp := len(buf) - buf = pgio.AppendInt32(buf, -1) - - buf, err = src.Upper.EncodeBinary(ci, buf) - if err != nil { - return nil, err - } - if buf == nil { - return nil, fmt.Errorf("Upper cannot be null unless UpperType is Unbounded") - } - - pgio.SetInt32(buf[sp:], int32(len(buf[sp:])-4)) - } - - return buf, nil -} - -// Scan implements the database/sql Scanner interface. -func (dst *Tstzrange) Scan(src interface{}) error { - if src == nil { - *dst = Tstzrange{Status: Null} - return nil - } - - switch src := src.(type) { - case string: - return dst.DecodeText(nil, []byte(src)) - case []byte: - srcCopy := make([]byte, len(src)) - copy(srcCopy, src) - return dst.DecodeText(nil, srcCopy) - } - - return fmt.Errorf("cannot scan %T", src) -} - -// Value implements the database/sql/driver Valuer interface. 
-func (src Tstzrange) Value() (driver.Value, error) { - return EncodeValueText(src) -} diff --git a/vendor/github.com/jackc/pgtype/tstzrange_array.go b/vendor/github.com/jackc/pgtype/tstzrange_array.go deleted file mode 100644 index a216820a..00000000 --- a/vendor/github.com/jackc/pgtype/tstzrange_array.go +++ /dev/null @@ -1,470 +0,0 @@ -// Code generated by erb. DO NOT EDIT. - -package pgtype - -import ( - "database/sql/driver" - "encoding/binary" - "fmt" - "reflect" - - "github.com/jackc/pgio" -) - -type TstzrangeArray struct { - Elements []Tstzrange - Dimensions []ArrayDimension - Status Status -} - -func (dst *TstzrangeArray) Set(src interface{}) error { - // untyped nil and typed nil interfaces are different - if src == nil { - *dst = TstzrangeArray{Status: Null} - return nil - } - - if value, ok := src.(interface{ Get() interface{} }); ok { - value2 := value.Get() - if value2 != value { - return dst.Set(value2) - } - } - - // Attempt to match to select common types: - switch value := src.(type) { - - case []Tstzrange: - if value == nil { - *dst = TstzrangeArray{Status: Null} - } else if len(value) == 0 { - *dst = TstzrangeArray{Status: Present} - } else { - *dst = TstzrangeArray{ - Elements: value, - Dimensions: []ArrayDimension{{Length: int32(len(value)), LowerBound: 1}}, - Status: Present, - } - } - default: - // Fallback to reflection if an optimised match was not found. 
- // The reflection is necessary for arrays and multidimensional slices, - // but it comes with a 20-50% performance penalty for large arrays/slices - reflectedValue := reflect.ValueOf(src) - if !reflectedValue.IsValid() || reflectedValue.IsZero() { - *dst = TstzrangeArray{Status: Null} - return nil - } - - dimensions, elementsLength, ok := findDimensionsFromValue(reflectedValue, nil, 0) - if !ok { - return fmt.Errorf("cannot find dimensions of %v for TstzrangeArray", src) - } - if elementsLength == 0 { - *dst = TstzrangeArray{Status: Present} - return nil - } - if len(dimensions) == 0 { - if originalSrc, ok := underlyingSliceType(src); ok { - return dst.Set(originalSrc) - } - return fmt.Errorf("cannot convert %v to TstzrangeArray", src) - } - - *dst = TstzrangeArray{ - Elements: make([]Tstzrange, elementsLength), - Dimensions: dimensions, - Status: Present, - } - elementCount, err := dst.setRecursive(reflectedValue, 0, 0) - if err != nil { - // Maybe the target was one dimension too far, try again: - if len(dst.Dimensions) > 1 { - dst.Dimensions = dst.Dimensions[:len(dst.Dimensions)-1] - elementsLength = 0 - for _, dim := range dst.Dimensions { - if elementsLength == 0 { - elementsLength = int(dim.Length) - } else { - elementsLength *= int(dim.Length) - } - } - dst.Elements = make([]Tstzrange, elementsLength) - elementCount, err = dst.setRecursive(reflectedValue, 0, 0) - if err != nil { - return err - } - } else { - return err - } - } - if elementCount != len(dst.Elements) { - return fmt.Errorf("cannot convert %v to TstzrangeArray, expected %d dst.Elements, but got %d instead", src, len(dst.Elements), elementCount) - } - } - - return nil -} - -func (dst *TstzrangeArray) setRecursive(value reflect.Value, index, dimension int) (int, error) { - switch value.Kind() { - case reflect.Array: - fallthrough - case reflect.Slice: - if len(dst.Dimensions) == dimension { - break - } - - valueLen := value.Len() - if int32(valueLen) != dst.Dimensions[dimension].Length { - 
return 0, fmt.Errorf("multidimensional arrays must have array expressions with matching dimensions") - } - for i := 0; i < valueLen; i++ { - var err error - index, err = dst.setRecursive(value.Index(i), index, dimension+1) - if err != nil { - return 0, err - } - } - - return index, nil - } - if !value.CanInterface() { - return 0, fmt.Errorf("cannot convert all values to TstzrangeArray") - } - if err := dst.Elements[index].Set(value.Interface()); err != nil { - return 0, fmt.Errorf("%v in TstzrangeArray", err) - } - index++ - - return index, nil -} - -func (dst TstzrangeArray) Get() interface{} { - switch dst.Status { - case Present: - return dst - case Null: - return nil - default: - return dst.Status - } -} - -func (src *TstzrangeArray) AssignTo(dst interface{}) error { - switch src.Status { - case Present: - if len(src.Dimensions) <= 1 { - // Attempt to match to select common types: - switch v := dst.(type) { - - case *[]Tstzrange: - *v = make([]Tstzrange, len(src.Elements)) - for i := range src.Elements { - if err := src.Elements[i].AssignTo(&((*v)[i])); err != nil { - return err - } - } - return nil - - } - } - - // Try to convert to something AssignTo can use directly. - if nextDst, retry := GetAssignToDstType(dst); retry { - return src.AssignTo(nextDst) - } - - // Fallback to reflection if an optimised match was not found. 
- // The reflection is necessary for arrays and multidimensional slices, - // but it comes with a 20-50% performance penalty for large arrays/slices - value := reflect.ValueOf(dst) - if value.Kind() == reflect.Ptr { - value = value.Elem() - } - - switch value.Kind() { - case reflect.Array, reflect.Slice: - default: - return fmt.Errorf("cannot assign %T to %T", src, dst) - } - - if len(src.Elements) == 0 { - if value.Kind() == reflect.Slice { - value.Set(reflect.MakeSlice(value.Type(), 0, 0)) - return nil - } - } - - elementCount, err := src.assignToRecursive(value, 0, 0) - if err != nil { - return err - } - if elementCount != len(src.Elements) { - return fmt.Errorf("cannot assign %v, needed to assign %d elements, but only assigned %d", dst, len(src.Elements), elementCount) - } - - return nil - case Null: - return NullAssignTo(dst) - } - - return fmt.Errorf("cannot decode %#v into %T", src, dst) -} - -func (src *TstzrangeArray) assignToRecursive(value reflect.Value, index, dimension int) (int, error) { - switch kind := value.Kind(); kind { - case reflect.Array: - fallthrough - case reflect.Slice: - if len(src.Dimensions) == dimension { - break - } - - length := int(src.Dimensions[dimension].Length) - if reflect.Array == kind { - typ := value.Type() - if typ.Len() != length { - return 0, fmt.Errorf("expected size %d array, but %s has size %d array", length, typ, typ.Len()) - } - value.Set(reflect.New(typ).Elem()) - } else { - value.Set(reflect.MakeSlice(value.Type(), length, length)) - } - - var err error - for i := 0; i < length; i++ { - index, err = src.assignToRecursive(value.Index(i), index, dimension+1) - if err != nil { - return 0, err - } - } - - return index, nil - } - if len(src.Dimensions) != dimension { - return 0, fmt.Errorf("incorrect dimensions, expected %d, found %d", len(src.Dimensions), dimension) - } - if !value.CanAddr() { - return 0, fmt.Errorf("cannot assign all values from TstzrangeArray") - } - addr := value.Addr() - if !addr.CanInterface() { - 
return 0, fmt.Errorf("cannot assign all values from TstzrangeArray") - } - if err := src.Elements[index].AssignTo(addr.Interface()); err != nil { - return 0, err - } - index++ - return index, nil -} - -func (dst *TstzrangeArray) DecodeText(ci *ConnInfo, src []byte) error { - if src == nil { - *dst = TstzrangeArray{Status: Null} - return nil - } - - uta, err := ParseUntypedTextArray(string(src)) - if err != nil { - return err - } - - var elements []Tstzrange - - if len(uta.Elements) > 0 { - elements = make([]Tstzrange, len(uta.Elements)) - - for i, s := range uta.Elements { - var elem Tstzrange - var elemSrc []byte - if s != "NULL" || uta.Quoted[i] { - elemSrc = []byte(s) - } - err = elem.DecodeText(ci, elemSrc) - if err != nil { - return err - } - - elements[i] = elem - } - } - - *dst = TstzrangeArray{Elements: elements, Dimensions: uta.Dimensions, Status: Present} - - return nil -} - -func (dst *TstzrangeArray) DecodeBinary(ci *ConnInfo, src []byte) error { - if src == nil { - *dst = TstzrangeArray{Status: Null} - return nil - } - - var arrayHeader ArrayHeader - rp, err := arrayHeader.DecodeBinary(ci, src) - if err != nil { - return err - } - - if len(arrayHeader.Dimensions) == 0 { - *dst = TstzrangeArray{Dimensions: arrayHeader.Dimensions, Status: Present} - return nil - } - - elementCount := arrayHeader.Dimensions[0].Length - for _, d := range arrayHeader.Dimensions[1:] { - elementCount *= d.Length - } - - elements := make([]Tstzrange, elementCount) - - for i := range elements { - elemLen := int(int32(binary.BigEndian.Uint32(src[rp:]))) - rp += 4 - var elemSrc []byte - if elemLen >= 0 { - elemSrc = src[rp : rp+elemLen] - rp += elemLen - } - err = elements[i].DecodeBinary(ci, elemSrc) - if err != nil { - return err - } - } - - *dst = TstzrangeArray{Elements: elements, Dimensions: arrayHeader.Dimensions, Status: Present} - return nil -} - -func (src TstzrangeArray) EncodeText(ci *ConnInfo, buf []byte) ([]byte, error) { - switch src.Status { - case Null: - return 
nil, nil - case Undefined: - return nil, errUndefined - } - - if len(src.Dimensions) == 0 { - return append(buf, '{', '}'), nil - } - - buf = EncodeTextArrayDimensions(buf, src.Dimensions) - - // dimElemCounts is the multiples of elements that each array lies on. For - // example, a single dimension array of length 4 would have a dimElemCounts of - // [4]. A multi-dimensional array of lengths [3,5,2] would have a - // dimElemCounts of [30,10,2]. This is used to simplify when to render a '{' - // or '}'. - dimElemCounts := make([]int, len(src.Dimensions)) - dimElemCounts[len(src.Dimensions)-1] = int(src.Dimensions[len(src.Dimensions)-1].Length) - for i := len(src.Dimensions) - 2; i > -1; i-- { - dimElemCounts[i] = int(src.Dimensions[i].Length) * dimElemCounts[i+1] - } - - inElemBuf := make([]byte, 0, 32) - for i, elem := range src.Elements { - if i > 0 { - buf = append(buf, ',') - } - - for _, dec := range dimElemCounts { - if i%dec == 0 { - buf = append(buf, '{') - } - } - - elemBuf, err := elem.EncodeText(ci, inElemBuf) - if err != nil { - return nil, err - } - if elemBuf == nil { - buf = append(buf, `NULL`...) - } else { - buf = append(buf, QuoteArrayElementIfNeeded(string(elemBuf))...) 
- } - - for _, dec := range dimElemCounts { - if (i+1)%dec == 0 { - buf = append(buf, '}') - } - } - } - - return buf, nil -} - -func (src TstzrangeArray) EncodeBinary(ci *ConnInfo, buf []byte) ([]byte, error) { - switch src.Status { - case Null: - return nil, nil - case Undefined: - return nil, errUndefined - } - - arrayHeader := ArrayHeader{ - Dimensions: src.Dimensions, - } - - if dt, ok := ci.DataTypeForName("tstzrange"); ok { - arrayHeader.ElementOID = int32(dt.OID) - } else { - return nil, fmt.Errorf("unable to find oid for type name %v", "tstzrange") - } - - for i := range src.Elements { - if src.Elements[i].Status == Null { - arrayHeader.ContainsNull = true - break - } - } - - buf = arrayHeader.EncodeBinary(ci, buf) - - for i := range src.Elements { - sp := len(buf) - buf = pgio.AppendInt32(buf, -1) - - elemBuf, err := src.Elements[i].EncodeBinary(ci, buf) - if err != nil { - return nil, err - } - if elemBuf != nil { - buf = elemBuf - pgio.SetInt32(buf[sp:], int32(len(buf[sp:])-4)) - } - } - - return buf, nil -} - -// Scan implements the database/sql Scanner interface. -func (dst *TstzrangeArray) Scan(src interface{}) error { - if src == nil { - return dst.DecodeText(nil, nil) - } - - switch src := src.(type) { - case string: - return dst.DecodeText(nil, []byte(src)) - case []byte: - srcCopy := make([]byte, len(src)) - copy(srcCopy, src) - return dst.DecodeText(nil, srcCopy) - } - - return fmt.Errorf("cannot scan %T", src) -} - -// Value implements the database/sql/driver Valuer interface. 
-func (src TstzrangeArray) Value() (driver.Value, error) { - buf, err := src.EncodeText(nil, nil) - if err != nil { - return nil, err - } - if buf == nil { - return nil, nil - } - - return string(buf), nil -} diff --git a/vendor/github.com/jackc/pgtype/typed_array.go.erb b/vendor/github.com/jackc/pgtype/typed_array.go.erb deleted file mode 100644 index e8433c04..00000000 --- a/vendor/github.com/jackc/pgtype/typed_array.go.erb +++ /dev/null @@ -1,512 +0,0 @@ -// Code generated by erb. DO NOT EDIT. - -<% - # defaults when not explicitly set on command line - - binary_format ||= "true" - text_format ||= "true" - - text_null ||= "NULL" - - encode_binary ||= binary_format - decode_binary ||= binary_format -%> - -package pgtype - -import ( - "bytes" - "fmt" - "io" - - "github.com/jackc/pgio" -) - -type <%= pgtype_array_type %> struct { - Elements []<%= pgtype_element_type %> - Dimensions []ArrayDimension - Status Status -} - -func (dst *<%= pgtype_array_type %>) Set(src interface{}) error { - // untyped nil and typed nil interfaces are different - if src == nil { - *dst = <%= pgtype_array_type %>{Status: Null} - return nil - } - - if value, ok := src.(interface{ Get() interface{} }); ok { - value2 := value.Get() - if value2 != value { - return dst.Set(value2) - } - } - - // Attempt to match to select common types: - switch value := src.(type) { - <% go_array_types.split(",").each do |t| %> - <% if t != "[]#{pgtype_element_type}" %> - case <%= t %>: - if value == nil { - *dst = <%= pgtype_array_type %>{Status: Null} - } else if len(value) == 0 { - *dst = <%= pgtype_array_type %>{Status: Present} - } else { - elements := make([]<%= pgtype_element_type %>, len(value)) - for i := range value { - if err := elements[i].Set(value[i]); err != nil { - return err - } - } - *dst = <%= pgtype_array_type %>{ - Elements: elements, - Dimensions: []ArrayDimension{{Length: int32(len(elements)), LowerBound: 1}}, - Status: Present, - } - } - <% end %> - <% end %> - case []<%= 
pgtype_element_type %>: - if value == nil { - *dst = <%= pgtype_array_type %>{Status: Null} - } else if len(value) == 0 { - *dst = <%= pgtype_array_type %>{Status: Present} - } else { - *dst = <%= pgtype_array_type %>{ - Elements: value, - Dimensions: []ArrayDimension{{Length: int32(len(value)), LowerBound: 1}}, - Status : Present, - } - } - default: - // Fallback to reflection if an optimised match was not found. - // The reflection is necessary for arrays and multidimensional slices, - // but it comes with a 20-50% performance penalty for large arrays/slices - reflectedValue := reflect.ValueOf(src) - if !reflectedValue.IsValid() || reflectedValue.IsZero() { - *dst = <%= pgtype_array_type %>{Status: Null} - return nil - } - - dimensions, elementsLength, ok := findDimensionsFromValue(reflectedValue, nil, 0) - if !ok { - return fmt.Errorf("cannot find dimensions of %v for <%= pgtype_array_type %>", src) - } - if elementsLength == 0 { - *dst = <%= pgtype_array_type %>{Status: Present} - return nil - } - if len(dimensions) == 0 { - if originalSrc, ok := underlyingSliceType(src); ok { - return dst.Set(originalSrc) - } - return fmt.Errorf("cannot convert %v to <%= pgtype_array_type %>", src) - } - - *dst = <%= pgtype_array_type %> { - Elements: make([]<%= pgtype_element_type %>, elementsLength), - Dimensions: dimensions, - Status: Present, - } - elementCount, err := dst.setRecursive(reflectedValue, 0, 0) - if err != nil { - // Maybe the target was one dimension too far, try again: - if len(dst.Dimensions) > 1 { - dst.Dimensions = dst.Dimensions[:len(dst.Dimensions)-1] - elementsLength = 0 - for _, dim := range dst.Dimensions { - if elementsLength == 0 { - elementsLength = int(dim.Length) - } else { - elementsLength *= int(dim.Length) - } - } - dst.Elements = make([]<%= pgtype_element_type %>, elementsLength) - elementCount, err = dst.setRecursive(reflectedValue, 0, 0) - if err != nil { - return err - } - } else { - return err - } - } - if elementCount != 
len(dst.Elements) { - return fmt.Errorf("cannot convert %v to <%= pgtype_array_type %>, expected %d dst.Elements, but got %d instead", src, len(dst.Elements), elementCount) - } - } - - return nil -} - -func (dst *<%= pgtype_array_type %>) setRecursive(value reflect.Value, index, dimension int) (int, error) { - switch value.Kind() { - case reflect.Array: - fallthrough - case reflect.Slice: - if len(dst.Dimensions) == dimension { - break - } - - valueLen := value.Len() - if int32(valueLen) != dst.Dimensions[dimension].Length { - return 0, fmt.Errorf("multidimensional arrays must have array expressions with matching dimensions") - } - for i := 0; i < valueLen; i++ { - var err error - index, err = dst.setRecursive(value.Index(i), index, dimension+1) - if err != nil { - return 0, err - } - } - - return index, nil - } - if !value.CanInterface() { - return 0, fmt.Errorf("cannot convert all values to <%= pgtype_array_type %>") - } - if err := dst.Elements[index].Set(value.Interface()); err != nil { - return 0, fmt.Errorf("%v in <%= pgtype_array_type %>", err) - } - index++ - - return index, nil -} - -func (dst <%= pgtype_array_type %>) Get() interface{} { - switch dst.Status { - case Present: - return dst - case Null: - return nil - default: - return dst.Status - } -} - -func (src *<%= pgtype_array_type %>) AssignTo(dst interface{}) error { - switch src.Status { - case Present: - if len(src.Dimensions) <= 1{ - // Attempt to match to select common types: - switch v := dst.(type) { - <% go_array_types.split(",").each do |t| %> - case *<%= t %>: - *v = make(<%= t %>, len(src.Elements)) - for i := range src.Elements { - if err := src.Elements[i].AssignTo(&((*v)[i])); err != nil { - return err - } - } - return nil - <% end %> - } - } - - // Try to convert to something AssignTo can use directly. - if nextDst, retry := GetAssignToDstType(dst); retry { - return src.AssignTo(nextDst) - } - - // Fallback to reflection if an optimised match was not found. 
- // The reflection is necessary for arrays and multidimensional slices, - // but it comes with a 20-50% performance penalty for large arrays/slices - value := reflect.ValueOf(dst) - if value.Kind() == reflect.Ptr { - value = value.Elem() - } - - switch value.Kind() { - case reflect.Array, reflect.Slice: - default: - return fmt.Errorf("cannot assign %T to %T", src, dst) - } - - if len(src.Elements) == 0 { - if value.Kind() == reflect.Slice { - value.Set(reflect.MakeSlice(value.Type(), 0, 0)) - return nil - } - } - - elementCount, err := src.assignToRecursive(value, 0, 0) - if err != nil { - return err - } - if elementCount != len(src.Elements) { - return fmt.Errorf("cannot assign %v, needed to assign %d elements, but only assigned %d", dst, len(src.Elements), elementCount) - } - - return nil - case Null: - return NullAssignTo(dst) - } - - return fmt.Errorf("cannot decode %#v into %T", src, dst) -} - -func (src *<%= pgtype_array_type %>) assignToRecursive(value reflect.Value, index, dimension int) (int, error) { - switch kind := value.Kind(); kind { - case reflect.Array: - fallthrough - case reflect.Slice: - if len(src.Dimensions) == dimension { - break - } - - length := int(src.Dimensions[dimension].Length) - if reflect.Array == kind { - typ := value.Type() - if typ.Len() != length { - return 0, fmt.Errorf("expected size %d array, but %s has size %d array", length, typ, typ.Len()) - } - value.Set(reflect.New(typ).Elem()) - } else { - value.Set(reflect.MakeSlice(value.Type(), length, length)) - } - - var err error - for i := 0; i < length; i++ { - index, err = src.assignToRecursive(value.Index(i), index, dimension+1) - if err != nil { - return 0, err - } - } - - return index, nil - } - if len(src.Dimensions) != dimension { - return 0, fmt.Errorf("incorrect dimensions, expected %d, found %d", len(src.Dimensions), dimension) - } - if !value.CanAddr(){ - return 0, fmt.Errorf("cannot assign all values from <%= pgtype_array_type %>") - } - addr := value.Addr() - if 
!addr.CanInterface() { - return 0, fmt.Errorf("cannot assign all values from <%= pgtype_array_type %>") - } - if err := src.Elements[index].AssignTo(addr.Interface()); err != nil { - return 0, err - } - index++ - return index, nil -} - -<% if text_format == "true" %> -func (dst *<%= pgtype_array_type %>) DecodeText(ci *ConnInfo, src []byte) error { - if src == nil { - *dst = <%= pgtype_array_type %>{Status: Null} - return nil - } - - uta, err := ParseUntypedTextArray(string(src)) - if err != nil { - return err - } - - var elements []<%= pgtype_element_type %> - - if len(uta.Elements) > 0 { - elements = make([]<%= pgtype_element_type %>, len(uta.Elements)) - - for i, s := range uta.Elements { - var elem <%= pgtype_element_type %> - var elemSrc []byte - if s != "NULL" || uta.Quoted[i] { - elemSrc = []byte(s) - } - err = elem.DecodeText(ci, elemSrc) - if err != nil { - return err - } - - elements[i] = elem - } - } - - *dst = <%= pgtype_array_type %>{Elements: elements, Dimensions: uta.Dimensions, Status: Present} - - return nil -} -<% end %> - -<% if decode_binary == "true" %> -func (dst *<%= pgtype_array_type %>) DecodeBinary(ci *ConnInfo, src []byte) error { - if src == nil { - *dst = <%= pgtype_array_type %>{Status: Null} - return nil - } - - var arrayHeader ArrayHeader - rp, err := arrayHeader.DecodeBinary(ci, src) - if err != nil { - return err - } - - if len(arrayHeader.Dimensions) == 0 { - *dst = <%= pgtype_array_type %>{Dimensions: arrayHeader.Dimensions, Status: Present} - return nil - } - - elementCount := arrayHeader.Dimensions[0].Length - for _, d := range arrayHeader.Dimensions[1:] { - elementCount *= d.Length - } - - elements := make([]<%= pgtype_element_type %>, elementCount) - - for i := range elements { - elemLen := int(int32(binary.BigEndian.Uint32(src[rp:]))) - rp += 4 - var elemSrc []byte - if elemLen >= 0 { - elemSrc = src[rp:rp+elemLen] - rp += elemLen - } - err = elements[i].DecodeBinary(ci, elemSrc) - if err != nil { - return err - } - } - - 
*dst = <%= pgtype_array_type %>{Elements: elements, Dimensions: arrayHeader.Dimensions, Status: Present} - return nil -} -<% end %> - -<% if text_format == "true" %> -func (src <%= pgtype_array_type %>) EncodeText(ci *ConnInfo, buf []byte) ([]byte, error) { - switch src.Status { - case Null: - return nil, nil - case Undefined: - return nil, errUndefined - } - - if len(src.Dimensions) == 0 { - return append(buf, '{', '}'), nil - } - - buf = EncodeTextArrayDimensions(buf, src.Dimensions) - - // dimElemCounts is the multiples of elements that each array lies on. For - // example, a single dimension array of length 4 would have a dimElemCounts of - // [4]. A multi-dimensional array of lengths [3,5,2] would have a - // dimElemCounts of [30,10,2]. This is used to simplify when to render a '{' - // or '}'. - dimElemCounts := make([]int, len(src.Dimensions)) - dimElemCounts[len(src.Dimensions)-1] = int(src.Dimensions[len(src.Dimensions)-1].Length) - for i := len(src.Dimensions) - 2; i > -1; i-- { - dimElemCounts[i] = int(src.Dimensions[i].Length) * dimElemCounts[i+1] - } - - inElemBuf := make([]byte, 0, 32) - for i, elem := range src.Elements { - if i > 0 { - buf = append(buf, ',') - } - - for _, dec := range dimElemCounts { - if i%dec == 0 { - buf = append(buf, '{') - } - } - - elemBuf, err := elem.EncodeText(ci, inElemBuf) - if err != nil { - return nil, err - } - if elemBuf == nil { - buf = append(buf, `<%= text_null %>`...) - } else { - buf = append(buf, QuoteArrayElementIfNeeded(string(elemBuf))...) 
- } - - for _, dec := range dimElemCounts { - if (i+1)%dec == 0 { - buf = append(buf, '}') - } - } - } - - return buf, nil -} -<% end %> - -<% if encode_binary == "true" %> - func (src <%= pgtype_array_type %>) EncodeBinary(ci *ConnInfo, buf []byte) ([]byte, error) { - switch src.Status { - case Null: - return nil, nil - case Undefined: - return nil, errUndefined - } - - arrayHeader := ArrayHeader{ - Dimensions: src.Dimensions, - } - - if dt, ok := ci.DataTypeForName("<%= element_type_name %>"); ok { - arrayHeader.ElementOID = int32(dt.OID) - } else { - return nil, fmt.Errorf("unable to find oid for type name %v", "<%= element_type_name %>") - } - - for i := range src.Elements { - if src.Elements[i].Status == Null { - arrayHeader.ContainsNull = true - break - } - } - - buf = arrayHeader.EncodeBinary(ci, buf) - - for i := range src.Elements { - sp := len(buf) - buf = pgio.AppendInt32(buf, -1) - - elemBuf, err := src.Elements[i].EncodeBinary(ci, buf) - if err != nil { - return nil, err - } - if elemBuf != nil { - buf = elemBuf - pgio.SetInt32(buf[sp:], int32(len(buf[sp:])-4)) - } - } - - return buf, nil - } -<% end %> - -<% if text_format == "true" %> -// Scan implements the database/sql Scanner interface. -func (dst *<%= pgtype_array_type %>) Scan(src interface{}) error { - if src == nil { - return dst.DecodeText(nil, nil) - } - - switch src := src.(type) { - case string: - return dst.DecodeText(nil, []byte(src)) - case []byte: - srcCopy := make([]byte, len(src)) - copy(srcCopy, src) - return dst.DecodeText(nil, srcCopy) - } - - return fmt.Errorf("cannot scan %T", src) -} - -// Value implements the database/sql/driver Valuer interface. 
-func (src <%= pgtype_array_type %>) Value() (driver.Value, error) { - buf, err := src.EncodeText(nil, nil) - if err != nil { - return nil, err - } - if buf == nil { - return nil, nil - } - - return string(buf), nil -} -<% end %> diff --git a/vendor/github.com/jackc/pgtype/typed_multirange.go.erb b/vendor/github.com/jackc/pgtype/typed_multirange.go.erb deleted file mode 100644 index 84c8299f..00000000 --- a/vendor/github.com/jackc/pgtype/typed_multirange.go.erb +++ /dev/null @@ -1,239 +0,0 @@ -package pgtype - -import ( - "database/sql/driver" - "encoding/binary" - "fmt" - - "github.com/jackc/pgio" -) - -type <%= multirange_type %> struct { - Ranges []<%= range_type %> - Status Status -} - -func (dst *<%= multirange_type %>) Set(src interface{}) error { - //untyped nil and typed nil interfaces are different - if src == nil { - *dst = <%= multirange_type %>{Status: Null} - return nil - } - - switch value := src.(type) { - case <%= multirange_type %>: - *dst = value - case *<%= multirange_type %>: - *dst = *value - case string: - return dst.DecodeText(nil, []byte(value)) - case []<%= range_type %>: - if value == nil { - *dst = <%= multirange_type %>{Status: Null} - } else if len(value) == 0 { - *dst = <%= multirange_type %>{Status: Present} - } else { - elements := make([]<%= range_type %>, len(value)) - for i := range value { - if err := elements[i].Set(value[i]); err != nil { - return err - } - } - *dst = <%= multirange_type %>{ - Ranges: elements, - Status: Present, - } - } - case []*<%= range_type %>: - if value == nil { - *dst = <%= multirange_type %>{Status: Null} - } else if len(value) == 0 { - *dst = <%= multirange_type %>{Status: Present} - } else { - elements := make([]<%= range_type %>, len(value)) - for i := range value { - if err := elements[i].Set(value[i]); err != nil { - return err - } - } - *dst = <%= multirange_type %>{ - Ranges: elements, - Status: Present, - } - } - default: - return fmt.Errorf("cannot convert %v to <%= multirange_type %>", src) - 
} - - return nil - -} - -func (dst <%= multirange_type %>) Get() interface{} { - switch dst.Status { - case Present: - return dst - case Null: - return nil - default: - return dst.Status - } -} - -func (src *<%= multirange_type %>) AssignTo(dst interface{}) error { - return fmt.Errorf("cannot assign %v to %T", src, dst) -} - -func (dst *<%= multirange_type %>) DecodeText(ci *ConnInfo, src []byte) error { - if src == nil { - *dst = <%= multirange_type %>{Status: Null} - return nil - } - - utmr, err := ParseUntypedTextMultirange(string(src)) - if err != nil { - return err - } - - var elements []<%= range_type %> - - if len(utmr.Elements) > 0 { - elements = make([]<%= range_type %>, len(utmr.Elements)) - - for i, s := range utmr.Elements { - var elem <%= range_type %> - - elemSrc := []byte(s) - - err = elem.DecodeText(ci, elemSrc) - if err != nil { - return err - } - - elements[i] = elem - } - } - - *dst = <%= multirange_type %>{Ranges: elements, Status: Present} - - return nil -} - -func (dst *<%= multirange_type %>) DecodeBinary(ci *ConnInfo, src []byte) error { - if src == nil { - *dst = <%= multirange_type %>{Status: Null} - return nil - } - - rp := 0 - - numElems := int(binary.BigEndian.Uint32(src[rp:])) - rp += 4 - - if numElems == 0 { - *dst = <%= multirange_type %>{Status: Present} - return nil - } - - elements := make([]<%= range_type %>, numElems) - - for i := range elements { - elemLen := int(int32(binary.BigEndian.Uint32(src[rp:]))) - rp += 4 - var elemSrc []byte - if elemLen >= 0 { - elemSrc = src[rp : rp+elemLen] - rp += elemLen - } - err := elements[i].DecodeBinary(ci, elemSrc) - if err != nil { - return err - } - } - - *dst = <%= multirange_type %>{Ranges: elements, Status: Present} - return nil -} - -func (src <%= multirange_type %>) EncodeText(ci *ConnInfo, buf []byte) ([]byte, error) { - switch src.Status { - case Null: - return nil, nil - case Undefined: - return nil, errUndefined - } - - buf = append(buf, '{') - - inElemBuf := make([]byte, 0, 32) 
- for i, elem := range src.Ranges { - if i > 0 { - buf = append(buf, ',') - } - - elemBuf, err := elem.EncodeText(ci, inElemBuf) - if err != nil { - return nil, err - } - if elemBuf == nil { - return nil, fmt.Errorf("multi-range does not allow null range") - } else { - buf = append(buf, string(elemBuf)...) - } - - } - - buf = append(buf, '}') - - return buf, nil -} - -func (src <%= multirange_type %>) EncodeBinary(ci *ConnInfo, buf []byte) ([]byte, error) { - switch src.Status { - case Null: - return nil, nil - case Undefined: - return nil, errUndefined - } - - buf = pgio.AppendInt32(buf, int32(len(src.Ranges))) - - for i := range src.Ranges { - sp := len(buf) - buf = pgio.AppendInt32(buf, -1) - - elemBuf, err := src.Ranges[i].EncodeBinary(ci, buf) - if err != nil { - return nil, err - } - if elemBuf != nil { - buf = elemBuf - pgio.SetInt32(buf[sp:], int32(len(buf[sp:])-4)) - } - } - - return buf, nil -} - -// Scan implements the database/sql Scanner interface. -func (dst *<%= multirange_type %>) Scan(src interface{}) error { - if src == nil { - return dst.DecodeText(nil, nil) - } - - switch src := src.(type) { - case string: - return dst.DecodeText(nil, []byte(src)) - case []byte: - srcCopy := make([]byte, len(src)) - copy(srcCopy, src) - return dst.DecodeText(nil, srcCopy) - } - - return fmt.Errorf("cannot scan %T", src) -} - -// Value implements the database/sql/driver Valuer interface. 
-func (src <%= multirange_type %>) Value() (driver.Value, error) { - return EncodeValueText(src) -} diff --git a/vendor/github.com/jackc/pgtype/typed_range.go.erb b/vendor/github.com/jackc/pgtype/typed_range.go.erb deleted file mode 100644 index 5625587a..00000000 --- a/vendor/github.com/jackc/pgtype/typed_range.go.erb +++ /dev/null @@ -1,269 +0,0 @@ -package pgtype - -import ( - "bytes" - "database/sql/driver" - "fmt" - "io" - - "github.com/jackc/pgio" -) - -type <%= range_type %> struct { - Lower <%= element_type %> - Upper <%= element_type %> - LowerType BoundType - UpperType BoundType - Status Status -} - -func (dst *<%= range_type %>) Set(src interface{}) error { - // untyped nil and typed nil interfaces are different - if src == nil { - *dst = <%= range_type %>{Status: Null} - return nil - } - - switch value := src.(type) { - case <%= range_type %>: - *dst = value - case *<%= range_type %>: - *dst = *value - case string: - return dst.DecodeText(nil, []byte(value)) - default: - return fmt.Errorf("cannot convert %v to <%= range_type %>", src) - } - - return nil -} - -func (dst <%= range_type %>) Get() interface{} { - switch dst.Status { - case Present: - return dst - case Null: - return nil - default: - return dst.Status - } -} - -func (src *<%= range_type %>) AssignTo(dst interface{}) error { - return fmt.Errorf("cannot assign %v to %T", src, dst) -} - -func (dst *<%= range_type %>) DecodeText(ci *ConnInfo, src []byte) error { - if src == nil { - *dst = <%= range_type %>{Status: Null} - return nil - } - - utr, err := ParseUntypedTextRange(string(src)) - if err != nil { - return err - } - - *dst = <%= range_type %>{Status: Present} - - dst.LowerType = utr.LowerType - dst.UpperType = utr.UpperType - - if dst.LowerType == Empty { - return nil - } - - if dst.LowerType == Inclusive || dst.LowerType == Exclusive { - if err := dst.Lower.DecodeText(ci, []byte(utr.Lower)); err != nil { - return err - } - } - - if dst.UpperType == Inclusive || dst.UpperType == Exclusive 
{ - if err := dst.Upper.DecodeText(ci, []byte(utr.Upper)); err != nil { - return err - } - } - - return nil -} - -func (dst *<%= range_type %>) DecodeBinary(ci *ConnInfo, src []byte) error { - if src == nil { - *dst = <%= range_type %>{Status: Null} - return nil - } - - ubr, err := ParseUntypedBinaryRange(src) - if err != nil { - return err - } - - *dst = <%= range_type %>{Status: Present} - - dst.LowerType = ubr.LowerType - dst.UpperType = ubr.UpperType - - if dst.LowerType == Empty { - return nil - } - - if dst.LowerType == Inclusive || dst.LowerType == Exclusive { - if err := dst.Lower.DecodeBinary(ci, ubr.Lower); err != nil { - return err - } - } - - if dst.UpperType == Inclusive || dst.UpperType == Exclusive { - if err := dst.Upper.DecodeBinary(ci, ubr.Upper); err != nil { - return err - } - } - - return nil -} - -func (src <%= range_type %>) EncodeText(ci *ConnInfo, buf []byte) ([]byte, error) { - switch src.Status { - case Null: - return nil, nil - case Undefined: - return nil, errUndefined - } - - switch src.LowerType { - case Exclusive, Unbounded: - buf = append(buf, '(') - case Inclusive: - buf = append(buf, '[') - case Empty: - return append(buf, "empty"...), nil - default: - return nil, fmt.Errorf("unknown lower bound type %v", src.LowerType) - } - - var err error - - if src.LowerType != Unbounded { - buf, err = src.Lower.EncodeText(ci, buf) - if err != nil { - return nil, err - } else if buf == nil { - return nil, fmt.Errorf("Lower cannot be null unless LowerType is Unbounded") - } - } - - buf = append(buf, ',') - - if src.UpperType != Unbounded { - buf, err = src.Upper.EncodeText(ci, buf) - if err != nil { - return nil, err - } else if buf == nil { - return nil, fmt.Errorf("Upper cannot be null unless UpperType is Unbounded") - } - } - - switch src.UpperType { - case Exclusive, Unbounded: - buf = append(buf, ')') - case Inclusive: - buf = append(buf, ']') - default: - return nil, fmt.Errorf("unknown upper bound type %v", src.UpperType) - } - - return 
buf, nil -} - -func (src <%= range_type %>) EncodeBinary(ci *ConnInfo, buf []byte) ([]byte, error) { - switch src.Status { - case Null: - return nil, nil - case Undefined: - return nil, errUndefined - } - - var rangeType byte - switch src.LowerType { - case Inclusive: - rangeType |= lowerInclusiveMask - case Unbounded: - rangeType |= lowerUnboundedMask - case Exclusive: - case Empty: - return append(buf, emptyMask), nil - default: - return nil, fmt.Errorf("unknown LowerType: %v", src.LowerType) - } - - switch src.UpperType { - case Inclusive: - rangeType |= upperInclusiveMask - case Unbounded: - rangeType |= upperUnboundedMask - case Exclusive: - default: - return nil, fmt.Errorf("unknown UpperType: %v", src.UpperType) - } - - buf = append(buf, rangeType) - - var err error - - if src.LowerType != Unbounded { - sp := len(buf) - buf = pgio.AppendInt32(buf, -1) - - buf, err = src.Lower.EncodeBinary(ci, buf) - if err != nil { - return nil, err - } - if buf == nil { - return nil, fmt.Errorf("Lower cannot be null unless LowerType is Unbounded") - } - - pgio.SetInt32(buf[sp:], int32(len(buf[sp:])-4)) - } - - if src.UpperType != Unbounded { - sp := len(buf) - buf = pgio.AppendInt32(buf, -1) - - buf, err = src.Upper.EncodeBinary(ci, buf) - if err != nil { - return nil, err - } - if buf == nil { - return nil, fmt.Errorf("Upper cannot be null unless UpperType is Unbounded") - } - - pgio.SetInt32(buf[sp:], int32(len(buf[sp:])-4)) - } - - return buf, nil -} - -// Scan implements the database/sql Scanner interface. -func (dst *<%= range_type %>) Scan(src interface{}) error { - if src == nil { - *dst = <%= range_type %>{Status: Null} - return nil - } - - switch src := src.(type) { - case string: - return dst.DecodeText(nil, []byte(src)) - case []byte: - srcCopy := make([]byte, len(src)) - copy(srcCopy, src) - return dst.DecodeText(nil, srcCopy) - } - - return fmt.Errorf("cannot scan %T", src) -} - -// Value implements the database/sql/driver Valuer interface. 
-func (src <%= range_type %>) Value() (driver.Value, error) { - return EncodeValueText(src) -} diff --git a/vendor/github.com/jackc/pgtype/unknown.go b/vendor/github.com/jackc/pgtype/unknown.go deleted file mode 100644 index c591b708..00000000 --- a/vendor/github.com/jackc/pgtype/unknown.go +++ /dev/null @@ -1,44 +0,0 @@ -package pgtype - -import "database/sql/driver" - -// Unknown represents the PostgreSQL unknown type. It is either a string literal -// or NULL. It is used when PostgreSQL does not know the type of a value. In -// general, this will only be used in pgx when selecting a null value without -// type information. e.g. SELECT NULL; -type Unknown struct { - String string - Status Status -} - -func (dst *Unknown) Set(src interface{}) error { - return (*Text)(dst).Set(src) -} - -func (dst Unknown) Get() interface{} { - return (Text)(dst).Get() -} - -// AssignTo assigns from src to dst. Note that as Unknown is not a general number -// type AssignTo does not do automatic type conversion as other number types do. -func (src *Unknown) AssignTo(dst interface{}) error { - return (*Text)(src).AssignTo(dst) -} - -func (dst *Unknown) DecodeText(ci *ConnInfo, src []byte) error { - return (*Text)(dst).DecodeText(ci, src) -} - -func (dst *Unknown) DecodeBinary(ci *ConnInfo, src []byte) error { - return (*Text)(dst).DecodeBinary(ci, src) -} - -// Scan implements the database/sql Scanner interface. -func (dst *Unknown) Scan(src interface{}) error { - return (*Text)(dst).Scan(src) -} - -// Value implements the database/sql/driver Valuer interface. 
-func (src Unknown) Value() (driver.Value, error) { - return (Text)(src).Value() -} diff --git a/vendor/github.com/jackc/pgtype/uuid.go b/vendor/github.com/jackc/pgtype/uuid.go deleted file mode 100644 index 6839c052..00000000 --- a/vendor/github.com/jackc/pgtype/uuid.go +++ /dev/null @@ -1,231 +0,0 @@ -package pgtype - -import ( - "bytes" - "database/sql/driver" - "encoding/hex" - "fmt" -) - -type UUID struct { - Bytes [16]byte - Status Status -} - -func (dst *UUID) Set(src interface{}) error { - if src == nil { - *dst = UUID{Status: Null} - return nil - } - - switch value := src.(type) { - case interface{ Get() interface{} }: - value2 := value.Get() - if value2 != value { - return dst.Set(value2) - } - case fmt.Stringer: - value2 := value.String() - return dst.Set(value2) - case [16]byte: - *dst = UUID{Bytes: value, Status: Present} - case []byte: - if value != nil { - if len(value) != 16 { - return fmt.Errorf("[]byte must be 16 bytes to convert to UUID: %d", len(value)) - } - *dst = UUID{Status: Present} - copy(dst.Bytes[:], value) - } else { - *dst = UUID{Status: Null} - } - case string: - uuid, err := parseUUID(value) - if err != nil { - return err - } - *dst = UUID{Bytes: uuid, Status: Present} - case *string: - if value == nil { - *dst = UUID{Status: Null} - } else { - return dst.Set(*value) - } - default: - if originalSrc, ok := underlyingUUIDType(src); ok { - return dst.Set(originalSrc) - } - return fmt.Errorf("cannot convert %v to UUID", value) - } - - return nil -} - -func (dst UUID) Get() interface{} { - switch dst.Status { - case Present: - return dst.Bytes - case Null: - return nil - default: - return dst.Status - } -} - -func (src *UUID) AssignTo(dst interface{}) error { - switch src.Status { - case Present: - switch v := dst.(type) { - case *[16]byte: - *v = src.Bytes - return nil - case *[]byte: - *v = make([]byte, 16) - copy(*v, src.Bytes[:]) - return nil - case *string: - *v = encodeUUID(src.Bytes) - return nil - default: - if nextDst, retry := 
GetAssignToDstType(v); retry { - return src.AssignTo(nextDst) - } - } - case Null: - return NullAssignTo(dst) - } - - return fmt.Errorf("cannot assign %v into %T", src, dst) -} - -// parseUUID converts a string UUID in standard form to a byte array. -func parseUUID(src string) (dst [16]byte, err error) { - switch len(src) { - case 36: - src = src[0:8] + src[9:13] + src[14:18] + src[19:23] + src[24:] - case 32: - // dashes already stripped, assume valid - default: - // assume invalid. - return dst, fmt.Errorf("cannot parse UUID %v", src) - } - - buf, err := hex.DecodeString(src) - if err != nil { - return dst, err - } - - copy(dst[:], buf) - return dst, err -} - -// encodeUUID converts a uuid byte array to UUID standard string form. -func encodeUUID(src [16]byte) string { - return fmt.Sprintf("%x-%x-%x-%x-%x", src[0:4], src[4:6], src[6:8], src[8:10], src[10:16]) -} - -func (dst *UUID) DecodeText(ci *ConnInfo, src []byte) error { - if src == nil { - *dst = UUID{Status: Null} - return nil - } - - if len(src) != 36 { - return fmt.Errorf("invalid length for UUID: %v", len(src)) - } - - buf, err := parseUUID(string(src)) - if err != nil { - return err - } - - *dst = UUID{Bytes: buf, Status: Present} - return nil -} - -func (dst *UUID) DecodeBinary(ci *ConnInfo, src []byte) error { - if src == nil { - *dst = UUID{Status: Null} - return nil - } - - if len(src) != 16 { - return fmt.Errorf("invalid length for UUID: %v", len(src)) - } - - *dst = UUID{Status: Present} - copy(dst.Bytes[:], src) - return nil -} - -func (src UUID) EncodeText(ci *ConnInfo, buf []byte) ([]byte, error) { - switch src.Status { - case Null: - return nil, nil - case Undefined: - return nil, errUndefined - } - - return append(buf, encodeUUID(src.Bytes)...), nil -} - -func (src UUID) EncodeBinary(ci *ConnInfo, buf []byte) ([]byte, error) { - switch src.Status { - case Null: - return nil, nil - case Undefined: - return nil, errUndefined - } - - return append(buf, src.Bytes[:]...), nil -} - -// Scan 
implements the database/sql Scanner interface. -func (dst *UUID) Scan(src interface{}) error { - if src == nil { - *dst = UUID{Status: Null} - return nil - } - - switch src := src.(type) { - case string: - return dst.DecodeText(nil, []byte(src)) - case []byte: - srcCopy := make([]byte, len(src)) - copy(srcCopy, src) - return dst.DecodeText(nil, srcCopy) - } - - return fmt.Errorf("cannot scan %T", src) -} - -// Value implements the database/sql/driver Valuer interface. -func (src UUID) Value() (driver.Value, error) { - return EncodeValueText(src) -} - -func (src UUID) MarshalJSON() ([]byte, error) { - switch src.Status { - case Present: - var buff bytes.Buffer - buff.WriteByte('"') - buff.WriteString(encodeUUID(src.Bytes)) - buff.WriteByte('"') - return buff.Bytes(), nil - case Null: - return []byte("null"), nil - case Undefined: - return nil, errUndefined - } - return nil, errBadStatus -} - -func (dst *UUID) UnmarshalJSON(src []byte) error { - if bytes.Compare(src, []byte("null")) == 0 { - return dst.Set(nil) - } - if len(src) != 38 { - return fmt.Errorf("invalid length for UUID: %v", len(src)) - } - return dst.Set(string(src[1 : len(src)-1])) -} diff --git a/vendor/github.com/jackc/pgtype/uuid_array.go b/vendor/github.com/jackc/pgtype/uuid_array.go deleted file mode 100644 index 00721ef9..00000000 --- a/vendor/github.com/jackc/pgtype/uuid_array.go +++ /dev/null @@ -1,573 +0,0 @@ -// Code generated by erb. DO NOT EDIT. 
- -package pgtype - -import ( - "database/sql/driver" - "encoding/binary" - "fmt" - "reflect" - - "github.com/jackc/pgio" -) - -type UUIDArray struct { - Elements []UUID - Dimensions []ArrayDimension - Status Status -} - -func (dst *UUIDArray) Set(src interface{}) error { - // untyped nil and typed nil interfaces are different - if src == nil { - *dst = UUIDArray{Status: Null} - return nil - } - - if value, ok := src.(interface{ Get() interface{} }); ok { - value2 := value.Get() - if value2 != value { - return dst.Set(value2) - } - } - - // Attempt to match to select common types: - switch value := src.(type) { - - case [][16]byte: - if value == nil { - *dst = UUIDArray{Status: Null} - } else if len(value) == 0 { - *dst = UUIDArray{Status: Present} - } else { - elements := make([]UUID, len(value)) - for i := range value { - if err := elements[i].Set(value[i]); err != nil { - return err - } - } - *dst = UUIDArray{ - Elements: elements, - Dimensions: []ArrayDimension{{Length: int32(len(elements)), LowerBound: 1}}, - Status: Present, - } - } - - case [][]byte: - if value == nil { - *dst = UUIDArray{Status: Null} - } else if len(value) == 0 { - *dst = UUIDArray{Status: Present} - } else { - elements := make([]UUID, len(value)) - for i := range value { - if err := elements[i].Set(value[i]); err != nil { - return err - } - } - *dst = UUIDArray{ - Elements: elements, - Dimensions: []ArrayDimension{{Length: int32(len(elements)), LowerBound: 1}}, - Status: Present, - } - } - - case []string: - if value == nil { - *dst = UUIDArray{Status: Null} - } else if len(value) == 0 { - *dst = UUIDArray{Status: Present} - } else { - elements := make([]UUID, len(value)) - for i := range value { - if err := elements[i].Set(value[i]); err != nil { - return err - } - } - *dst = UUIDArray{ - Elements: elements, - Dimensions: []ArrayDimension{{Length: int32(len(elements)), LowerBound: 1}}, - Status: Present, - } - } - - case []*string: - if value == nil { - *dst = UUIDArray{Status: Null} - } 
else if len(value) == 0 { - *dst = UUIDArray{Status: Present} - } else { - elements := make([]UUID, len(value)) - for i := range value { - if err := elements[i].Set(value[i]); err != nil { - return err - } - } - *dst = UUIDArray{ - Elements: elements, - Dimensions: []ArrayDimension{{Length: int32(len(elements)), LowerBound: 1}}, - Status: Present, - } - } - - case []UUID: - if value == nil { - *dst = UUIDArray{Status: Null} - } else if len(value) == 0 { - *dst = UUIDArray{Status: Present} - } else { - *dst = UUIDArray{ - Elements: value, - Dimensions: []ArrayDimension{{Length: int32(len(value)), LowerBound: 1}}, - Status: Present, - } - } - default: - // Fallback to reflection if an optimised match was not found. - // The reflection is necessary for arrays and multidimensional slices, - // but it comes with a 20-50% performance penalty for large arrays/slices - reflectedValue := reflect.ValueOf(src) - if !reflectedValue.IsValid() || reflectedValue.IsZero() { - *dst = UUIDArray{Status: Null} - return nil - } - - dimensions, elementsLength, ok := findDimensionsFromValue(reflectedValue, nil, 0) - if !ok { - return fmt.Errorf("cannot find dimensions of %v for UUIDArray", src) - } - if elementsLength == 0 { - *dst = UUIDArray{Status: Present} - return nil - } - if len(dimensions) == 0 { - if originalSrc, ok := underlyingSliceType(src); ok { - return dst.Set(originalSrc) - } - return fmt.Errorf("cannot convert %v to UUIDArray", src) - } - - *dst = UUIDArray{ - Elements: make([]UUID, elementsLength), - Dimensions: dimensions, - Status: Present, - } - elementCount, err := dst.setRecursive(reflectedValue, 0, 0) - if err != nil { - // Maybe the target was one dimension too far, try again: - if len(dst.Dimensions) > 1 { - dst.Dimensions = dst.Dimensions[:len(dst.Dimensions)-1] - elementsLength = 0 - for _, dim := range dst.Dimensions { - if elementsLength == 0 { - elementsLength = int(dim.Length) - } else { - elementsLength *= int(dim.Length) - } - } - dst.Elements = 
make([]UUID, elementsLength) - elementCount, err = dst.setRecursive(reflectedValue, 0, 0) - if err != nil { - return err - } - } else { - return err - } - } - if elementCount != len(dst.Elements) { - return fmt.Errorf("cannot convert %v to UUIDArray, expected %d dst.Elements, but got %d instead", src, len(dst.Elements), elementCount) - } - } - - return nil -} - -func (dst *UUIDArray) setRecursive(value reflect.Value, index, dimension int) (int, error) { - switch value.Kind() { - case reflect.Array: - fallthrough - case reflect.Slice: - if len(dst.Dimensions) == dimension { - break - } - - valueLen := value.Len() - if int32(valueLen) != dst.Dimensions[dimension].Length { - return 0, fmt.Errorf("multidimensional arrays must have array expressions with matching dimensions") - } - for i := 0; i < valueLen; i++ { - var err error - index, err = dst.setRecursive(value.Index(i), index, dimension+1) - if err != nil { - return 0, err - } - } - - return index, nil - } - if !value.CanInterface() { - return 0, fmt.Errorf("cannot convert all values to UUIDArray") - } - if err := dst.Elements[index].Set(value.Interface()); err != nil { - return 0, fmt.Errorf("%v in UUIDArray", err) - } - index++ - - return index, nil -} - -func (dst UUIDArray) Get() interface{} { - switch dst.Status { - case Present: - return dst - case Null: - return nil - default: - return dst.Status - } -} - -func (src *UUIDArray) AssignTo(dst interface{}) error { - switch src.Status { - case Present: - if len(src.Dimensions) <= 1 { - // Attempt to match to select common types: - switch v := dst.(type) { - - case *[][16]byte: - *v = make([][16]byte, len(src.Elements)) - for i := range src.Elements { - if err := src.Elements[i].AssignTo(&((*v)[i])); err != nil { - return err - } - } - return nil - - case *[][]byte: - *v = make([][]byte, len(src.Elements)) - for i := range src.Elements { - if err := src.Elements[i].AssignTo(&((*v)[i])); err != nil { - return err - } - } - return nil - - case *[]string: - *v = 
make([]string, len(src.Elements)) - for i := range src.Elements { - if err := src.Elements[i].AssignTo(&((*v)[i])); err != nil { - return err - } - } - return nil - - case *[]*string: - *v = make([]*string, len(src.Elements)) - for i := range src.Elements { - if err := src.Elements[i].AssignTo(&((*v)[i])); err != nil { - return err - } - } - return nil - - } - } - - // Try to convert to something AssignTo can use directly. - if nextDst, retry := GetAssignToDstType(dst); retry { - return src.AssignTo(nextDst) - } - - // Fallback to reflection if an optimised match was not found. - // The reflection is necessary for arrays and multidimensional slices, - // but it comes with a 20-50% performance penalty for large arrays/slices - value := reflect.ValueOf(dst) - if value.Kind() == reflect.Ptr { - value = value.Elem() - } - - switch value.Kind() { - case reflect.Array, reflect.Slice: - default: - return fmt.Errorf("cannot assign %T to %T", src, dst) - } - - if len(src.Elements) == 0 { - if value.Kind() == reflect.Slice { - value.Set(reflect.MakeSlice(value.Type(), 0, 0)) - return nil - } - } - - elementCount, err := src.assignToRecursive(value, 0, 0) - if err != nil { - return err - } - if elementCount != len(src.Elements) { - return fmt.Errorf("cannot assign %v, needed to assign %d elements, but only assigned %d", dst, len(src.Elements), elementCount) - } - - return nil - case Null: - return NullAssignTo(dst) - } - - return fmt.Errorf("cannot decode %#v into %T", src, dst) -} - -func (src *UUIDArray) assignToRecursive(value reflect.Value, index, dimension int) (int, error) { - switch kind := value.Kind(); kind { - case reflect.Array: - fallthrough - case reflect.Slice: - if len(src.Dimensions) == dimension { - break - } - - length := int(src.Dimensions[dimension].Length) - if reflect.Array == kind { - typ := value.Type() - if typ.Len() != length { - return 0, fmt.Errorf("expected size %d array, but %s has size %d array", length, typ, typ.Len()) - } - 
value.Set(reflect.New(typ).Elem()) - } else { - value.Set(reflect.MakeSlice(value.Type(), length, length)) - } - - var err error - for i := 0; i < length; i++ { - index, err = src.assignToRecursive(value.Index(i), index, dimension+1) - if err != nil { - return 0, err - } - } - - return index, nil - } - if len(src.Dimensions) != dimension { - return 0, fmt.Errorf("incorrect dimensions, expected %d, found %d", len(src.Dimensions), dimension) - } - if !value.CanAddr() { - return 0, fmt.Errorf("cannot assign all values from UUIDArray") - } - addr := value.Addr() - if !addr.CanInterface() { - return 0, fmt.Errorf("cannot assign all values from UUIDArray") - } - if err := src.Elements[index].AssignTo(addr.Interface()); err != nil { - return 0, err - } - index++ - return index, nil -} - -func (dst *UUIDArray) DecodeText(ci *ConnInfo, src []byte) error { - if src == nil { - *dst = UUIDArray{Status: Null} - return nil - } - - uta, err := ParseUntypedTextArray(string(src)) - if err != nil { - return err - } - - var elements []UUID - - if len(uta.Elements) > 0 { - elements = make([]UUID, len(uta.Elements)) - - for i, s := range uta.Elements { - var elem UUID - var elemSrc []byte - if s != "NULL" || uta.Quoted[i] { - elemSrc = []byte(s) - } - err = elem.DecodeText(ci, elemSrc) - if err != nil { - return err - } - - elements[i] = elem - } - } - - *dst = UUIDArray{Elements: elements, Dimensions: uta.Dimensions, Status: Present} - - return nil -} - -func (dst *UUIDArray) DecodeBinary(ci *ConnInfo, src []byte) error { - if src == nil { - *dst = UUIDArray{Status: Null} - return nil - } - - var arrayHeader ArrayHeader - rp, err := arrayHeader.DecodeBinary(ci, src) - if err != nil { - return err - } - - if len(arrayHeader.Dimensions) == 0 { - *dst = UUIDArray{Dimensions: arrayHeader.Dimensions, Status: Present} - return nil - } - - elementCount := arrayHeader.Dimensions[0].Length - for _, d := range arrayHeader.Dimensions[1:] { - elementCount *= d.Length - } - - elements := 
make([]UUID, elementCount) - - for i := range elements { - elemLen := int(int32(binary.BigEndian.Uint32(src[rp:]))) - rp += 4 - var elemSrc []byte - if elemLen >= 0 { - elemSrc = src[rp : rp+elemLen] - rp += elemLen - } - err = elements[i].DecodeBinary(ci, elemSrc) - if err != nil { - return err - } - } - - *dst = UUIDArray{Elements: elements, Dimensions: arrayHeader.Dimensions, Status: Present} - return nil -} - -func (src UUIDArray) EncodeText(ci *ConnInfo, buf []byte) ([]byte, error) { - switch src.Status { - case Null: - return nil, nil - case Undefined: - return nil, errUndefined - } - - if len(src.Dimensions) == 0 { - return append(buf, '{', '}'), nil - } - - buf = EncodeTextArrayDimensions(buf, src.Dimensions) - - // dimElemCounts is the multiples of elements that each array lies on. For - // example, a single dimension array of length 4 would have a dimElemCounts of - // [4]. A multi-dimensional array of lengths [3,5,2] would have a - // dimElemCounts of [30,10,2]. This is used to simplify when to render a '{' - // or '}'. - dimElemCounts := make([]int, len(src.Dimensions)) - dimElemCounts[len(src.Dimensions)-1] = int(src.Dimensions[len(src.Dimensions)-1].Length) - for i := len(src.Dimensions) - 2; i > -1; i-- { - dimElemCounts[i] = int(src.Dimensions[i].Length) * dimElemCounts[i+1] - } - - inElemBuf := make([]byte, 0, 32) - for i, elem := range src.Elements { - if i > 0 { - buf = append(buf, ',') - } - - for _, dec := range dimElemCounts { - if i%dec == 0 { - buf = append(buf, '{') - } - } - - elemBuf, err := elem.EncodeText(ci, inElemBuf) - if err != nil { - return nil, err - } - if elemBuf == nil { - buf = append(buf, `NULL`...) - } else { - buf = append(buf, QuoteArrayElementIfNeeded(string(elemBuf))...) 
- } - - for _, dec := range dimElemCounts { - if (i+1)%dec == 0 { - buf = append(buf, '}') - } - } - } - - return buf, nil -} - -func (src UUIDArray) EncodeBinary(ci *ConnInfo, buf []byte) ([]byte, error) { - switch src.Status { - case Null: - return nil, nil - case Undefined: - return nil, errUndefined - } - - arrayHeader := ArrayHeader{ - Dimensions: src.Dimensions, - } - - if dt, ok := ci.DataTypeForName("uuid"); ok { - arrayHeader.ElementOID = int32(dt.OID) - } else { - return nil, fmt.Errorf("unable to find oid for type name %v", "uuid") - } - - for i := range src.Elements { - if src.Elements[i].Status == Null { - arrayHeader.ContainsNull = true - break - } - } - - buf = arrayHeader.EncodeBinary(ci, buf) - - for i := range src.Elements { - sp := len(buf) - buf = pgio.AppendInt32(buf, -1) - - elemBuf, err := src.Elements[i].EncodeBinary(ci, buf) - if err != nil { - return nil, err - } - if elemBuf != nil { - buf = elemBuf - pgio.SetInt32(buf[sp:], int32(len(buf[sp:])-4)) - } - } - - return buf, nil -} - -// Scan implements the database/sql Scanner interface. -func (dst *UUIDArray) Scan(src interface{}) error { - if src == nil { - return dst.DecodeText(nil, nil) - } - - switch src := src.(type) { - case string: - return dst.DecodeText(nil, []byte(src)) - case []byte: - srcCopy := make([]byte, len(src)) - copy(srcCopy, src) - return dst.DecodeText(nil, srcCopy) - } - - return fmt.Errorf("cannot scan %T", src) -} - -// Value implements the database/sql/driver Valuer interface. 
-func (src UUIDArray) Value() (driver.Value, error) { - buf, err := src.EncodeText(nil, nil) - if err != nil { - return nil, err - } - if buf == nil { - return nil, nil - } - - return string(buf), nil -} diff --git a/vendor/github.com/jackc/pgtype/varbit.go b/vendor/github.com/jackc/pgtype/varbit.go deleted file mode 100644 index f24dc5bc..00000000 --- a/vendor/github.com/jackc/pgtype/varbit.go +++ /dev/null @@ -1,133 +0,0 @@ -package pgtype - -import ( - "database/sql/driver" - "encoding/binary" - "fmt" - - "github.com/jackc/pgio" -) - -type Varbit struct { - Bytes []byte - Len int32 // Number of bits - Status Status -} - -func (dst *Varbit) Set(src interface{}) error { - return fmt.Errorf("cannot convert %v to Varbit", src) -} - -func (dst Varbit) Get() interface{} { - switch dst.Status { - case Present: - return dst - case Null: - return nil - default: - return dst.Status - } -} - -func (src *Varbit) AssignTo(dst interface{}) error { - return fmt.Errorf("cannot assign %v to %T", src, dst) -} - -func (dst *Varbit) DecodeText(ci *ConnInfo, src []byte) error { - if src == nil { - *dst = Varbit{Status: Null} - return nil - } - - bitLen := len(src) - byteLen := bitLen / 8 - if bitLen%8 > 0 { - byteLen++ - } - buf := make([]byte, byteLen) - - for i, b := range src { - if b == '1' { - byteIdx := i / 8 - bitIdx := uint(i % 8) - buf[byteIdx] = buf[byteIdx] | (128 >> bitIdx) - } - } - - *dst = Varbit{Bytes: buf, Len: int32(bitLen), Status: Present} - return nil -} - -func (dst *Varbit) DecodeBinary(ci *ConnInfo, src []byte) error { - if src == nil { - *dst = Varbit{Status: Null} - return nil - } - - if len(src) < 4 { - return fmt.Errorf("invalid length for varbit: %v", len(src)) - } - - bitLen := int32(binary.BigEndian.Uint32(src)) - rp := 4 - - *dst = Varbit{Bytes: src[rp:], Len: bitLen, Status: Present} - return nil -} - -func (src Varbit) EncodeText(ci *ConnInfo, buf []byte) ([]byte, error) { - switch src.Status { - case Null: - return nil, nil - case Undefined: - 
return nil, errUndefined - } - - for i := int32(0); i < src.Len; i++ { - byteIdx := i / 8 - bitMask := byte(128 >> byte(i%8)) - char := byte('0') - if src.Bytes[byteIdx]&bitMask > 0 { - char = '1' - } - buf = append(buf, char) - } - - return buf, nil -} - -func (src Varbit) EncodeBinary(ci *ConnInfo, buf []byte) ([]byte, error) { - switch src.Status { - case Null: - return nil, nil - case Undefined: - return nil, errUndefined - } - - buf = pgio.AppendInt32(buf, src.Len) - return append(buf, src.Bytes...), nil -} - -// Scan implements the database/sql Scanner interface. -func (dst *Varbit) Scan(src interface{}) error { - if src == nil { - *dst = Varbit{Status: Null} - return nil - } - - switch src := src.(type) { - case string: - return dst.DecodeText(nil, []byte(src)) - case []byte: - srcCopy := make([]byte, len(src)) - copy(srcCopy, src) - return dst.DecodeText(nil, srcCopy) - } - - return fmt.Errorf("cannot scan %T", src) -} - -// Value implements the database/sql/driver Valuer interface. -func (src Varbit) Value() (driver.Value, error) { - return EncodeValueText(src) -} diff --git a/vendor/github.com/jackc/pgtype/varchar.go b/vendor/github.com/jackc/pgtype/varchar.go deleted file mode 100644 index fea31d18..00000000 --- a/vendor/github.com/jackc/pgtype/varchar.go +++ /dev/null @@ -1,66 +0,0 @@ -package pgtype - -import ( - "database/sql/driver" -) - -type Varchar Text - -// Set converts from src to dst. Note that as Varchar is not a general -// number type Set does not do automatic type conversion as other number -// types do. -func (dst *Varchar) Set(src interface{}) error { - return (*Text)(dst).Set(src) -} - -func (dst Varchar) Get() interface{} { - return (Text)(dst).Get() -} - -// AssignTo assigns from src to dst. Note that as Varchar is not a general number -// type AssignTo does not do automatic type conversion as other number types do. 
-func (src *Varchar) AssignTo(dst interface{}) error { - return (*Text)(src).AssignTo(dst) -} - -func (Varchar) PreferredResultFormat() int16 { - return TextFormatCode -} - -func (dst *Varchar) DecodeText(ci *ConnInfo, src []byte) error { - return (*Text)(dst).DecodeText(ci, src) -} - -func (dst *Varchar) DecodeBinary(ci *ConnInfo, src []byte) error { - return (*Text)(dst).DecodeBinary(ci, src) -} - -func (Varchar) PreferredParamFormat() int16 { - return TextFormatCode -} - -func (src Varchar) EncodeText(ci *ConnInfo, buf []byte) ([]byte, error) { - return (Text)(src).EncodeText(ci, buf) -} - -func (src Varchar) EncodeBinary(ci *ConnInfo, buf []byte) ([]byte, error) { - return (Text)(src).EncodeBinary(ci, buf) -} - -// Scan implements the database/sql Scanner interface. -func (dst *Varchar) Scan(src interface{}) error { - return (*Text)(dst).Scan(src) -} - -// Value implements the database/sql/driver Valuer interface. -func (src Varchar) Value() (driver.Value, error) { - return (Text)(src).Value() -} - -func (src Varchar) MarshalJSON() ([]byte, error) { - return (Text)(src).MarshalJSON() -} - -func (dst *Varchar) UnmarshalJSON(b []byte) error { - return (*Text)(dst).UnmarshalJSON(b) -} diff --git a/vendor/github.com/jackc/pgtype/varchar_array.go b/vendor/github.com/jackc/pgtype/varchar_array.go deleted file mode 100644 index 8a309a3f..00000000 --- a/vendor/github.com/jackc/pgtype/varchar_array.go +++ /dev/null @@ -1,517 +0,0 @@ -// Code generated by erb. DO NOT EDIT. 
- -package pgtype - -import ( - "database/sql/driver" - "encoding/binary" - "fmt" - "reflect" - - "github.com/jackc/pgio" -) - -type VarcharArray struct { - Elements []Varchar - Dimensions []ArrayDimension - Status Status -} - -func (dst *VarcharArray) Set(src interface{}) error { - // untyped nil and typed nil interfaces are different - if src == nil { - *dst = VarcharArray{Status: Null} - return nil - } - - if value, ok := src.(interface{ Get() interface{} }); ok { - value2 := value.Get() - if value2 != value { - return dst.Set(value2) - } - } - - // Attempt to match to select common types: - switch value := src.(type) { - - case []string: - if value == nil { - *dst = VarcharArray{Status: Null} - } else if len(value) == 0 { - *dst = VarcharArray{Status: Present} - } else { - elements := make([]Varchar, len(value)) - for i := range value { - if err := elements[i].Set(value[i]); err != nil { - return err - } - } - *dst = VarcharArray{ - Elements: elements, - Dimensions: []ArrayDimension{{Length: int32(len(elements)), LowerBound: 1}}, - Status: Present, - } - } - - case []*string: - if value == nil { - *dst = VarcharArray{Status: Null} - } else if len(value) == 0 { - *dst = VarcharArray{Status: Present} - } else { - elements := make([]Varchar, len(value)) - for i := range value { - if err := elements[i].Set(value[i]); err != nil { - return err - } - } - *dst = VarcharArray{ - Elements: elements, - Dimensions: []ArrayDimension{{Length: int32(len(elements)), LowerBound: 1}}, - Status: Present, - } - } - - case []Varchar: - if value == nil { - *dst = VarcharArray{Status: Null} - } else if len(value) == 0 { - *dst = VarcharArray{Status: Present} - } else { - *dst = VarcharArray{ - Elements: value, - Dimensions: []ArrayDimension{{Length: int32(len(value)), LowerBound: 1}}, - Status: Present, - } - } - default: - // Fallback to reflection if an optimised match was not found. 
- // The reflection is necessary for arrays and multidimensional slices, - // but it comes with a 20-50% performance penalty for large arrays/slices - reflectedValue := reflect.ValueOf(src) - if !reflectedValue.IsValid() || reflectedValue.IsZero() { - *dst = VarcharArray{Status: Null} - return nil - } - - dimensions, elementsLength, ok := findDimensionsFromValue(reflectedValue, nil, 0) - if !ok { - return fmt.Errorf("cannot find dimensions of %v for VarcharArray", src) - } - if elementsLength == 0 { - *dst = VarcharArray{Status: Present} - return nil - } - if len(dimensions) == 0 { - if originalSrc, ok := underlyingSliceType(src); ok { - return dst.Set(originalSrc) - } - return fmt.Errorf("cannot convert %v to VarcharArray", src) - } - - *dst = VarcharArray{ - Elements: make([]Varchar, elementsLength), - Dimensions: dimensions, - Status: Present, - } - elementCount, err := dst.setRecursive(reflectedValue, 0, 0) - if err != nil { - // Maybe the target was one dimension too far, try again: - if len(dst.Dimensions) > 1 { - dst.Dimensions = dst.Dimensions[:len(dst.Dimensions)-1] - elementsLength = 0 - for _, dim := range dst.Dimensions { - if elementsLength == 0 { - elementsLength = int(dim.Length) - } else { - elementsLength *= int(dim.Length) - } - } - dst.Elements = make([]Varchar, elementsLength) - elementCount, err = dst.setRecursive(reflectedValue, 0, 0) - if err != nil { - return err - } - } else { - return err - } - } - if elementCount != len(dst.Elements) { - return fmt.Errorf("cannot convert %v to VarcharArray, expected %d dst.Elements, but got %d instead", src, len(dst.Elements), elementCount) - } - } - - return nil -} - -func (dst *VarcharArray) setRecursive(value reflect.Value, index, dimension int) (int, error) { - switch value.Kind() { - case reflect.Array: - fallthrough - case reflect.Slice: - if len(dst.Dimensions) == dimension { - break - } - - valueLen := value.Len() - if int32(valueLen) != dst.Dimensions[dimension].Length { - return 0, 
fmt.Errorf("multidimensional arrays must have array expressions with matching dimensions") - } - for i := 0; i < valueLen; i++ { - var err error - index, err = dst.setRecursive(value.Index(i), index, dimension+1) - if err != nil { - return 0, err - } - } - - return index, nil - } - if !value.CanInterface() { - return 0, fmt.Errorf("cannot convert all values to VarcharArray") - } - if err := dst.Elements[index].Set(value.Interface()); err != nil { - return 0, fmt.Errorf("%v in VarcharArray", err) - } - index++ - - return index, nil -} - -func (dst VarcharArray) Get() interface{} { - switch dst.Status { - case Present: - return dst - case Null: - return nil - default: - return dst.Status - } -} - -func (src *VarcharArray) AssignTo(dst interface{}) error { - switch src.Status { - case Present: - if len(src.Dimensions) <= 1 { - // Attempt to match to select common types: - switch v := dst.(type) { - - case *[]string: - *v = make([]string, len(src.Elements)) - for i := range src.Elements { - if err := src.Elements[i].AssignTo(&((*v)[i])); err != nil { - return err - } - } - return nil - - case *[]*string: - *v = make([]*string, len(src.Elements)) - for i := range src.Elements { - if err := src.Elements[i].AssignTo(&((*v)[i])); err != nil { - return err - } - } - return nil - - } - } - - // Try to convert to something AssignTo can use directly. - if nextDst, retry := GetAssignToDstType(dst); retry { - return src.AssignTo(nextDst) - } - - // Fallback to reflection if an optimised match was not found. 
- // The reflection is necessary for arrays and multidimensional slices, - // but it comes with a 20-50% performance penalty for large arrays/slices - value := reflect.ValueOf(dst) - if value.Kind() == reflect.Ptr { - value = value.Elem() - } - - switch value.Kind() { - case reflect.Array, reflect.Slice: - default: - return fmt.Errorf("cannot assign %T to %T", src, dst) - } - - if len(src.Elements) == 0 { - if value.Kind() == reflect.Slice { - value.Set(reflect.MakeSlice(value.Type(), 0, 0)) - return nil - } - } - - elementCount, err := src.assignToRecursive(value, 0, 0) - if err != nil { - return err - } - if elementCount != len(src.Elements) { - return fmt.Errorf("cannot assign %v, needed to assign %d elements, but only assigned %d", dst, len(src.Elements), elementCount) - } - - return nil - case Null: - return NullAssignTo(dst) - } - - return fmt.Errorf("cannot decode %#v into %T", src, dst) -} - -func (src *VarcharArray) assignToRecursive(value reflect.Value, index, dimension int) (int, error) { - switch kind := value.Kind(); kind { - case reflect.Array: - fallthrough - case reflect.Slice: - if len(src.Dimensions) == dimension { - break - } - - length := int(src.Dimensions[dimension].Length) - if reflect.Array == kind { - typ := value.Type() - if typ.Len() != length { - return 0, fmt.Errorf("expected size %d array, but %s has size %d array", length, typ, typ.Len()) - } - value.Set(reflect.New(typ).Elem()) - } else { - value.Set(reflect.MakeSlice(value.Type(), length, length)) - } - - var err error - for i := 0; i < length; i++ { - index, err = src.assignToRecursive(value.Index(i), index, dimension+1) - if err != nil { - return 0, err - } - } - - return index, nil - } - if len(src.Dimensions) != dimension { - return 0, fmt.Errorf("incorrect dimensions, expected %d, found %d", len(src.Dimensions), dimension) - } - if !value.CanAddr() { - return 0, fmt.Errorf("cannot assign all values from VarcharArray") - } - addr := value.Addr() - if !addr.CanInterface() { - 
return 0, fmt.Errorf("cannot assign all values from VarcharArray") - } - if err := src.Elements[index].AssignTo(addr.Interface()); err != nil { - return 0, err - } - index++ - return index, nil -} - -func (dst *VarcharArray) DecodeText(ci *ConnInfo, src []byte) error { - if src == nil { - *dst = VarcharArray{Status: Null} - return nil - } - - uta, err := ParseUntypedTextArray(string(src)) - if err != nil { - return err - } - - var elements []Varchar - - if len(uta.Elements) > 0 { - elements = make([]Varchar, len(uta.Elements)) - - for i, s := range uta.Elements { - var elem Varchar - var elemSrc []byte - if s != "NULL" || uta.Quoted[i] { - elemSrc = []byte(s) - } - err = elem.DecodeText(ci, elemSrc) - if err != nil { - return err - } - - elements[i] = elem - } - } - - *dst = VarcharArray{Elements: elements, Dimensions: uta.Dimensions, Status: Present} - - return nil -} - -func (dst *VarcharArray) DecodeBinary(ci *ConnInfo, src []byte) error { - if src == nil { - *dst = VarcharArray{Status: Null} - return nil - } - - var arrayHeader ArrayHeader - rp, err := arrayHeader.DecodeBinary(ci, src) - if err != nil { - return err - } - - if len(arrayHeader.Dimensions) == 0 { - *dst = VarcharArray{Dimensions: arrayHeader.Dimensions, Status: Present} - return nil - } - - elementCount := arrayHeader.Dimensions[0].Length - for _, d := range arrayHeader.Dimensions[1:] { - elementCount *= d.Length - } - - elements := make([]Varchar, elementCount) - - for i := range elements { - elemLen := int(int32(binary.BigEndian.Uint32(src[rp:]))) - rp += 4 - var elemSrc []byte - if elemLen >= 0 { - elemSrc = src[rp : rp+elemLen] - rp += elemLen - } - err = elements[i].DecodeBinary(ci, elemSrc) - if err != nil { - return err - } - } - - *dst = VarcharArray{Elements: elements, Dimensions: arrayHeader.Dimensions, Status: Present} - return nil -} - -func (src VarcharArray) EncodeText(ci *ConnInfo, buf []byte) ([]byte, error) { - switch src.Status { - case Null: - return nil, nil - case Undefined: 
- return nil, errUndefined - } - - if len(src.Dimensions) == 0 { - return append(buf, '{', '}'), nil - } - - buf = EncodeTextArrayDimensions(buf, src.Dimensions) - - // dimElemCounts is the multiples of elements that each array lies on. For - // example, a single dimension array of length 4 would have a dimElemCounts of - // [4]. A multi-dimensional array of lengths [3,5,2] would have a - // dimElemCounts of [30,10,2]. This is used to simplify when to render a '{' - // or '}'. - dimElemCounts := make([]int, len(src.Dimensions)) - dimElemCounts[len(src.Dimensions)-1] = int(src.Dimensions[len(src.Dimensions)-1].Length) - for i := len(src.Dimensions) - 2; i > -1; i-- { - dimElemCounts[i] = int(src.Dimensions[i].Length) * dimElemCounts[i+1] - } - - inElemBuf := make([]byte, 0, 32) - for i, elem := range src.Elements { - if i > 0 { - buf = append(buf, ',') - } - - for _, dec := range dimElemCounts { - if i%dec == 0 { - buf = append(buf, '{') - } - } - - elemBuf, err := elem.EncodeText(ci, inElemBuf) - if err != nil { - return nil, err - } - if elemBuf == nil { - buf = append(buf, `NULL`...) - } else { - buf = append(buf, QuoteArrayElementIfNeeded(string(elemBuf))...) 
- } - - for _, dec := range dimElemCounts { - if (i+1)%dec == 0 { - buf = append(buf, '}') - } - } - } - - return buf, nil -} - -func (src VarcharArray) EncodeBinary(ci *ConnInfo, buf []byte) ([]byte, error) { - switch src.Status { - case Null: - return nil, nil - case Undefined: - return nil, errUndefined - } - - arrayHeader := ArrayHeader{ - Dimensions: src.Dimensions, - } - - if dt, ok := ci.DataTypeForName("varchar"); ok { - arrayHeader.ElementOID = int32(dt.OID) - } else { - return nil, fmt.Errorf("unable to find oid for type name %v", "varchar") - } - - for i := range src.Elements { - if src.Elements[i].Status == Null { - arrayHeader.ContainsNull = true - break - } - } - - buf = arrayHeader.EncodeBinary(ci, buf) - - for i := range src.Elements { - sp := len(buf) - buf = pgio.AppendInt32(buf, -1) - - elemBuf, err := src.Elements[i].EncodeBinary(ci, buf) - if err != nil { - return nil, err - } - if elemBuf != nil { - buf = elemBuf - pgio.SetInt32(buf[sp:], int32(len(buf[sp:])-4)) - } - } - - return buf, nil -} - -// Scan implements the database/sql Scanner interface. -func (dst *VarcharArray) Scan(src interface{}) error { - if src == nil { - return dst.DecodeText(nil, nil) - } - - switch src := src.(type) { - case string: - return dst.DecodeText(nil, []byte(src)) - case []byte: - srcCopy := make([]byte, len(src)) - copy(srcCopy, src) - return dst.DecodeText(nil, srcCopy) - } - - return fmt.Errorf("cannot scan %T", src) -} - -// Value implements the database/sql/driver Valuer interface. 
-func (src VarcharArray) Value() (driver.Value, error) { - buf, err := src.EncodeText(nil, nil) - if err != nil { - return nil, err - } - if buf == nil { - return nil, nil - } - - return string(buf), nil -} diff --git a/vendor/github.com/jackc/pgtype/xid.go b/vendor/github.com/jackc/pgtype/xid.go deleted file mode 100644 index f6d6b22d..00000000 --- a/vendor/github.com/jackc/pgtype/xid.go +++ /dev/null @@ -1,64 +0,0 @@ -package pgtype - -import ( - "database/sql/driver" -) - -// XID is PostgreSQL's Transaction ID type. -// -// In later versions of PostgreSQL, it is the type used for the backend_xid -// and backend_xmin columns of the pg_stat_activity system view. -// -// Also, when one does -// -// select xmin, xmax, * from some_table; -// -// it is the data type of the xmin and xmax hidden system columns. -// -// It is currently implemented as an unsigned four byte integer. -// Its definition can be found in src/include/postgres_ext.h as TransactionId -// in the PostgreSQL sources. -type XID pguint32 - -// Set converts from src to dst. Note that as XID is not a general -// number type Set does not do automatic type conversion as other number -// types do. -func (dst *XID) Set(src interface{}) error { - return (*pguint32)(dst).Set(src) -} - -func (dst XID) Get() interface{} { - return (pguint32)(dst).Get() -} - -// AssignTo assigns from src to dst. Note that as XID is not a general number -// type AssignTo does not do automatic type conversion as other number types do. 
-func (src *XID) AssignTo(dst interface{}) error { - return (*pguint32)(src).AssignTo(dst) -} - -func (dst *XID) DecodeText(ci *ConnInfo, src []byte) error { - return (*pguint32)(dst).DecodeText(ci, src) -} - -func (dst *XID) DecodeBinary(ci *ConnInfo, src []byte) error { - return (*pguint32)(dst).DecodeBinary(ci, src) -} - -func (src XID) EncodeText(ci *ConnInfo, buf []byte) ([]byte, error) { - return (pguint32)(src).EncodeText(ci, buf) -} - -func (src XID) EncodeBinary(ci *ConnInfo, buf []byte) ([]byte, error) { - return (pguint32)(src).EncodeBinary(ci, buf) -} - -// Scan implements the database/sql Scanner interface. -func (dst *XID) Scan(src interface{}) error { - return (*pguint32)(dst).Scan(src) -} - -// Value implements the database/sql/driver Valuer interface. -func (src XID) Value() (driver.Value, error) { - return (pguint32)(src).Value() -} diff --git a/vendor/github.com/jackc/pgx/v4/.gitignore b/vendor/github.com/jackc/pgx/v4/.gitignore deleted file mode 100644 index 39175a96..00000000 --- a/vendor/github.com/jackc/pgx/v4/.gitignore +++ /dev/null @@ -1,24 +0,0 @@ -# Compiled Object files, Static and Dynamic libs (Shared Objects) -*.o -*.a -*.so - -# Folders -_obj -_test - -# Architecture specific extensions/prefixes -*.[568vq] -[568vq].out - -*.cgo1.go -*.cgo2.c -_cgo_defun.c -_cgo_gotypes.go -_cgo_export.* - -_testmain.go - -*.exe - -.envrc diff --git a/vendor/github.com/jackc/pgx/v4/CHANGELOG.md b/vendor/github.com/jackc/pgx/v4/CHANGELOG.md deleted file mode 100644 index e8f20129..00000000 --- a/vendor/github.com/jackc/pgx/v4/CHANGELOG.md +++ /dev/null @@ -1,268 +0,0 @@ -# 4.17.2 (September 3, 2022) - -* Fix panic when logging batch error (Tom Möller) - -# 4.17.1 (August 27, 2022) - -* Upgrade puddle to v1.3.0 - fixes context failing to cancel Acquire when acquire is creating resource which was introduced in v4.17.0 (James Hartig) -* Fix atomic alignment on 32-bit platforms - -# 4.17.0 (August 6, 2022) - -* Upgrade pgconn to v1.13.0 -* Upgrade pgproto3 
to v2.3.1 -* Upgrade pgtype to v1.12.0 -* Allow background pool connections to continue even if cause is canceled (James Hartig) -* Add LoggerFunc (Gabor Szabad) -* pgxpool: health check should avoid going below minConns (James Hartig) -* Add pgxpool.Conn.Hijack() -* Logging improvements (Stepan Rabotkin) - -# 4.16.1 (May 7, 2022) - -* Upgrade pgconn to v1.12.1 -* Fix explicitly prepared statements with describe statement cache mode - -# 4.16.0 (April 21, 2022) - -* Upgrade pgconn to v1.12.0 -* Upgrade pgproto3 to v2.3.0 -* Upgrade pgtype to v1.11.0 -* Fix: Do not panic when context cancelled while getting statement from cache. -* Fix: Less memory pinning from old Rows. -* Fix: Support '\r' line ending when sanitizing SQL comment. -* Add pluggable GSSAPI support (Oliver Tan) - -# 4.15.0 (February 7, 2022) - -* Upgrade to pgconn v1.11.0 -* Upgrade to pgtype v1.10.0 -* Upgrade puddle to v1.2.1 -* Make BatchResults.Close safe to be called multiple times - -# 4.14.1 (November 28, 2021) - -* Upgrade pgtype to v1.9.1 (fixes unintentional change to timestamp binary decoding) -* Start pgxpool background health check after initial connections - -# 4.14.0 (November 20, 2021) - -* Upgrade pgconn to v1.10.1 -* Upgrade pgproto3 to v2.2.0 -* Upgrade pgtype to v1.9.0 -* Upgrade puddle to v1.2.0 -* Add QueryFunc to BatchResults -* Add context options to zerologadapter (Thomas Frössman) -* Add zerologadapter.NewContextLogger (urso) -* Eager initialize minpoolsize on connect (Daniel) -* Unpin memory used by large queries immediately after use - -# 4.13.0 (July 24, 2021) - -* Trimmed pseudo-dependencies in Go modules from other packages tests -* Upgrade pgconn -- context cancellation no longer will return a net.Error -* Support time durations for simple protocol (Michael Darr) - -# 4.12.0 (July 10, 2021) - -* ResetSession hook is called before a connection is reused from pool for another query (Dmytro Haranzha) -* stdlib: Add RandomizeHostOrderFunc (dkinder) -* stdlib: add 
OptionBeforeConnect (dkinder) -* stdlib: Do not reuse ConnConfig strings (Andrew Kimball) -* stdlib: implement Conn.ResetSession (Jonathan Amsterdam) -* Upgrade pgconn to v1.9.0 -* Upgrade pgtype to v1.8.0 - -# 4.11.0 (March 25, 2021) - -* Add BeforeConnect callback to pgxpool.Config (Robert Froehlich) -* Add Ping method to pgxpool.Conn (davidsbond) -* Added a kitlog level log adapter (Fabrice Aneche) -* Make ScanArgError public to allow identification of offending column (Pau Sanchez) -* Add *pgxpool.AcquireFunc -* Add BeginFunc and BeginTxFunc -* Add prefer_simple_protocol to connection string -* Add logging on CopyFrom (Patrick Hemmer) -* Add comment support when sanitizing SQL queries (Rusakow Andrew) -* Do not panic on double close of pgxpool.Pool (Matt Schultz) -* Avoid panic on SendBatch on closed Tx (Matt Schultz) -* Update pgconn to v1.8.1 -* Update pgtype to v1.7.0 - -# 4.10.1 (December 19, 2020) - -* Fix panic on Query error with nil stmtcache. - -# 4.10.0 (December 3, 2020) - -* Add CopyFromSlice to simplify CopyFrom usage (Egon Elbre) -* Remove broken prepared statements from stmtcache (Ethan Pailes) -* stdlib: consider any Ping error as fatal -* Update puddle to v1.1.3 - this fixes an issue where concurrent Acquires can hang when a connection cannot be established -* Update pgtype to v1.6.2 - -# 4.9.2 (November 3, 2020) - -The underlying library updates fix an issue where appending to a scanned slice could corrupt other data. - -* Update pgconn to v1.7.2 -* Update pgproto3 to v2.0.6 - -# 4.9.1 (October 31, 2020) - -* Update pgconn to v1.7.1 -* Update pgtype to v1.6.1 -* Fix SendBatch of all prepared statements with statement cache disabled - -# 4.9.0 (September 26, 2020) - -* pgxpool now waits for connection cleanup to finish before making room in pool for another connection. This prevents temporarily exceeding max pool size. -* Fix when scanning a column to nil to skip it on the first row but scanning it to a real value on a subsequent row. 
-* Fix prefer simple protocol with prepared statements. (Jinzhu) -* Fix FieldDescriptions not being available on Rows before calling Next the first time. -* Various minor fixes in updated versions of pgconn, pgtype, and puddle. - -# 4.8.1 (July 29, 2020) - -* Update pgconn to v1.6.4 - * Fix deadlock on error after CommandComplete but before ReadyForQuery - * Fix panic on parsing DSN with trailing '=' - -# 4.8.0 (July 22, 2020) - -* All argument types supported by native pgx should now also work through database/sql -* Update pgconn to v1.6.3 -* Update pgtype to v1.4.2 - -# 4.7.2 (July 14, 2020) - -* Improve performance of Columns() (zikaeroh) -* Fix fatal Commit() failure not being considered fatal -* Update pgconn to v1.6.2 -* Update pgtype to v1.4.1 - -# 4.7.1 (June 29, 2020) - -* Fix stdlib decoding error with certain order and combination of fields - -# 4.7.0 (June 27, 2020) - -* Update pgtype to v1.4.0 -* Update pgconn to v1.6.1 -* Update puddle to v1.1.1 -* Fix context propagation with Tx commit and Rollback (georgysavva) -* Add lazy connect option to pgxpool (georgysavva) -* Fix connection leak if pgxpool.BeginTx() fail (Jean-Baptiste Bronisz) -* Add native Go slice support for strings and numbers to simple protocol -* stdlib add default timeouts for Conn.Close() and Stmt.Close() (georgysavva) -* Assorted performance improvements especially with large result sets -* Fix close pool on not lazy connect failure (Yegor Myskin) -* Add Config copy (georgysavva) -* Support SendBatch with Simple Protocol (Jordan Lewis) -* Better error logging on rows close (Igor V. 
Kozinov) -* Expose stdlib.Conn.Conn() to enable database/sql.Conn.Raw() -* Improve unknown type support for database/sql -* Fix transaction commit failure closing connection - -# 4.6.0 (March 30, 2020) - -* stdlib: Bail early if preloading rows.Next() results in rows.Err() (Bas van Beek) -* Sanitize time to microsecond accuracy (Andrew Nicoll) -* Update pgtype to v1.3.0 -* Update pgconn to v1.5.0 - * Update golang.org/x/crypto for security fix - * Implement "verify-ca" SSL mode - -# 4.5.0 (March 7, 2020) - -* Update to pgconn v1.4.0 - * Fixes QueryRow with empty SQL - * Adds PostgreSQL service file support -* Add Len() to *pgx.Batch (WGH) -* Better logging for individual batch items (Ben Bader) - -# 4.4.1 (February 14, 2020) - -* Update pgconn to v1.3.2 - better default read buffer size -* Fix race in CopyFrom - -# 4.4.0 (February 5, 2020) - -* Update puddle to v1.1.0 - fixes possible deadlock when acquire is cancelled -* Update pgconn to v1.3.1 - fixes CopyFrom deadlock when multiple NoticeResponse received during copy -* Update pgtype to v1.2.0 -* Add MaxConnIdleTime to pgxpool (Patrick Ellul) -* Add MinConns to pgxpool (Patrick Ellul) -* Fix: stdlib.ReleaseConn closes connections left in invalid state - -# 4.3.0 (January 23, 2020) - -* Fix Rows.Values panic when unable to decode -* Add Rows.Values support for unknown types -* Add DriverContext support for stdlib (Alex Gaynor) -* Update pgproto3 to v2.0.1 to never return an io.EOF as it would be misinterpreted by database/sql. Instead return io.UnexpectedEOF. - -# 4.2.1 (January 13, 2020) - -* Update pgconn to v1.2.1 (fixes context cancellation data race introduced in v1.2.0)) - -# 4.2.0 (January 11, 2020) - -* Update pgconn to v1.2.0. -* Update pgtype to v1.1.0. -* Return error instead of panic when wrong number of arguments passed to Exec. (malstoun) -* Fix large objects functionality when PreferSimpleProtocol = true. -* Restore GetDefaultDriver which existed in v3. 
(Johan Brandhorst) -* Add RegisterConnConfig to stdlib which replaces the removed RegisterDriverConfig from v3. - -# 4.1.2 (October 22, 2019) - -* Fix dbSavepoint.Begin recursive self call -* Upgrade pgtype to v1.0.2 - fix scan pointer to pointer - -# 4.1.1 (October 21, 2019) - -* Fix pgxpool Rows.CommandTag() infinite loop / typo - -# 4.1.0 (October 12, 2019) - -## Potentially Breaking Changes - -Technically, two changes are breaking changes, but in practice these are extremely unlikely to break existing code. - -* Conn.Begin and Conn.BeginTx return a Tx interface instead of the internal dbTx struct. This is necessary for the Conn.Begin method to signature as other methods that begin a transaction. -* Add Conn() to Tx interface. This is necessary to allow code using a Tx to access the *Conn (and pgconn.PgConn) on which the Tx is executing. - -## Fixes - -* Releasing a busy connection closes the connection instead of returning an unusable connection to the pool -* Do not mutate config.Config.OnNotification in connect - -# 4.0.1 (September 19, 2019) - -* Fix statement cache cleanup. -* Corrected daterange OID. -* Fix Tx when committing or rolling back multiple times in certain cases. -* Improve documentation. - -# 4.0.0 (September 14, 2019) - -v4 is a major release with many significant changes some of which are breaking changes. The most significant are -included below. - -* Simplified establishing a connection with a connection string. -* All potentially blocking operations now require a context.Context. The non-context aware functions have been removed. -* OIDs are hard-coded for known types. This saves the query on connection. -* Context cancellations while network activity is in progress is now always fatal. Previously, it was sometimes recoverable. This led to increased complexity in pgx itself and in application code. -* Go modules are required. -* Errors are now implemented in the Go 1.13 style. -* `Rows` and `Tx` are now interfaces. 
-* The connection pool as been decoupled from pgx and is now a separate, included package (github.com/jackc/pgx/v4/pgxpool). -* pgtype has been spun off to a separate package (github.com/jackc/pgtype). -* pgproto3 has been spun off to a separate package (github.com/jackc/pgproto3/v2). -* Logical replication support has been spun off to a separate package (github.com/jackc/pglogrepl). -* Lower level PostgreSQL functionality is now implemented in a separate package (github.com/jackc/pgconn). -* Tests are now configured with environment variables. -* Conn has an automatic statement cache by default. -* Batch interface has been simplified. -* QueryArgs has been removed. diff --git a/vendor/github.com/jackc/pgx/v4/LICENSE b/vendor/github.com/jackc/pgx/v4/LICENSE deleted file mode 100644 index 5c486c39..00000000 --- a/vendor/github.com/jackc/pgx/v4/LICENSE +++ /dev/null @@ -1,22 +0,0 @@ -Copyright (c) 2013-2021 Jack Christensen - -MIT License - -Permission is hereby granted, free of charge, to any person obtaining -a copy of this software and associated documentation files (the -"Software"), to deal in the Software without restriction, including -without limitation the rights to use, copy, modify, merge, publish, -distribute, sublicense, and/or sell copies of the Software, and to -permit persons to whom the Software is furnished to do so, subject to -the following conditions: - -The above copyright notice and this permission notice shall be -included in all copies or substantial portions of the Software. - -THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, -EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF -MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND -NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE -LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION -OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION -WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE. 
diff --git a/vendor/github.com/jackc/pgx/v4/README.md b/vendor/github.com/jackc/pgx/v4/README.md deleted file mode 100644 index ec921271..00000000 --- a/vendor/github.com/jackc/pgx/v4/README.md +++ /dev/null @@ -1,196 +0,0 @@ -[![](https://godoc.org/github.com/jackc/pgx?status.svg)](https://pkg.go.dev/github.com/jackc/pgx/v4) -[![Build Status](https://travis-ci.org/jackc/pgx.svg)](https://travis-ci.org/jackc/pgx) - ---- - -This is the previous stable `v4` release. `v5` been released. - ---- -# pgx - PostgreSQL Driver and Toolkit - -pgx is a pure Go driver and toolkit for PostgreSQL. - -pgx aims to be low-level, fast, and performant, while also enabling PostgreSQL-specific features that the standard `database/sql` package does not allow for. - -The driver component of pgx can be used alongside the standard `database/sql` package. - -The toolkit component is a related set of packages that implement PostgreSQL functionality such as parsing the wire protocol -and type mapping between PostgreSQL and Go. These underlying packages can be used to implement alternative drivers, -proxies, load balancers, logical replication clients, etc. - -The current release of `pgx v4` requires Go modules. To use the previous version, checkout and vendor the `v3` branch. 
- -## Example Usage - -```go -package main - -import ( - "context" - "fmt" - "os" - - "github.com/jackc/pgx/v4" -) - -func main() { - // urlExample := "postgres://username:password@localhost:5432/database_name" - conn, err := pgx.Connect(context.Background(), os.Getenv("DATABASE_URL")) - if err != nil { - fmt.Fprintf(os.Stderr, "Unable to connect to database: %v\n", err) - os.Exit(1) - } - defer conn.Close(context.Background()) - - var name string - var weight int64 - err = conn.QueryRow(context.Background(), "select name, weight from widgets where id=$1", 42).Scan(&name, &weight) - if err != nil { - fmt.Fprintf(os.Stderr, "QueryRow failed: %v\n", err) - os.Exit(1) - } - - fmt.Println(name, weight) -} -``` - -See the [getting started guide](https://github.com/jackc/pgx/wiki/Getting-started-with-pgx) for more information. - -## Choosing Between the pgx and database/sql Interfaces - -It is recommended to use the pgx interface if: -1. The application only targets PostgreSQL. -2. No other libraries that require `database/sql` are in use. - -The pgx interface is faster and exposes more features. - -The `database/sql` interface only allows the underlying driver to return or receive the following types: `int64`, -`float64`, `bool`, `[]byte`, `string`, `time.Time`, or `nil`. Handling other types requires implementing the -`database/sql.Scanner` and the `database/sql/driver/driver.Valuer` interfaces which require transmission of values in text format. The binary format can be substantially faster, which is what the pgx interface uses. 
- -## Features - -pgx supports many features beyond what is available through `database/sql`: - -* Support for approximately 70 different PostgreSQL types -* Automatic statement preparation and caching -* Batch queries -* Single-round trip query mode -* Full TLS connection control -* Binary format support for custom types (allows for much quicker encoding/decoding) -* COPY protocol support for faster bulk data loads -* Extendable logging support including built-in support for `log15adapter`, [`logrus`](https://github.com/sirupsen/logrus), [`zap`](https://github.com/uber-go/zap), and [`zerolog`](https://github.com/rs/zerolog) -* Connection pool with after-connect hook for arbitrary connection setup -* Listen / notify -* Conversion of PostgreSQL arrays to Go slice mappings for integers, floats, and strings -* Hstore support -* JSON and JSONB support -* Maps `inet` and `cidr` PostgreSQL types to `net.IPNet` and `net.IP` -* Large object support -* NULL mapping to Null* struct or pointer to pointer -* Supports `database/sql.Scanner` and `database/sql/driver.Valuer` interfaces for custom types -* Notice response handling -* Simulated nested transactions with savepoints - -## Performance - -There are three areas in particular where pgx can provide a significant performance advantage over the standard -`database/sql` interface and other drivers: - -1. PostgreSQL specific types - Types such as arrays can be parsed much quicker because pgx uses the binary format. -2. Automatic statement preparation and caching - pgx will prepare and cache statements by default. This can provide an - significant free improvement to code that does not explicitly use prepared statements. Under certain workloads, it can - perform nearly 3x the number of queries per second. -3. Batched queries - Multiple queries can be batched together to minimize network round trips. - -## Testing - -pgx tests naturally require a PostgreSQL database. 
It will connect to the database specified in the `PGX_TEST_DATABASE` environment -variable. The `PGX_TEST_DATABASE` environment variable can either be a URL or DSN. In addition, the standard `PG*` environment -variables will be respected. Consider using [direnv](https://github.com/direnv/direnv) to simplify environment variable -handling. - -### Example Test Environment - -Connect to your PostgreSQL server and run: - -``` -create database pgx_test; -``` - -Connect to the newly-created database and run: - -``` -create domain uint64 as numeric(20,0); -``` - -Now, you can run the tests: - -``` -PGX_TEST_DATABASE="host=/var/run/postgresql database=pgx_test" go test ./... -``` - -In addition, there are tests specific for PgBouncer that will be executed if `PGX_TEST_PGBOUNCER_CONN_STRING` is set. - -## Supported Go and PostgreSQL Versions - -pgx supports the same versions of Go and PostgreSQL that are supported by their respective teams. For [Go](https://golang.org/doc/devel/release.html#policy) that is the two most recent major releases and for [PostgreSQL](https://www.postgresql.org/support/versioning/) the major releases in the last 5 years. This means pgx supports Go 1.16 and higher and PostgreSQL 10 and higher. pgx also is tested against the latest version of [CockroachDB](https://www.cockroachlabs.com/product/). - -## Version Policy - -pgx follows semantic versioning for the documented public API on stable releases. `v4` is the latest stable major version. - -## PGX Family Libraries - -pgx is the head of a family of PostgreSQL libraries. Many of these can be used independently. Many can also be accessed -from pgx for lower-level control. - -### [github.com/jackc/pgconn](https://github.com/jackc/pgconn) - -`pgconn` is a lower-level PostgreSQL database driver that operates at nearly the same level as the C library `libpq`. - -### [github.com/jackc/pgx/v4/pgxpool](https://github.com/jackc/pgx/tree/master/pgxpool) - -`pgxpool` is a connection pool for pgx. 
pgx is entirely decoupled from its default pool implementation. This means that pgx can be used with a different pool or without any pool at all. - -### [github.com/jackc/pgx/v4/stdlib](https://github.com/jackc/pgx/tree/master/stdlib) - -This is a `database/sql` compatibility layer for pgx. pgx can be used as a normal `database/sql` driver, but at any time, the native interface can be acquired for more performance or PostgreSQL specific functionality. - -### [github.com/jackc/pgtype](https://github.com/jackc/pgtype) - -Over 70 PostgreSQL types are supported including `uuid`, `hstore`, `json`, `bytea`, `numeric`, `interval`, `inet`, and arrays. These types support `database/sql` interfaces and are usable outside of pgx. They are fully tested in pgx and pq. They also support a higher performance interface when used with the pgx driver. - -### [github.com/jackc/pgproto3](https://github.com/jackc/pgproto3) - -pgproto3 provides standalone encoding and decoding of the PostgreSQL v3 wire protocol. This is useful for implementing very low level PostgreSQL tooling. - -### [github.com/jackc/pglogrepl](https://github.com/jackc/pglogrepl) - -pglogrepl provides functionality to act as a client for PostgreSQL logical replication. - -### [github.com/jackc/pgmock](https://github.com/jackc/pgmock) - -pgmock offers the ability to create a server that mocks the PostgreSQL wire protocol. This is used internally to test pgx by purposely inducing unusual errors. pgproto3 and pgmock together provide most of the foundational tooling required to implement a PostgreSQL proxy or MitM (such as for a custom connection pooler). - -### [github.com/jackc/tern](https://github.com/jackc/tern) - -tern is a stand-alone SQL migration system. - -### [github.com/jackc/pgerrcode](https://github.com/jackc/pgerrcode) - -pgerrcode contains constants for the PostgreSQL error codes. 
- -## 3rd Party Libraries with PGX Support - -### [github.com/georgysavva/scany](https://github.com/georgysavva/scany) - -Library for scanning data from a database into Go structs and more. - -### [https://github.com/otan/gopgkrb5](https://github.com/otan/gopgkrb5) - -Adds GSSAPI / Kerberos authentication support. - -### [https://github.com/vgarvardt/pgx-google-uuid](https://github.com/vgarvardt/pgx-google-uuid) - -Adds support for [`github.com/google/uuid`](https://github.com/google/uuid). diff --git a/vendor/github.com/jackc/pgx/v4/batch.go b/vendor/github.com/jackc/pgx/v4/batch.go deleted file mode 100644 index 7f86ad5c..00000000 --- a/vendor/github.com/jackc/pgx/v4/batch.go +++ /dev/null @@ -1,228 +0,0 @@ -package pgx - -import ( - "context" - "errors" - "fmt" - - "github.com/jackc/pgconn" -) - -type batchItem struct { - query string - arguments []interface{} -} - -// Batch queries are a way of bundling multiple queries together to avoid -// unnecessary network round trips. -type Batch struct { - items []*batchItem -} - -// Queue queues a query to batch b. query can be an SQL query or the name of a prepared statement. -func (b *Batch) Queue(query string, arguments ...interface{}) { - b.items = append(b.items, &batchItem{ - query: query, - arguments: arguments, - }) -} - -// Len returns number of queries that have been queued so far. -func (b *Batch) Len() int { - return len(b.items) -} - -type BatchResults interface { - // Exec reads the results from the next query in the batch as if the query has been sent with Conn.Exec. - Exec() (pgconn.CommandTag, error) - - // Query reads the results from the next query in the batch as if the query has been sent with Conn.Query. - Query() (Rows, error) - - // QueryRow reads the results from the next query in the batch as if the query has been sent with Conn.QueryRow. - QueryRow() Row - - // QueryFunc reads the results from the next query in the batch as if the query has been sent with Conn.QueryFunc. 
- QueryFunc(scans []interface{}, f func(QueryFuncRow) error) (pgconn.CommandTag, error) - - // Close closes the batch operation. This must be called before the underlying connection can be used again. Any error - // that occurred during a batch operation may have made it impossible to resyncronize the connection with the server. - // In this case the underlying connection will have been closed. Close is safe to call multiple times. - Close() error -} - -type batchResults struct { - ctx context.Context - conn *Conn - mrr *pgconn.MultiResultReader - err error - b *Batch - ix int - closed bool -} - -// Exec reads the results from the next query in the batch as if the query has been sent with Exec. -func (br *batchResults) Exec() (pgconn.CommandTag, error) { - if br.err != nil { - return nil, br.err - } - if br.closed { - return nil, fmt.Errorf("batch already closed") - } - - query, arguments, _ := br.nextQueryAndArgs() - - if !br.mrr.NextResult() { - err := br.mrr.Close() - if err == nil { - err = errors.New("no result") - } - if br.conn.shouldLog(LogLevelError) { - br.conn.log(br.ctx, LogLevelError, "BatchResult.Exec", map[string]interface{}{ - "sql": query, - "args": logQueryArgs(arguments), - "err": err, - }) - } - return nil, err - } - - commandTag, err := br.mrr.ResultReader().Close() - - if err != nil { - if br.conn.shouldLog(LogLevelError) { - br.conn.log(br.ctx, LogLevelError, "BatchResult.Exec", map[string]interface{}{ - "sql": query, - "args": logQueryArgs(arguments), - "err": err, - }) - } - } else if br.conn.shouldLog(LogLevelInfo) { - br.conn.log(br.ctx, LogLevelInfo, "BatchResult.Exec", map[string]interface{}{ - "sql": query, - "args": logQueryArgs(arguments), - "commandTag": commandTag, - }) - } - - return commandTag, err -} - -// Query reads the results from the next query in the batch as if the query has been sent with Query. 
-func (br *batchResults) Query() (Rows, error) { - query, arguments, ok := br.nextQueryAndArgs() - if !ok { - query = "batch query" - } - - if br.err != nil { - return &connRows{err: br.err, closed: true}, br.err - } - - if br.closed { - alreadyClosedErr := fmt.Errorf("batch already closed") - return &connRows{err: alreadyClosedErr, closed: true}, alreadyClosedErr - } - - rows := br.conn.getRows(br.ctx, query, arguments) - - if !br.mrr.NextResult() { - rows.err = br.mrr.Close() - if rows.err == nil { - rows.err = errors.New("no result") - } - rows.closed = true - - if br.conn.shouldLog(LogLevelError) { - br.conn.log(br.ctx, LogLevelError, "BatchResult.Query", map[string]interface{}{ - "sql": query, - "args": logQueryArgs(arguments), - "err": rows.err, - }) - } - - return rows, rows.err - } - - rows.resultReader = br.mrr.ResultReader() - return rows, nil -} - -// QueryFunc reads the results from the next query in the batch as if the query has been sent with Conn.QueryFunc. -func (br *batchResults) QueryFunc(scans []interface{}, f func(QueryFuncRow) error) (pgconn.CommandTag, error) { - if br.closed { - return nil, fmt.Errorf("batch already closed") - } - - rows, err := br.Query() - if err != nil { - return nil, err - } - defer rows.Close() - - for rows.Next() { - err = rows.Scan(scans...) - if err != nil { - return nil, err - } - - err = f(rows) - if err != nil { - return nil, err - } - } - - if err := rows.Err(); err != nil { - return nil, err - } - - return rows.CommandTag(), nil -} - -// QueryRow reads the results from the next query in the batch as if the query has been sent with QueryRow. -func (br *batchResults) QueryRow() Row { - rows, _ := br.Query() - return (*connRow)(rows.(*connRows)) - -} - -// Close closes the batch operation. Any error that occurred during a batch operation may have made it impossible to -// resynchronize the connection with the server. In this case the underlying connection will have been closed. 
-func (br *batchResults) Close() error { - if br.err != nil { - return br.err - } - - if br.closed { - return nil - } - br.closed = true - - // log any queries that haven't yet been logged by Exec or Query - for { - query, args, ok := br.nextQueryAndArgs() - if !ok { - break - } - - if br.conn.shouldLog(LogLevelInfo) { - br.conn.log(br.ctx, LogLevelInfo, "BatchResult.Close", map[string]interface{}{ - "sql": query, - "args": logQueryArgs(args), - }) - } - } - - return br.mrr.Close() -} - -func (br *batchResults) nextQueryAndArgs() (query string, args []interface{}, ok bool) { - if br.b != nil && br.ix < len(br.b.items) { - bi := br.b.items[br.ix] - query = bi.query - args = bi.arguments - ok = true - br.ix++ - } - return -} diff --git a/vendor/github.com/jackc/pgx/v4/conn.go b/vendor/github.com/jackc/pgx/v4/conn.go deleted file mode 100644 index 6f83f497..00000000 --- a/vendor/github.com/jackc/pgx/v4/conn.go +++ /dev/null @@ -1,857 +0,0 @@ -package pgx - -import ( - "context" - "errors" - "fmt" - "strconv" - "strings" - "time" - - "github.com/jackc/pgconn" - "github.com/jackc/pgconn/stmtcache" - "github.com/jackc/pgproto3/v2" - "github.com/jackc/pgtype" - "github.com/jackc/pgx/v4/internal/sanitize" -) - -// ConnConfig contains all the options used to establish a connection. It must be created by ParseConfig and -// then it can be modified. A manually initialized ConnConfig will cause ConnectConfig to panic. -type ConnConfig struct { - pgconn.Config - Logger Logger - LogLevel LogLevel - - // Original connection string that was parsed into config. - connString string - - // BuildStatementCache creates the stmtcache.Cache implementation for connections created with this config. Set - // to nil to disable automatic prepared statements. - BuildStatementCache BuildStatementCacheFunc - - // PreferSimpleProtocol disables implicit prepared statement usage. By default pgx automatically uses the extended - // protocol. 
This can improve performance due to being able to use the binary format. It also does not rely on client - // side parameter sanitization. However, it does incur two round-trips per query (unless using a prepared statement) - // and may be incompatible with proxies such as PGBouncer. Setting PreferSimpleProtocol causes the simple protocol to be - // used by default. The same functionality can be controlled on a per query basis by setting - // QueryExOptions.SimpleProtocol. - PreferSimpleProtocol bool - - createdByParseConfig bool // Used to enforce created by ParseConfig rule. -} - -// Copy returns a deep copy of the config that is safe to use and modify. -// The only exception is the tls.Config: -// according to the tls.Config docs it must not be modified after creation. -func (cc *ConnConfig) Copy() *ConnConfig { - newConfig := new(ConnConfig) - *newConfig = *cc - newConfig.Config = *newConfig.Config.Copy() - return newConfig -} - -// ConnString returns the connection string as parsed by pgx.ParseConfig into pgx.ConnConfig. -func (cc *ConnConfig) ConnString() string { return cc.connString } - -// BuildStatementCacheFunc is a function that can be used to create a stmtcache.Cache implementation for a connection. -type BuildStatementCacheFunc func(conn *pgconn.PgConn) stmtcache.Cache - -// Conn is a PostgreSQL connection handle. It is not safe for concurrent usage. Use a connection pool to manage access -// to multiple database connections from multiple goroutines. -type Conn struct { - pgConn *pgconn.PgConn - config *ConnConfig // config used when establishing this connection - preparedStatements map[string]*pgconn.StatementDescription - stmtcache stmtcache.Cache - logger Logger - logLevel LogLevel - - notifications []*pgconn.Notification - - doneChan chan struct{} - closedChan chan error - - connInfo *pgtype.ConnInfo - - wbuf []byte - eqb extendedQueryBuilder -} - -// Identifier is a PostgreSQL identifier or name. 
Identifiers can be composed of -// multiple parts such as ["schema", "table"] or ["table", "column"]. -type Identifier []string - -// Sanitize returns a sanitized string safe for SQL interpolation. -func (ident Identifier) Sanitize() string { - parts := make([]string, len(ident)) - for i := range ident { - s := strings.ReplaceAll(ident[i], string([]byte{0}), "") - parts[i] = `"` + strings.ReplaceAll(s, `"`, `""`) + `"` - } - return strings.Join(parts, ".") -} - -// ErrNoRows occurs when rows are expected but none are returned. -var ErrNoRows = errors.New("no rows in result set") - -// ErrInvalidLogLevel occurs on attempt to set an invalid log level. -var ErrInvalidLogLevel = errors.New("invalid log level") - -// Connect establishes a connection with a PostgreSQL server with a connection string. See -// pgconn.Connect for details. -func Connect(ctx context.Context, connString string) (*Conn, error) { - connConfig, err := ParseConfig(connString) - if err != nil { - return nil, err - } - return connect(ctx, connConfig) -} - -// ConnectConfig establishes a connection with a PostgreSQL server with a configuration struct. -// connConfig must have been created by ParseConfig. -func ConnectConfig(ctx context.Context, connConfig *ConnConfig) (*Conn, error) { - return connect(ctx, connConfig) -} - -// ParseConfig creates a ConnConfig from a connection string. ParseConfig handles all options that pgconn.ParseConfig -// does. In addition, it accepts the following options: -// -// statement_cache_capacity -// The maximum size of the automatic statement cache. Set to 0 to disable automatic statement caching. Default: 512. -// -// statement_cache_mode -// Possible values: "prepare" and "describe". "prepare" will create prepared statements on the PostgreSQL server. -// "describe" will use the anonymous prepared statement to describe a statement without creating a statement on the -// server. 
"describe" is primarily useful when the environment does not allow prepared statements such as when -// running a connection pooler like PgBouncer. Default: "prepare" -// -// prefer_simple_protocol -// Possible values: "true" and "false". Use the simple protocol instead of extended protocol. Default: false -func ParseConfig(connString string) (*ConnConfig, error) { - config, err := pgconn.ParseConfig(connString) - if err != nil { - return nil, err - } - - var buildStatementCache BuildStatementCacheFunc - statementCacheCapacity := 512 - statementCacheMode := stmtcache.ModePrepare - if s, ok := config.RuntimeParams["statement_cache_capacity"]; ok { - delete(config.RuntimeParams, "statement_cache_capacity") - n, err := strconv.ParseInt(s, 10, 32) - if err != nil { - return nil, fmt.Errorf("cannot parse statement_cache_capacity: %w", err) - } - statementCacheCapacity = int(n) - } - - if s, ok := config.RuntimeParams["statement_cache_mode"]; ok { - delete(config.RuntimeParams, "statement_cache_mode") - switch s { - case "prepare": - statementCacheMode = stmtcache.ModePrepare - case "describe": - statementCacheMode = stmtcache.ModeDescribe - default: - return nil, fmt.Errorf("invalid statement_cache_mod: %s", s) - } - } - - if statementCacheCapacity > 0 { - buildStatementCache = func(conn *pgconn.PgConn) stmtcache.Cache { - return stmtcache.New(conn, statementCacheMode, statementCacheCapacity) - } - } - - preferSimpleProtocol := false - if s, ok := config.RuntimeParams["prefer_simple_protocol"]; ok { - delete(config.RuntimeParams, "prefer_simple_protocol") - if b, err := strconv.ParseBool(s); err == nil { - preferSimpleProtocol = b - } else { - return nil, fmt.Errorf("invalid prefer_simple_protocol: %v", err) - } - } - - connConfig := &ConnConfig{ - Config: *config, - createdByParseConfig: true, - LogLevel: LogLevelInfo, - BuildStatementCache: buildStatementCache, - PreferSimpleProtocol: preferSimpleProtocol, - connString: connString, - } - - return connConfig, nil -} - 
-func connect(ctx context.Context, config *ConnConfig) (c *Conn, err error) { - // Default values are set in ParseConfig. Enforce initial creation by ParseConfig rather than setting defaults from - // zero values. - if !config.createdByParseConfig { - panic("config must be created by ParseConfig") - } - originalConfig := config - - // This isn't really a deep copy. But it is enough to avoid the config.Config.OnNotification mutation from affecting - // other connections with the same config. See https://github.com/jackc/pgx/issues/618. - { - configCopy := *config - config = &configCopy - } - - c = &Conn{ - config: originalConfig, - connInfo: pgtype.NewConnInfo(), - logLevel: config.LogLevel, - logger: config.Logger, - } - - // Only install pgx notification system if no other callback handler is present. - if config.Config.OnNotification == nil { - config.Config.OnNotification = c.bufferNotifications - } else { - if c.shouldLog(LogLevelDebug) { - c.log(ctx, LogLevelDebug, "pgx notification handler disabled by application supplied OnNotification", map[string]interface{}{"host": config.Config.Host}) - } - } - - if c.shouldLog(LogLevelInfo) { - c.log(ctx, LogLevelInfo, "Dialing PostgreSQL server", map[string]interface{}{"host": config.Config.Host}) - } - c.pgConn, err = pgconn.ConnectConfig(ctx, &config.Config) - if err != nil { - if c.shouldLog(LogLevelError) { - c.log(ctx, LogLevelError, "connect failed", map[string]interface{}{"err": err}) - } - return nil, err - } - - c.preparedStatements = make(map[string]*pgconn.StatementDescription) - c.doneChan = make(chan struct{}) - c.closedChan = make(chan error) - c.wbuf = make([]byte, 0, 1024) - - if c.config.BuildStatementCache != nil { - c.stmtcache = c.config.BuildStatementCache(c.pgConn) - } - - // Replication connections can't execute the queries to - // populate the c.PgTypes and c.pgsqlAfInet - if _, ok := config.Config.RuntimeParams["replication"]; ok { - return c, nil - } - - return c, nil -} - -// Close closes a 
connection. It is safe to call Close on an already closed -// connection. -func (c *Conn) Close(ctx context.Context) error { - if c.IsClosed() { - return nil - } - - err := c.pgConn.Close(ctx) - if c.shouldLog(LogLevelInfo) { - c.log(ctx, LogLevelInfo, "closed connection", nil) - } - return err -} - -// Prepare creates a prepared statement with name and sql. sql can contain placeholders -// for bound parameters. These placeholders are referenced positionally as $1, $2, etc. -// -// Prepare is idempotent; i.e. it is safe to call Prepare multiple times with the same -// name and sql arguments. This allows a code path to Prepare and Query/Exec without -// concern for whether the statement has already been prepared. -func (c *Conn) Prepare(ctx context.Context, name, sql string) (sd *pgconn.StatementDescription, err error) { - if name != "" { - var ok bool - if sd, ok = c.preparedStatements[name]; ok && sd.SQL == sql { - return sd, nil - } - } - - if c.shouldLog(LogLevelError) { - defer func() { - if err != nil { - c.log(ctx, LogLevelError, "Prepare failed", map[string]interface{}{"err": err, "name": name, "sql": sql}) - } - }() - } - - sd, err = c.pgConn.Prepare(ctx, name, sql, nil) - if err != nil { - return nil, err - } - - if name != "" { - c.preparedStatements[name] = sd - } - - return sd, nil -} - -// Deallocate releases a prepared statement -func (c *Conn) Deallocate(ctx context.Context, name string) error { - delete(c.preparedStatements, name) - _, err := c.pgConn.Exec(ctx, "deallocate "+quoteIdentifier(name)).ReadAll() - return err -} - -func (c *Conn) bufferNotifications(_ *pgconn.PgConn, n *pgconn.Notification) { - c.notifications = append(c.notifications, n) -} - -// WaitForNotification waits for a PostgreSQL notification. It wraps the underlying pgconn notification system in a -// slightly more convenient form. 
-func (c *Conn) WaitForNotification(ctx context.Context) (*pgconn.Notification, error) { - var n *pgconn.Notification - - // Return already received notification immediately - if len(c.notifications) > 0 { - n = c.notifications[0] - c.notifications = c.notifications[1:] - return n, nil - } - - err := c.pgConn.WaitForNotification(ctx) - if len(c.notifications) > 0 { - n = c.notifications[0] - c.notifications = c.notifications[1:] - } - return n, err -} - -// IsClosed reports if the connection has been closed. -func (c *Conn) IsClosed() bool { - return c.pgConn.IsClosed() -} - -func (c *Conn) die(err error) { - if c.IsClosed() { - return - } - - ctx, cancel := context.WithCancel(context.Background()) - cancel() // force immediate hard cancel - c.pgConn.Close(ctx) -} - -func (c *Conn) shouldLog(lvl LogLevel) bool { - return c.logger != nil && c.logLevel >= lvl -} - -func (c *Conn) log(ctx context.Context, lvl LogLevel, msg string, data map[string]interface{}) { - if data == nil { - data = map[string]interface{}{} - } - if c.pgConn != nil && c.pgConn.PID() != 0 { - data["pid"] = c.pgConn.PID() - } - - c.logger.Log(ctx, lvl, msg, data) -} - -func quoteIdentifier(s string) string { - return `"` + strings.ReplaceAll(s, `"`, `""`) + `"` -} - -// Ping executes an empty sql statement against the *Conn -// If the sql returns without error, the database Ping is considered successful, otherwise, the error is returned. -func (c *Conn) Ping(ctx context.Context) error { - _, err := c.Exec(ctx, ";") - return err -} - -// PgConn returns the underlying *pgconn.PgConn. This is an escape hatch method that allows lower level access to the -// PostgreSQL connection than pgx exposes. -// -// It is strongly recommended that the connection be idle (no in-progress queries) before the underlying *pgconn.PgConn -// is used and the connection must be returned to the same state before any *pgx.Conn methods are again used. 
-func (c *Conn) PgConn() *pgconn.PgConn { return c.pgConn } - -// StatementCache returns the statement cache used for this connection. -func (c *Conn) StatementCache() stmtcache.Cache { return c.stmtcache } - -// ConnInfo returns the connection info used for this connection. -func (c *Conn) ConnInfo() *pgtype.ConnInfo { return c.connInfo } - -// Config returns a copy of config that was used to establish this connection. -func (c *Conn) Config() *ConnConfig { return c.config.Copy() } - -// Exec executes sql. sql can be either a prepared statement name or an SQL string. arguments should be referenced -// positionally from the sql string as $1, $2, etc. -func (c *Conn) Exec(ctx context.Context, sql string, arguments ...interface{}) (pgconn.CommandTag, error) { - startTime := time.Now() - - commandTag, err := c.exec(ctx, sql, arguments...) - if err != nil { - if c.shouldLog(LogLevelError) { - endTime := time.Now() - c.log(ctx, LogLevelError, "Exec", map[string]interface{}{"sql": sql, "args": logQueryArgs(arguments), "err": err, "time": endTime.Sub(startTime)}) - } - return commandTag, err - } - - if c.shouldLog(LogLevelInfo) { - endTime := time.Now() - c.log(ctx, LogLevelInfo, "Exec", map[string]interface{}{"sql": sql, "args": logQueryArgs(arguments), "time": endTime.Sub(startTime), "commandTag": commandTag}) - } - - return commandTag, err -} - -func (c *Conn) exec(ctx context.Context, sql string, arguments ...interface{}) (commandTag pgconn.CommandTag, err error) { - simpleProtocol := c.config.PreferSimpleProtocol - -optionLoop: - for len(arguments) > 0 { - switch arg := arguments[0].(type) { - case QuerySimpleProtocol: - simpleProtocol = bool(arg) - arguments = arguments[1:] - default: - break optionLoop - } - } - - if sd, ok := c.preparedStatements[sql]; ok { - return c.execPrepared(ctx, sd, arguments) - } - - if simpleProtocol { - return c.execSimpleProtocol(ctx, sql, arguments) - } - - if len(arguments) == 0 { - return c.execSimpleProtocol(ctx, sql, arguments) - } 
- - if c.stmtcache != nil { - sd, err := c.stmtcache.Get(ctx, sql) - if err != nil { - return nil, err - } - - if c.stmtcache.Mode() == stmtcache.ModeDescribe { - return c.execParams(ctx, sd, arguments) - } - return c.execPrepared(ctx, sd, arguments) - } - - sd, err := c.Prepare(ctx, "", sql) - if err != nil { - return nil, err - } - return c.execPrepared(ctx, sd, arguments) -} - -func (c *Conn) execSimpleProtocol(ctx context.Context, sql string, arguments []interface{}) (commandTag pgconn.CommandTag, err error) { - if len(arguments) > 0 { - sql, err = c.sanitizeForSimpleQuery(sql, arguments...) - if err != nil { - return nil, err - } - } - - mrr := c.pgConn.Exec(ctx, sql) - for mrr.NextResult() { - commandTag, err = mrr.ResultReader().Close() - } - err = mrr.Close() - return commandTag, err -} - -func (c *Conn) execParamsAndPreparedPrefix(sd *pgconn.StatementDescription, arguments []interface{}) error { - if len(sd.ParamOIDs) != len(arguments) { - return fmt.Errorf("expected %d arguments, got %d", len(sd.ParamOIDs), len(arguments)) - } - - c.eqb.Reset() - - args, err := convertDriverValuers(arguments) - if err != nil { - return err - } - - for i := range args { - err = c.eqb.AppendParam(c.connInfo, sd.ParamOIDs[i], args[i]) - if err != nil { - return err - } - } - - for i := range sd.Fields { - c.eqb.AppendResultFormat(c.ConnInfo().ResultFormatCodeForOID(sd.Fields[i].DataTypeOID)) - } - - return nil -} - -func (c *Conn) execParams(ctx context.Context, sd *pgconn.StatementDescription, arguments []interface{}) (pgconn.CommandTag, error) { - err := c.execParamsAndPreparedPrefix(sd, arguments) - if err != nil { - return nil, err - } - - result := c.pgConn.ExecParams(ctx, sd.SQL, c.eqb.paramValues, sd.ParamOIDs, c.eqb.paramFormats, c.eqb.resultFormats).Read() - c.eqb.Reset() // Allow c.eqb internal memory to be GC'ed as soon as possible. 
- return result.CommandTag, result.Err -} - -func (c *Conn) execPrepared(ctx context.Context, sd *pgconn.StatementDescription, arguments []interface{}) (pgconn.CommandTag, error) { - err := c.execParamsAndPreparedPrefix(sd, arguments) - if err != nil { - return nil, err - } - - result := c.pgConn.ExecPrepared(ctx, sd.Name, c.eqb.paramValues, c.eqb.paramFormats, c.eqb.resultFormats).Read() - c.eqb.Reset() // Allow c.eqb internal memory to be GC'ed as soon as possible. - return result.CommandTag, result.Err -} - -func (c *Conn) getRows(ctx context.Context, sql string, args []interface{}) *connRows { - r := &connRows{} - - r.ctx = ctx - r.logger = c - r.connInfo = c.connInfo - r.startTime = time.Now() - r.sql = sql - r.args = args - r.conn = c - - return r -} - -// QuerySimpleProtocol controls whether the simple or extended protocol is used to send the query. -type QuerySimpleProtocol bool - -// QueryResultFormats controls the result format (text=0, binary=1) of a query by result column position. -type QueryResultFormats []int16 - -// QueryResultFormatsByOID controls the result format (text=0, binary=1) of a query by the result column OID. -type QueryResultFormatsByOID map[uint32]int16 - -// Query sends a query to the server and returns a Rows to read the results. Only errors encountered sending the query -// and initializing Rows will be returned. Err() on the returned Rows must be checked after the Rows is closed to -// determine if the query executed successfully. -// -// The returned Rows must be closed before the connection can be used again. It is safe to attempt to read from the -// returned Rows even if an error is returned. The error will be available in rows.Err() after rows are closed. It -// is allowed to ignore the error returned from Query and handle it in Rows. 
-// -// Err() on the returned Rows must be checked after the Rows is closed to determine if the query executed successfully -// as some errors can only be detected by reading the entire response. e.g. A divide by zero error on the last row. -// -// For extra control over how the query is executed, the types QuerySimpleProtocol, QueryResultFormats, and -// QueryResultFormatsByOID may be used as the first args to control exactly how the query is executed. This is rarely -// needed. See the documentation for those types for details. -func (c *Conn) Query(ctx context.Context, sql string, args ...interface{}) (Rows, error) { - var resultFormats QueryResultFormats - var resultFormatsByOID QueryResultFormatsByOID - simpleProtocol := c.config.PreferSimpleProtocol - -optionLoop: - for len(args) > 0 { - switch arg := args[0].(type) { - case QueryResultFormats: - resultFormats = arg - args = args[1:] - case QueryResultFormatsByOID: - resultFormatsByOID = arg - args = args[1:] - case QuerySimpleProtocol: - simpleProtocol = bool(arg) - args = args[1:] - default: - break optionLoop - } - } - - rows := c.getRows(ctx, sql, args) - - var err error - sd, ok := c.preparedStatements[sql] - - if simpleProtocol && !ok { - sql, err = c.sanitizeForSimpleQuery(sql, args...) 
- if err != nil { - rows.fatal(err) - return rows, err - } - - mrr := c.pgConn.Exec(ctx, sql) - if mrr.NextResult() { - rows.resultReader = mrr.ResultReader() - rows.multiResultReader = mrr - } else { - err = mrr.Close() - rows.fatal(err) - return rows, err - } - - return rows, nil - } - - c.eqb.Reset() - - if !ok { - if c.stmtcache != nil { - sd, err = c.stmtcache.Get(ctx, sql) - if err != nil { - rows.fatal(err) - return rows, rows.err - } - } else { - sd, err = c.pgConn.Prepare(ctx, "", sql, nil) - if err != nil { - rows.fatal(err) - return rows, rows.err - } - } - } - if len(sd.ParamOIDs) != len(args) { - rows.fatal(fmt.Errorf("expected %d arguments, got %d", len(sd.ParamOIDs), len(args))) - return rows, rows.err - } - - rows.sql = sd.SQL - - args, err = convertDriverValuers(args) - if err != nil { - rows.fatal(err) - return rows, rows.err - } - - for i := range args { - err = c.eqb.AppendParam(c.connInfo, sd.ParamOIDs[i], args[i]) - if err != nil { - rows.fatal(err) - return rows, rows.err - } - } - - if resultFormatsByOID != nil { - resultFormats = make([]int16, len(sd.Fields)) - for i := range resultFormats { - resultFormats[i] = resultFormatsByOID[uint32(sd.Fields[i].DataTypeOID)] - } - } - - if resultFormats == nil { - for i := range sd.Fields { - c.eqb.AppendResultFormat(c.ConnInfo().ResultFormatCodeForOID(sd.Fields[i].DataTypeOID)) - } - - resultFormats = c.eqb.resultFormats - } - - if c.stmtcache != nil && c.stmtcache.Mode() == stmtcache.ModeDescribe && !ok { - rows.resultReader = c.pgConn.ExecParams(ctx, sql, c.eqb.paramValues, sd.ParamOIDs, c.eqb.paramFormats, resultFormats) - } else { - rows.resultReader = c.pgConn.ExecPrepared(ctx, sd.Name, c.eqb.paramValues, c.eqb.paramFormats, resultFormats) - } - - c.eqb.Reset() // Allow c.eqb internal memory to be GC'ed as soon as possible. - - return rows, rows.err -} - -// QueryRow is a convenience wrapper over Query. Any error that occurs while -// querying is deferred until calling Scan on the returned Row. 
That Row will -// error with ErrNoRows if no rows are returned. -func (c *Conn) QueryRow(ctx context.Context, sql string, args ...interface{}) Row { - rows, _ := c.Query(ctx, sql, args...) - return (*connRow)(rows.(*connRows)) -} - -// QueryFuncRow is the argument to the QueryFunc callback function. -// -// QueryFuncRow is an interface instead of a struct to allow tests to mock QueryFunc. However, adding a method to an -// interface is technically a breaking change. Because of this the QueryFuncRow interface is partially excluded from -// semantic version requirements. Methods will not be removed or changed, but new methods may be added. -type QueryFuncRow interface { - FieldDescriptions() []pgproto3.FieldDescription - - // RawValues returns the unparsed bytes of the row values. The returned [][]byte is only valid during the current - // function call. However, the underlying byte data is safe to retain a reference to and mutate. - RawValues() [][]byte -} - -// QueryFunc executes sql with args. For each row returned by the query the values will be scanned into the elements of -// scans and f will be called. If any row fails to scan or f returns an error the query will be aborted and the error -// will be returned. -func (c *Conn) QueryFunc(ctx context.Context, sql string, args []interface{}, scans []interface{}, f func(QueryFuncRow) error) (pgconn.CommandTag, error) { - rows, err := c.Query(ctx, sql, args...) - if err != nil { - return nil, err - } - defer rows.Close() - - for rows.Next() { - err = rows.Scan(scans...) - if err != nil { - return nil, err - } - - err = f(rows) - if err != nil { - return nil, err - } - } - - if err := rows.Err(); err != nil { - return nil, err - } - - return rows.CommandTag(), nil -} - -// SendBatch sends all queued queries to the server at once. All queries are run in an implicit transaction unless -// explicit transaction control statements are executed. The returned BatchResults must be closed before the connection -// is used again. 
-func (c *Conn) SendBatch(ctx context.Context, b *Batch) BatchResults { - startTime := time.Now() - - simpleProtocol := c.config.PreferSimpleProtocol - var sb strings.Builder - if simpleProtocol { - for i, bi := range b.items { - if i > 0 { - sb.WriteByte(';') - } - sql, err := c.sanitizeForSimpleQuery(bi.query, bi.arguments...) - if err != nil { - return &batchResults{ctx: ctx, conn: c, err: err} - } - sb.WriteString(sql) - } - mrr := c.pgConn.Exec(ctx, sb.String()) - return &batchResults{ - ctx: ctx, - conn: c, - mrr: mrr, - b: b, - ix: 0, - } - } - - distinctUnpreparedQueries := map[string]struct{}{} - - for _, bi := range b.items { - if _, ok := c.preparedStatements[bi.query]; ok { - continue - } - distinctUnpreparedQueries[bi.query] = struct{}{} - } - - var stmtCache stmtcache.Cache - if len(distinctUnpreparedQueries) > 0 { - if c.stmtcache != nil && c.stmtcache.Cap() >= len(distinctUnpreparedQueries) { - stmtCache = c.stmtcache - } else { - stmtCache = stmtcache.New(c.pgConn, stmtcache.ModeDescribe, len(distinctUnpreparedQueries)) - } - - for sql, _ := range distinctUnpreparedQueries { - _, err := stmtCache.Get(ctx, sql) - if err != nil { - return &batchResults{ctx: ctx, conn: c, err: err} - } - } - } - - batch := &pgconn.Batch{} - - for _, bi := range b.items { - c.eqb.Reset() - - sd := c.preparedStatements[bi.query] - if sd == nil { - var err error - sd, err = stmtCache.Get(ctx, bi.query) - if err != nil { - return c.logBatchResults(ctx, startTime, &batchResults{ctx: ctx, conn: c, err: err}) - } - } - - if len(sd.ParamOIDs) != len(bi.arguments) { - return c.logBatchResults(ctx, startTime, &batchResults{ctx: ctx, conn: c, err: fmt.Errorf("mismatched param and argument count")}) - } - - args, err := convertDriverValuers(bi.arguments) - if err != nil { - return c.logBatchResults(ctx, startTime, &batchResults{ctx: ctx, conn: c, err: err}) - } - - for i := range args { - err = c.eqb.AppendParam(c.connInfo, sd.ParamOIDs[i], args[i]) - if err != nil { - return 
c.logBatchResults(ctx, startTime, &batchResults{ctx: ctx, conn: c, err: err}) - } - } - - for i := range sd.Fields { - c.eqb.AppendResultFormat(c.ConnInfo().ResultFormatCodeForOID(sd.Fields[i].DataTypeOID)) - } - - if sd.Name == "" { - batch.ExecParams(bi.query, c.eqb.paramValues, sd.ParamOIDs, c.eqb.paramFormats, c.eqb.resultFormats) - } else { - batch.ExecPrepared(sd.Name, c.eqb.paramValues, c.eqb.paramFormats, c.eqb.resultFormats) - } - } - - c.eqb.Reset() // Allow c.eqb internal memory to be GC'ed as soon as possible. - - mrr := c.pgConn.ExecBatch(ctx, batch) - - return c.logBatchResults(ctx, startTime, &batchResults{ - ctx: ctx, - conn: c, - mrr: mrr, - b: b, - ix: 0, - }) -} - -func (c *Conn) logBatchResults(ctx context.Context, startTime time.Time, results *batchResults) BatchResults { - if results.err != nil { - if c.shouldLog(LogLevelError) { - endTime := time.Now() - c.log(ctx, LogLevelError, "SendBatch", map[string]interface{}{"err": results.err, "time": endTime.Sub(startTime)}) - } - return results - } - - if c.shouldLog(LogLevelInfo) { - endTime := time.Now() - c.log(ctx, LogLevelInfo, "SendBatch", map[string]interface{}{"batchLen": results.b.Len(), "time": endTime.Sub(startTime)}) - } - - return results -} - -func (c *Conn) sanitizeForSimpleQuery(sql string, args ...interface{}) (string, error) { - if c.pgConn.ParameterStatus("standard_conforming_strings") != "on" { - return "", errors.New("simple protocol queries must be run with standard_conforming_strings=on") - } - - if c.pgConn.ParameterStatus("client_encoding") != "UTF8" { - return "", errors.New("simple protocol queries must be run with client_encoding=UTF8") - } - - var err error - valueArgs := make([]interface{}, len(args)) - for i, a := range args { - valueArgs[i], err = convertSimpleArgument(c.connInfo, a) - if err != nil { - return "", err - } - } - - return sanitize.SanitizeSQL(sql, valueArgs...) 
-} diff --git a/vendor/github.com/jackc/pgx/v4/copy_from.go b/vendor/github.com/jackc/pgx/v4/copy_from.go deleted file mode 100644 index 49139d05..00000000 --- a/vendor/github.com/jackc/pgx/v4/copy_from.go +++ /dev/null @@ -1,211 +0,0 @@ -package pgx - -import ( - "bytes" - "context" - "fmt" - "io" - "time" - - "github.com/jackc/pgconn" - "github.com/jackc/pgio" -) - -// CopyFromRows returns a CopyFromSource interface over the provided rows slice -// making it usable by *Conn.CopyFrom. -func CopyFromRows(rows [][]interface{}) CopyFromSource { - return ©FromRows{rows: rows, idx: -1} -} - -type copyFromRows struct { - rows [][]interface{} - idx int -} - -func (ctr *copyFromRows) Next() bool { - ctr.idx++ - return ctr.idx < len(ctr.rows) -} - -func (ctr *copyFromRows) Values() ([]interface{}, error) { - return ctr.rows[ctr.idx], nil -} - -func (ctr *copyFromRows) Err() error { - return nil -} - -// CopyFromSlice returns a CopyFromSource interface over a dynamic func -// making it usable by *Conn.CopyFrom. -func CopyFromSlice(length int, next func(int) ([]interface{}, error)) CopyFromSource { - return ©FromSlice{next: next, idx: -1, len: length} -} - -type copyFromSlice struct { - next func(int) ([]interface{}, error) - idx int - len int - err error -} - -func (cts *copyFromSlice) Next() bool { - cts.idx++ - return cts.idx < cts.len -} - -func (cts *copyFromSlice) Values() ([]interface{}, error) { - values, err := cts.next(cts.idx) - if err != nil { - cts.err = err - } - return values, err -} - -func (cts *copyFromSlice) Err() error { - return cts.err -} - -// CopyFromSource is the interface used by *Conn.CopyFrom as the source for copy data. -type CopyFromSource interface { - // Next returns true if there is another row and makes the next row data - // available to Values(). When there are no more rows available or an error - // has occurred it returns false. - Next() bool - - // Values returns the values for the current row. 
- Values() ([]interface{}, error) - - // Err returns any error that has been encountered by the CopyFromSource. If - // this is not nil *Conn.CopyFrom will abort the copy. - Err() error -} - -type copyFrom struct { - conn *Conn - tableName Identifier - columnNames []string - rowSrc CopyFromSource - readerErrChan chan error -} - -func (ct *copyFrom) run(ctx context.Context) (int64, error) { - quotedTableName := ct.tableName.Sanitize() - cbuf := &bytes.Buffer{} - for i, cn := range ct.columnNames { - if i != 0 { - cbuf.WriteString(", ") - } - cbuf.WriteString(quoteIdentifier(cn)) - } - quotedColumnNames := cbuf.String() - - sd, err := ct.conn.Prepare(ctx, "", fmt.Sprintf("select %s from %s", quotedColumnNames, quotedTableName)) - if err != nil { - return 0, err - } - - r, w := io.Pipe() - doneChan := make(chan struct{}) - - go func() { - defer close(doneChan) - - // Purposely NOT using defer w.Close(). See https://github.com/golang/go/issues/24283. - buf := ct.conn.wbuf - - buf = append(buf, "PGCOPY\n\377\r\n\000"...) 
- buf = pgio.AppendInt32(buf, 0) - buf = pgio.AppendInt32(buf, 0) - - moreRows := true - for moreRows { - var err error - moreRows, buf, err = ct.buildCopyBuf(buf, sd) - if err != nil { - w.CloseWithError(err) - return - } - - if ct.rowSrc.Err() != nil { - w.CloseWithError(ct.rowSrc.Err()) - return - } - - if len(buf) > 0 { - _, err = w.Write(buf) - if err != nil { - w.Close() - return - } - } - - buf = buf[:0] - } - - w.Close() - }() - - startTime := time.Now() - - commandTag, err := ct.conn.pgConn.CopyFrom(ctx, r, fmt.Sprintf("copy %s ( %s ) from stdin binary;", quotedTableName, quotedColumnNames)) - - r.Close() - <-doneChan - - rowsAffected := commandTag.RowsAffected() - endTime := time.Now() - if err == nil { - if ct.conn.shouldLog(LogLevelInfo) { - ct.conn.log(ctx, LogLevelInfo, "CopyFrom", map[string]interface{}{"tableName": ct.tableName, "columnNames": ct.columnNames, "time": endTime.Sub(startTime), "rowCount": rowsAffected}) - } - } else if ct.conn.shouldLog(LogLevelError) { - ct.conn.log(ctx, LogLevelError, "CopyFrom", map[string]interface{}{"err": err, "tableName": ct.tableName, "columnNames": ct.columnNames, "time": endTime.Sub(startTime)}) - } - - return rowsAffected, err -} - -func (ct *copyFrom) buildCopyBuf(buf []byte, sd *pgconn.StatementDescription) (bool, []byte, error) { - - for ct.rowSrc.Next() { - values, err := ct.rowSrc.Values() - if err != nil { - return false, nil, err - } - if len(values) != len(ct.columnNames) { - return false, nil, fmt.Errorf("expected %d values, got %d values", len(ct.columnNames), len(values)) - } - - buf = pgio.AppendInt16(buf, int16(len(ct.columnNames))) - for i, val := range values { - buf, err = encodePreparedStatementArgument(ct.conn.connInfo, buf, sd.Fields[i].DataTypeOID, val) - if err != nil { - return false, nil, err - } - } - - if len(buf) > 65536 { - return true, buf, nil - } - } - - return false, buf, nil -} - -// CopyFrom uses the PostgreSQL copy protocol to perform bulk data insertion. 
-// It returns the number of rows copied and an error. -// -// CopyFrom requires all values use the binary format. Almost all types -// implemented by pgx use the binary format by default. Types implementing -// Encoder can only be used if they encode to the binary format. -func (c *Conn) CopyFrom(ctx context.Context, tableName Identifier, columnNames []string, rowSrc CopyFromSource) (int64, error) { - ct := ©From{ - conn: c, - tableName: tableName, - columnNames: columnNames, - rowSrc: rowSrc, - readerErrChan: make(chan error), - } - - return ct.run(ctx) -} diff --git a/vendor/github.com/jackc/pgx/v4/doc.go b/vendor/github.com/jackc/pgx/v4/doc.go deleted file mode 100644 index 222f9047..00000000 --- a/vendor/github.com/jackc/pgx/v4/doc.go +++ /dev/null @@ -1,340 +0,0 @@ -// Package pgx is a PostgreSQL database driver. -/* -pgx provides lower level access to PostgreSQL than the standard database/sql. It remains as similar to the database/sql -interface as possible while providing better speed and access to PostgreSQL specific features. Import -github.com/jackc/pgx/v4/stdlib to use pgx as a database/sql compatible driver. - -Establishing a Connection - -The primary way of establishing a connection is with `pgx.Connect`. - - conn, err := pgx.Connect(context.Background(), os.Getenv("DATABASE_URL")) - -The database connection string can be in URL or DSN format. Both PostgreSQL settings and pgx settings can be specified -here. In addition, a config struct can be created by `ParseConfig` and modified before establishing the connection with -`ConnectConfig`. - - config, err := pgx.ParseConfig(os.Getenv("DATABASE_URL")) - if err != nil { - // ... - } - config.Logger = log15adapter.NewLogger(log.New("module", "pgx")) - - conn, err := pgx.ConnectConfig(context.Background(), config) - -Connection Pool - -`*pgx.Conn` represents a single connection to the database and is not concurrency safe. Use sub-package pgxpool for a -concurrency safe connection pool. 
- -Query Interface - -pgx implements Query and Scan in the familiar database/sql style. - - var sum int32 - - // Send the query to the server. The returned rows MUST be closed - // before conn can be used again. - rows, err := conn.Query(context.Background(), "select generate_series(1,$1)", 10) - if err != nil { - return err - } - - // rows.Close is called by rows.Next when all rows are read - // or an error occurs in Next or Scan. So it may optionally be - // omitted if nothing in the rows.Next loop can panic. It is - // safe to close rows multiple times. - defer rows.Close() - - // Iterate through the result set - for rows.Next() { - var n int32 - err = rows.Scan(&n) - if err != nil { - return err - } - sum += n - } - - // Any errors encountered by rows.Next or rows.Scan will be returned here - if rows.Err() != nil { - return rows.Err() - } - - // No errors found - do something with sum - -pgx also implements QueryRow in the same style as database/sql. - - var name string - var weight int64 - err := conn.QueryRow(context.Background(), "select name, weight from widgets where id=$1", 42).Scan(&name, &weight) - if err != nil { - return err - } - -Use Exec to execute a query that does not return a result set. - - commandTag, err := conn.Exec(context.Background(), "delete from widgets where id=$1", 42) - if err != nil { - return err - } - if commandTag.RowsAffected() != 1 { - return errors.New("No row found to delete") - } - -QueryFunc can be used to execute a callback function for every row. This is often easier to use than Query. - - var sum, n int32 - _, err = conn.QueryFunc( - context.Background(), - "select generate_series(1,$1)", - []interface{}{10}, - []interface{}{&n}, - func(pgx.QueryFuncRow) error { - sum += n - return nil - }, - ) - if err != nil { - return err - } - -Base Type Mapping - -pgx maps between all common base types directly between Go and PostgreSQL. 
In particular: - - Go PostgreSQL - ----------------------- - string varchar - text - - // Integers are automatically converted to any other integer type if - // it can be done without overflow or underflow. - int8 - int16 smallint - int32 int - int64 bigint - int - uint8 - uint16 - uint32 - uint64 - uint - - // Floats are strict and do not automatically convert like integers. - float32 float4 - float64 float8 - - time.Time date - timestamp - timestamptz - - []byte bytea - - -Null Mapping - -pgx can map nulls in two ways. The first is that package pgtype provides types that have a data field and a status field. -They work in a similar fashion to database/sql. The second is to use a pointer to a pointer. - - var foo pgtype.Varchar - var bar *string - err := conn.QueryRow("select foo, bar from widgets where id=$1", 42).Scan(&foo, &bar) - if err != nil { - return err - } - -Array Mapping - -pgx maps between int16, int32, int64, float32, float64, and string Go slices and the equivalent PostgreSQL array type. -Go slices of native types do not support nulls, so if a PostgreSQL array that contains a null is read into a native Go -slice an error will occur. The pgtype package includes many more array types for PostgreSQL types that do not directly -map to native Go types. - -JSON and JSONB Mapping - -pgx includes built-in support to marshal and unmarshal between Go types and the PostgreSQL JSON and JSONB types. - -Inet and CIDR Mapping - -pgx encodes from net.IPNet to and from inet and cidr PostgreSQL types. In addition, as a convenience pgx will encode -from a net.IP; it will assume a /32 netmask for IPv4 and a /128 for IPv6. - -Custom Type Support - -pgx includes support for the common data types like integers, floats, strings, dates, and times that have direct -mappings between Go and SQL. In addition, pgx uses the github.com/jackc/pgtype library to support more types. See the -documentation for that library for instructions on how to implement custom types.
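The integer rule in the mapping table above (convert between integer widths only when the value fits) amounts to a checked narrowing conversion. A sketch of that rule, assuming a hypothetical helper `toInt16` that is not part of the pgx API:

```go
package main

import (
	"fmt"
	"math"
)

// toInt16 converts an int64 to int16 only when the value is in range;
// otherwise it refuses rather than silently overflowing.
func toInt16(v int64) (int16, error) {
	if v < math.MinInt16 || v > math.MaxInt16 {
		return 0, fmt.Errorf("%d is out of range for int16", v)
	}
	return int16(v), nil
}

func main() {
	n, err := toInt16(42)
	fmt.Println(n, err) // 42 <nil>
	_, err = toInt16(70000)
	fmt.Println(err != nil) // true
}
```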
- -See example_custom_type_test.go for an example of a custom type for the PostgreSQL point type. - -pgx also includes support for custom types implementing the database/sql.Scanner and database/sql/driver.Valuer -interfaces. - -If pgx cannot natively encode a type and that type is a renamed type (e.g. type MyTime time.Time) pgx will attempt -to encode the underlying type. While this is usually desired behavior, it can produce surprising behavior if one of the -underlying type and the renamed type implements database/sql interfaces and the other implements pgx interfaces. It -is recommended that this situation be avoided by implementing pgx interfaces on the renamed type. - -Composite types and row values - -Row values and composite types are represented as pgtype.Record (https://pkg.go.dev/github.com/jackc/pgtype?tab=doc#Record). -It is possible to get values of your custom type by implementing the DecodeBinary interface. Decoding into -pgtype.Record first can simplify the process by avoiding dealing with the raw protocol directly.
- -For example: - - type MyType struct { - a int // NULL will cause decoding error - b *string // there can be NULL in this position in SQL - } - - func (t *MyType) DecodeBinary(ci *pgtype.ConnInfo, src []byte) error { - r := pgtype.Record{ - Fields: []pgtype.Value{&pgtype.Int4{}, &pgtype.Text{}}, - } - - if err := r.DecodeBinary(ci, src); err != nil { - return err - } - - if r.Status != pgtype.Present { - return errors.New("BUG: decoding should not be called on NULL value") - } - - a := r.Fields[0].(*pgtype.Int4) - b := r.Fields[1].(*pgtype.Text) - - // type compatibility is checked by AssignTo - // only lossless assignments will succeed - if err := a.AssignTo(&t.a); err != nil { - return err - } - - // AssignTo also deals with null value handling - if err := b.AssignTo(&t.b); err != nil { - return err - } - return nil - } - - result := MyType{} - err := conn.QueryRow(context.Background(), "select row(1, 'foo'::text)", pgx.QueryResultFormats{pgx.BinaryFormatCode}).Scan(&result) - -Raw Bytes Mapping - -[]byte passed as arguments to Query, QueryRow, and Exec are passed unmodified to PostgreSQL. - -Transactions - -Transactions are started by calling Begin. - - tx, err := conn.Begin(context.Background()) - if err != nil { - return err - } - // Rollback is safe to call even if the tx is already closed, so if - // the tx commits successfully, this is a no-op - defer tx.Rollback(context.Background()) - - _, err = tx.Exec(context.Background(), "insert into foo(id) values (1)") - if err != nil { - return err - } - - err = tx.Commit(context.Background()) - if err != nil { - return err - } - -The Tx returned from Begin also implements the Begin method. This can be used to implement pseudo nested transactions. -These are internally implemented with savepoints. - -Use BeginTx to control the transaction mode. - -BeginFunc and BeginTxFunc are variants that begin a transaction, execute a function, and commit or roll back the -transaction depending on the return value of the function.
These can be simpler and less error prone to use. - - err = conn.BeginFunc(context.Background(), func(tx pgx.Tx) error { - _, err := tx.Exec(context.Background(), "insert into foo(id) values (1)") - return err - }) - if err != nil { - return err - } - -Prepared Statements - -Prepared statements can be manually created with the Prepare method. However, this is rarely necessary because pgx -includes an automatic statement cache by default. Queries run through the normal Query, QueryRow, and Exec functions are -automatically prepared on first execution and the prepared statement is reused on subsequent executions. See ParseConfig -for information on how to customize or disable the statement cache. - -Copy Protocol - -Use CopyFrom to efficiently insert multiple rows at a time using the PostgreSQL copy protocol. CopyFrom accepts a -CopyFromSource interface. If the data is already in a [][]interface{} use CopyFromRows to wrap it in a CopyFromSource -interface. Or implement CopyFromSource to avoid buffering the entire data set in memory. - - rows := [][]interface{}{ - {"John", "Smith", int32(36)}, - {"Jane", "Doe", int32(29)}, - } - - copyCount, err := conn.CopyFrom( - context.Background(), - pgx.Identifier{"people"}, - []string{"first_name", "last_name", "age"}, - pgx.CopyFromRows(rows), - ) - -When you already have a typed array using CopyFromSlice can be more convenient. - - rows := []User{ - {"John", "Smith", 36}, - {"Jane", "Doe", 29}, - } - - copyCount, err := conn.CopyFrom( - context.Background(), - pgx.Identifier{"people"}, - []string{"first_name", "last_name", "age"}, - pgx.CopyFromSlice(len(rows), func(i int) ([]interface{}, error) { - return []interface{}{rows[i].FirstName, rows[i].LastName, rows[i].Age}, nil - }), - ) - -CopyFrom can be faster than an insert with as few as 5 rows. - -Listen and Notify - -pgx can listen to the PostgreSQL notification system with the `Conn.WaitForNotification` method. 
It blocks until a -notification is received or the context is canceled. - - _, err := conn.Exec(context.Background(), "listen channelname") - if err != nil { - return err - } - - notification, err := conn.WaitForNotification(context.Background()) - if err != nil { - return err - } - // do something with notification - - -Logging - -pgx defines a simple logger interface. Connections optionally accept a logger that satisfies this interface. Set -LogLevel to control logging verbosity. Adapters for github.com/inconshreveable/log15, github.com/sirupsen/logrus, -go.uber.org/zap, github.com/rs/zerolog, and the testing log are provided in the log directory. - -Lower Level PostgreSQL Functionality - -pgx is implemented on top of github.com/jackc/pgconn, a lower level PostgreSQL driver. The Conn.PgConn() method can be -used to access this lower layer. - -PgBouncer - -pgx is compatible with PgBouncer in two modes. One is when the connection has a statement cache in "describe" mode. The -other is when the connection is using the simple protocol. This can be set with the PreferSimpleProtocol config option.
-*/ -package pgx diff --git a/vendor/github.com/jackc/pgx/v4/extended_query_builder.go b/vendor/github.com/jackc/pgx/v4/extended_query_builder.go deleted file mode 100644 index d06f63fd..00000000 --- a/vendor/github.com/jackc/pgx/v4/extended_query_builder.go +++ /dev/null @@ -1,161 +0,0 @@ -package pgx - -import ( - "database/sql/driver" - "fmt" - "reflect" - - "github.com/jackc/pgtype" -) - -type extendedQueryBuilder struct { - paramValues [][]byte - paramValueBytes []byte - paramFormats []int16 - resultFormats []int16 -} - -func (eqb *extendedQueryBuilder) AppendParam(ci *pgtype.ConnInfo, oid uint32, arg interface{}) error { - f := chooseParameterFormatCode(ci, oid, arg) - eqb.paramFormats = append(eqb.paramFormats, f) - - v, err := eqb.encodeExtendedParamValue(ci, oid, f, arg) - if err != nil { - return err - } - eqb.paramValues = append(eqb.paramValues, v) - - return nil -} - -func (eqb *extendedQueryBuilder) AppendResultFormat(f int16) { - eqb.resultFormats = append(eqb.resultFormats, f) -} - -// Reset readies eqb to build another query. 
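Reset's capacity clamping below follows a common Go pattern: truncate a slice so its backing array is reused across queries, but reallocate when a past query grew the capacity beyond a bound, so one outlier query cannot pin memory forever. A standalone sketch (`reset` and `maxCap` are illustrative names, not pgx identifiers):

```go
package main

import "fmt"

// reset truncates buf for reuse, keeping the backing array unless its
// capacity has grown past maxCap, in which case it reallocates.
func reset(buf [][]byte, maxCap int) [][]byte {
	buf = buf[:0]
	if cap(buf) > maxCap {
		buf = make([][]byte, 0, maxCap)
	}
	return buf
}

func main() {
	small := reset(make([][]byte, 5, 32), 64)
	fmt.Println(len(small), cap(small)) // 0 32: capacity kept for reuse

	big := reset(make([][]byte, 5, 1024), 64)
	fmt.Println(len(big), cap(big)) // 0 64: shrunk back to the limit
}
```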
-func (eqb *extendedQueryBuilder) Reset() { - eqb.paramValues = eqb.paramValues[0:0] - eqb.paramValueBytes = eqb.paramValueBytes[0:0] - eqb.paramFormats = eqb.paramFormats[0:0] - eqb.resultFormats = eqb.resultFormats[0:0] - - if cap(eqb.paramValues) > 64 { - eqb.paramValues = make([][]byte, 0, 64) - } - - if cap(eqb.paramValueBytes) > 256 { - eqb.paramValueBytes = make([]byte, 0, 256) - } - - if cap(eqb.paramFormats) > 64 { - eqb.paramFormats = make([]int16, 0, 64) - } - if cap(eqb.resultFormats) > 64 { - eqb.resultFormats = make([]int16, 0, 64) - } -} - -func (eqb *extendedQueryBuilder) encodeExtendedParamValue(ci *pgtype.ConnInfo, oid uint32, formatCode int16, arg interface{}) ([]byte, error) { - if arg == nil { - return nil, nil - } - - refVal := reflect.ValueOf(arg) - argIsPtr := refVal.Kind() == reflect.Ptr - - if argIsPtr && refVal.IsNil() { - return nil, nil - } - - if eqb.paramValueBytes == nil { - eqb.paramValueBytes = make([]byte, 0, 128) - } - - var err error - var buf []byte - pos := len(eqb.paramValueBytes) - - if arg, ok := arg.(string); ok { - return []byte(arg), nil - } - - if formatCode == TextFormatCode { - if arg, ok := arg.(pgtype.TextEncoder); ok { - buf, err = arg.EncodeText(ci, eqb.paramValueBytes) - if err != nil { - return nil, err - } - if buf == nil { - return nil, nil - } - eqb.paramValueBytes = buf - return eqb.paramValueBytes[pos:], nil - } - } else if formatCode == BinaryFormatCode { - if arg, ok := arg.(pgtype.BinaryEncoder); ok { - buf, err = arg.EncodeBinary(ci, eqb.paramValueBytes) - if err != nil { - return nil, err - } - if buf == nil { - return nil, nil - } - eqb.paramValueBytes = buf - return eqb.paramValueBytes[pos:], nil - } - } - - if argIsPtr { - // We have already checked that arg is not pointing to nil, - // so it is safe to dereference here. 
- arg = refVal.Elem().Interface() - return eqb.encodeExtendedParamValue(ci, oid, formatCode, arg) - } - - if dt, ok := ci.DataTypeForOID(oid); ok { - value := dt.Value - err := value.Set(arg) - if err != nil { - { - if arg, ok := arg.(driver.Valuer); ok { - v, err := callValuerValue(arg) - if err != nil { - return nil, err - } - return eqb.encodeExtendedParamValue(ci, oid, formatCode, v) - } - } - - return nil, err - } - - return eqb.encodeExtendedParamValue(ci, oid, formatCode, value) - } - - // There is no data type registered for the destination OID, but maybe there is data type registered for the arg - // type. If so use it's text encoder (if available). - if dt, ok := ci.DataTypeForValue(arg); ok { - value := dt.Value - if textEncoder, ok := value.(pgtype.TextEncoder); ok { - err := value.Set(arg) - if err != nil { - return nil, err - } - - buf, err = textEncoder.EncodeText(ci, eqb.paramValueBytes) - if err != nil { - return nil, err - } - if buf == nil { - return nil, nil - } - eqb.paramValueBytes = buf - return eqb.paramValueBytes[pos:], nil - } - } - - if strippedArg, ok := stripNamedType(&refVal); ok { - return eqb.encodeExtendedParamValue(ci, oid, formatCode, strippedArg) - } - return nil, SerializationError(fmt.Sprintf("Cannot encode %T into oid %v - %T must implement Encoder or be converted to a string", arg, oid, arg)) -} diff --git a/vendor/github.com/jackc/pgx/v4/go_stdlib.go b/vendor/github.com/jackc/pgx/v4/go_stdlib.go deleted file mode 100644 index 9372f9ef..00000000 --- a/vendor/github.com/jackc/pgx/v4/go_stdlib.go +++ /dev/null @@ -1,61 +0,0 @@ -package pgx - -import ( - "database/sql/driver" - "reflect" -) - -// This file contains code copied from the Go standard library due to the -// required function not being public. - -// Copyright (c) 2009 The Go Authors. All rights reserved. 
- -// Redistribution and use in source and binary forms, with or without -// modification, are permitted provided that the following conditions are -// met: - -// * Redistributions of source code must retain the above copyright -// notice, this list of conditions and the following disclaimer. -// * Redistributions in binary form must reproduce the above -// copyright notice, this list of conditions and the following disclaimer -// in the documentation and/or other materials provided with the -// distribution. -// * Neither the name of Google Inc. nor the names of its -// contributors may be used to endorse or promote products derived from -// this software without specific prior written permission. - -// THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS -// "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT -// LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR -// A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT -// OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, -// SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT -// LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, -// DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY -// THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT -// (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE -// OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. - -// From database/sql/convert.go - -var valuerReflectType = reflect.TypeOf((*driver.Valuer)(nil)).Elem() - -// callValuerValue returns vr.Value(), with one exception: -// If vr.Value is an auto-generated method on a pointer type and the -// pointer is nil, it would panic at runtime in the panicwrap -// method. Treat it like nil instead. -// Issue 8415. 
-// -// This is so people can implement driver.Value on value types and -// still use nil pointers to those types to mean nil/NULL, just like -// string/*string. -// -// This function is mirrored in the database/sql/driver package. -func callValuerValue(vr driver.Valuer) (v driver.Value, err error) { - if rv := reflect.ValueOf(vr); rv.Kind() == reflect.Ptr && - rv.IsNil() && - rv.Type().Elem().Implements(valuerReflectType) { - return nil, nil - } - return vr.Value() -} diff --git a/vendor/github.com/jackc/pgx/v4/internal/sanitize/sanitize.go b/vendor/github.com/jackc/pgx/v4/internal/sanitize/sanitize.go deleted file mode 100644 index 5eef456c..00000000 --- a/vendor/github.com/jackc/pgx/v4/internal/sanitize/sanitize.go +++ /dev/null @@ -1,322 +0,0 @@ -package sanitize - -import ( - "bytes" - "encoding/hex" - "fmt" - "strconv" - "strings" - "time" - "unicode/utf8" -) - -// Part is either a string or an int. A string is raw SQL. An int is an -// argument placeholder. -type Part interface{} - -type Query struct { - Parts []Part -} - -// utf8.DecodeRune returns the utf8.RuneError for errors. But that is actually rune U+FFFD -- the unicode replacement -// character. utf8.RuneError is not an error if it is also width 3.
-// -// https://github.com/jackc/pgx/issues/1380 -const replacementcharacterwidth = 3 - -func (q *Query) Sanitize(args ...interface{}) (string, error) { - argUse := make([]bool, len(args)) - buf := &bytes.Buffer{} - - for _, part := range q.Parts { - var str string - switch part := part.(type) { - case string: - str = part - case int: - argIdx := part - 1 - if argIdx >= len(args) { - return "", fmt.Errorf("insufficient arguments") - } - arg := args[argIdx] - switch arg := arg.(type) { - case nil: - str = "null" - case int64: - str = strconv.FormatInt(arg, 10) - case float64: - str = strconv.FormatFloat(arg, 'f', -1, 64) - case bool: - str = strconv.FormatBool(arg) - case []byte: - str = QuoteBytes(arg) - case string: - str = QuoteString(arg) - case time.Time: - str = arg.Truncate(time.Microsecond).Format("'2006-01-02 15:04:05.999999999Z07:00:00'") - default: - return "", fmt.Errorf("invalid arg type: %T", arg) - } - argUse[argIdx] = true - default: - return "", fmt.Errorf("invalid Part type: %T", part) - } - buf.WriteString(str) - } - - for i, used := range argUse { - if !used { - return "", fmt.Errorf("unused argument: %d", i) - } - } - return buf.String(), nil -} - -func NewQuery(sql string) (*Query, error) { - l := &sqlLexer{ - src: sql, - stateFn: rawState, - } - - for l.stateFn != nil { - l.stateFn = l.stateFn(l) - } - - query := &Query{Parts: l.parts} - - return query, nil -} - -func QuoteString(str string) string { - return "'" + strings.ReplaceAll(str, "'", "''") + "'" -} - -func QuoteBytes(buf []byte) string { - return `'\x` + hex.EncodeToString(buf) + "'" -} - -type sqlLexer struct { - src string - start int - pos int - nested int // multiline comment nesting level. 
- stateFn stateFn - parts []Part -} - -type stateFn func(*sqlLexer) stateFn - -func rawState(l *sqlLexer) stateFn { - for { - r, width := utf8.DecodeRuneInString(l.src[l.pos:]) - l.pos += width - - switch r { - case 'e', 'E': - nextRune, width := utf8.DecodeRuneInString(l.src[l.pos:]) - if nextRune == '\'' { - l.pos += width - return escapeStringState - } - case '\'': - return singleQuoteState - case '"': - return doubleQuoteState - case '$': - nextRune, _ := utf8.DecodeRuneInString(l.src[l.pos:]) - if '0' <= nextRune && nextRune <= '9' { - if l.pos-l.start > 0 { - l.parts = append(l.parts, l.src[l.start:l.pos-width]) - } - l.start = l.pos - return placeholderState - } - case '-': - nextRune, width := utf8.DecodeRuneInString(l.src[l.pos:]) - if nextRune == '-' { - l.pos += width - return oneLineCommentState - } - case '/': - nextRune, width := utf8.DecodeRuneInString(l.src[l.pos:]) - if nextRune == '*' { - l.pos += width - return multilineCommentState - } - case utf8.RuneError: - if width != replacementcharacterwidth { - if l.pos-l.start > 0 { - l.parts = append(l.parts, l.src[l.start:l.pos]) - l.start = l.pos - } - return nil - } - } - } -} - -func singleQuoteState(l *sqlLexer) stateFn { - for { - r, width := utf8.DecodeRuneInString(l.src[l.pos:]) - l.pos += width - - switch r { - case '\'': - nextRune, width := utf8.DecodeRuneInString(l.src[l.pos:]) - if nextRune != '\'' { - return rawState - } - l.pos += width - case utf8.RuneError: - if width != replacementcharacterwidth { - if l.pos-l.start > 0 { - l.parts = append(l.parts, l.src[l.start:l.pos]) - l.start = l.pos - } - return nil - } - } - } -} - -func doubleQuoteState(l *sqlLexer) stateFn { - for { - r, width := utf8.DecodeRuneInString(l.src[l.pos:]) - l.pos += width - - switch r { - case '"': - nextRune, width := utf8.DecodeRuneInString(l.src[l.pos:]) - if nextRune != '"' { - return rawState - } - l.pos += width - case utf8.RuneError: - if width != replacementcharacterwidth { - if l.pos-l.start > 0 { - 
l.parts = append(l.parts, l.src[l.start:l.pos]) - l.start = l.pos - } - return nil - } - } - } -} - -// placeholderState consumes a placeholder value. The $ must have already has -// already been consumed. The first rune must be a digit. -func placeholderState(l *sqlLexer) stateFn { - num := 0 - - for { - r, width := utf8.DecodeRuneInString(l.src[l.pos:]) - l.pos += width - - if '0' <= r && r <= '9' { - num *= 10 - num += int(r - '0') - } else { - l.parts = append(l.parts, num) - l.pos -= width - l.start = l.pos - return rawState - } - } -} - -func escapeStringState(l *sqlLexer) stateFn { - for { - r, width := utf8.DecodeRuneInString(l.src[l.pos:]) - l.pos += width - - switch r { - case '\\': - _, width = utf8.DecodeRuneInString(l.src[l.pos:]) - l.pos += width - case '\'': - nextRune, width := utf8.DecodeRuneInString(l.src[l.pos:]) - if nextRune != '\'' { - return rawState - } - l.pos += width - case utf8.RuneError: - if width != replacementcharacterwidth { - if l.pos-l.start > 0 { - l.parts = append(l.parts, l.src[l.start:l.pos]) - l.start = l.pos - } - return nil - } - } - } -} - -func oneLineCommentState(l *sqlLexer) stateFn { - for { - r, width := utf8.DecodeRuneInString(l.src[l.pos:]) - l.pos += width - - switch r { - case '\\': - _, width = utf8.DecodeRuneInString(l.src[l.pos:]) - l.pos += width - case '\n', '\r': - return rawState - case utf8.RuneError: - if width != replacementcharacterwidth { - if l.pos-l.start > 0 { - l.parts = append(l.parts, l.src[l.start:l.pos]) - l.start = l.pos - } - return nil - } - } - } -} - -func multilineCommentState(l *sqlLexer) stateFn { - for { - r, width := utf8.DecodeRuneInString(l.src[l.pos:]) - l.pos += width - - switch r { - case '/': - nextRune, width := utf8.DecodeRuneInString(l.src[l.pos:]) - if nextRune == '*' { - l.pos += width - l.nested++ - } - case '*': - nextRune, width := utf8.DecodeRuneInString(l.src[l.pos:]) - if nextRune != '/' { - continue - } - - l.pos += width - if l.nested == 0 { - return rawState - } - 
l.nested-- - - case utf8.RuneError: - if width != replacementcharacterwidth { - if l.pos-l.start > 0 { - l.parts = append(l.parts, l.src[l.start:l.pos]) - l.start = l.pos - } - return nil - } - } - } -} - -// SanitizeSQL replaces placeholder values with args. It quotes and escapes args -// as necessary. This function is only safe when standard_conforming_strings is -// on. -func SanitizeSQL(sql string, args ...interface{}) (string, error) { - query, err := NewQuery(sql) - if err != nil { - return "", err - } - return query.Sanitize(args...) -} diff --git a/vendor/github.com/jackc/pgx/v4/large_objects.go b/vendor/github.com/jackc/pgx/v4/large_objects.go deleted file mode 100644 index c238ab9c..00000000 --- a/vendor/github.com/jackc/pgx/v4/large_objects.go +++ /dev/null @@ -1,121 +0,0 @@ -package pgx - -import ( - "context" - "errors" - "io" -) - -// LargeObjects is a structure used to access the large objects API. It is only valid within the transaction where it -// was created. -// -// For more details see: http://www.postgresql.org/docs/current/static/largeobjects.html -type LargeObjects struct { - tx Tx -} - -type LargeObjectMode int32 - -const ( - LargeObjectModeWrite LargeObjectMode = 0x20000 - LargeObjectModeRead LargeObjectMode = 0x40000 -) - -// Create creates a new large object. If oid is zero, the server assigns an unused OID. -func (o *LargeObjects) Create(ctx context.Context, oid uint32) (uint32, error) { - err := o.tx.QueryRow(ctx, "select lo_create($1)", oid).Scan(&oid) - return oid, err -} - -// Open opens an existing large object with the given mode. ctx will also be used for all operations on the opened large -// object. 
-func (o *LargeObjects) Open(ctx context.Context, oid uint32, mode LargeObjectMode) (*LargeObject, error) { - var fd int32 - err := o.tx.QueryRow(ctx, "select lo_open($1, $2)", oid, mode).Scan(&fd) - if err != nil { - return nil, err - } - return &LargeObject{fd: fd, tx: o.tx, ctx: ctx}, nil -} - -// Unlink removes a large object from the database. -func (o *LargeObjects) Unlink(ctx context.Context, oid uint32) error { - var result int32 - err := o.tx.QueryRow(ctx, "select lo_unlink($1)", oid).Scan(&result) - if err != nil { - return err - } - - if result != 1 { - return errors.New("failed to remove large object") - } - - return nil -} - -// A LargeObject is a large object stored on the server. It is only valid within the transaction that it was initialized -// in. It uses the context it was initialized with for all operations. It implements these interfaces: -// -// io.Writer -// io.Reader -// io.Seeker -// io.Closer -type LargeObject struct { - ctx context.Context - tx Tx - fd int32 -} - -// Write writes p to the large object and returns the number of bytes written and an error if not all of p was written. -func (o *LargeObject) Write(p []byte) (int, error) { - var n int - err := o.tx.QueryRow(o.ctx, "select lowrite($1, $2)", o.fd, p).Scan(&n) - if err != nil { - return n, err - } - - if n < 0 { - return 0, errors.New("failed to write to large object") - } - - return n, nil -} - -// Read reads up to len(p) bytes into p returning the number of bytes read. -func (o *LargeObject) Read(p []byte) (int, error) { - var res []byte - err := o.tx.QueryRow(o.ctx, "select loread($1, $2)", o.fd, len(p)).Scan(&res) - copy(p, res) - if err != nil { - return len(res), err - } - - if len(res) < len(p) { - err = io.EOF - } - return len(res), err -} - -// Seek moves the current location pointer to the new location specified by offset. 
-func (o *LargeObject) Seek(offset int64, whence int) (n int64, err error) { - err = o.tx.QueryRow(o.ctx, "select lo_lseek64($1, $2, $3)", o.fd, offset, whence).Scan(&n) - return n, err -} - -// Tell returns the current read or write location of the large object descriptor. -func (o *LargeObject) Tell() (n int64, err error) { - err = o.tx.QueryRow(o.ctx, "select lo_tell64($1)", o.fd).Scan(&n) - return n, err -} - -// Truncate the large object to size. -func (o *LargeObject) Truncate(size int64) (err error) { - _, err = o.tx.Exec(o.ctx, "select lo_truncate64($1, $2)", o.fd, size) - return err -} - -// Close the large object descriptor. -func (o *LargeObject) Close() error { - _, err := o.tx.Exec(o.ctx, "select lo_close($1)", o.fd) - return err -} diff --git a/vendor/github.com/jackc/pgx/v4/logger.go b/vendor/github.com/jackc/pgx/v4/logger.go deleted file mode 100644 index 41f8b7e8..00000000 --- a/vendor/github.com/jackc/pgx/v4/logger.go +++ /dev/null @@ -1,107 +0,0 @@ -package pgx - -import ( - "context" - "encoding/hex" - "errors" - "fmt" -) - -// The values for log levels are chosen such that the zero value means that no -// log level was specified. -const ( - LogLevelTrace = 6 - LogLevelDebug = 5 - LogLevelInfo = 4 - LogLevelWarn = 3 - LogLevelError = 2 - LogLevelNone = 1 -) - -// LogLevel represents the pgx logging level. See LogLevel* constants for -// possible values. -type LogLevel int - -func (ll LogLevel) String() string { - switch ll { - case LogLevelTrace: - return "trace" - case LogLevelDebug: - return "debug" - case LogLevelInfo: - return "info" - case LogLevelWarn: - return "warn" - case LogLevelError: - return "error" - case LogLevelNone: - return "none" - default: - return fmt.Sprintf("invalid level %d", ll) - } -} - -// Logger is the interface used to get logging from pgx internals. -type Logger interface { - // Log a message at the given level with data key/value pairs. data may be nil. 
- Log(ctx context.Context, level LogLevel, msg string, data map[string]interface{}) -} - -// LoggerFunc is a wrapper around a function to satisfy the pgx.Logger interface -type LoggerFunc func(ctx context.Context, level LogLevel, msg string, data map[string]interface{}) - -// Log delegates the logging request to the wrapped function -func (f LoggerFunc) Log(ctx context.Context, level LogLevel, msg string, data map[string]interface{}) { - f(ctx, level, msg, data) -} - -// LogLevelFromString converts log level string to constant -// -// Valid levels: -// -// trace -// debug -// info -// warn -// error -// none -func LogLevelFromString(s string) (LogLevel, error) { - switch s { - case "trace": - return LogLevelTrace, nil - case "debug": - return LogLevelDebug, nil - case "info": - return LogLevelInfo, nil - case "warn": - return LogLevelWarn, nil - case "error": - return LogLevelError, nil - case "none": - return LogLevelNone, nil - default: - return 0, errors.New("invalid log level") - } -} - -func logQueryArgs(args []interface{}) []interface{} { - logArgs := make([]interface{}, 0, len(args)) - - for _, a := range args { - switch v := a.(type) { - case []byte: - if len(v) < 64 { - a = hex.EncodeToString(v) - } else { - a = fmt.Sprintf("%x (truncated %d bytes)", v[:64], len(v)-64) - } - case string: - if len(v) > 64 { - a = fmt.Sprintf("%s (truncated %d bytes)", v[:64], len(v)-64) - } - } - logArgs = append(logArgs, a) - } - - return logArgs -} diff --git a/vendor/github.com/jackc/pgx/v4/messages.go b/vendor/github.com/jackc/pgx/v4/messages.go deleted file mode 100644 index 5324cbb5..00000000 --- a/vendor/github.com/jackc/pgx/v4/messages.go +++ /dev/null @@ -1,23 +0,0 @@ -package pgx - -import ( - "database/sql/driver" - - "github.com/jackc/pgtype" -) - -func convertDriverValuers(args []interface{}) ([]interface{}, error) { - for i, arg := range args { - switch arg := arg.(type) { - case pgtype.BinaryEncoder: - case pgtype.TextEncoder: - case driver.Valuer: - v, err 
:= callValuerValue(arg) - if err != nil { - return nil, err - } - args[i] = v - } - } - return args, nil -} diff --git a/vendor/github.com/jackc/pgx/v4/rows.go b/vendor/github.com/jackc/pgx/v4/rows.go deleted file mode 100644 index 4749ead9..00000000 --- a/vendor/github.com/jackc/pgx/v4/rows.go +++ /dev/null @@ -1,351 +0,0 @@ -package pgx - -import ( - "context" - "errors" - "fmt" - "time" - - "github.com/jackc/pgconn" - "github.com/jackc/pgproto3/v2" - "github.com/jackc/pgtype" -) - -// Rows is the result set returned from *Conn.Query. Rows must be closed before -// the *Conn can be used again. Rows are closed by explicitly calling Close(), -// calling Next() until it returns false, or when a fatal error occurs. -// -// Once a Rows is closed the only methods that may be called are Close(), Err(), and CommandTag(). -// -// Rows is an interface instead of a struct to allow tests to mock Query. However, -// adding a method to an interface is technically a breaking change. Because of this -// the Rows interface is partially excluded from semantic version requirements. -// Methods will not be removed or changed, but new methods may be added. -type Rows interface { - // Close closes the rows, making the connection ready for use again. It is safe - // to call Close after rows is already closed. - Close() - - // Err returns any error that occurred while reading. - Err() error - - // CommandTag returns the command tag from this query. It is only available after Rows is closed. - CommandTag() pgconn.CommandTag - - FieldDescriptions() []pgproto3.FieldDescription - - // Next prepares the next row for reading. It returns true if there is another - // row and false if no more rows are available. It automatically closes rows - // when all rows are read. - Next() bool - - // Scan reads the values from the current row into dest values positionally. - // dest can include pointers to core types, values implementing the Scanner - // interface, and nil. 
nil will skip the value entirely. It is an error to - // call Scan without first calling Next() and checking that it returned true. - Scan(dest ...interface{}) error - - // Values returns the decoded row values. As with Scan(), it is an error to - // call Values without first calling Next() and checking that it returned - // true. - Values() ([]interface{}, error) - - // RawValues returns the unparsed bytes of the row values. The returned [][]byte is only valid until the next Next - // call or the Rows is closed. However, the underlying byte data is safe to retain a reference to and mutate. - RawValues() [][]byte -} - -// Row is a convenience wrapper over Rows that is returned by QueryRow. -// -// Row is an interface instead of a struct to allow tests to mock QueryRow. However, -// adding a method to an interface is technically a breaking change. Because of this -// the Row interface is partially excluded from semantic version requirements. -// Methods will not be removed or changed, but new methods may be added. -type Row interface { - // Scan works the same as Rows. with the following exceptions. If no - // rows were found it returns ErrNoRows. If multiple rows are returned it - // ignores all but the first. - Scan(dest ...interface{}) error -} - -// connRow implements the Row interface for Conn.QueryRow. -type connRow connRows - -func (r *connRow) Scan(dest ...interface{}) (err error) { - rows := (*connRows)(r) - - if rows.Err() != nil { - return rows.Err() - } - - if !rows.Next() { - if rows.Err() == nil { - return ErrNoRows - } - return rows.Err() - } - - rows.Scan(dest...) - rows.Close() - return rows.Err() -} - -type rowLog interface { - shouldLog(lvl LogLevel) bool - log(ctx context.Context, lvl LogLevel, msg string, data map[string]interface{}) -} - -// connRows implements the Rows interface for Conn.Query. 
-type connRows struct { - ctx context.Context - logger rowLog - connInfo *pgtype.ConnInfo - values [][]byte - rowCount int - err error - commandTag pgconn.CommandTag - startTime time.Time - sql string - args []interface{} - closed bool - conn *Conn - - resultReader *pgconn.ResultReader - multiResultReader *pgconn.MultiResultReader - - scanPlans []pgtype.ScanPlan -} - -func (rows *connRows) FieldDescriptions() []pgproto3.FieldDescription { - return rows.resultReader.FieldDescriptions() -} - -func (rows *connRows) Close() { - if rows.closed { - return - } - - rows.closed = true - - if rows.resultReader != nil { - var closeErr error - rows.commandTag, closeErr = rows.resultReader.Close() - if rows.err == nil { - rows.err = closeErr - } - } - - if rows.multiResultReader != nil { - closeErr := rows.multiResultReader.Close() - if rows.err == nil { - rows.err = closeErr - } - } - - if rows.logger != nil { - endTime := time.Now() - - if rows.err == nil { - if rows.logger.shouldLog(LogLevelInfo) { - rows.logger.log(rows.ctx, LogLevelInfo, "Query", map[string]interface{}{"sql": rows.sql, "args": logQueryArgs(rows.args), "time": endTime.Sub(rows.startTime), "rowCount": rows.rowCount}) - } - } else { - if rows.logger.shouldLog(LogLevelError) { - rows.logger.log(rows.ctx, LogLevelError, "Query", map[string]interface{}{"err": rows.err, "sql": rows.sql, "time": endTime.Sub(rows.startTime), "args": logQueryArgs(rows.args)}) - } - if rows.err != nil && rows.conn.stmtcache != nil { - rows.conn.stmtcache.StatementErrored(rows.sql, rows.err) - } - } - } -} - -func (rows *connRows) CommandTag() pgconn.CommandTag { - return rows.commandTag -} - -func (rows *connRows) Err() error { - return rows.err -} - -// fatal signals an error occurred after the query was sent to the server. It -// closes the rows automatically. 
-func (rows *connRows) fatal(err error) { - if rows.err != nil { - return - } - - rows.err = err - rows.Close() -} - -func (rows *connRows) Next() bool { - if rows.closed { - return false - } - - if rows.resultReader.NextRow() { - rows.rowCount++ - rows.values = rows.resultReader.Values() - return true - } else { - rows.Close() - return false - } -} - -func (rows *connRows) Scan(dest ...interface{}) error { - ci := rows.connInfo - fieldDescriptions := rows.FieldDescriptions() - values := rows.values - - if len(fieldDescriptions) != len(values) { - err := fmt.Errorf("number of field descriptions must equal number of values, got %d and %d", len(fieldDescriptions), len(values)) - rows.fatal(err) - return err - } - if len(fieldDescriptions) != len(dest) { - err := fmt.Errorf("number of field descriptions must equal number of destinations, got %d and %d", len(fieldDescriptions), len(dest)) - rows.fatal(err) - return err - } - - if rows.scanPlans == nil { - rows.scanPlans = make([]pgtype.ScanPlan, len(values)) - for i := range dest { - rows.scanPlans[i] = ci.PlanScan(fieldDescriptions[i].DataTypeOID, fieldDescriptions[i].Format, dest[i]) - } - } - - for i, dst := range dest { - if dst == nil { - continue - } - - err := rows.scanPlans[i].Scan(ci, fieldDescriptions[i].DataTypeOID, fieldDescriptions[i].Format, values[i], dst) - if err != nil { - err = ScanArgError{ColumnIndex: i, Err: err} - rows.fatal(err) - return err - } - } - - return nil -} - -func (rows *connRows) Values() ([]interface{}, error) { - if rows.closed { - return nil, errors.New("rows is closed") - } - - values := make([]interface{}, 0, len(rows.FieldDescriptions())) - - for i := range rows.FieldDescriptions() { - buf := rows.values[i] - fd := &rows.FieldDescriptions()[i] - - if buf == nil { - values = append(values, nil) - continue - } - - if dt, ok := rows.connInfo.DataTypeForOID(fd.DataTypeOID); ok { - value := dt.Value - - switch fd.Format { - case TextFormatCode: - decoder, ok := 
value.(pgtype.TextDecoder) - if !ok { - decoder = &pgtype.GenericText{} - } - err := decoder.DecodeText(rows.connInfo, buf) - if err != nil { - rows.fatal(err) - } - values = append(values, decoder.(pgtype.Value).Get()) - case BinaryFormatCode: - decoder, ok := value.(pgtype.BinaryDecoder) - if !ok { - decoder = &pgtype.GenericBinary{} - } - err := decoder.DecodeBinary(rows.connInfo, buf) - if err != nil { - rows.fatal(err) - } - values = append(values, value.Get()) - default: - rows.fatal(errors.New("Unknown format code")) - } - } else { - switch fd.Format { - case TextFormatCode: - decoder := &pgtype.GenericText{} - err := decoder.DecodeText(rows.connInfo, buf) - if err != nil { - rows.fatal(err) - } - values = append(values, decoder.Get()) - case BinaryFormatCode: - decoder := &pgtype.GenericBinary{} - err := decoder.DecodeBinary(rows.connInfo, buf) - if err != nil { - rows.fatal(err) - } - values = append(values, decoder.Get()) - default: - rows.fatal(errors.New("Unknown format code")) - } - } - - if rows.Err() != nil { - return nil, rows.Err() - } - } - - return values, rows.Err() -} - -func (rows *connRows) RawValues() [][]byte { - return rows.values -} - -type ScanArgError struct { - ColumnIndex int - Err error -} - -func (e ScanArgError) Error() string { - return fmt.Sprintf("can't scan into dest[%d]: %v", e.ColumnIndex, e.Err) -} - -func (e ScanArgError) Unwrap() error { - return e.Err -} - -// ScanRow decodes raw row data into dest. It can be used to scan rows read from the lower level pgconn interface. -// -// connInfo - OID to Go type mapping. 
-// fieldDescriptions - OID and format of values -// values - the raw data as returned from the PostgreSQL server -// dest - the destination that values will be decoded into -func ScanRow(connInfo *pgtype.ConnInfo, fieldDescriptions []pgproto3.FieldDescription, values [][]byte, dest ...interface{}) error { - if len(fieldDescriptions) != len(values) { - return fmt.Errorf("number of field descriptions must equal number of values, got %d and %d", len(fieldDescriptions), len(values)) - } - if len(fieldDescriptions) != len(dest) { - return fmt.Errorf("number of field descriptions must equal number of destinations, got %d and %d", len(fieldDescriptions), len(dest)) - } - - for i, d := range dest { - if d == nil { - continue - } - - err := connInfo.Scan(fieldDescriptions[i].DataTypeOID, fieldDescriptions[i].Format, values[i], d) - if err != nil { - return ScanArgError{ColumnIndex: i, Err: err} - } - } - - return nil -} diff --git a/vendor/github.com/jackc/pgx/v4/stdlib/sql.go b/vendor/github.com/jackc/pgx/v4/stdlib/sql.go deleted file mode 100644 index 1c46e278..00000000 --- a/vendor/github.com/jackc/pgx/v4/stdlib/sql.go +++ /dev/null @@ -1,869 +0,0 @@ -// Package stdlib is the compatibility layer from pgx to database/sql. -// -// A database/sql connection can be established through sql.Open. -// -// db, err := sql.Open("pgx", "postgres://pgx_md5:secret@localhost:5432/pgx_test?sslmode=disable") -// if err != nil { -// return err -// } -// -// Or from a DSN string. -// -// db, err := sql.Open("pgx", "user=postgres password=secret host=localhost port=5432 database=pgx_test sslmode=disable") -// if err != nil { -// return err -// } -// -// Or a pgx.ConnConfig can be used to set configuration not accessible via connection string. In this case the -// pgx.ConnConfig must first be registered with the driver. This registration returns a connection string which is used -// with sql.Open. 
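The registration flow described in the package doc above hands back a synthetic connection string per registered config. A sketch of that bookkeeping (plain strings stand in for `*pgx.ConnConfig`; the `registry` type here is illustrative, not pgx's), matching the `registeredConnConfig%d` naming used by the deleted `registerConnConfig`:

```go
package main

import (
	"fmt"
	"sync"
)

// registry sketches how stdlib's Driver hands out synthetic connection
// strings: each registration gets a unique "registeredConnConfig<N>" name
// that a later Open can look up, and unregister removes the mapping.
type registry struct {
	mu      sync.Mutex
	seq     int
	configs map[string]string
}

func (r *registry) register(cfg string) string {
	r.mu.Lock()
	defer r.mu.Unlock()
	name := fmt.Sprintf("registeredConnConfig%d", r.seq)
	r.seq++
	r.configs[name] = cfg
	return name
}

func (r *registry) unregister(name string) {
	r.mu.Lock()
	defer r.mu.Unlock()
	delete(r.configs, name)
}

func main() {
	r := &registry{configs: map[string]string{}}
	name := r.register("host=localhost")
	fmt.Println(name) // registeredConnConfig0
	r.unregister(name)
}
```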
-// -// connConfig, _ := pgx.ParseConfig(os.Getenv("DATABASE_URL")) -// connConfig.Logger = myLogger -// connStr := stdlib.RegisterConnConfig(connConfig) -// db, _ := sql.Open("pgx", connStr) -// -// pgx uses standard PostgreSQL positional parameters in queries. e.g. $1, $2. -// It does not support named parameters. -// -// db.QueryRow("select * from users where id=$1", userID) -// -// In Go 1.13 and above (*sql.Conn) Raw() can be used to get a *pgx.Conn from the standard -// database/sql.DB connection pool. This allows operations that use pgx specific functionality. -// -// // Given db is a *sql.DB -// conn, err := db.Conn(context.Background()) -// if err != nil { -// // handle error from acquiring connection from DB pool -// } -// -// err = conn.Raw(func(driverConn interface{}) error { -// conn := driverConn.(*stdlib.Conn).Conn() // conn is a *pgx.Conn -// // Do pgx specific stuff with conn -// conn.CopyFrom(...) -// return nil -// }) -// if err != nil { -// // handle error that occurred while using *pgx.Conn -// } -package stdlib - -import ( - "context" - "database/sql" - "database/sql/driver" - "errors" - "fmt" - "io" - "math" - "math/rand" - "reflect" - "sort" - "strconv" - "strings" - "sync" - "time" - - "github.com/jackc/pgconn" - "github.com/jackc/pgtype" - "github.com/jackc/pgx/v4" -) - -// Only intrinsic types should be binary format with database/sql. -var databaseSQLResultFormats pgx.QueryResultFormatsByOID - -var pgxDriver *Driver - -type ctxKey int - -var ctxKeyFakeTx ctxKey = 0 - -var ErrNotPgx = errors.New("not pgx *sql.DB") - -func init() { - pgxDriver = &Driver{ - configs: make(map[string]*pgx.ConnConfig), - } - fakeTxConns = make(map[*pgx.Conn]*sql.Tx) - - drivers := sql.Drivers() - // if pgx driver was already registered by different pgx major version then we skip registration under the default name. 
- if i := sort.SearchStrings(sql.Drivers(), "pgx"); len(drivers) <= i || drivers[i] != "pgx" { - sql.Register("pgx", pgxDriver) - } - sql.Register("pgx/v4", pgxDriver) - - databaseSQLResultFormats = pgx.QueryResultFormatsByOID{ - pgtype.BoolOID: 1, - pgtype.ByteaOID: 1, - pgtype.CIDOID: 1, - pgtype.DateOID: 1, - pgtype.Float4OID: 1, - pgtype.Float8OID: 1, - pgtype.Int2OID: 1, - pgtype.Int4OID: 1, - pgtype.Int8OID: 1, - pgtype.OIDOID: 1, - pgtype.TimestampOID: 1, - pgtype.TimestamptzOID: 1, - pgtype.XIDOID: 1, - } -} - -var ( - fakeTxMutex sync.Mutex - fakeTxConns map[*pgx.Conn]*sql.Tx -) - -// OptionOpenDB options for configuring the driver when opening a new db pool. -type OptionOpenDB func(*connector) - -// OptionBeforeConnect provides a callback for before connect. It is passed a shallow copy of the ConnConfig that will -// be used to connect, so only its immediate members should be modified. -func OptionBeforeConnect(bc func(context.Context, *pgx.ConnConfig) error) OptionOpenDB { - return func(dc *connector) { - dc.BeforeConnect = bc - } -} - -// OptionAfterConnect provides a callback for after connect. -func OptionAfterConnect(ac func(context.Context, *pgx.Conn) error) OptionOpenDB { - return func(dc *connector) { - dc.AfterConnect = ac - } -} - -// OptionResetSession provides a callback that can be used to add custom logic prior to executing a query on the -// connection if the connection has been used before. -// If ResetSessionFunc returns ErrBadConn error the connection will be discarded. -func OptionResetSession(rs func(context.Context, *pgx.Conn) error) OptionOpenDB { - return func(dc *connector) { - dc.ResetSession = rs - } -} - -// RandomizeHostOrderFunc is a BeforeConnect hook that randomizes the host order in the provided connConfig, so that a -// new host becomes primary each time. This is useful to distribute connections for multi-master databases like -// CockroachDB.
If you use this you likely should set https://golang.org/pkg/database/sql/#DB.SetConnMaxLifetime as well -// to ensure that connections are periodically rebalanced across your nodes. -func RandomizeHostOrderFunc(ctx context.Context, connConfig *pgx.ConnConfig) error { - if len(connConfig.Fallbacks) == 0 { - return nil - } - - newFallbacks := append([]*pgconn.FallbackConfig{&pgconn.FallbackConfig{ - Host: connConfig.Host, - Port: connConfig.Port, - TLSConfig: connConfig.TLSConfig, - }}, connConfig.Fallbacks...) - - rand.Shuffle(len(newFallbacks), func(i, j int) { - newFallbacks[i], newFallbacks[j] = newFallbacks[j], newFallbacks[i] - }) - - // Use the one that sorted last as the primary and keep the rest as the fallbacks - newPrimary := newFallbacks[len(newFallbacks)-1] - connConfig.Host = newPrimary.Host - connConfig.Port = newPrimary.Port - connConfig.TLSConfig = newPrimary.TLSConfig - connConfig.Fallbacks = newFallbacks[:len(newFallbacks)-1] - return nil -} - -func GetConnector(config pgx.ConnConfig, opts ...OptionOpenDB) driver.Connector { - c := connector{ - ConnConfig: config, - BeforeConnect: func(context.Context, *pgx.ConnConfig) error { return nil }, // noop before connect by default - AfterConnect: func(context.Context, *pgx.Conn) error { return nil }, // noop after connect by default - ResetSession: func(context.Context, *pgx.Conn) error { return nil }, // noop reset session by default - driver: pgxDriver, - } - - for _, opt := range opts { - opt(&c) - } - return c -} - -func OpenDB(config pgx.ConnConfig, opts ...OptionOpenDB) *sql.DB { - c := GetConnector(config, opts...) 
- return sql.OpenDB(c) -} - -type connector struct { - pgx.ConnConfig - BeforeConnect func(context.Context, *pgx.ConnConfig) error // function to call before creation of every new connection - AfterConnect func(context.Context, *pgx.Conn) error // function to call after creation of every new connection - ResetSession func(context.Context, *pgx.Conn) error // function is called before a connection is reused - driver *Driver -} - -// Connect implement driver.Connector interface -func (c connector) Connect(ctx context.Context) (driver.Conn, error) { - var ( - err error - conn *pgx.Conn - ) - - // Create a shallow copy of the config, so that BeforeConnect can safely modify it - connConfig := c.ConnConfig - if err = c.BeforeConnect(ctx, &connConfig); err != nil { - return nil, err - } - - if conn, err = pgx.ConnectConfig(ctx, &connConfig); err != nil { - return nil, err - } - - if err = c.AfterConnect(ctx, conn); err != nil { - return nil, err - } - - return &Conn{conn: conn, driver: c.driver, connConfig: connConfig, resetSessionFunc: c.ResetSession}, nil -} - -// Driver implement driver.Connector interface -func (c connector) Driver() driver.Driver { - return c.driver -} - -// GetDefaultDriver returns the driver initialized in the init function -// and used when the pgx driver is registered. 
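`GetConnector` above is a textbook use of the functional-options pattern: set no-op defaults, then apply each `OptionOpenDB` in order. A self-contained sketch of the pattern (the `connector` fields and option names here are illustrative, not pgx's):

```go
package main

import "fmt"

// connector and Option mirror the shape of GetConnector/OptionOpenDB:
// defaults first, then each option mutates the connector in call order.
type connector struct {
	appName string
	retries int
}

type Option func(*connector)

func WithAppName(n string) Option { return func(c *connector) { c.appName = n } }
func WithRetries(r int) Option    { return func(c *connector) { c.retries = r } }

func newConnector(opts ...Option) connector {
	c := connector{appName: "default", retries: 1} // no-op-style defaults, as in GetConnector
	for _, opt := range opts {
		opt(&c)
	}
	return c
}

func main() {
	c := newConnector(WithAppName("reporting"), WithRetries(3))
	fmt.Println(c.appName, c.retries) // reporting 3
}
```

Later options win over earlier ones, which is why callers can layer a custom hook over the defaults without touching the struct directly.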
-func GetDefaultDriver() driver.Driver { - return pgxDriver -} - -type Driver struct { - configMutex sync.Mutex - configs map[string]*pgx.ConnConfig - sequence int -} - -func (d *Driver) Open(name string) (driver.Conn, error) { - ctx, cancel := context.WithTimeout(context.Background(), 60*time.Second) // Ensure eventual timeout - defer cancel() - - connector, err := d.OpenConnector(name) - if err != nil { - return nil, err - } - return connector.Connect(ctx) -} - -func (d *Driver) OpenConnector(name string) (driver.Connector, error) { - return &driverConnector{driver: d, name: name}, nil -} - -func (d *Driver) registerConnConfig(c *pgx.ConnConfig) string { - d.configMutex.Lock() - connStr := fmt.Sprintf("registeredConnConfig%d", d.sequence) - d.sequence++ - d.configs[connStr] = c - d.configMutex.Unlock() - return connStr -} - -func (d *Driver) unregisterConnConfig(connStr string) { - d.configMutex.Lock() - delete(d.configs, connStr) - d.configMutex.Unlock() -} - -type driverConnector struct { - driver *Driver - name string -} - -func (dc *driverConnector) Connect(ctx context.Context) (driver.Conn, error) { - var connConfig *pgx.ConnConfig - - dc.driver.configMutex.Lock() - connConfig = dc.driver.configs[dc.name] - dc.driver.configMutex.Unlock() - - if connConfig == nil { - var err error - connConfig, err = pgx.ParseConfig(dc.name) - if err != nil { - return nil, err - } - } - - conn, err := pgx.ConnectConfig(ctx, connConfig) - if err != nil { - return nil, err - } - - c := &Conn{ - conn: conn, - driver: dc.driver, - connConfig: *connConfig, - resetSessionFunc: func(context.Context, *pgx.Conn) error { return nil }, - } - - return c, nil -} - -func (dc *driverConnector) Driver() driver.Driver { - return dc.driver -} - -// RegisterConnConfig registers a ConnConfig and returns the connection string to use with Open. 
-func RegisterConnConfig(c *pgx.ConnConfig) string { - return pgxDriver.registerConnConfig(c) -} - -// UnregisterConnConfig removes the ConnConfig registration for connStr. -func UnregisterConnConfig(connStr string) { - pgxDriver.unregisterConnConfig(connStr) -} - -type Conn struct { - conn *pgx.Conn - psCount int64 // Counter used for creating unique prepared statement names - driver *Driver - connConfig pgx.ConnConfig - resetSessionFunc func(context.Context, *pgx.Conn) error // Function is called before a connection is reused -} - -// Conn returns the underlying *pgx.Conn -func (c *Conn) Conn() *pgx.Conn { - return c.conn -} - -func (c *Conn) Prepare(query string) (driver.Stmt, error) { - return c.PrepareContext(context.Background(), query) -} - -func (c *Conn) PrepareContext(ctx context.Context, query string) (driver.Stmt, error) { - if c.conn.IsClosed() { - return nil, driver.ErrBadConn - } - - name := fmt.Sprintf("pgx_%d", c.psCount) - c.psCount++ - - sd, err := c.conn.Prepare(ctx, name, query) - if err != nil { - return nil, err - } - - return &Stmt{sd: sd, conn: c}, nil -} - -func (c *Conn) Close() error { - ctx, cancel := context.WithTimeout(context.Background(), time.Second*5) - defer cancel() - return c.conn.Close(ctx) -} - -func (c *Conn) Begin() (driver.Tx, error) { - return c.BeginTx(context.Background(), driver.TxOptions{}) -} - -func (c *Conn) BeginTx(ctx context.Context, opts driver.TxOptions) (driver.Tx, error) { - if c.conn.IsClosed() { - return nil, driver.ErrBadConn - } - - if pconn, ok := ctx.Value(ctxKeyFakeTx).(**pgx.Conn); ok { - *pconn = c.conn - return fakeTx{}, nil - } - - var pgxOpts pgx.TxOptions - switch sql.IsolationLevel(opts.Isolation) { - case sql.LevelDefault: - case sql.LevelReadUncommitted: - pgxOpts.IsoLevel = pgx.ReadUncommitted - case sql.LevelReadCommitted: - pgxOpts.IsoLevel = pgx.ReadCommitted - case sql.LevelRepeatableRead, sql.LevelSnapshot: - pgxOpts.IsoLevel = pgx.RepeatableRead - case sql.LevelSerializable: - 
pgxOpts.IsoLevel = pgx.Serializable - default: - return nil, fmt.Errorf("unsupported isolation: %v", opts.Isolation) - } - - if opts.ReadOnly { - pgxOpts.AccessMode = pgx.ReadOnly - } - - tx, err := c.conn.BeginTx(ctx, pgxOpts) - if err != nil { - return nil, err - } - - return wrapTx{ctx: ctx, tx: tx}, nil -} - -func (c *Conn) ExecContext(ctx context.Context, query string, argsV []driver.NamedValue) (driver.Result, error) { - if c.conn.IsClosed() { - return nil, driver.ErrBadConn - } - - args := namedValueToInterface(argsV) - - commandTag, err := c.conn.Exec(ctx, query, args...) - // if we got a network error before we had a chance to send the query, retry - if err != nil { - if pgconn.SafeToRetry(err) { - return nil, driver.ErrBadConn - } - } - return driver.RowsAffected(commandTag.RowsAffected()), err -} - -func (c *Conn) QueryContext(ctx context.Context, query string, argsV []driver.NamedValue) (driver.Rows, error) { - if c.conn.IsClosed() { - return nil, driver.ErrBadConn - } - - args := []interface{}{databaseSQLResultFormats} - args = append(args, namedValueToInterface(argsV)...) - - rows, err := c.conn.Query(ctx, query, args...) - if err != nil { - if pgconn.SafeToRetry(err) { - return nil, driver.ErrBadConn - } - return nil, err - } - - // Preload first row because otherwise we won't know what columns are available when database/sql asks. - more := rows.Next() - if err = rows.Err(); err != nil { - rows.Close() - return nil, err - } - return &Rows{conn: c, rows: rows, skipNext: true, skipNextMore: more}, nil -} - -func (c *Conn) Ping(ctx context.Context) error { - if c.conn.IsClosed() { - return driver.ErrBadConn - } - - err := c.conn.Ping(ctx) - if err != nil { - // A Ping failure implies some sort of fatal state. The connection is almost certainly already closed by the - // failure, but manually close it just to be sure. 
- c.Close() - return driver.ErrBadConn - } - - return nil -} - -func (c *Conn) CheckNamedValue(*driver.NamedValue) error { - // Underlying pgx supports sql.Scanner and driver.Valuer interfaces natively. So everything can be passed through directly. - return nil -} - -func (c *Conn) ResetSession(ctx context.Context) error { - if c.conn.IsClosed() { - return driver.ErrBadConn - } - - return c.resetSessionFunc(ctx, c.conn) -} - -type Stmt struct { - sd *pgconn.StatementDescription - conn *Conn -} - -func (s *Stmt) Close() error { - ctx, cancel := context.WithTimeout(context.Background(), time.Second*5) - defer cancel() - return s.conn.conn.Deallocate(ctx, s.sd.Name) -} - -func (s *Stmt) NumInput() int { - return len(s.sd.ParamOIDs) -} - -func (s *Stmt) Exec(argsV []driver.Value) (driver.Result, error) { - return nil, errors.New("Stmt.Exec deprecated and not implemented") -} - -func (s *Stmt) ExecContext(ctx context.Context, argsV []driver.NamedValue) (driver.Result, error) { - return s.conn.ExecContext(ctx, s.sd.Name, argsV) -} - -func (s *Stmt) Query(argsV []driver.Value) (driver.Rows, error) { - return nil, errors.New("Stmt.Query deprecated and not implemented") -} - -func (s *Stmt) QueryContext(ctx context.Context, argsV []driver.NamedValue) (driver.Rows, error) { - return s.conn.QueryContext(ctx, s.sd.Name, argsV) -} - -type rowValueFunc func(src []byte) (driver.Value, error) - -type Rows struct { - conn *Conn - rows pgx.Rows - valueFuncs []rowValueFunc - skipNext bool - skipNextMore bool - - columnNames []string -} - -func (r *Rows) Columns() []string { - if r.columnNames == nil { - fields := r.rows.FieldDescriptions() - r.columnNames = make([]string, len(fields)) - for i, fd := range fields { - r.columnNames[i] = string(fd.Name) - } - } - - return r.columnNames -} - -// ColumnTypeDatabaseTypeName returns the database system type name. If the name is unknown the OID is returned. 
-func (r *Rows) ColumnTypeDatabaseTypeName(index int) string { - if dt, ok := r.conn.conn.ConnInfo().DataTypeForOID(r.rows.FieldDescriptions()[index].DataTypeOID); ok { - return strings.ToUpper(dt.Name) - } - - return strconv.FormatInt(int64(r.rows.FieldDescriptions()[index].DataTypeOID), 10) -} - -const varHeaderSize = 4 - -// ColumnTypeLength returns the length of the column type if the column is a -// variable length type. If the column is not a variable length type ok -// should return false. -func (r *Rows) ColumnTypeLength(index int) (int64, bool) { - fd := r.rows.FieldDescriptions()[index] - - switch fd.DataTypeOID { - case pgtype.TextOID, pgtype.ByteaOID: - return math.MaxInt64, true - case pgtype.VarcharOID, pgtype.BPCharArrayOID: - return int64(fd.TypeModifier - varHeaderSize), true - default: - return 0, false - } -} - -// ColumnTypePrecisionScale should return the precision and scale for decimal -// types. If not applicable, ok should be false. -func (r *Rows) ColumnTypePrecisionScale(index int) (precision, scale int64, ok bool) { - fd := r.rows.FieldDescriptions()[index] - - switch fd.DataTypeOID { - case pgtype.NumericOID: - mod := fd.TypeModifier - varHeaderSize - precision = int64((mod >> 16) & 0xffff) - scale = int64(mod & 0xffff) - return precision, scale, true - default: - return 0, 0, false - } -} - -// ColumnTypeScanType returns the value type that can be used to scan types into. 
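The bit-twiddling in `ColumnTypePrecisionScale` above relies on how PostgreSQL packs `numeric(p,s)` into the type modifier: `((p << 16) | s) + 4`, where 4 is the variable-length header. A minimal sketch of the decode:

```go
package main

import "fmt"

const varHeaderSize = 4

// numericPrecisionScale mirrors the deleted ColumnTypePrecisionScale:
// strip the 4-byte var-header offset, then precision sits in the high
// 16 bits of the modifier and scale in the low 16 bits.
func numericPrecisionScale(typeModifier int32) (precision, scale int64) {
	mod := typeModifier - varHeaderSize
	return int64((mod >> 16) & 0xffff), int64(mod & 0xffff)
}

func main() {
	// numeric(10,4) arrives from the server as ((10 << 16) | 4) + 4.
	p, s := numericPrecisionScale((10<<16 | 4) + varHeaderSize)
	fmt.Println(p, s) // 10 4
}
```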
-func (r *Rows) ColumnTypeScanType(index int) reflect.Type { - fd := r.rows.FieldDescriptions()[index] - - switch fd.DataTypeOID { - case pgtype.Float8OID: - return reflect.TypeOf(float64(0)) - case pgtype.Float4OID: - return reflect.TypeOf(float32(0)) - case pgtype.Int8OID: - return reflect.TypeOf(int64(0)) - case pgtype.Int4OID: - return reflect.TypeOf(int32(0)) - case pgtype.Int2OID: - return reflect.TypeOf(int16(0)) - case pgtype.BoolOID: - return reflect.TypeOf(false) - case pgtype.NumericOID: - return reflect.TypeOf(float64(0)) - case pgtype.DateOID, pgtype.TimestampOID, pgtype.TimestamptzOID: - return reflect.TypeOf(time.Time{}) - case pgtype.ByteaOID: - return reflect.TypeOf([]byte(nil)) - default: - return reflect.TypeOf("") - } -} - -func (r *Rows) Close() error { - r.rows.Close() - return r.rows.Err() -} - -func (r *Rows) Next(dest []driver.Value) error { - ci := r.conn.conn.ConnInfo() - fieldDescriptions := r.rows.FieldDescriptions() - - if r.valueFuncs == nil { - r.valueFuncs = make([]rowValueFunc, len(fieldDescriptions)) - - for i, fd := range fieldDescriptions { - dataTypeOID := fd.DataTypeOID - format := fd.Format - - switch fd.DataTypeOID { - case pgtype.BoolOID: - var d bool - scanPlan := ci.PlanScan(dataTypeOID, format, &d) - r.valueFuncs[i] = func(src []byte) (driver.Value, error) { - err := scanPlan.Scan(ci, dataTypeOID, format, src, &d) - return d, err - } - case pgtype.ByteaOID: - var d []byte - scanPlan := ci.PlanScan(dataTypeOID, format, &d) - r.valueFuncs[i] = func(src []byte) (driver.Value, error) { - err := scanPlan.Scan(ci, dataTypeOID, format, src, &d) - return d, err - } - case pgtype.CIDOID: - var d pgtype.CID - scanPlan := ci.PlanScan(dataTypeOID, format, &d) - r.valueFuncs[i] = func(src []byte) (driver.Value, error) { - err := scanPlan.Scan(ci, dataTypeOID, format, src, &d) - if err != nil { - return nil, err - } - return d.Value() - } - case pgtype.DateOID: - var d pgtype.Date - scanPlan := ci.PlanScan(dataTypeOID, format, &d) - 
r.valueFuncs[i] = func(src []byte) (driver.Value, error) { - err := scanPlan.Scan(ci, dataTypeOID, format, src, &d) - if err != nil { - return nil, err - } - return d.Value() - } - case pgtype.Float4OID: - var d float32 - scanPlan := ci.PlanScan(dataTypeOID, format, &d) - r.valueFuncs[i] = func(src []byte) (driver.Value, error) { - err := scanPlan.Scan(ci, dataTypeOID, format, src, &d) - return float64(d), err - } - case pgtype.Float8OID: - var d float64 - scanPlan := ci.PlanScan(dataTypeOID, format, &d) - r.valueFuncs[i] = func(src []byte) (driver.Value, error) { - err := scanPlan.Scan(ci, dataTypeOID, format, src, &d) - return d, err - } - case pgtype.Int2OID: - var d int16 - scanPlan := ci.PlanScan(dataTypeOID, format, &d) - r.valueFuncs[i] = func(src []byte) (driver.Value, error) { - err := scanPlan.Scan(ci, dataTypeOID, format, src, &d) - return int64(d), err - } - case pgtype.Int4OID: - var d int32 - scanPlan := ci.PlanScan(dataTypeOID, format, &d) - r.valueFuncs[i] = func(src []byte) (driver.Value, error) { - err := scanPlan.Scan(ci, dataTypeOID, format, src, &d) - return int64(d), err - } - case pgtype.Int8OID: - var d int64 - scanPlan := ci.PlanScan(dataTypeOID, format, &d) - r.valueFuncs[i] = func(src []byte) (driver.Value, error) { - err := scanPlan.Scan(ci, dataTypeOID, format, src, &d) - return d, err - } - case pgtype.JSONOID: - var d pgtype.JSON - scanPlan := ci.PlanScan(dataTypeOID, format, &d) - r.valueFuncs[i] = func(src []byte) (driver.Value, error) { - err := scanPlan.Scan(ci, dataTypeOID, format, src, &d) - if err != nil { - return nil, err - } - return d.Value() - } - case pgtype.JSONBOID: - var d pgtype.JSONB - scanPlan := ci.PlanScan(dataTypeOID, format, &d) - r.valueFuncs[i] = func(src []byte) (driver.Value, error) { - err := scanPlan.Scan(ci, dataTypeOID, format, src, &d) - if err != nil { - return nil, err - } - return d.Value() - } - case pgtype.OIDOID: - var d pgtype.OIDValue - scanPlan := ci.PlanScan(dataTypeOID, format, &d) - 
r.valueFuncs[i] = func(src []byte) (driver.Value, error) { - err := scanPlan.Scan(ci, dataTypeOID, format, src, &d) - if err != nil { - return nil, err - } - return d.Value() - } - case pgtype.TimestampOID: - var d pgtype.Timestamp - scanPlan := ci.PlanScan(dataTypeOID, format, &d) - r.valueFuncs[i] = func(src []byte) (driver.Value, error) { - err := scanPlan.Scan(ci, dataTypeOID, format, src, &d) - if err != nil { - return nil, err - } - return d.Value() - } - case pgtype.TimestamptzOID: - var d pgtype.Timestamptz - scanPlan := ci.PlanScan(dataTypeOID, format, &d) - r.valueFuncs[i] = func(src []byte) (driver.Value, error) { - err := scanPlan.Scan(ci, dataTypeOID, format, src, &d) - if err != nil { - return nil, err - } - return d.Value() - } - case pgtype.XIDOID: - var d pgtype.XID - scanPlan := ci.PlanScan(dataTypeOID, format, &d) - r.valueFuncs[i] = func(src []byte) (driver.Value, error) { - err := scanPlan.Scan(ci, dataTypeOID, format, src, &d) - if err != nil { - return nil, err - } - return d.Value() - } - default: - var d string - scanPlan := ci.PlanScan(dataTypeOID, format, &d) - r.valueFuncs[i] = func(src []byte) (driver.Value, error) { - err := scanPlan.Scan(ci, dataTypeOID, format, src, &d) - return d, err - } - } - } - } - - var more bool - if r.skipNext { - more = r.skipNextMore - r.skipNext = false - } else { - more = r.rows.Next() - } - - if !more { - if r.rows.Err() == nil { - return io.EOF - } else { - return r.rows.Err() - } - } - - for i, rv := range r.rows.RawValues() { - if rv != nil { - var err error - dest[i], err = r.valueFuncs[i](rv) - if err != nil { - return fmt.Errorf("convert field %d failed: %v", i, err) - } - } else { - dest[i] = nil - } - } - - return nil -} - -func valueToInterface(argsV []driver.Value) []interface{} { - args := make([]interface{}, 0, len(argsV)) - for _, v := range argsV { - if v != nil { - args = append(args, v.(interface{})) - } else { - args = append(args, nil) - } - } - return args -} - -func 
namedValueToInterface(argsV []driver.NamedValue) []interface{} { - args := make([]interface{}, 0, len(argsV)) - for _, v := range argsV { - if v.Value != nil { - args = append(args, v.Value.(interface{})) - } else { - args = append(args, nil) - } - } - return args -} - -type wrapTx struct { - ctx context.Context - tx pgx.Tx -} - -func (wtx wrapTx) Commit() error { return wtx.tx.Commit(wtx.ctx) } - -func (wtx wrapTx) Rollback() error { return wtx.tx.Rollback(wtx.ctx) } - -type fakeTx struct{} - -func (fakeTx) Commit() error { return nil } - -func (fakeTx) Rollback() error { return nil } - -// AcquireConn acquires a *pgx.Conn from database/sql connection pool. It must be released with ReleaseConn. -// -// In Go 1.13 this functionality has been incorporated into the standard library in the db.Conn.Raw() method. -func AcquireConn(db *sql.DB) (*pgx.Conn, error) { - var conn *pgx.Conn - ctx := context.WithValue(context.Background(), ctxKeyFakeTx, &conn) - tx, err := db.BeginTx(ctx, nil) - if err != nil { - return nil, err - } - if conn == nil { - tx.Rollback() - return nil, ErrNotPgx - } - - fakeTxMutex.Lock() - fakeTxConns[conn] = tx - fakeTxMutex.Unlock() - - return conn, nil -} - -// ReleaseConn releases a *pgx.Conn acquired with AcquireConn. 
-func ReleaseConn(db *sql.DB, conn *pgx.Conn) error { - var tx *sql.Tx - var ok bool - - if conn.PgConn().IsBusy() || conn.PgConn().TxStatus() != 'I' { - ctx, cancel := context.WithTimeout(context.Background(), time.Second) - defer cancel() - conn.Close(ctx) - } - - fakeTxMutex.Lock() - tx, ok = fakeTxConns[conn] - if ok { - delete(fakeTxConns, conn) - fakeTxMutex.Unlock() - } else { - fakeTxMutex.Unlock() - return fmt.Errorf("can't release conn that is not acquired") - } - - return tx.Rollback() -} diff --git a/vendor/github.com/jackc/pgx/v4/tx.go b/vendor/github.com/jackc/pgx/v4/tx.go deleted file mode 100644 index 2914ada7..00000000 --- a/vendor/github.com/jackc/pgx/v4/tx.go +++ /dev/null @@ -1,448 +0,0 @@ -package pgx - -import ( - "bytes" - "context" - "errors" - "fmt" - "strconv" - - "github.com/jackc/pgconn" -) - -// TxIsoLevel is the transaction isolation level (serializable, repeatable read, read committed or read uncommitted) -type TxIsoLevel string - -// Transaction isolation levels -const ( - Serializable TxIsoLevel = "serializable" - RepeatableRead TxIsoLevel = "repeatable read" - ReadCommitted TxIsoLevel = "read committed" - ReadUncommitted TxIsoLevel = "read uncommitted" -) - -// TxAccessMode is the transaction access mode (read write or read only) -type TxAccessMode string - -// Transaction access modes -const ( - ReadWrite TxAccessMode = "read write" - ReadOnly TxAccessMode = "read only" -) - -// TxDeferrableMode is the transaction deferrable mode (deferrable or not deferrable) -type TxDeferrableMode string - -// Transaction deferrable modes -const ( - Deferrable TxDeferrableMode = "deferrable" - NotDeferrable TxDeferrableMode = "not deferrable" -) - -// TxOptions are transaction modes within a transaction block -type TxOptions struct { - IsoLevel TxIsoLevel - AccessMode TxAccessMode - DeferrableMode TxDeferrableMode -} - -var emptyTxOptions TxOptions - -func (txOptions TxOptions) beginSQL() string { - if txOptions == emptyTxOptions { - return 
"begin" - } - buf := &bytes.Buffer{} - buf.WriteString("begin") - if txOptions.IsoLevel != "" { - fmt.Fprintf(buf, " isolation level %s", txOptions.IsoLevel) - } - if txOptions.AccessMode != "" { - fmt.Fprintf(buf, " %s", txOptions.AccessMode) - } - if txOptions.DeferrableMode != "" { - fmt.Fprintf(buf, " %s", txOptions.DeferrableMode) - } - - return buf.String() -} - -var ErrTxClosed = errors.New("tx is closed") - -// ErrTxCommitRollback occurs when an error has occurred in a transaction and -// Commit() is called. PostgreSQL accepts COMMIT on aborted transactions, but -// it is treated as ROLLBACK. -var ErrTxCommitRollback = errors.New("commit unexpectedly resulted in rollback") - -// Begin starts a transaction. Unlike database/sql, the context only affects the begin command. i.e. there is no -// auto-rollback on context cancellation. -func (c *Conn) Begin(ctx context.Context) (Tx, error) { - return c.BeginTx(ctx, TxOptions{}) -} - -// BeginTx starts a transaction with txOptions determining the transaction mode. Unlike database/sql, the context only -// affects the begin command. i.e. there is no auto-rollback on context cancellation. -func (c *Conn) BeginTx(ctx context.Context, txOptions TxOptions) (Tx, error) { - _, err := c.Exec(ctx, txOptions.beginSQL()) - if err != nil { - // begin should never fail unless there is an underlying connection issue or - // a context timeout. In either case, the connection is possibly broken. - c.die(errors.New("failed to begin transaction")) - return nil, err - } - - return &dbTx{conn: c}, nil -} - -// BeginFunc starts a transaction and calls f. If f does not return an error the transaction is committed. If f returns -// an error the transaction is rolled back. The context will be used when executing the transaction control statements -// (BEGIN, ROLLBACK, and COMMIT) but does not otherwise affect the execution of f. 
-func (c *Conn) BeginFunc(ctx context.Context, f func(Tx) error) (err error) { - return c.BeginTxFunc(ctx, TxOptions{}, f) -} - -// BeginTxFunc starts a transaction with txOptions determining the transaction mode and calls f. If f does not return -// an error the transaction is committed. If f returns an error the transaction is rolled back. The context will be -// used when executing the transaction control statements (BEGIN, ROLLBACK, and COMMIT) but does not otherwise affect -// the execution of f. -func (c *Conn) BeginTxFunc(ctx context.Context, txOptions TxOptions, f func(Tx) error) (err error) { - var tx Tx - tx, err = c.BeginTx(ctx, txOptions) - if err != nil { - return err - } - defer func() { - rollbackErr := tx.Rollback(ctx) - if rollbackErr != nil && !errors.Is(rollbackErr, ErrTxClosed) { - err = rollbackErr - } - }() - - fErr := f(tx) - if fErr != nil { - _ = tx.Rollback(ctx) // ignore rollback error as there is already an error to return - return fErr - } - - return tx.Commit(ctx) -} - -// Tx represents a database transaction. -// -// Tx is an interface instead of a struct to enable connection pools to be implemented without relying on internal pgx -// state, to support pseudo-nested transactions with savepoints, and to allow tests to mock transactions. However, -// adding a method to an interface is technically a breaking change. If new methods are added to Conn it may be -// desirable to add them to Tx as well. Because of this the Tx interface is partially excluded from semantic version -// requirements. Methods will not be removed or changed, but new methods may be added. -type Tx interface { - // Begin starts a pseudo nested transaction. - Begin(ctx context.Context) (Tx, error) - - // BeginFunc starts a pseudo nested transaction and executes f. If f does not return an err the pseudo nested - // transaction will be committed. If it does then it will be rolled back. 
- BeginFunc(ctx context.Context, f func(Tx) error) (err error) - - // Commit commits the transaction if this is a real transaction or releases the savepoint if this is a pseudo nested - // transaction. Commit will return ErrTxClosed if the Tx is already closed, but is otherwise safe to call multiple - // times. If the commit fails with a rollback status (e.g. the transaction was already in a broken state) then - // ErrTxCommitRollback will be returned. - Commit(ctx context.Context) error - - // Rollback rolls back the transaction if this is a real transaction or rolls back to the savepoint if this is a - // pseudo nested transaction. Rollback will return ErrTxClosed if the Tx is already closed, but is otherwise safe to - // call multiple times. Hence, a defer tx.Rollback() is safe even if tx.Commit() will be called first in a non-error - // condition. Any other failure of a real transaction will result in the connection being closed. - Rollback(ctx context.Context) error - - CopyFrom(ctx context.Context, tableName Identifier, columnNames []string, rowSrc CopyFromSource) (int64, error) - SendBatch(ctx context.Context, b *Batch) BatchResults - LargeObjects() LargeObjects - - Prepare(ctx context.Context, name, sql string) (*pgconn.StatementDescription, error) - - Exec(ctx context.Context, sql string, arguments ...interface{}) (commandTag pgconn.CommandTag, err error) - Query(ctx context.Context, sql string, args ...interface{}) (Rows, error) - QueryRow(ctx context.Context, sql string, args ...interface{}) Row - QueryFunc(ctx context.Context, sql string, args []interface{}, scans []interface{}, f func(QueryFuncRow) error) (pgconn.CommandTag, error) - - // Conn returns the underlying *Conn on which this transaction is executing. - Conn() *Conn -} - -// dbTx represents a database transaction. -// -// All dbTx methods return ErrTxClosed if Commit or Rollback has already been -// called on the dbTx. 
-type dbTx struct { - conn *Conn - err error - savepointNum int64 - closed bool -} - -// Begin starts a pseudo nested transaction implemented with a savepoint. -func (tx *dbTx) Begin(ctx context.Context) (Tx, error) { - if tx.closed { - return nil, ErrTxClosed - } - - tx.savepointNum++ - _, err := tx.conn.Exec(ctx, "savepoint sp_"+strconv.FormatInt(tx.savepointNum, 10)) - if err != nil { - return nil, err - } - - return &dbSimulatedNestedTx{tx: tx, savepointNum: tx.savepointNum}, nil -} - -func (tx *dbTx) BeginFunc(ctx context.Context, f func(Tx) error) (err error) { - if tx.closed { - return ErrTxClosed - } - - var savepoint Tx - savepoint, err = tx.Begin(ctx) - if err != nil { - return err - } - defer func() { - rollbackErr := savepoint.Rollback(ctx) - if rollbackErr != nil && !errors.Is(rollbackErr, ErrTxClosed) { - err = rollbackErr - } - }() - - fErr := f(savepoint) - if fErr != nil { - _ = savepoint.Rollback(ctx) // ignore rollback error as there is already an error to return - return fErr - } - - return savepoint.Commit(ctx) -} - -// Commit commits the transaction. -func (tx *dbTx) Commit(ctx context.Context) error { - if tx.closed { - return ErrTxClosed - } - - commandTag, err := tx.conn.Exec(ctx, "commit") - tx.closed = true - if err != nil { - if tx.conn.PgConn().TxStatus() != 'I' { - _ = tx.conn.Close(ctx) // already have error to return - } - return err - } - if string(commandTag) == "ROLLBACK" { - return ErrTxCommitRollback - } - - return nil -} - -// Rollback rolls back the transaction. Rollback will return ErrTxClosed if the -// Tx is already closed, but is otherwise safe to call multiple times. Hence, a -// defer tx.Rollback() is safe even if tx.Commit() will be called first in a -// non-error condition. 
-func (tx *dbTx) Rollback(ctx context.Context) error { - if tx.closed { - return ErrTxClosed - } - - _, err := tx.conn.Exec(ctx, "rollback") - tx.closed = true - if err != nil { - // A rollback failure leaves the connection in an undefined state - tx.conn.die(fmt.Errorf("rollback failed: %w", err)) - return err - } - - return nil -} - -// Exec delegates to the underlying *Conn -func (tx *dbTx) Exec(ctx context.Context, sql string, arguments ...interface{}) (commandTag pgconn.CommandTag, err error) { - return tx.conn.Exec(ctx, sql, arguments...) -} - -// Prepare delegates to the underlying *Conn -func (tx *dbTx) Prepare(ctx context.Context, name, sql string) (*pgconn.StatementDescription, error) { - if tx.closed { - return nil, ErrTxClosed - } - - return tx.conn.Prepare(ctx, name, sql) -} - -// Query delegates to the underlying *Conn -func (tx *dbTx) Query(ctx context.Context, sql string, args ...interface{}) (Rows, error) { - if tx.closed { - // Because checking for errors can be deferred to the *Rows, build one with the error - err := ErrTxClosed - return &connRows{closed: true, err: err}, err - } - - return tx.conn.Query(ctx, sql, args...) -} - -// QueryRow delegates to the underlying *Conn -func (tx *dbTx) QueryRow(ctx context.Context, sql string, args ...interface{}) Row { - rows, _ := tx.Query(ctx, sql, args...) - return (*connRow)(rows.(*connRows)) -} - -// QueryFunc delegates to the underlying *Conn. 
-func (tx *dbTx) QueryFunc(ctx context.Context, sql string, args []interface{}, scans []interface{}, f func(QueryFuncRow) error) (pgconn.CommandTag, error) { - if tx.closed { - return nil, ErrTxClosed - } - - return tx.conn.QueryFunc(ctx, sql, args, scans, f) -} - -// CopyFrom delegates to the underlying *Conn -func (tx *dbTx) CopyFrom(ctx context.Context, tableName Identifier, columnNames []string, rowSrc CopyFromSource) (int64, error) { - if tx.closed { - return 0, ErrTxClosed - } - - return tx.conn.CopyFrom(ctx, tableName, columnNames, rowSrc) -} - -// SendBatch delegates to the underlying *Conn -func (tx *dbTx) SendBatch(ctx context.Context, b *Batch) BatchResults { - if tx.closed { - return &batchResults{err: ErrTxClosed} - } - - return tx.conn.SendBatch(ctx, b) -} - -// LargeObjects returns a LargeObjects instance for the transaction. -func (tx *dbTx) LargeObjects() LargeObjects { - return LargeObjects{tx: tx} -} - -func (tx *dbTx) Conn() *Conn { - return tx.conn -} - -// dbSimulatedNestedTx represents a simulated nested transaction implemented by a savepoint. -type dbSimulatedNestedTx struct { - tx Tx - savepointNum int64 - closed bool -} - -// Begin starts a pseudo nested transaction implemented with a savepoint. -func (sp *dbSimulatedNestedTx) Begin(ctx context.Context) (Tx, error) { - if sp.closed { - return nil, ErrTxClosed - } - - return sp.tx.Begin(ctx) -} - -func (sp *dbSimulatedNestedTx) BeginFunc(ctx context.Context, f func(Tx) error) (err error) { - if sp.closed { - return ErrTxClosed - } - - return sp.tx.BeginFunc(ctx, f) -} - -// Commit releases the savepoint essentially committing the pseudo nested transaction. -func (sp *dbSimulatedNestedTx) Commit(ctx context.Context) error { - if sp.closed { - return ErrTxClosed - } - - _, err := sp.Exec(ctx, "release savepoint sp_"+strconv.FormatInt(sp.savepointNum, 10)) - sp.closed = true - return err -} - -// Rollback rolls back to the savepoint essentially rolling back the pseudo nested transaction. 
Rollback will return -// ErrTxClosed if the dbSavepoint is already closed, but is otherwise safe to call multiple times. Hence, a defer sp.Rollback() -// is safe even if sp.Commit() will be called first in a non-error condition. -func (sp *dbSimulatedNestedTx) Rollback(ctx context.Context) error { - if sp.closed { - return ErrTxClosed - } - - _, err := sp.Exec(ctx, "rollback to savepoint sp_"+strconv.FormatInt(sp.savepointNum, 10)) - sp.closed = true - return err -} - -// Exec delegates to the underlying Tx -func (sp *dbSimulatedNestedTx) Exec(ctx context.Context, sql string, arguments ...interface{}) (commandTag pgconn.CommandTag, err error) { - if sp.closed { - return nil, ErrTxClosed - } - - return sp.tx.Exec(ctx, sql, arguments...) -} - -// Prepare delegates to the underlying Tx -func (sp *dbSimulatedNestedTx) Prepare(ctx context.Context, name, sql string) (*pgconn.StatementDescription, error) { - if sp.closed { - return nil, ErrTxClosed - } - - return sp.tx.Prepare(ctx, name, sql) -} - -// Query delegates to the underlying Tx -func (sp *dbSimulatedNestedTx) Query(ctx context.Context, sql string, args ...interface{}) (Rows, error) { - if sp.closed { - // Because checking for errors can be deferred to the *Rows, build one with the error - err := ErrTxClosed - return &connRows{closed: true, err: err}, err - } - - return sp.tx.Query(ctx, sql, args...) -} - -// QueryRow delegates to the underlying Tx -func (sp *dbSimulatedNestedTx) QueryRow(ctx context.Context, sql string, args ...interface{}) Row { - rows, _ := sp.Query(ctx, sql, args...) - return (*connRow)(rows.(*connRows)) -} - -// QueryFunc delegates to the underlying Tx. 
-func (sp *dbSimulatedNestedTx) QueryFunc(ctx context.Context, sql string, args []interface{}, scans []interface{}, f func(QueryFuncRow) error) (pgconn.CommandTag, error) { - if sp.closed { - return nil, ErrTxClosed - } - - return sp.tx.QueryFunc(ctx, sql, args, scans, f) -} - -// CopyFrom delegates to the underlying *Conn -func (sp *dbSimulatedNestedTx) CopyFrom(ctx context.Context, tableName Identifier, columnNames []string, rowSrc CopyFromSource) (int64, error) { - if sp.closed { - return 0, ErrTxClosed - } - - return sp.tx.CopyFrom(ctx, tableName, columnNames, rowSrc) -} - -// SendBatch delegates to the underlying *Conn -func (sp *dbSimulatedNestedTx) SendBatch(ctx context.Context, b *Batch) BatchResults { - if sp.closed { - return &batchResults{err: ErrTxClosed} - } - - return sp.tx.SendBatch(ctx, b) -} - -func (sp *dbSimulatedNestedTx) LargeObjects() LargeObjects { - return LargeObjects{tx: sp} -} - -func (sp *dbSimulatedNestedTx) Conn() *Conn { - return sp.tx.Conn() -} diff --git a/vendor/github.com/jackc/pgx/v4/values.go b/vendor/github.com/jackc/pgx/v4/values.go deleted file mode 100644 index 1a945475..00000000 --- a/vendor/github.com/jackc/pgx/v4/values.go +++ /dev/null @@ -1,280 +0,0 @@ -package pgx - -import ( - "database/sql/driver" - "fmt" - "math" - "reflect" - "time" - - "github.com/jackc/pgio" - "github.com/jackc/pgtype" -) - -// PostgreSQL format codes -const ( - TextFormatCode = 0 - BinaryFormatCode = 1 -) - -// SerializationError occurs on failure to encode or decode a value -type SerializationError string - -func (e SerializationError) Error() string { - return string(e) -} - -func convertSimpleArgument(ci *pgtype.ConnInfo, arg interface{}) (interface{}, error) { - if arg == nil { - return nil, nil - } - - refVal := reflect.ValueOf(arg) - if refVal.Kind() == reflect.Ptr && refVal.IsNil() { - return nil, nil - } - - switch arg := arg.(type) { - - // https://github.com/jackc/pgx/issues/409 Changed JSON and JSONB to surface - // []byte to 
database/sql instead of string. But that caused problems with the - // simple protocol because the driver.Valuer case got taken before the - // pgtype.TextEncoder case. And driver.Valuer needed to be first in the usual - // case because of https://github.com/jackc/pgx/issues/339. So instead we - // special case JSON and JSONB. - case *pgtype.JSON: - buf, err := arg.EncodeText(ci, nil) - if err != nil { - return nil, err - } - if buf == nil { - return nil, nil - } - return string(buf), nil - case *pgtype.JSONB: - buf, err := arg.EncodeText(ci, nil) - if err != nil { - return nil, err - } - if buf == nil { - return nil, nil - } - return string(buf), nil - - case driver.Valuer: - return callValuerValue(arg) - case pgtype.TextEncoder: - buf, err := arg.EncodeText(ci, nil) - if err != nil { - return nil, err - } - if buf == nil { - return nil, nil - } - return string(buf), nil - case float32: - return float64(arg), nil - case float64: - return arg, nil - case bool: - return arg, nil - case time.Duration: - return fmt.Sprintf("%d microsecond", int64(arg)/1000), nil - case time.Time: - return arg, nil - case string: - return arg, nil - case []byte: - return arg, nil - case int8: - return int64(arg), nil - case int16: - return int64(arg), nil - case int32: - return int64(arg), nil - case int64: - return arg, nil - case int: - return int64(arg), nil - case uint8: - return int64(arg), nil - case uint16: - return int64(arg), nil - case uint32: - return int64(arg), nil - case uint64: - if arg > math.MaxInt64 { - return nil, fmt.Errorf("arg too big for int64: %v", arg) - } - return int64(arg), nil - case uint: - if uint64(arg) > math.MaxInt64 { - return nil, fmt.Errorf("arg too big for int64: %v", arg) - } - return int64(arg), nil - } - - if dt, found := ci.DataTypeForValue(arg); found { - v := dt.Value - err := v.Set(arg) - if err != nil { - return nil, err - } - buf, err := v.(pgtype.TextEncoder).EncodeText(ci, nil) - if err != nil { - return nil, err - } - if buf == nil { - 
return nil, nil - } - return string(buf), nil - } - - if refVal.Kind() == reflect.Ptr { - arg = refVal.Elem().Interface() - return convertSimpleArgument(ci, arg) - } - - if strippedArg, ok := stripNamedType(&refVal); ok { - return convertSimpleArgument(ci, strippedArg) - } - return nil, SerializationError(fmt.Sprintf("Cannot encode %T in simple protocol - %T must implement driver.Valuer, pgtype.TextEncoder, or be a native type", arg, arg)) -} - -func encodePreparedStatementArgument(ci *pgtype.ConnInfo, buf []byte, oid uint32, arg interface{}) ([]byte, error) { - if arg == nil { - return pgio.AppendInt32(buf, -1), nil - } - - switch arg := arg.(type) { - case pgtype.BinaryEncoder: - sp := len(buf) - buf = pgio.AppendInt32(buf, -1) - argBuf, err := arg.EncodeBinary(ci, buf) - if err != nil { - return nil, err - } - if argBuf != nil { - buf = argBuf - pgio.SetInt32(buf[sp:], int32(len(buf[sp:])-4)) - } - return buf, nil - case pgtype.TextEncoder: - sp := len(buf) - buf = pgio.AppendInt32(buf, -1) - argBuf, err := arg.EncodeText(ci, buf) - if err != nil { - return nil, err - } - if argBuf != nil { - buf = argBuf - pgio.SetInt32(buf[sp:], int32(len(buf[sp:])-4)) - } - return buf, nil - case string: - buf = pgio.AppendInt32(buf, int32(len(arg))) - buf = append(buf, arg...) 
- return buf, nil - } - - refVal := reflect.ValueOf(arg) - - if refVal.Kind() == reflect.Ptr { - if refVal.IsNil() { - return pgio.AppendInt32(buf, -1), nil - } - arg = refVal.Elem().Interface() - return encodePreparedStatementArgument(ci, buf, oid, arg) - } - - if dt, ok := ci.DataTypeForOID(oid); ok { - value := dt.Value - err := value.Set(arg) - if err != nil { - { - if arg, ok := arg.(driver.Valuer); ok { - v, err := callValuerValue(arg) - if err != nil { - return nil, err - } - return encodePreparedStatementArgument(ci, buf, oid, v) - } - } - - return nil, err - } - - sp := len(buf) - buf = pgio.AppendInt32(buf, -1) - argBuf, err := value.(pgtype.BinaryEncoder).EncodeBinary(ci, buf) - if err != nil { - return nil, err - } - if argBuf != nil { - buf = argBuf - pgio.SetInt32(buf[sp:], int32(len(buf[sp:])-4)) - } - return buf, nil - } - - if strippedArg, ok := stripNamedType(&refVal); ok { - return encodePreparedStatementArgument(ci, buf, oid, strippedArg) - } - return nil, SerializationError(fmt.Sprintf("Cannot encode %T into oid %v - %T must implement Encoder or be converted to a string", arg, oid, arg)) -} - -// chooseParameterFormatCode determines the correct format code for an -// argument to a prepared statement. It defaults to TextFormatCode if no -// determination can be made. 
-func chooseParameterFormatCode(ci *pgtype.ConnInfo, oid uint32, arg interface{}) int16 { - switch arg := arg.(type) { - case pgtype.ParamFormatPreferrer: - return arg.PreferredParamFormat() - case pgtype.BinaryEncoder: - return BinaryFormatCode - case string, *string, pgtype.TextEncoder: - return TextFormatCode - } - - return ci.ParamFormatCodeForOID(oid) -} - -func stripNamedType(val *reflect.Value) (interface{}, bool) { - switch val.Kind() { - case reflect.Int: - convVal := int(val.Int()) - return convVal, reflect.TypeOf(convVal) != val.Type() - case reflect.Int8: - convVal := int8(val.Int()) - return convVal, reflect.TypeOf(convVal) != val.Type() - case reflect.Int16: - convVal := int16(val.Int()) - return convVal, reflect.TypeOf(convVal) != val.Type() - case reflect.Int32: - convVal := int32(val.Int()) - return convVal, reflect.TypeOf(convVal) != val.Type() - case reflect.Int64: - convVal := int64(val.Int()) - return convVal, reflect.TypeOf(convVal) != val.Type() - case reflect.Uint: - convVal := uint(val.Uint()) - return convVal, reflect.TypeOf(convVal) != val.Type() - case reflect.Uint8: - convVal := uint8(val.Uint()) - return convVal, reflect.TypeOf(convVal) != val.Type() - case reflect.Uint16: - convVal := uint16(val.Uint()) - return convVal, reflect.TypeOf(convVal) != val.Type() - case reflect.Uint32: - convVal := uint32(val.Uint()) - return convVal, reflect.TypeOf(convVal) != val.Type() - case reflect.Uint64: - convVal := uint64(val.Uint()) - return convVal, reflect.TypeOf(convVal) != val.Type() - case reflect.String: - convVal := val.String() - return convVal, reflect.TypeOf(convVal) != val.Type() - } - - return nil, false -} diff --git a/vendor/github.com/segmentio/fasthash/LICENSE b/vendor/github.com/segmentio/fasthash/LICENSE deleted file mode 100644 index 09e136c5..00000000 --- a/vendor/github.com/segmentio/fasthash/LICENSE +++ /dev/null @@ -1,21 +0,0 @@ -MIT License - -Copyright (c) 2017 Segment - -Permission is hereby granted, free of charge, to 
any person obtaining a copy -of this software and associated documentation files (the "Software"), to deal -in the Software without restriction, including without limitation the rights -to use, copy, modify, merge, publish, distribute, sublicense, and/or sell -copies of the Software, and to permit persons to whom the Software is -furnished to do so, subject to the following conditions: - -The above copyright notice and this permission notice shall be included in all -copies or substantial portions of the Software. - -THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR -IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, -FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE -AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER -LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, -OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE -SOFTWARE. diff --git a/vendor/github.com/segmentio/fasthash/fnv1a/hash.go b/vendor/github.com/segmentio/fasthash/fnv1a/hash.go deleted file mode 100644 index 92849b11..00000000 --- a/vendor/github.com/segmentio/fasthash/fnv1a/hash.go +++ /dev/null @@ -1,121 +0,0 @@ -package fnv1a - -const ( - // FNV-1a - offset64 = uint64(14695981039346656037) - prime64 = uint64(1099511628211) - - // Init64 is what 64 bits hash values should be initialized with. - Init64 = offset64 -) - -// HashString64 returns the hash of s. -func HashString64(s string) uint64 { - return AddString64(Init64, s) -} - -// HashBytes64 returns the hash of u. -func HashBytes64(b []byte) uint64 { - return AddBytes64(Init64, b) -} - -// HashUint64 returns the hash of u. -func HashUint64(u uint64) uint64 { - return AddUint64(Init64, u) -} - -// AddString64 adds the hash of s to the precomputed hash value h. 
-func AddString64(h uint64, s string) uint64 { - /* - This is an unrolled version of this algorithm: - - for _, c := range s { - h = (h ^ uint64(c)) * prime64 - } - - It seems to be ~1.5x faster than the simple loop in BenchmarkHash64: - - - BenchmarkHash64/hash_function-4 30000000 56.1 ns/op 642.15 MB/s 0 B/op 0 allocs/op - - BenchmarkHash64/hash_function-4 50000000 38.6 ns/op 932.35 MB/s 0 B/op 0 allocs/op - - */ - for len(s) >= 8 { - h = (h ^ uint64(s[0])) * prime64 - h = (h ^ uint64(s[1])) * prime64 - h = (h ^ uint64(s[2])) * prime64 - h = (h ^ uint64(s[3])) * prime64 - h = (h ^ uint64(s[4])) * prime64 - h = (h ^ uint64(s[5])) * prime64 - h = (h ^ uint64(s[6])) * prime64 - h = (h ^ uint64(s[7])) * prime64 - s = s[8:] - } - - if len(s) >= 4 { - h = (h ^ uint64(s[0])) * prime64 - h = (h ^ uint64(s[1])) * prime64 - h = (h ^ uint64(s[2])) * prime64 - h = (h ^ uint64(s[3])) * prime64 - s = s[4:] - } - - if len(s) >= 2 { - h = (h ^ uint64(s[0])) * prime64 - h = (h ^ uint64(s[1])) * prime64 - s = s[2:] - } - - if len(s) > 0 { - h = (h ^ uint64(s[0])) * prime64 - } - - return h -} - -// AddBytes64 adds the hash of b to the precomputed hash value h. -func AddBytes64(h uint64, b []byte) uint64 { - for len(b) >= 8 { - h = (h ^ uint64(b[0])) * prime64 - h = (h ^ uint64(b[1])) * prime64 - h = (h ^ uint64(b[2])) * prime64 - h = (h ^ uint64(b[3])) * prime64 - h = (h ^ uint64(b[4])) * prime64 - h = (h ^ uint64(b[5])) * prime64 - h = (h ^ uint64(b[6])) * prime64 - h = (h ^ uint64(b[7])) * prime64 - b = b[8:] - } - - if len(b) >= 4 { - h = (h ^ uint64(b[0])) * prime64 - h = (h ^ uint64(b[1])) * prime64 - h = (h ^ uint64(b[2])) * prime64 - h = (h ^ uint64(b[3])) * prime64 - b = b[4:] - } - - if len(b) >= 2 { - h = (h ^ uint64(b[0])) * prime64 - h = (h ^ uint64(b[1])) * prime64 - b = b[2:] - } - - if len(b) > 0 { - h = (h ^ uint64(b[0])) * prime64 - } - - return h -} - -// AddUint64 adds the hash value of the 8 bytes of u to h. 
-func AddUint64(h uint64, u uint64) uint64 { - h = (h ^ ((u >> 56) & 0xFF)) * prime64 - h = (h ^ ((u >> 48) & 0xFF)) * prime64 - h = (h ^ ((u >> 40) & 0xFF)) * prime64 - h = (h ^ ((u >> 32) & 0xFF)) * prime64 - h = (h ^ ((u >> 24) & 0xFF)) * prime64 - h = (h ^ ((u >> 16) & 0xFF)) * prime64 - h = (h ^ ((u >> 8) & 0xFF)) * prime64 - h = (h ^ ((u >> 0) & 0xFF)) * prime64 - return h -} diff --git a/vendor/github.com/segmentio/fasthash/fnv1a/hash32.go b/vendor/github.com/segmentio/fasthash/fnv1a/hash32.go deleted file mode 100644 index ac91e247..00000000 --- a/vendor/github.com/segmentio/fasthash/fnv1a/hash32.go +++ /dev/null @@ -1,104 +0,0 @@ -package fnv1a - -const ( - // FNV-1a - offset32 = uint32(2166136261) - prime32 = uint32(16777619) - - // Init32 is what 32 bits hash values should be initialized with. - Init32 = offset32 -) - -// HashString32 returns the hash of s. -func HashString32(s string) uint32 { - return AddString32(Init32, s) -} - -// HashBytes32 returns the hash of u. -func HashBytes32(b []byte) uint32 { - return AddBytes32(Init32, b) -} - -// HashUint32 returns the hash of u. -func HashUint32(u uint32) uint32 { - return AddUint32(Init32, u) -} - -// AddString32 adds the hash of s to the precomputed hash value h. 
-func AddString32(h uint32, s string) uint32 { - for len(s) >= 8 { - h = (h ^ uint32(s[0])) * prime32 - h = (h ^ uint32(s[1])) * prime32 - h = (h ^ uint32(s[2])) * prime32 - h = (h ^ uint32(s[3])) * prime32 - h = (h ^ uint32(s[4])) * prime32 - h = (h ^ uint32(s[5])) * prime32 - h = (h ^ uint32(s[6])) * prime32 - h = (h ^ uint32(s[7])) * prime32 - s = s[8:] - } - - if len(s) >= 4 { - h = (h ^ uint32(s[0])) * prime32 - h = (h ^ uint32(s[1])) * prime32 - h = (h ^ uint32(s[2])) * prime32 - h = (h ^ uint32(s[3])) * prime32 - s = s[4:] - } - - if len(s) >= 2 { - h = (h ^ uint32(s[0])) * prime32 - h = (h ^ uint32(s[1])) * prime32 - s = s[2:] - } - - if len(s) > 0 { - h = (h ^ uint32(s[0])) * prime32 - } - - return h -} - -// AddBytes32 adds the hash of b to the precomputed hash value h. -func AddBytes32(h uint32, b []byte) uint32 { - for len(b) >= 8 { - h = (h ^ uint32(b[0])) * prime32 - h = (h ^ uint32(b[1])) * prime32 - h = (h ^ uint32(b[2])) * prime32 - h = (h ^ uint32(b[3])) * prime32 - h = (h ^ uint32(b[4])) * prime32 - h = (h ^ uint32(b[5])) * prime32 - h = (h ^ uint32(b[6])) * prime32 - h = (h ^ uint32(b[7])) * prime32 - b = b[8:] - } - - if len(b) >= 4 { - h = (h ^ uint32(b[0])) * prime32 - h = (h ^ uint32(b[1])) * prime32 - h = (h ^ uint32(b[2])) * prime32 - h = (h ^ uint32(b[3])) * prime32 - b = b[4:] - } - - if len(b) >= 2 { - h = (h ^ uint32(b[0])) * prime32 - h = (h ^ uint32(b[1])) * prime32 - b = b[2:] - } - - if len(b) > 0 { - h = (h ^ uint32(b[0])) * prime32 - } - - return h -} - -// AddUint32 adds the hash value of the 8 bytes of u to h. 
-func AddUint32(h, u uint32) uint32 { - h = (h ^ ((u >> 24) & 0xFF)) * prime32 - h = (h ^ ((u >> 16) & 0xFF)) * prime32 - h = (h ^ ((u >> 8) & 0xFF)) * prime32 - h = (h ^ ((u >> 0) & 0xFF)) * prime32 - return h -} diff --git a/vendor/github.com/tmthrgd/go-hex/.travis.yml b/vendor/github.com/tmthrgd/go-hex/.travis.yml deleted file mode 100644 index b73e2f33..00000000 --- a/vendor/github.com/tmthrgd/go-hex/.travis.yml +++ /dev/null @@ -1,11 +0,0 @@ -language: go -go: - - 1.10.x - - 1.11.x - - 1.12.x - - 1.13.x - - tip -matrix: - fast_finish: true - allow_failures: - - go: tip diff --git a/vendor/github.com/tmthrgd/go-hex/LICENSE b/vendor/github.com/tmthrgd/go-hex/LICENSE deleted file mode 100644 index 1163cdf2..00000000 --- a/vendor/github.com/tmthrgd/go-hex/LICENSE +++ /dev/null @@ -1,82 +0,0 @@ -Copyright (c) 2016, Tom Thorogood. -All rights reserved. - -Redistribution and use in source and binary forms, with or without -modification, are permitted provided that the following conditions are met: - * Redistributions of source code must retain the above copyright - notice, this list of conditions and the following disclaimer. - * Redistributions in binary form must reproduce the above copyright - notice, this list of conditions and the following disclaimer in the - documentation and/or other materials provided with the distribution. - * Neither the name of the Tom Thorogood nor the - names of its contributors may be used to endorse or promote products - derived from this software without specific prior written permission. - -THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" AND -ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED -WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE -DISCLAIMED. 
IN NO EVENT SHALL THE COPYRIGHT HOLDERS BE LIABLE FOR ANY -DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES -(INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; -LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND -ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT -(INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS -SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. - ----- Portions of the source code are also covered by the following license: ---- - -Copyright (c) 2012 The Go Authors. All rights reserved. - -Redistribution and use in source and binary forms, with or without -modification, are permitted provided that the following conditions are -met: - - * Redistributions of source code must retain the above copyright -notice, this list of conditions and the following disclaimer. - * Redistributions in binary form must reproduce the above -copyright notice, this list of conditions and the following disclaimer -in the documentation and/or other materials provided with the -distribution. - * Neither the name of Google Inc. nor the names of its -contributors may be used to endorse or promote products derived from -this software without specific prior written permission. - -THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS -"AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT -LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR -A PARTICULAR PURPOSE ARE DISCLAIMED. 
IN NO EVENT SHALL THE COPYRIGHT -OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, -SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT -LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, -DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY -THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT -(INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE -OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. - ----- Portions of the source code are also covered by the following license: ---- - -Copyright (c) 2005-2016, Wojciech Muła -All rights reserved. - -Redistribution and use in source and binary forms, with or without -modification, are permitted provided that the following conditions are -met: - -1. Redistributions of source code must retain the above copyright - notice, this list of conditions and the following disclaimer. - -2. Redistributions in binary form must reproduce the above copyright - notice, this list of conditions and the following disclaimer in the - documentation and/or other materials provided with the distribution. - -THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS -IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED -TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A -PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT -HOLDER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, -SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED -TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR -PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF -LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING -NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS -SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. 
diff --git a/vendor/github.com/tmthrgd/go-hex/README.md b/vendor/github.com/tmthrgd/go-hex/README.md deleted file mode 100644 index 565411fc..00000000 --- a/vendor/github.com/tmthrgd/go-hex/README.md +++ /dev/null @@ -1,108 +0,0 @@ -# go-hex - -[![GoDoc](https://godoc.org/github.com/tmthrgd/go-hex?status.svg)](https://godoc.org/github.com/tmthrgd/go-hex) -[![Build Status](https://travis-ci.org/tmthrgd/go-hex.svg?branch=master)](https://travis-ci.org/tmthrgd/go-hex) - -An efficient hexadecimal implementation for Golang. - -go-hex provides hex encoding and decoding using SSE/AVX instructions on x86-64. - -## Download - -``` -go get github.com/tmthrgd/go-hex -``` - -## Benchmark - -go-hex: -``` -BenchmarkEncode/15-8 100000000 17.4 ns/op 863.43 MB/s -BenchmarkEncode/32-8 100000000 11.9 ns/op 2690.43 MB/s -BenchmarkEncode/128-8 100000000 21.4 ns/op 5982.92 MB/s -BenchmarkEncode/1k-8 20000000 88.5 ns/op 11572.80 MB/s -BenchmarkEncode/16k-8 1000000 1254 ns/op 13058.10 MB/s -BenchmarkEncode/128k-8 100000 12965 ns/op 10109.53 MB/s -BenchmarkEncode/1M-8 10000 119465 ns/op 8777.23 MB/s -BenchmarkEncode/16M-8 500 3530380 ns/op 4752.24 MB/s -BenchmarkEncode/128M-8 50 28001913 ns/op 4793.16 MB/s -BenchmarkDecode/14-8 100000000 12.6 ns/op 1110.01 MB/s -BenchmarkDecode/32-8 100000000 12.5 ns/op 2558.10 MB/s -BenchmarkDecode/128-8 50000000 27.2 ns/op 4697.66 MB/s -BenchmarkDecode/1k-8 10000000 168 ns/op 6093.43 MB/s -BenchmarkDecode/16k-8 500000 2543 ns/op 6442.09 MB/s -BenchmarkDecode/128k-8 100000 20339 ns/op 6444.24 MB/s -BenchmarkDecode/1M-8 10000 164313 ns/op 6381.57 MB/s -BenchmarkDecode/16M-8 500 3099822 ns/op 5412.31 MB/s -BenchmarkDecode/128M-8 50 24865822 ns/op 5397.68 MB/s -``` - -[encoding/hex](https://golang.org/pkg/encoding/hex/): -``` -BenchmarkRefEncode/15-8 50000000 36.1 ns/op 415.07 MB/s -BenchmarkRefEncode/32-8 20000000 72.9 ns/op 439.14 MB/s -BenchmarkRefEncode/128-8 5000000 289 ns/op 441.54 MB/s -BenchmarkRefEncode/1k-8 1000000 2268 ns/op 451.49 MB/s 
-BenchmarkRefEncode/16k-8 30000 39110 ns/op 418.91 MB/s -BenchmarkRefEncode/128k-8 5000 291260 ns/op 450.02 MB/s -BenchmarkRefEncode/1M-8 1000 2277578 ns/op 460.39 MB/s -BenchmarkRefEncode/16M-8 30 37087543 ns/op 452.37 MB/s -BenchmarkRefEncode/128M-8 5 293611713 ns/op 457.13 MB/s -BenchmarkRefDecode/14-8 30000000 53.7 ns/op 260.49 MB/s -BenchmarkRefDecode/32-8 10000000 128 ns/op 248.44 MB/s -BenchmarkRefDecode/128-8 3000000 481 ns/op 265.95 MB/s -BenchmarkRefDecode/1k-8 300000 4172 ns/op 245.43 MB/s -BenchmarkRefDecode/16k-8 10000 111989 ns/op 146.30 MB/s -BenchmarkRefDecode/128k-8 2000 909077 ns/op 144.18 MB/s -BenchmarkRefDecode/1M-8 200 7275779 ns/op 144.12 MB/s -BenchmarkRefDecode/16M-8 10 116574839 ns/op 143.92 MB/s -BenchmarkRefDecode/128M-8 2 933871637 ns/op 143.72 MB/s -``` - -[encoding/hex](https://golang.org/pkg/encoding/hex/) -> go-hex: -``` -benchmark old ns/op new ns/op delta -BenchmarkEncode/15-8 36.1 17.4 -51.80% -BenchmarkEncode/32-8 72.9 11.9 -83.68% -BenchmarkEncode/128-8 289 21.4 -92.60% -BenchmarkEncode/1k-8 2268 88.5 -96.10% -BenchmarkEncode/16k-8 39110 1254 -96.79% -BenchmarkEncode/128k-8 291260 12965 -95.55% -BenchmarkEncode/1M-8 2277578 119465 -94.75% -BenchmarkEncode/16M-8 37087543 3530380 -90.48% -BenchmarkEncode/128M-8 293611713 28001913 -90.46% -BenchmarkDecode/14-8 53.7 12.6 -76.54% -BenchmarkDecode/32-8 128 12.5 -90.23% -BenchmarkDecode/128-8 481 27.2 -94.35% -BenchmarkDecode/1k-8 4172 168 -95.97% -BenchmarkDecode/16k-8 111989 2543 -97.73% -BenchmarkDecode/128k-8 909077 20339 -97.76% -BenchmarkDecode/1M-8 7275779 164313 -97.74% -BenchmarkDecode/16M-8 116574839 3099822 -97.34% -BenchmarkDecode/128M-8 933871637 24865822 -97.34% - -benchmark old MB/s new MB/s speedup -BenchmarkEncode/15-8 415.07 863.43 2.08x -BenchmarkEncode/32-8 439.14 2690.43 6.13x -BenchmarkEncode/128-8 441.54 5982.92 13.55x -BenchmarkEncode/1k-8 451.49 11572.80 25.63x -BenchmarkEncode/16k-8 418.91 13058.10 31.17x -BenchmarkEncode/128k-8 450.02 10109.53 22.46x 
-BenchmarkEncode/1M-8 460.39 8777.23 19.06x -BenchmarkEncode/16M-8 452.37 4752.24 10.51x -BenchmarkEncode/128M-8 457.13 4793.16 10.49x -BenchmarkDecode/14-8 260.49 1110.01 4.26x -BenchmarkDecode/32-8 248.44 2558.10 10.30x -BenchmarkDecode/128-8 265.95 4697.66 17.66x -BenchmarkDecode/1k-8 245.43 6093.43 24.83x -BenchmarkDecode/16k-8 146.30 6442.09 44.03x -BenchmarkDecode/128k-8 144.18 6444.24 44.70x -BenchmarkDecode/1M-8 144.12 6381.57 44.28x -BenchmarkDecode/16M-8 143.92 5412.31 37.61x -BenchmarkDecode/128M-8 143.72 5397.68 37.56x -``` - -## License - -Unless otherwise noted, the go-hex source files are distributed under the Modified BSD License -found in the LICENSE file. diff --git a/vendor/github.com/tmthrgd/go-hex/hex.go b/vendor/github.com/tmthrgd/go-hex/hex.go deleted file mode 100644 index f4eca0e8..00000000 --- a/vendor/github.com/tmthrgd/go-hex/hex.go +++ /dev/null @@ -1,137 +0,0 @@ -// Copyright 2016 Tom Thorogood. All rights reserved. -// Use of this source code is governed by a -// Modified BSD License license that can be found in -// the LICENSE file. -// -// Copyright 2009 The Go Authors. All rights reserved. -// Use of this source code is governed by a BSD-style -// license that can be found in the LICENSE file. - -// Package hex is an efficient hexadecimal implementation for Golang. -package hex - -import ( - "errors" - "fmt" -) - -var errLength = errors.New("go-hex: odd length hex string") - -var ( - lower = []byte("0123456789abcdef") - upper = []byte("0123456789ABCDEF") -) - -// InvalidByteError values describe errors resulting from an invalid byte in a hex string. -type InvalidByteError byte - -func (e InvalidByteError) Error() string { - return fmt.Sprintf("go-hex: invalid byte: %#U", rune(e)) -} - -// EncodedLen returns the length of an encoding of n source bytes. -func EncodedLen(n int) int { - return n * 2 -} - -// DecodedLen returns the length of a decoding of n source bytes. 
-func DecodedLen(n int) int { - return n / 2 -} - -// Encode encodes src into EncodedLen(len(src)) -// bytes of dst. As a convenience, it returns the number -// of bytes written to dst, but this value is always EncodedLen(len(src)). -// Encode implements lowercase hexadecimal encoding. -func Encode(dst, src []byte) int { - return RawEncode(dst, src, lower) -} - -// EncodeUpper encodes src into EncodedLen(len(src)) -// bytes of dst. As a convenience, it returns the number -// of bytes written to dst, but this value is always EncodedLen(len(src)). -// EncodeUpper implements uppercase hexadecimal encoding. -func EncodeUpper(dst, src []byte) int { - return RawEncode(dst, src, upper) -} - -// EncodeToString returns the lowercase hexadecimal encoding of src. -func EncodeToString(src []byte) string { - return RawEncodeToString(src, lower) -} - -// EncodeUpperToString returns the uppercase hexadecimal encoding of src. -func EncodeUpperToString(src []byte) string { - return RawEncodeToString(src, upper) -} - -// RawEncodeToString returns the hexadecimal encoding of src for a given -// alphabet. -func RawEncodeToString(src, alpha []byte) string { - dst := make([]byte, EncodedLen(len(src))) - RawEncode(dst, src, alpha) - return string(dst) -} - -// DecodeString returns the bytes represented by the hexadecimal string s. -func DecodeString(s string) ([]byte, error) { - src := []byte(s) - dst := make([]byte, DecodedLen(len(src))) - - if _, err := Decode(dst, src); err != nil { - return nil, err - } - - return dst, nil -} - -// MustDecodeString is like DecodeString but panics if the string cannot be -// parsed. It simplifies safe initialization of global variables holding -// binary data. 
-func MustDecodeString(str string) []byte { - dst, err := DecodeString(str) - if err != nil { - panic(err) - } - - return dst -} - -func encodeGeneric(dst, src, alpha []byte) { - for i, v := range src { - dst[i*2] = alpha[v>>4] - dst[i*2+1] = alpha[v&0x0f] - } -} - -func decodeGeneric(dst, src []byte) (uint64, bool) { - for i := 0; i < len(src)/2; i++ { - a, ok := fromHexChar(src[i*2]) - if !ok { - return uint64(i * 2), false - } - - b, ok := fromHexChar(src[i*2+1]) - if !ok { - return uint64(i*2 + 1), false - } - - dst[i] = (a << 4) | b - } - - return 0, true -} - -// fromHexChar converts a hex character into its value and a success flag. -func fromHexChar(c byte) (byte, bool) { - switch { - case '0' <= c && c <= '9': - return c - '0', true - case 'a' <= c && c <= 'f': - return c - 'a' + 10, true - case 'A' <= c && c <= 'F': - return c - 'A' + 10, true - } - - return 0, false -} diff --git a/vendor/github.com/tmthrgd/go-hex/hex_amd64.go b/vendor/github.com/tmthrgd/go-hex/hex_amd64.go deleted file mode 100644 index 0f9f9a5c..00000000 --- a/vendor/github.com/tmthrgd/go-hex/hex_amd64.go +++ /dev/null @@ -1,94 +0,0 @@ -// Copyright 2016 Tom Thorogood. All rights reserved. -// Use of this source code is governed by a -// Modified BSD License license that can be found in -// the LICENSE file. - -// +build amd64,!gccgo,!appengine - -package hex - -import "golang.org/x/sys/cpu" - -// RawEncode encodes src into EncodedLen(len(src)) -// bytes of dst. As a convenience, it returns the number -// of bytes written to dst, but this value is always EncodedLen(len(src)). -// RawEncode implements hexadecimal encoding for a given alphabet. 
-func RawEncode(dst, src, alpha []byte) int { - if len(alpha) != 16 { - panic("invalid alphabet") - } - - if len(dst) < len(src)*2 { - panic("dst buffer is too small") - } - - if len(src) == 0 { - return 0 - } - - switch { - case cpu.X86.HasAVX: - encodeAVX(&dst[0], &src[0], uint64(len(src)), &alpha[0]) - case cpu.X86.HasSSE41: - encodeSSE(&dst[0], &src[0], uint64(len(src)), &alpha[0]) - default: - encodeGeneric(dst, src, alpha) - } - - return len(src) * 2 -} - -// Decode decodes src into DecodedLen(len(src)) bytes, returning the actual -// number of bytes written to dst. -// -// If Decode encounters invalid input, it returns an error describing the failure. -func Decode(dst, src []byte) (int, error) { - if len(src)%2 != 0 { - return 0, errLength - } - - if len(dst) < len(src)/2 { - panic("dst buffer is too small") - } - - if len(src) == 0 { - return 0, nil - } - - var ( - n uint64 - ok bool - ) - switch { - case cpu.X86.HasAVX: - n, ok = decodeAVX(&dst[0], &src[0], uint64(len(src))) - case cpu.X86.HasSSE41: - n, ok = decodeSSE(&dst[0], &src[0], uint64(len(src))) - default: - n, ok = decodeGeneric(dst, src) - } - - if !ok { - return 0, InvalidByteError(src[n]) - } - - return len(src) / 2, nil -} - -//go:generate go run asm_gen.go - -// This function is implemented in hex_encode_amd64.s -//go:noescape -func encodeAVX(dst *byte, src *byte, len uint64, alpha *byte) - -// This function is implemented in hex_encode_amd64.s -//go:noescape -func encodeSSE(dst *byte, src *byte, len uint64, alpha *byte) - -// This function is implemented in hex_decode_amd64.s -//go:noescape -func decodeAVX(dst *byte, src *byte, len uint64) (n uint64, ok bool) - -// This function is implemented in hex_decode_amd64.s -//go:noescape -func decodeSSE(dst *byte, src *byte, len uint64) (n uint64, ok bool) diff --git a/vendor/github.com/tmthrgd/go-hex/hex_decode_amd64.s b/vendor/github.com/tmthrgd/go-hex/hex_decode_amd64.s deleted file mode 100644 index 25d9cefb..00000000 --- 
a/vendor/github.com/tmthrgd/go-hex/hex_decode_amd64.s +++ /dev/null @@ -1,303 +0,0 @@ -// Copyright 2016 Tom Thorogood. All rights reserved. -// Use of this source code is governed by a -// Modified BSD License license that can be found in -// the LICENSE file. -// -// Copyright 2005-2016, Wojciech Muła. All rights reserved. -// Use of this source code is governed by a -// Simplified BSD License license that can be found in -// the LICENSE file. -// -// This file is auto-generated - do not modify - -// +build amd64,!gccgo,!appengine - -#include "textflag.h" - -DATA decodeBase<>+0x00(SB)/8, $0x3030303030303030 -DATA decodeBase<>+0x08(SB)/8, $0x3030303030303030 -DATA decodeBase<>+0x10(SB)/8, $0x2727272727272727 -DATA decodeBase<>+0x18(SB)/8, $0x2727272727272727 -GLOBL decodeBase<>(SB),RODATA,$32 - -DATA decodeToLower<>+0x00(SB)/8, $0x2020202020202020 -DATA decodeToLower<>+0x08(SB)/8, $0x2020202020202020 -GLOBL decodeToLower<>(SB),RODATA,$16 - -DATA decodeHigh<>+0x00(SB)/8, $0x0e0c0a0806040200 -DATA decodeHigh<>+0x08(SB)/8, $0xffffffffffffffff -GLOBL decodeHigh<>(SB),RODATA,$16 - -DATA decodeLow<>+0x00(SB)/8, $0x0f0d0b0907050301 -DATA decodeLow<>+0x08(SB)/8, $0xffffffffffffffff -GLOBL decodeLow<>(SB),RODATA,$16 - -DATA decodeValid<>+0x00(SB)/8, $0xb0b0b0b0b0b0b0b0 -DATA decodeValid<>+0x08(SB)/8, $0xb0b0b0b0b0b0b0b0 -DATA decodeValid<>+0x10(SB)/8, $0xb9b9b9b9b9b9b9b9 -DATA decodeValid<>+0x18(SB)/8, $0xb9b9b9b9b9b9b9b9 -DATA decodeValid<>+0x20(SB)/8, $0xe1e1e1e1e1e1e1e1 -DATA decodeValid<>+0x28(SB)/8, $0xe1e1e1e1e1e1e1e1 -DATA decodeValid<>+0x30(SB)/8, $0xe6e6e6e6e6e6e6e6 -DATA decodeValid<>+0x38(SB)/8, $0xe6e6e6e6e6e6e6e6 -GLOBL decodeValid<>(SB),RODATA,$64 - -DATA decodeToSigned<>+0x00(SB)/8, $0x8080808080808080 -DATA decodeToSigned<>+0x08(SB)/8, $0x8080808080808080 -GLOBL decodeToSigned<>(SB),RODATA,$16 - -TEXT ·decodeAVX(SB),NOSPLIT,$0 - MOVQ dst+0(FP), DI - MOVQ src+8(FP), SI - MOVQ len+16(FP), BX - MOVQ SI, R15 - MOVOU decodeValid<>(SB), X14 - MOVOU 
decodeValid<>+0x20(SB), X15 - MOVW $65535, DX - CMPQ BX, $16 - JB tail -bigloop: - MOVOU (SI), X0 - VPXOR decodeToSigned<>(SB), X0, X1 - POR decodeToLower<>(SB), X0 - VPXOR decodeToSigned<>(SB), X0, X2 - VPCMPGTB X1, X14, X3 - PCMPGTB decodeValid<>+0x10(SB), X1 - VPCMPGTB X2, X15, X4 - PCMPGTB decodeValid<>+0x30(SB), X2 - PAND X4, X1 - POR X2, X3 - POR X1, X3 - PMOVMSKB X3, AX - TESTW AX, DX - JNZ invalid - PSUBB decodeBase<>(SB), X0 - PANDN decodeBase<>+0x10(SB), X4 - PSUBB X4, X0 - VPSHUFB decodeLow<>(SB), X0, X3 - PSHUFB decodeHigh<>(SB), X0 - PSLLW $4, X0 - POR X3, X0 - MOVQ X0, (DI) - SUBQ $16, BX - JZ ret - ADDQ $16, SI - ADDQ $8, DI - CMPQ BX, $16 - JAE bigloop -tail: - MOVQ $16, CX - SUBQ BX, CX - SHRW CX, DX - CMPQ BX, $4 - JB tail_in_2 - JE tail_in_4 - CMPQ BX, $8 - JB tail_in_6 - JE tail_in_8 - CMPQ BX, $12 - JB tail_in_10 - JE tail_in_12 -tail_in_14: - PINSRW $6, 12(SI), X0 -tail_in_12: - PINSRW $5, 10(SI), X0 -tail_in_10: - PINSRW $4, 8(SI), X0 -tail_in_8: - PINSRQ $0, (SI), X0 - JMP tail_conv -tail_in_6: - PINSRW $2, 4(SI), X0 -tail_in_4: - PINSRW $1, 2(SI), X0 -tail_in_2: - PINSRW $0, (SI), X0 -tail_conv: - VPXOR decodeToSigned<>(SB), X0, X1 - POR decodeToLower<>(SB), X0 - VPXOR decodeToSigned<>(SB), X0, X2 - VPCMPGTB X1, X14, X3 - PCMPGTB decodeValid<>+0x10(SB), X1 - VPCMPGTB X2, X15, X4 - PCMPGTB decodeValid<>+0x30(SB), X2 - PAND X4, X1 - POR X2, X3 - POR X1, X3 - PMOVMSKB X3, AX - TESTW AX, DX - JNZ invalid - PSUBB decodeBase<>(SB), X0 - PANDN decodeBase<>+0x10(SB), X4 - PSUBB X4, X0 - VPSHUFB decodeLow<>(SB), X0, X3 - PSHUFB decodeHigh<>(SB), X0 - PSLLW $4, X0 - POR X3, X0 - CMPQ BX, $4 - JB tail_out_2 - JE tail_out_4 - CMPQ BX, $8 - JB tail_out_6 - JE tail_out_8 - CMPQ BX, $12 - JB tail_out_10 - JE tail_out_12 -tail_out_14: - PEXTRB $6, X0, 6(DI) -tail_out_12: - PEXTRB $5, X0, 5(DI) -tail_out_10: - PEXTRB $4, X0, 4(DI) -tail_out_8: - MOVL X0, (DI) - JMP ret -tail_out_6: - PEXTRB $2, X0, 2(DI) -tail_out_4: - PEXTRB $1, X0, 1(DI) -tail_out_2: - 
PEXTRB $0, X0, (DI) -ret: - MOVB $1, ok+32(FP) - RET -invalid: - BSFW AX, AX - SUBQ R15, SI - ADDQ SI, AX - MOVQ AX, n+24(FP) - MOVB $0, ok+32(FP) - RET - -TEXT ·decodeSSE(SB),NOSPLIT,$0 - MOVQ dst+0(FP), DI - MOVQ src+8(FP), SI - MOVQ len+16(FP), BX - MOVQ SI, R15 - MOVOU decodeValid<>(SB), X14 - MOVOU decodeValid<>+0x20(SB), X15 - MOVW $65535, DX - CMPQ BX, $16 - JB tail -bigloop: - MOVOU (SI), X0 - MOVOU X0, X1 - PXOR decodeToSigned<>(SB), X1 - POR decodeToLower<>(SB), X0 - MOVOU X0, X2 - PXOR decodeToSigned<>(SB), X2 - MOVOU X14, X3 - PCMPGTB X1, X3 - PCMPGTB decodeValid<>+0x10(SB), X1 - MOVOU X15, X4 - PCMPGTB X2, X4 - PCMPGTB decodeValid<>+0x30(SB), X2 - PAND X4, X1 - POR X2, X3 - POR X1, X3 - PMOVMSKB X3, AX - TESTW AX, DX - JNZ invalid - PSUBB decodeBase<>(SB), X0 - PANDN decodeBase<>+0x10(SB), X4 - PSUBB X4, X0 - MOVOU X0, X3 - PSHUFB decodeLow<>(SB), X3 - PSHUFB decodeHigh<>(SB), X0 - PSLLW $4, X0 - POR X3, X0 - MOVQ X0, (DI) - SUBQ $16, BX - JZ ret - ADDQ $16, SI - ADDQ $8, DI - CMPQ BX, $16 - JAE bigloop -tail: - MOVQ $16, CX - SUBQ BX, CX - SHRW CX, DX - CMPQ BX, $4 - JB tail_in_2 - JE tail_in_4 - CMPQ BX, $8 - JB tail_in_6 - JE tail_in_8 - CMPQ BX, $12 - JB tail_in_10 - JE tail_in_12 -tail_in_14: - PINSRW $6, 12(SI), X0 -tail_in_12: - PINSRW $5, 10(SI), X0 -tail_in_10: - PINSRW $4, 8(SI), X0 -tail_in_8: - PINSRQ $0, (SI), X0 - JMP tail_conv -tail_in_6: - PINSRW $2, 4(SI), X0 -tail_in_4: - PINSRW $1, 2(SI), X0 -tail_in_2: - PINSRW $0, (SI), X0 -tail_conv: - MOVOU X0, X1 - PXOR decodeToSigned<>(SB), X1 - POR decodeToLower<>(SB), X0 - MOVOU X0, X2 - PXOR decodeToSigned<>(SB), X2 - MOVOU X14, X3 - PCMPGTB X1, X3 - PCMPGTB decodeValid<>+0x10(SB), X1 - MOVOU X15, X4 - PCMPGTB X2, X4 - PCMPGTB decodeValid<>+0x30(SB), X2 - PAND X4, X1 - POR X2, X3 - POR X1, X3 - PMOVMSKB X3, AX - TESTW AX, DX - JNZ invalid - PSUBB decodeBase<>(SB), X0 - PANDN decodeBase<>+0x10(SB), X4 - PSUBB X4, X0 - MOVOU X0, X3 - PSHUFB decodeLow<>(SB), X3 - PSHUFB decodeHigh<>(SB), X0 - 
PSLLW $4, X0 - POR X3, X0 - CMPQ BX, $4 - JB tail_out_2 - JE tail_out_4 - CMPQ BX, $8 - JB tail_out_6 - JE tail_out_8 - CMPQ BX, $12 - JB tail_out_10 - JE tail_out_12 -tail_out_14: - PEXTRB $6, X0, 6(DI) -tail_out_12: - PEXTRB $5, X0, 5(DI) -tail_out_10: - PEXTRB $4, X0, 4(DI) -tail_out_8: - MOVL X0, (DI) - JMP ret -tail_out_6: - PEXTRB $2, X0, 2(DI) -tail_out_4: - PEXTRB $1, X0, 1(DI) -tail_out_2: - PEXTRB $0, X0, (DI) -ret: - MOVB $1, ok+32(FP) - RET -invalid: - BSFW AX, AX - SUBQ R15, SI - ADDQ SI, AX - MOVQ AX, n+24(FP) - MOVB $0, ok+32(FP) - RET diff --git a/vendor/github.com/tmthrgd/go-hex/hex_encode_amd64.s b/vendor/github.com/tmthrgd/go-hex/hex_encode_amd64.s deleted file mode 100644 index 96e6e4ca..00000000 --- a/vendor/github.com/tmthrgd/go-hex/hex_encode_amd64.s +++ /dev/null @@ -1,227 +0,0 @@ -// Copyright 2016 Tom Thorogood. All rights reserved. -// Use of this source code is governed by a -// Modified BSD License license that can be found in -// the LICENSE file. -// -// Copyright 2005-2016, Wojciech Muła. All rights reserved. -// Use of this source code is governed by a -// Simplified BSD License license that can be found in -// the LICENSE file. 
-// -// This file is auto-generated - do not modify - -// +build amd64,!gccgo,!appengine - -#include "textflag.h" - -DATA encodeMask<>+0x00(SB)/8, $0x0f0f0f0f0f0f0f0f -DATA encodeMask<>+0x08(SB)/8, $0x0f0f0f0f0f0f0f0f -GLOBL encodeMask<>(SB),RODATA,$16 - -TEXT ·encodeAVX(SB),NOSPLIT,$0 - MOVQ dst+0(FP), DI - MOVQ src+8(FP), SI - MOVQ len+16(FP), BX - MOVQ alpha+24(FP), DX - MOVOU (DX), X15 - CMPQ BX, $16 - JB tail -bigloop: - MOVOU -16(SI)(BX*1), X0 - VPAND encodeMask<>(SB), X0, X1 - PSRLW $4, X0 - PAND encodeMask<>(SB), X0 - VPUNPCKHBW X1, X0, X3 - PUNPCKLBW X1, X0 - VPSHUFB X0, X15, X1 - VPSHUFB X3, X15, X2 - MOVOU X2, -16(DI)(BX*2) - MOVOU X1, -32(DI)(BX*2) - SUBQ $16, BX - JZ ret - CMPQ BX, $16 - JAE bigloop -tail: - CMPQ BX, $2 - JB tail_in_1 - JE tail_in_2 - CMPQ BX, $4 - JB tail_in_3 - JE tail_in_4 - CMPQ BX, $6 - JB tail_in_5 - JE tail_in_6 - CMPQ BX, $8 - JB tail_in_7 -tail_in_8: - MOVQ (SI), X0 - JMP tail_conv -tail_in_7: - PINSRB $6, 6(SI), X0 -tail_in_6: - PINSRB $5, 5(SI), X0 -tail_in_5: - PINSRB $4, 4(SI), X0 -tail_in_4: - PINSRD $0, (SI), X0 - JMP tail_conv -tail_in_3: - PINSRB $2, 2(SI), X0 -tail_in_2: - PINSRB $1, 1(SI), X0 -tail_in_1: - PINSRB $0, (SI), X0 -tail_conv: - VPAND encodeMask<>(SB), X0, X1 - PSRLW $4, X0 - PAND encodeMask<>(SB), X0 - PUNPCKLBW X1, X0 - VPSHUFB X0, X15, X1 - CMPQ BX, $2 - JB tail_out_1 - JE tail_out_2 - CMPQ BX, $4 - JB tail_out_3 - JE tail_out_4 - CMPQ BX, $6 - JB tail_out_5 - JE tail_out_6 - CMPQ BX, $8 - JB tail_out_7 -tail_out_8: - MOVOU X1, (DI) - SUBQ $8, BX - JZ ret - ADDQ $8, SI - ADDQ $16, DI - JMP tail -tail_out_7: - PEXTRB $13, X1, 13(DI) - PEXTRB $12, X1, 12(DI) -tail_out_6: - PEXTRB $11, X1, 11(DI) - PEXTRB $10, X1, 10(DI) -tail_out_5: - PEXTRB $9, X1, 9(DI) - PEXTRB $8, X1, 8(DI) -tail_out_4: - MOVQ X1, (DI) - RET -tail_out_3: - PEXTRB $5, X1, 5(DI) - PEXTRB $4, X1, 4(DI) -tail_out_2: - PEXTRB $3, X1, 3(DI) - PEXTRB $2, X1, 2(DI) -tail_out_1: - PEXTRB $1, X1, 1(DI) - PEXTRB $0, X1, (DI) -ret: - RET - -TEXT 
·encodeSSE(SB),NOSPLIT,$0 - MOVQ dst+0(FP), DI - MOVQ src+8(FP), SI - MOVQ len+16(FP), BX - MOVQ alpha+24(FP), DX - MOVOU (DX), X15 - CMPQ BX, $16 - JB tail -bigloop: - MOVOU -16(SI)(BX*1), X0 - MOVOU X0, X1 - PAND encodeMask<>(SB), X1 - PSRLW $4, X0 - PAND encodeMask<>(SB), X0 - MOVOU X0, X3 - PUNPCKHBW X1, X3 - PUNPCKLBW X1, X0 - MOVOU X15, X1 - PSHUFB X0, X1 - MOVOU X15, X2 - PSHUFB X3, X2 - MOVOU X2, -16(DI)(BX*2) - MOVOU X1, -32(DI)(BX*2) - SUBQ $16, BX - JZ ret - CMPQ BX, $16 - JAE bigloop -tail: - CMPQ BX, $2 - JB tail_in_1 - JE tail_in_2 - CMPQ BX, $4 - JB tail_in_3 - JE tail_in_4 - CMPQ BX, $6 - JB tail_in_5 - JE tail_in_6 - CMPQ BX, $8 - JB tail_in_7 -tail_in_8: - MOVQ (SI), X0 - JMP tail_conv -tail_in_7: - PINSRB $6, 6(SI), X0 -tail_in_6: - PINSRB $5, 5(SI), X0 -tail_in_5: - PINSRB $4, 4(SI), X0 -tail_in_4: - PINSRD $0, (SI), X0 - JMP tail_conv -tail_in_3: - PINSRB $2, 2(SI), X0 -tail_in_2: - PINSRB $1, 1(SI), X0 -tail_in_1: - PINSRB $0, (SI), X0 -tail_conv: - MOVOU X0, X1 - PAND encodeMask<>(SB), X1 - PSRLW $4, X0 - PAND encodeMask<>(SB), X0 - PUNPCKLBW X1, X0 - MOVOU X15, X1 - PSHUFB X0, X1 - CMPQ BX, $2 - JB tail_out_1 - JE tail_out_2 - CMPQ BX, $4 - JB tail_out_3 - JE tail_out_4 - CMPQ BX, $6 - JB tail_out_5 - JE tail_out_6 - CMPQ BX, $8 - JB tail_out_7 -tail_out_8: - MOVOU X1, (DI) - SUBQ $8, BX - JZ ret - ADDQ $8, SI - ADDQ $16, DI - JMP tail -tail_out_7: - PEXTRB $13, X1, 13(DI) - PEXTRB $12, X1, 12(DI) -tail_out_6: - PEXTRB $11, X1, 11(DI) - PEXTRB $10, X1, 10(DI) -tail_out_5: - PEXTRB $9, X1, 9(DI) - PEXTRB $8, X1, 8(DI) -tail_out_4: - MOVQ X1, (DI) - RET -tail_out_3: - PEXTRB $5, X1, 5(DI) - PEXTRB $4, X1, 4(DI) -tail_out_2: - PEXTRB $3, X1, 3(DI) - PEXTRB $2, X1, 2(DI) -tail_out_1: - PEXTRB $1, X1, 1(DI) - PEXTRB $0, X1, (DI) -ret: - RET diff --git a/vendor/github.com/tmthrgd/go-hex/hex_other.go b/vendor/github.com/tmthrgd/go-hex/hex_other.go deleted file mode 100644 index fab23218..00000000 --- a/vendor/github.com/tmthrgd/go-hex/hex_other.go 
+++ /dev/null @@ -1,36 +0,0 @@ -// Copyright 2009 The Go Authors. All rights reserved. -// Use of this source code is governed by a BSD-style -// license that can be found in the LICENSE file. - -// +build !amd64 gccgo appengine - -package hex - -// RawEncode encodes src into EncodedLen(len(src)) -// bytes of dst. As a convenience, it returns the number -// of bytes written to dst, but this value is always EncodedLen(len(src)). -// RawEncode implements hexadecimal encoding for a given alphabet. -func RawEncode(dst, src, alpha []byte) int { - if len(alpha) != 16 { - panic("invalid alphabet") - } - - encodeGeneric(dst, src, alpha) - return len(src) * 2 -} - -// Decode decodes src into DecodedLen(len(src)) bytes, returning the actual -// number of bytes written to dst. -// -// If Decode encounters invalid input, it returns an error describing the failure. -func Decode(dst, src []byte) (int, error) { - if len(src)%2 == 1 { - return 0, errLength - } - - if n, ok := decodeGeneric(dst, src); !ok { - return 0, InvalidByteError(src[n]) - } - - return len(src) / 2, nil -} diff --git a/vendor/github.com/upper/db/v4/.gitignore b/vendor/github.com/upper/db/v4/.gitignore deleted file mode 100644 index 29460701..00000000 --- a/vendor/github.com/upper/db/v4/.gitignore +++ /dev/null @@ -1,4 +0,0 @@ -*.sw? -*.db -*.tmp -generated_*.go diff --git a/vendor/github.com/upper/db/v4/LICENSE b/vendor/github.com/upper/db/v4/LICENSE deleted file mode 100644 index 4004d2ba..00000000 --- a/vendor/github.com/upper/db/v4/LICENSE +++ /dev/null @@ -1,20 +0,0 @@ -Copyright (c) 2012-present The upper.io/db authors. All rights reserved. 
- -MIT License - -Permission is hereby granted, free of charge, to any person obtaining a copy of -this software and associated documentation files (the "Software"), to deal in -the Software without restriction, including without limitation the rights to -use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of -the Software, and to permit persons to whom the Software is furnished to do so, -subject to the following conditions: - -The above copyright notice and this permission notice shall be included in all -copies or substantial portions of the Software. - -THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR -IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS -FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR -COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER -IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN -CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE. diff --git a/vendor/github.com/upper/db/v4/Makefile b/vendor/github.com/upper/db/v4/Makefile deleted file mode 100644 index adc3da8a..00000000 --- a/vendor/github.com/upper/db/v4/Makefile +++ /dev/null @@ -1,42 +0,0 @@ -SHELL ?= /bin/bash - -PARALLEL_FLAGS ?= --halt-on-error 2 --jobs=4 -v -u - -TEST_FLAGS ?= - -UPPER_DB_LOG ?= WARN - -export TEST_FLAGS -export PARALLEL_FLAGS -export UPPER_DB_LOG - -test: go-test-internal test-adapters - -benchmark: go-benchmark-internal - -go-benchmark-%: - go test -v -benchtime=500ms -bench=. ./$*/... - -go-test-%: - go test -v ./$*/... 
- -test-adapters: \ - test-adapter-postgresql \ - test-adapter-cockroachdb \ - test-adapter-mysql \ - test-adapter-mssql \ - test-adapter-sqlite \ - test-adapter-ql \ - test-adapter-mongo - -test-adapter-%: - ($(MAKE) -C adapter/$* test-extended || exit 1) - -test-generic: - export TEST_FLAGS="-run TestGeneric"; \ - $(MAKE) test-adapters - -goimports: - for FILE in $$(find -name "*.go" | grep -v vendor); do \ - goimports -w $$FILE; \ - done diff --git a/vendor/github.com/upper/db/v4/README.md b/vendor/github.com/upper/db/v4/README.md deleted file mode 100644 index 0fa8c6d3..00000000 --- a/vendor/github.com/upper/db/v4/README.md +++ /dev/null @@ -1,37 +0,0 @@ -
-<!-- README header images: project logo and "upper/db unit tests status" build badge -->
- -# upper/db - -`upper/db` is a productive data access layer (DAL) for [Go](https://golang.org) -that provides agnostic tools to work with different data sources, such as: - -* [PostgreSQL](https://upper.io/v4/adapter/postgresql) -* [MySQL](https://upper.io/v4/adapter/mysql) -* [MSSQL](https://upper.io/v4/adapter/mssql) -* [CockroachDB](https://upper.io/v4/adapter/cockroachdb) -* [MongoDB](https://upper.io/v4/adapter/mongo) -* [QL](https://upper.io/v4/adapter/ql) -* [SQLite](https://upper.io/v4/adapter/sqlite) - -See [upper.io/v4](//upper.io/v4) for documentation and code samples. - -## The tour - -![tour](https://user-images.githubusercontent.com/385670/91495824-c6fabb00-e880-11ea-925b-a30b94474610.png) - -Take the [tour](https://tour.upper.io) to see real live examples in your -browser. - -## License - -Licensed under [MIT License](./LICENSE) - -## Contributors - -See the [list of contributors](https://github.com/upper/db/graphs/contributors). diff --git a/vendor/github.com/upper/db/v4/adapter.go b/vendor/github.com/upper/db/v4/adapter.go deleted file mode 100644 index e5fc6df6..00000000 --- a/vendor/github.com/upper/db/v4/adapter.go +++ /dev/null @@ -1,75 +0,0 @@ -// Copyright (c) 2012-present The upper.io/db authors. All rights reserved. -// -// Permission is hereby granted, free of charge, to any person obtaining -// a copy of this software and associated documentation files (the -// "Software"), to deal in the Software without restriction, including -// without limitation the rights to use, copy, modify, merge, publish, -// distribute, sublicense, and/or sell copies of the Software, and to -// permit persons to whom the Software is furnished to do so, subject to -// the following conditions: -// -// The above copyright notice and this permission notice shall be -// included in all copies or substantial portions of the Software. 
-// -// THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, -// EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF -// MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND -// NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE -// LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION -// OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION -// WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE. - -package db - -import ( - "fmt" - "sync" -) - -var ( - adapterMap = make(map[string]Adapter) - adapterMapMu sync.RWMutex -) - -// Adapter interface defines an adapter -type Adapter interface { - Open(ConnectionURL) (Session, error) -} - -type missingAdapter struct { - name string -} - -func (ma *missingAdapter) Open(ConnectionURL) (Session, error) { - return nil, fmt.Errorf("upper: Missing adapter %q, did you forget to import it?", ma.name) -} - -// RegisterAdapter registers a generic database adapter. -func RegisterAdapter(name string, adapter Adapter) { - adapterMapMu.Lock() - defer adapterMapMu.Unlock() - - if name == "" { - panic(`Missing adapter name`) - } - if _, ok := adapterMap[name]; ok { - panic(`db.RegisterAdapter() called twice for adapter: ` + name) - } - adapterMap[name] = adapter -} - -// LookupAdapter returns a previously registered adapter by name. -func LookupAdapter(name string) Adapter { - adapterMapMu.RLock() - defer adapterMapMu.RUnlock() - - if adapter, ok := adapterMap[name]; ok { - return adapter - } - return &missingAdapter{name: name} -} - -// Open attempts to stablish a connection with a database. 
-func Open(adapterName string, settings ConnectionURL) (Session, error) { - return LookupAdapter(adapterName).Open(settings) -} diff --git a/vendor/github.com/upper/db/v4/adapter/mysql/Makefile b/vendor/github.com/upper/db/v4/adapter/mysql/Makefile deleted file mode 100644 index b1bc6e2e..00000000 --- a/vendor/github.com/upper/db/v4/adapter/mysql/Makefile +++ /dev/null @@ -1,43 +0,0 @@ -SHELL ?= bash - -MYSQL_VERSION ?= 8 -MYSQL_SUPPORTED ?= $(MYSQL_VERSION) 5.7 -PROJECT ?= upper_mysql_$(MYSQL_VERSION) - -DB_HOST ?= 127.0.0.1 -DB_PORT ?= 3306 - -DB_NAME ?= upperio -DB_USERNAME ?= upperio_user -DB_PASSWORD ?= upperio//s3cr37 - -TEST_FLAGS ?= -PARALLEL_FLAGS ?= --halt-on-error 2 --jobs 1 - -export MYSQL_VERSION - -export DB_HOST -export DB_NAME -export DB_PASSWORD -export DB_PORT -export DB_USERNAME - -export TEST_FLAGS - -test: - go test -v -failfast -race -timeout 20m $(TEST_FLAGS) - -test-no-race: - go test -v -failfast $(TEST_FLAGS) - -server-up: server-down - docker-compose -p $(PROJECT) up -d && \ - sleep 15 - -server-down: - docker-compose -p $(PROJECT) down - -test-extended: - parallel $(PARALLEL_FLAGS) \ - "MYSQL_VERSION={} DB_PORT=\$$((3306+{#})) $(MAKE) server-up test server-down" ::: \ - $(MYSQL_SUPPORTED) diff --git a/vendor/github.com/upper/db/v4/adapter/mysql/README.md b/vendor/github.com/upper/db/v4/adapter/mysql/README.md deleted file mode 100644 index f427fee4..00000000 --- a/vendor/github.com/upper/db/v4/adapter/mysql/README.md +++ /dev/null @@ -1,5 +0,0 @@ -# MySQL adapter for upper/db - -Please read the full docs, acknowledgements and examples at -[https://upper.io/v4/adapter/mysql/](https://upper.io/v4/adapter/mysql/). 
- diff --git a/vendor/github.com/upper/db/v4/adapter/mysql/collection.go b/vendor/github.com/upper/db/v4/adapter/mysql/collection.go deleted file mode 100644 index 9c272837..00000000 --- a/vendor/github.com/upper/db/v4/adapter/mysql/collection.go +++ /dev/null @@ -1,77 +0,0 @@ -// Copyright (c) 2012-present The upper.io/db authors. All rights reserved. -// -// Permission is hereby granted, free of charge, to any person obtaining -// a copy of this software and associated documentation files (the -// "Software"), to deal in the Software without restriction, including -// without limitation the rights to use, copy, modify, merge, publish, -// distribute, sublicense, and/or sell copies of the Software, and to -// permit persons to whom the Software is furnished to do so, subject to -// the following conditions: -// -// The above copyright notice and this permission notice shall be -// included in all copies or substantial portions of the Software. -// -// THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, -// EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF -// MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND -// NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE -// LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION -// OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION -// WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE. - -package mysql - -import ( - db "github.com/upper/db/v4" - "github.com/upper/db/v4/internal/sqladapter" - "github.com/upper/db/v4/internal/sqlbuilder" -) - -type collectionAdapter struct { -} - -func (*collectionAdapter) Insert(col sqladapter.Collection, item interface{}) (interface{}, error) { - columnNames, columnValues, err := sqlbuilder.Map(item, nil) - if err != nil { - return nil, err - } - - pKey, err := col.PrimaryKeys() - if err != nil { - return nil, err - } - - q := col.SQL().InsertInto(col.Name()). - Columns(columnNames...). 
- Values(columnValues...) - - res, err := q.Exec() - if err != nil { - return nil, err - } - - lastID, err := res.LastInsertId() - if err == nil && len(pKey) <= 1 { - return lastID, nil - } - - keyMap := db.Cond{} - for i := range columnNames { - for j := 0; j < len(pKey); j++ { - if pKey[j] == columnNames[i] { - keyMap[pKey[j]] = columnValues[i] - } - } - } - - // There was an auto column among primary keys, let's search for it. - if lastID > 0 { - for j := 0; j < len(pKey); j++ { - if keyMap[pKey[j]] == nil { - keyMap[pKey[j]] = lastID - } - } - } - - return keyMap, nil -} diff --git a/vendor/github.com/upper/db/v4/adapter/mysql/connection.go b/vendor/github.com/upper/db/v4/adapter/mysql/connection.go deleted file mode 100644 index 65154145..00000000 --- a/vendor/github.com/upper/db/v4/adapter/mysql/connection.go +++ /dev/null @@ -1,265 +0,0 @@ -// Copyright (c) 2012-present The upper.io/db authors. All rights reserved. -// -// Permission is hereby granted, free of charge, to any person obtaining -// a copy of this software and associated documentation files (the -// "Software"), to deal in the Software without restriction, including -// without limitation the rights to use, copy, modify, merge, publish, -// distribute, sublicense, and/or sell copies of the Software, and to -// permit persons to whom the Software is furnished to do so, subject to -// the following conditions: -// -// The above copyright notice and this permission notice shall be -// included in all copies or substantial portions of the Software. -// -// THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, -// EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF -// MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND -// NONINFRINGEMENT. 
IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE -// LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION -// OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION -// WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE. - -package mysql - -import ( - "errors" - "fmt" - "net" - "net/url" - "strings" -) - -// From https://github.com/go-sql-driver/mysql/blob/master/utils.go -var ( - errInvalidDSNUnescaped = errors.New("Invalid DSN: Did you forget to escape a param value?") - errInvalidDSNAddr = errors.New("Invalid DSN: Network Address not terminated (missing closing brace)") - errInvalidDSNNoSlash = errors.New("Invalid DSN: Missing the slash separating the database name") -) - -// From https://github.com/go-sql-driver/mysql/blob/master/utils.go -type config struct { - user string - passwd string - net string - addr string - dbname string - params map[string]string -} - -// ConnectionURL implements a MySQL connection struct. -type ConnectionURL struct { - User string - Password string - Database string - Host string - Socket string - Options map[string]string -} - -func (c ConnectionURL) String() (s string) { - - if c.Database == "" { - return "" - } - - // Adding username. - if c.User != "" { - s = s + c.User - // Adding password. - if c.Password != "" { - s = s + ":" + c.Password - } - s = s + "@" - } - - // Adding protocol and address - if c.Socket != "" { - s = s + fmt.Sprintf("unix(%s)", c.Socket) - } else if c.Host != "" { - host, port, err := net.SplitHostPort(c.Host) - if err != nil { - host = c.Host - port = "3306" - } - s = s + fmt.Sprintf("tcp(%s:%s)", host, port) - } - - // Adding database - s = s + "/" + c.Database - - // Do we have any options? - if c.Options == nil { - c.Options = map[string]string{} - } - - // Default options. 
- if _, ok := c.Options["charset"]; !ok { - c.Options["charset"] = "utf8" - } - - if _, ok := c.Options["parseTime"]; !ok { - c.Options["parseTime"] = "true" - } - - // Converting options into URL values. - vv := url.Values{} - - for k, v := range c.Options { - vv.Set(k, v) - } - - // Inserting options. - if p := vv.Encode(); p != "" { - s = s + "?" + p - } - - return s -} - -// ParseURL parses s into a ConnectionURL struct. -func ParseURL(s string) (conn ConnectionURL, err error) { - var cfg *config - - if cfg, err = parseDSN(s); err != nil { - return - } - - conn.User = cfg.user - conn.Password = cfg.passwd - - if cfg.net == "unix" { - conn.Socket = cfg.addr - } else if cfg.net == "tcp" { - conn.Host = cfg.addr - } - - conn.Database = cfg.dbname - - conn.Options = map[string]string{} - - for k, v := range cfg.params { - conn.Options[k] = v - } - - return -} - -// from https://github.com/go-sql-driver/mysql/blob/master/utils.go -// parseDSN parses the DSN string to a config -func parseDSN(dsn string) (cfg *config, err error) { - // New config with some default values - cfg = &config{} - - // TODO: use strings.IndexByte when we can depend on Go 1.2 - - // [user[:password]@][net[(addr)]]/dbname[?param1=value1&paramN=valueN] - // Find the last '/' (since the password or the net addr might contain a '/') - foundSlash := false - for i := len(dsn) - 1; i >= 0; i-- { - if dsn[i] == '/' { - foundSlash = true - var j, k int - - // left part is empty if i <= 0 - if i > 0 { - // [username[:password]@][protocol[(address)]] - // Find the last '@' in dsn[:i] - for j = i; j >= 0; j-- { - if dsn[j] == '@' { - // username[:password] - // Find the first ':' in dsn[:j] - for k = 0; k < j; k++ { - if dsn[k] == ':' { - cfg.passwd = dsn[k+1 : j] - break - } - } - cfg.user = dsn[:k] - - break - } - } - - // [protocol[(address)]] - // Find the first '(' in dsn[j+1:i] - for k = j + 1; k < i; k++ { - if dsn[k] == '(' { - // dsn[i-1] must be == ')' if an address is specified - if dsn[i-1] !=
')' { - if strings.ContainsRune(dsn[k+1:i], ')') { - return nil, errInvalidDSNUnescaped - } - return nil, errInvalidDSNAddr - } - cfg.addr = dsn[k+1 : i-1] - break - } - } - cfg.net = dsn[j+1 : k] - } - - // dbname[?param1=value1&...&paramN=valueN] - // Find the first '?' in dsn[i+1:] - for j = i + 1; j < len(dsn); j++ { - if dsn[j] == '?' { - if err = parseDSNParams(cfg, dsn[j+1:]); err != nil { - return - } - break - } - } - cfg.dbname = dsn[i+1 : j] - - break - } - } - - if !foundSlash && len(dsn) > 0 { - return nil, errInvalidDSNNoSlash - } - - // Set default network if empty - if cfg.net == "" { - cfg.net = "tcp" - } - - // Set default address if empty - if cfg.addr == "" { - switch cfg.net { - case "tcp": - cfg.addr = "127.0.0.1:3306" - case "unix": - cfg.addr = "/tmp/mysql.sock" - default: - return nil, errors.New("Default addr for network '" + cfg.net + "' unknown") - } - - } - - return -} - -// From https://github.com/go-sql-driver/mysql/blob/master/utils.go -// parseDSNParams parses the DSN "query string" -// Values must be url.QueryEscape'ed -func parseDSNParams(cfg *config, params string) (err error) { - for _, v := range strings.Split(params, "&") { - param := strings.SplitN(v, "=", 2) - if len(param) != 2 { - continue - } - - value := param[1] - - // lazy init - if cfg.params == nil { - cfg.params = make(map[string]string) - } - - if cfg.params[param[0]], err = url.QueryUnescape(value); err != nil { - return - } - } - - return -} diff --git a/vendor/github.com/upper/db/v4/adapter/mysql/custom_types.go b/vendor/github.com/upper/db/v4/adapter/mysql/custom_types.go deleted file mode 100644 index 4b78aff4..00000000 --- a/vendor/github.com/upper/db/v4/adapter/mysql/custom_types.go +++ /dev/null @@ -1,172 +0,0 @@ -// Copyright (c) 2012-present The upper.io/db authors. All rights reserved.
-// -// Permission is hereby granted, free of charge, to any person obtaining -// a copy of this software and associated documentation files (the -// "Software"), to deal in the Software without restriction, including -// without limitation the rights to use, copy, modify, merge, publish, -// distribute, sublicense, and/or sell copies of the Software, and to -// permit persons to whom the Software is furnished to do so, subject to -// the following conditions: -// -// The above copyright notice and this permission notice shall be -// included in all copies or substantial portions of the Software. -// -// THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, -// EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF -// MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND -// NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE -// LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION -// OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION -// WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE. - -package mysql - -import ( - "database/sql" - "database/sql/driver" - "encoding/json" - "errors" - "reflect" - - "github.com/upper/db/v4/internal/sqlbuilder" -) - -// JSON represents a MySQL's JSON value: -// https://www.mysql.org/docs/9.6/static/datatype-json.html. JSON -// satisfies sqlbuilder.ScannerValuer. -type JSON struct { - V interface{} -} - -// MarshalJSON encodes the wrapper value as JSON. -func (j JSON) MarshalJSON() ([]byte, error) { - return json.Marshal(j.V) -} - -// UnmarshalJSON decodes the given JSON into the wrapped value. -func (j *JSON) UnmarshalJSON(b []byte) error { - var v interface{} - if err := json.Unmarshal(b, &v); err != nil { - return err - } - j.V = v - return nil -} - -// Scan satisfies the sql.Scanner interface. 
-func (j *JSON) Scan(src interface{}) error { - if j.V == nil { - return nil - } - if src == nil { - dv := reflect.Indirect(reflect.ValueOf(j.V)) - dv.Set(reflect.Zero(dv.Type())) - return nil - } - b, ok := src.([]byte) - if !ok { - return errors.New("Scan source was not []bytes") - } - - if err := json.Unmarshal(b, j.V); err != nil { - return err - } - return nil -} - -// Value satisfies the driver.Valuer interface. -func (j JSON) Value() (driver.Value, error) { - if j.V == nil { - return nil, nil - } - if v, ok := j.V.(json.RawMessage); ok { - return string(v), nil - } - b, err := json.Marshal(j.V) - if err != nil { - return nil, err - } - return string(b), nil -} - -// JSONMap represents a map of interfaces with string keys -// (`map[string]interface{}`) that is compatible with MySQL's JSON type. -// JSONMap satisfies sqlbuilder.ScannerValuer. -type JSONMap map[string]interface{} - -// Value satisfies the driver.Valuer interface. -func (m JSONMap) Value() (driver.Value, error) { - return JSONValue(m) -} - -// Scan satisfies the sql.Scanner interface. -func (m *JSONMap) Scan(src interface{}) error { - *m = map[string]interface{}(nil) - return ScanJSON(m, src) -} - -// JSONArray represents an array of any type (`[]interface{}`) that is -// compatible with MySQL's JSON type. JSONArray satisfies -// sqlbuilder.ScannerValuer. -type JSONArray []interface{} - -// Value satisfies the driver.Valuer interface. -func (a JSONArray) Value() (driver.Value, error) { - return JSONValue(a) -} - -// Scan satisfies the sql.Scanner interface. -func (a *JSONArray) Scan(src interface{}) error { - return ScanJSON(a, src) -} - -// JSONValue takes an interface and provides a driver.Value that can be -// stored as a JSON column. -func JSONValue(i interface{}) (driver.Value, error) { - v := JSON{i} - return v.Value() -} - -// ScanJSON decodes a JSON byte stream into the passed dst value. 
-func ScanJSON(dst interface{}, src interface{}) error { - v := JSON{dst} - return v.Scan(src) -} - -// EncodeJSON is deprecated and going to be removed. Use ScanJSON instead. -func EncodeJSON(i interface{}) (driver.Value, error) { - return JSONValue(i) -} - -// DecodeJSON is deprecated and going to be removed. Use JSONValue instead. -func DecodeJSON(dst interface{}, src interface{}) error { - return ScanJSON(dst, src) -} - -// JSONConverter provides a helper method WrapValue that satisfies -// sqlbuilder.ValueWrapper, can be used to encode Go structs into JSON -// MySQL types and vice versa. -// -// Example: -// -// type MyCustomStruct struct { -// ID int64 `db:"id" json:"id"` -// Name string `db:"name" json:"name"` -// ... -// mysql.JSONConverter -// } -type JSONConverter struct{} - -func (*JSONConverter) ConvertValue(in interface{}) interface { - sql.Scanner - driver.Valuer -} { - return &JSON{in} -} - -// Type checks. -var ( - _ sqlbuilder.ScannerValuer = &JSONMap{} - _ sqlbuilder.ScannerValuer = &JSONArray{} - _ sqlbuilder.ScannerValuer = &JSON{} -) diff --git a/vendor/github.com/upper/db/v4/adapter/mysql/database.go b/vendor/github.com/upper/db/v4/adapter/mysql/database.go deleted file mode 100644 index c300f5f3..00000000 --- a/vendor/github.com/upper/db/v4/adapter/mysql/database.go +++ /dev/null @@ -1,189 +0,0 @@ -// Copyright (c) 2012-present The upper.io/db authors. All rights reserved. 
-// -// Permission is hereby granted, free of charge, to any person obtaining -// a copy of this software and associated documentation files (the -// "Software"), to deal in the Software without restriction, including -// without limitation the rights to use, copy, modify, merge, publish, -// distribute, sublicense, and/or sell copies of the Software, and to -// permit persons to whom the Software is furnished to do so, subject to -// the following conditions: -// -// The above copyright notice and this permission notice shall be -// included in all copies or substantial portions of the Software. -// -// THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, -// EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF -// MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND -// NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE -// LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION -// OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION -// WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE. - -// Package mysql wraps the github.com/go-sql-driver/mysql MySQL driver. See -// https://github.com/upper/db/adapter/mysql for documentation, particularities and usage -// examples. -package mysql - -import ( - "reflect" - "strings" - - "database/sql" - - _ "github.com/go-sql-driver/mysql" // MySQL driver. - db "github.com/upper/db/v4" - "github.com/upper/db/v4/internal/sqladapter" - "github.com/upper/db/v4/internal/sqladapter/exql" -) - -// database is the actual implementation of Database -type database struct { -} - -func (*database) Template() *exql.Template { - return template -} - -func (*database) OpenDSN(sess sqladapter.Session, dsn string) (*sql.DB, error) { - return sql.Open("mysql", dsn) -} - -func (*database) Collections(sess sqladapter.Session) (collections []string, err error) { - q := sess.SQL(). - Select("table_name"). - From("information_schema.tables"). 
- Where("table_schema = ?", sess.Name()) - - iter := q.Iterator() - defer iter.Close() - - for iter.Next() { - var tableName string - if err := iter.Scan(&tableName); err != nil { - return nil, err - } - collections = append(collections, tableName) - } - if err := iter.Err(); err != nil { - return nil, err - } - - return collections, nil -} - -func (d *database) ConvertValue(in interface{}) interface{} { - switch v := in.(type) { - case *map[string]interface{}: - return (*JSONMap)(v) - - case map[string]interface{}: - return (*JSONMap)(&v) - } - - dv := reflect.ValueOf(in) - if dv.IsValid() { - if dv.Type().Kind() == reflect.Ptr { - dv = dv.Elem() - } - - switch dv.Kind() { - case reflect.Map: - if reflect.TypeOf(in).Kind() == reflect.Ptr { - w := reflect.ValueOf(in) - z := reflect.New(w.Elem().Type()) - w.Elem().Set(z.Elem()) - } - return &JSON{in} - case reflect.Slice: - return &JSON{in} - } - } - - return in -} - -func (*database) Err(err error) error { - if err != nil { - // This error is not exported so we have to check it by its string value. - s := err.Error() - if strings.Contains(s, `many connections`) { - return db.ErrTooManyClients - } - } - return err -} - -func (*database) NewCollection() sqladapter.CollectionAdapter { - return &collectionAdapter{} -} - -func (*database) LookupName(sess sqladapter.Session) (string, error) { - q := sess.SQL(). - Select(db.Raw("DATABASE() AS name")) - - iter := q.Iterator() - defer iter.Close() - - if iter.Next() { - var name string - if err := iter.Scan(&name); err != nil { - return "", err - } - return name, nil - } - - return "", iter.Err() -} - -func (*database) TableExists(sess sqladapter.Session, name string) error { - q := sess.SQL(). - Select("table_name"). - From("information_schema.tables"). - Where("table_schema = ? 
AND table_name = ?", sess.Name(), name) - - iter := q.Iterator() - defer iter.Close() - - if iter.Next() { - var name string - if err := iter.Scan(&name); err != nil { - return err - } - return nil - } - if err := iter.Err(); err != nil { - return err - } - - return db.ErrCollectionDoesNotExist -} - -func (*database) PrimaryKeys(sess sqladapter.Session, tableName string) ([]string, error) { - q := sess.SQL(). - Select("k.column_name"). - From("information_schema.key_column_usage AS k"). - Where(` - k.constraint_name = 'PRIMARY' - AND k.table_schema = ? - AND k.table_name = ? - `, sess.Name(), tableName). - OrderBy("k.ordinal_position") - - iter := q.Iterator() - defer iter.Close() - - pk := []string{} - - for iter.Next() { - var k string - if err := iter.Scan(&k); err != nil { - return nil, err - } - pk = append(pk, k) - } - if err := iter.Err(); err != nil { - return nil, err - } - - return pk, nil -} diff --git a/vendor/github.com/upper/db/v4/adapter/mysql/docker-compose.yml b/vendor/github.com/upper/db/v4/adapter/mysql/docker-compose.yml deleted file mode 100644 index 18ab3499..00000000 --- a/vendor/github.com/upper/db/v4/adapter/mysql/docker-compose.yml +++ /dev/null @@ -1,14 +0,0 @@ -version: '3' - -services: - - server: - image: mysql:${MYSQL_VERSION:-5} - environment: - MYSQL_USER: ${DB_USERNAME:-upperio_user} - MYSQL_PASSWORD: ${DB_PASSWORD:-upperio//s3cr37} - MYSQL_ALLOW_EMPTY_PASSWORD: 1 - MYSQL_DATABASE: ${DB_NAME:-upperio} - ports: - - '${DB_HOST:-127.0.0.1}:${DB_PORT:-3306}:3306' - diff --git a/vendor/github.com/upper/db/v4/adapter/mysql/mysql.go b/vendor/github.com/upper/db/v4/adapter/mysql/mysql.go deleted file mode 100644 index 06c91f1d..00000000 --- a/vendor/github.com/upper/db/v4/adapter/mysql/mysql.go +++ /dev/null @@ -1,51 +0,0 @@ -// Copyright (c) 2012-present The upper.io/db authors. All rights reserved. 
-// -// Permission is hereby granted, free of charge, to any person obtaining -// a copy of this software and associated documentation files (the -// "Software"), to deal in the Software without restriction, including -// without limitation the rights to use, copy, modify, merge, publish, -// distribute, sublicense, and/or sell copies of the Software, and to -// permit persons to whom the Software is furnished to do so, subject to -// the following conditions: -// -// The above copyright notice and this permission notice shall be -// included in all copies or substantial portions of the Software. -// -// THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, -// EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF -// MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND -// NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE -// LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION -// OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION -// WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE. - -package mysql - -import ( - "database/sql" - - db "github.com/upper/db/v4" - "github.com/upper/db/v4/internal/sqladapter" - "github.com/upper/db/v4/internal/sqlbuilder" -) - -// Adapter is the public name of the adapter. -const Adapter = `mysql` - -var registeredAdapter = sqladapter.RegisterAdapter(Adapter, &database{}) - -// Open establishes a connection to the database server and returns a -// db.Session instance (which is compatible with db.Session). -func Open(connURL db.ConnectionURL) (db.Session, error) { - return registeredAdapter.OpenDSN(connURL) -} - -// NewTx creates a sqlbuilder.Tx instance by wrapping a *sql.Tx value. -func NewTx(sqlTx *sql.Tx) (sqlbuilder.Tx, error) { - return registeredAdapter.NewTx(sqlTx) -} - -// New creates a sqlbuilder.Sesion instance by wrapping a *sql.DB value. 
-func New(sqlDB *sql.DB) (db.Session, error) { - return registeredAdapter.New(sqlDB) -} diff --git a/vendor/github.com/upper/db/v4/adapter/mysql/template.go b/vendor/github.com/upper/db/v4/adapter/mysql/template.go deleted file mode 100644 index 93e00129..00000000 --- a/vendor/github.com/upper/db/v4/adapter/mysql/template.go +++ /dev/null @@ -1,219 +0,0 @@ -// Copyright (c) 2012-present The upper.io/db authors. All rights reserved. -// -// Permission is hereby granted, free of charge, to any person obtaining -// a copy of this software and associated documentation files (the -// "Software"), to deal in the Software without restriction, including -// without limitation the rights to use, copy, modify, merge, publish, -// distribute, sublicense, and/or sell copies of the Software, and to -// permit persons to whom the Software is furnished to do so, subject to -// the following conditions: -// -// The above copyright notice and this permission notice shall be -// included in all copies or substantial portions of the Software. -// -// THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, -// EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF -// MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND -// NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE -// LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION -// OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION -// WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE. 
- -package mysql - -import ( - "github.com/upper/db/v4/internal/cache" - "github.com/upper/db/v4/internal/sqladapter/exql" -) - -const ( - adapterColumnSeparator = `.` - adapterIdentifierSeparator = `, ` - adapterIdentifierQuote = "`{{.Value}}`" - adapterValueSeparator = `, ` - adapterValueQuote = `'{{.}}'` - adapterAndKeyword = `AND` - adapterOrKeyword = `OR` - adapterDescKeyword = `DESC` - adapterAscKeyword = `ASC` - adapterAssignmentOperator = `=` - adapterClauseGroup = `({{.}})` - adapterClauseOperator = ` {{.}} ` - adapterColumnValue = `{{.Column}} {{.Operator}} {{.Value}}` - adapterTableAliasLayout = `{{.Name}}{{if .Alias}} AS {{.Alias}}{{end}}` - adapterColumnAliasLayout = `{{.Name}}{{if .Alias}} AS {{.Alias}}{{end}}` - adapterSortByColumnLayout = `{{.Column}} {{.Order}}` - - adapterOrderByLayout = ` - {{if .SortColumns}} - ORDER BY {{.SortColumns}} - {{end}} - ` - - adapterWhereLayout = ` - {{if .Conds}} - WHERE {{.Conds}} - {{end}} - ` - - adapterUsingLayout = ` - {{if .Columns}} - USING ({{.Columns}}) - {{end}} - ` - - adapterJoinLayout = ` - {{if .Table}} - {{ if .On }} - {{.Type}} JOIN {{.Table}} - {{.On}} - {{ else if .Using }} - {{.Type}} JOIN {{.Table}} - {{.Using}} - {{ else if .Type | eq "CROSS" }} - {{.Type}} JOIN {{.Table}} - {{else}} - NATURAL {{.Type}} JOIN {{.Table}} - {{end}} - {{end}} - ` - - adapterOnLayout = ` - {{if .Conds}} - ON {{.Conds}} - {{end}} - ` - - adapterSelectLayout = ` - SELECT - {{if .Distinct}} - DISTINCT - {{end}} - - {{if defined .Columns}} - {{.Columns | compile}} - {{else}} - * - {{end}} - - {{if defined .Table}} - FROM {{.Table | compile}} - {{end}} - - {{.Joins | compile}} - - {{.Where | compile}} - - {{if defined .GroupBy}} - {{.GroupBy | compile}} - {{end}} - - {{.OrderBy | compile}} - - {{if .Limit}} - LIMIT {{.Limit}} - {{end}} - ` + - // The argument for LIMIT when only OFFSET is specified is a pretty odd magic - // number; this comes directly from MySQL's manual, see: - // 
https://dev.mysql.com/doc/refman/5.7/en/select.html - // - // "To retrieve all rows from a certain offset up to the end of the result - // set, you can use some large number for the second parameter. This - // statement retrieves all rows from the 96th row to the last: - // SELECT * FROM tbl LIMIT 95,18446744073709551615; " - // - // ¯\_(ツ)_/¯ - ` - {{if .Offset}} - {{if not .Limit}} - LIMIT 18446744073709551615 - {{end}} - OFFSET {{.Offset}} - {{end}} - ` - adapterDeleteLayout = ` - DELETE - FROM {{.Table | compile}} - {{.Where | compile}} - ` - adapterUpdateLayout = ` - UPDATE - {{.Table | compile}} - SET {{.ColumnValues | compile}} - {{.Where | compile}} - ` - - adapterSelectCountLayout = ` - SELECT - COUNT(1) AS _t - FROM {{.Table | compile}} - {{.Where | compile}} - ` - - adapterInsertLayout = ` - INSERT INTO {{.Table | compile}} - {{if defined .Columns}}({{.Columns | compile}}){{end}} - VALUES - {{if defined .Values}} - {{.Values | compile}} - {{else}} - () - {{end}} - {{if defined .Returning}} - RETURNING {{.Returning | compile}} - {{end}} - ` - - adapterTruncateLayout = ` - TRUNCATE TABLE {{.Table | compile}} - ` - - adapterDropDatabaseLayout = ` - DROP DATABASE {{.Database | compile}} - ` - - adapterDropTableLayout = ` - DROP TABLE {{.Table | compile}} - ` - - adapterGroupByLayout = ` - {{if .GroupColumns}} - GROUP BY {{.GroupColumns}} - {{end}} - ` -) - -var template = &exql.Template{ - ColumnSeparator: adapterColumnSeparator, - IdentifierSeparator: adapterIdentifierSeparator, - IdentifierQuote: adapterIdentifierQuote, - ValueSeparator: adapterValueSeparator, - ValueQuote: adapterValueQuote, - AndKeyword: adapterAndKeyword, - OrKeyword: adapterOrKeyword, - DescKeyword: adapterDescKeyword, - AscKeyword: adapterAscKeyword, - AssignmentOperator: adapterAssignmentOperator, - ClauseGroup: adapterClauseGroup, - ClauseOperator: adapterClauseOperator, - ColumnValue: adapterColumnValue, - TableAliasLayout: adapterTableAliasLayout, - ColumnAliasLayout: 
adapterColumnAliasLayout, - SortByColumnLayout: adapterSortByColumnLayout, - WhereLayout: adapterWhereLayout, - JoinLayout: adapterJoinLayout, - OnLayout: adapterOnLayout, - UsingLayout: adapterUsingLayout, - OrderByLayout: adapterOrderByLayout, - InsertLayout: adapterInsertLayout, - SelectLayout: adapterSelectLayout, - UpdateLayout: adapterUpdateLayout, - DeleteLayout: adapterDeleteLayout, - TruncateLayout: adapterTruncateLayout, - DropDatabaseLayout: adapterDropDatabaseLayout, - DropTableLayout: adapterDropTableLayout, - CountLayout: adapterSelectCountLayout, - GroupByLayout: adapterGroupByLayout, - Cache: cache.NewCache(), -} diff --git a/vendor/github.com/upper/db/v4/adapter/postgresql/Makefile b/vendor/github.com/upper/db/v4/adapter/postgresql/Makefile deleted file mode 100644 index 0fea6d94..00000000 --- a/vendor/github.com/upper/db/v4/adapter/postgresql/Makefile +++ /dev/null @@ -1,44 +0,0 @@ -SHELL ?= bash - -POSTGRES_VERSION ?= 14-alpine -POSTGRES_SUPPORTED ?= $(POSTGRES_VERSION) 13-alpine 11-alpine 12-alpine - -PROJECT ?= upper_postgres_$(POSTGRES_VERSION) - -DB_HOST ?= 127.0.0.1 -DB_PORT ?= 5432 - -DB_NAME ?= upperio -DB_USERNAME ?= upperio_user -DB_PASSWORD ?= upperio//s3cr37 - -TEST_FLAGS ?= -PARALLEL_FLAGS ?= --halt-on-error 2 --jobs 1 - -export POSTGRES_VERSION - -export DB_HOST -export DB_NAME -export DB_PASSWORD -export DB_PORT -export DB_USERNAME - -export TEST_FLAGS - -test: - go test -v -failfast -race -timeout 20m $(TEST_FLAGS) - -test-no-race: - go test -v -failfast $(TEST_FLAGS) - -server-up: server-down - docker-compose -p $(PROJECT) up -d && \ - sleep 10 - -server-down: - docker-compose -p $(PROJECT) down - -test-extended: - parallel $(PARALLEL_FLAGS) \ - "POSTGRES_VERSION={} DB_PORT=\$$((5432+{#})) $(MAKE) server-up test server-down" ::: \ - $(POSTGRES_SUPPORTED) diff --git a/vendor/github.com/upper/db/v4/adapter/postgresql/README.md b/vendor/github.com/upper/db/v4/adapter/postgresql/README.md deleted file mode 100644 index 
7e726013..00000000 --- a/vendor/github.com/upper/db/v4/adapter/postgresql/README.md +++ /dev/null @@ -1,5 +0,0 @@ -# PostgreSQL adapter for upper/db - -Please read the full docs, acknowledgements and examples at -[https://upper.io/v4/adapter/postgresql/](https://upper.io/v4/adapter/postgresql/). - diff --git a/vendor/github.com/upper/db/v4/adapter/postgresql/collection.go b/vendor/github.com/upper/db/v4/adapter/postgresql/collection.go deleted file mode 100644 index 04c5005d..00000000 --- a/vendor/github.com/upper/db/v4/adapter/postgresql/collection.go +++ /dev/null @@ -1,71 +0,0 @@ -// Copyright (c) 2012-present The upper.io/db authors. All rights reserved. -// -// Permission is hereby granted, free of charge, to any person obtaining -// a copy of this software and associated documentation files (the -// "Software"), to deal in the Software without restriction, including -// without limitation the rights to use, copy, modify, merge, publish, -// distribute, sublicense, and/or sell copies of the Software, and to -// permit persons to whom the Software is furnished to do so, subject to -// the following conditions: -// -// The above copyright notice and this permission notice shall be -// included in all copies or substantial portions of the Software. -// -// THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, -// EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF -// MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND -// NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE -// LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION -// OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION -// WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE. 
- -package postgresql - -import ( - db "github.com/upper/db/v4" - "github.com/upper/db/v4/internal/sqladapter" -) - -type collectionAdapter struct { -} - -func (*collectionAdapter) Insert(col sqladapter.Collection, item interface{}) (interface{}, error) { - pKey, err := col.PrimaryKeys() - if err != nil { - return nil, err - } - - q := col.SQL().InsertInto(col.Name()).Values(item) - - if len(pKey) == 0 { - // There is no primary key. - res, err := q.Exec() - if err != nil { - return nil, err - } - - // Attempt to use LastInsertId() (probably won't work, but the Exec() - // succeeded, so we can safely ignore the error from LastInsertId()). - lastID, err := res.LastInsertId() - if err != nil { - return nil, nil - } - return lastID, nil - } - - // Asking the database to return the primary key after insertion. - q = q.Returning(pKey...) - - var keyMap db.Cond - if err := q.Iterator().One(&keyMap); err != nil { - return nil, err - } - - // The IDSetter interface does not match, look for another interface match. - if len(keyMap) == 1 { - return keyMap[pKey[0]], nil - } - - // This was a compound key and no interface matched it, let's return a map. - return keyMap, nil -} diff --git a/vendor/github.com/upper/db/v4/adapter/postgresql/connection.go b/vendor/github.com/upper/db/v4/adapter/postgresql/connection.go deleted file mode 100644 index 47699442..00000000 --- a/vendor/github.com/upper/db/v4/adapter/postgresql/connection.go +++ /dev/null @@ -1,310 +0,0 @@ -// Copyright (c) 2012-present The upper.io/db authors. All rights reserved. 
-// -// Permission is hereby granted, free of charge, to any person obtaining -// a copy of this software and associated documentation files (the -// "Software"), to deal in the Software without restriction, including -// without limitation the rights to use, copy, modify, merge, publish, -// distribute, sublicense, and/or sell copies of the Software, and to -// permit persons to whom the Software is furnished to do so, subject to -// the following conditions: -// -// The above copyright notice and this permission notice shall be -// included in all copies or substantial portions of the Software. -// -// THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, -// EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF -// MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND -// NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE -// LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION -// OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION -// WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE. - -package postgresql - -import ( - "fmt" - "net" - "net/url" - "sort" - "strings" - "time" - "unicode" -) - -// scanner implements a tokenizer for libpq-style option strings. -type scanner struct { - s []rune - i int -} - -// Next returns the next rune. It returns 0, false if the end of the text has -// been reached. -func (s *scanner) Next() (rune, bool) { - if s.i >= len(s.s) { - return 0, false - } - r := s.s[s.i] - s.i++ - return r, true -} - -// SkipSpaces returns the next non-whitespace rune. It returns 0, false if the -// end of the text has been reached. 
-func (s *scanner) SkipSpaces() (rune, bool) { - r, ok := s.Next() - for unicode.IsSpace(r) && ok { - r, ok = s.Next() - } - return r, ok -} - -type values map[string]string - -func (vs values) Set(k, v string) { - vs[k] = v -} - -func (vs values) Get(k string) (v string) { - return vs[k] -} - -func (vs values) Isset(k string) bool { - _, ok := vs[k] - return ok -} - -// ConnectionURL represents a parsed PostgreSQL connection URL. -// -// You can use a ConnectionURL struct as an argument for Open: -// -// var settings = postgresql.ConnectionURL{ -// Host: "localhost", // PostgreSQL server IP or name. -// Database: "peanuts", // Database name. -// User: "cbrown", // Optional user name. -// Password: "snoopy", // Optional user password. -// } -// -// sess, err = postgresql.Open(settings) -// -// If you already have a valid DSN, you can use ParseURL to convert it into -// a ConnectionURL before passing it to Open. -type ConnectionURL struct { - User string - Password string - Host string - Socket string - Database string - Options map[string]string - - timezone *time.Location -} - -var escaper = strings.NewReplacer(` `, `\ `, `'`, `\'`, `\`, `\\`) - -// ParseURL parses the given DSN into a ConnectionURL struct. 
-// A typical PostgreSQL connection URL looks like: -// -// postgres://bob:secret@1.2.3.4:5432/mydb?sslmode=verify-full -func ParseURL(s string) (u *ConnectionURL, err error) { - o := make(values) - - if strings.HasPrefix(s, "postgres://") || strings.HasPrefix(s, "postgresql://") { - s, err = parseURL(s) - if err != nil { - return u, err - } - } - - if err := parseOpts(s, o); err != nil { - return u, err - } - u = &ConnectionURL{} - - u.User = o.Get("user") - u.Password = o.Get("password") - - h := o.Get("host") - p := o.Get("port") - - if strings.HasPrefix(h, "/") { - u.Socket = h - } else { - if p == "" { - u.Host = h - } else { - u.Host = fmt.Sprintf("%s:%s", h, p) - } - } - - u.Database = o.Get("dbname") - - u.Options = make(map[string]string) - - for k := range o { - switch k { - case "user", "password", "host", "port", "dbname": - // Skip - default: - u.Options[k] = o[k] - } - } - - if timezone, ok := u.Options["timezone"]; ok { - u.timezone, _ = time.LoadLocation(timezone) - } - - return u, err -} - -// parseOpts parses the options from name and adds them to the values. -// -// The parsing code is based on conninfo_parse from libpq's fe-connect.c -func parseOpts(name string, o values) error { - s := newScanner(name) - - for { - var ( - keyRunes, valRunes []rune - r rune - ok bool - ) - - if r, ok = s.SkipSpaces(); !ok { - break - } - - // Scan the key - for !unicode.IsSpace(r) && r != '=' { - keyRunes = append(keyRunes, r) - if r, ok = s.Next(); !ok { - break - } - } - - // Skip any whitespace if we're not at the = yet - if r != '=' { - r, ok = s.SkipSpaces() - } - - // The current character should be = - if r != '=' || !ok { - return fmt.Errorf(`missing "=" after %q in connection info string`, string(keyRunes)) - } - - // Skip any whitespace after the = - if r, ok = s.SkipSpaces(); !ok { - // If we reach the end here, the last value is just an empty string as per libpq. 
- o.Set(string(keyRunes), "") - break - } - - if r != '\'' { - for !unicode.IsSpace(r) { - if r == '\\' { - if r, ok = s.Next(); !ok { - return fmt.Errorf(`missing character after backslash`) - } - } - valRunes = append(valRunes, r) - - if r, ok = s.Next(); !ok { - break - } - } - } else { - quote: - for { - if r, ok = s.Next(); !ok { - return fmt.Errorf(`unterminated quoted string literal in connection string`) - } - switch r { - case '\'': - break quote - case '\\': - r, _ = s.Next() - fallthrough - default: - valRunes = append(valRunes, r) - } - } - } - - o.Set(string(keyRunes), string(valRunes)) - } - - return nil -} - -// newScanner returns a new scanner initialized with the option string s. -func newScanner(s string) *scanner { - return &scanner{[]rune(s), 0} -} - -// ParseURL no longer needs to be used by clients of this library since supplying a URL as a -// connection string to sql.Open() is now supported: -// -// sql.Open("postgres", "postgres://bob:secret@1.2.3.4:5432/mydb?sslmode=verify-full") -// -// It remains exported here for backwards-compatibility. -// -// ParseURL converts a url to a connection string for driver.Open. 
-// Example: -// -// "postgres://bob:secret@1.2.3.4:5432/mydb?sslmode=verify-full" -// -// converts to: -// -// "user=bob password=secret host=1.2.3.4 port=5432 dbname=mydb sslmode=verify-full" -// -// A minimal example: -// -// "postgres://" -// -// This will be blank, causing driver.Open to use all of the defaults -// -// NOTE: vendored/copied from github.com/lib/pq -func parseURL(uri string) (string, error) { - u, err := url.Parse(uri) - if err != nil { - return "", err - } - - if u.Scheme != "postgres" && u.Scheme != "postgresql" { - return "", fmt.Errorf("invalid connection protocol: %s", u.Scheme) - } - - var kvs []string - escaper := strings.NewReplacer(` `, `\ `, `'`, `\'`, `\`, `\\`) - accrue := func(k, v string) { - if v != "" { - kvs = append(kvs, k+"="+escaper.Replace(v)) - } - } - - if u.User != nil { - v := u.User.Username() - accrue("user", v) - - v, _ = u.User.Password() - accrue("password", v) - } - - if host, port, err := net.SplitHostPort(u.Host); err != nil { - accrue("host", u.Host) - } else { - accrue("host", host) - accrue("port", port) - } - - if u.Path != "" { - accrue("dbname", u.Path[1:]) - } - - q := u.Query() - for k := range q { - accrue(k, q.Get(k)) - } - - sort.Strings(kvs) // Makes testing easier (not a performance concern) - return strings.Join(kvs, " "), nil -} diff --git a/vendor/github.com/upper/db/v4/adapter/postgresql/connection_pgx.go b/vendor/github.com/upper/db/v4/adapter/postgresql/connection_pgx.go deleted file mode 100644 index 5cad7682..00000000 --- a/vendor/github.com/upper/db/v4/adapter/postgresql/connection_pgx.go +++ /dev/null @@ -1,94 +0,0 @@ -//go:build !pq -// +build !pq - -package postgresql - -// Copyright (c) 2012-present The upper.io/db authors. All rights reserved. 
-// -// Permission is hereby granted, free of charge, to any person obtaining -// a copy of this software and associated documentation files (the -// "Software"), to deal in the Software without restriction, including -// without limitation the rights to use, copy, modify, merge, publish, -// distribute, sublicense, and/or sell copies of the Software, and to -// permit persons to whom the Software is furnished to do so, subject to -// the following conditions: -// -// The above copyright notice and this permission notice shall be -// included in all copies or substantial portions of the Software. -// -// THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, -// EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF -// MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND -// NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE -// LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION -// OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION -// WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE. - -import ( - "net" - "sort" - "strings" -) - -// String reassembles the parsed PostgreSQL connection URL into a valid DSN. -func (c ConnectionURL) String() (s string) { - u := []string{} - - // TODO: This surely needs some sort of escaping. 
- if c.User != "" { - u = append(u, "user="+escaper.Replace(c.User)) - } - - if c.Password != "" { - u = append(u, "password="+escaper.Replace(c.Password)) - } - - if c.Host != "" { - host, port, err := net.SplitHostPort(c.Host) - if err == nil { - if host == "" { - host = "127.0.0.1" - } - if port == "" { - port = "5432" - } - u = append(u, "host="+escaper.Replace(host)) - u = append(u, "port="+escaper.Replace(port)) - } else { - u = append(u, "host="+escaper.Replace(c.Host)) - } - } - - if c.Socket != "" { - u = append(u, "host="+escaper.Replace(c.Socket)) - } - - if c.Database != "" { - u = append(u, "dbname="+escaper.Replace(c.Database)) - } - - // Is there actually any connection data? - if len(u) == 0 { - return "" - } - - if c.Options == nil { - c.Options = map[string]string{} - } - - // If not present, SSL mode is assumed "prefer". - if sslMode, ok := c.Options["sslmode"]; !ok || sslMode == "" { - c.Options["sslmode"] = "prefer" - } - - // Disabled by default - c.Options["statement_cache_capacity"] = "0" - - for k, v := range c.Options { - u = append(u, escaper.Replace(k)+"="+escaper.Replace(v)) - } - - sort.Strings(u) - - return strings.Join(u, " ") -} diff --git a/vendor/github.com/upper/db/v4/adapter/postgresql/connection_pq.go b/vendor/github.com/upper/db/v4/adapter/postgresql/connection_pq.go deleted file mode 100644 index c727b2fa..00000000 --- a/vendor/github.com/upper/db/v4/adapter/postgresql/connection_pq.go +++ /dev/null @@ -1,91 +0,0 @@ -//go:build pq -// +build pq - -// Copyright (c) 2012-present The upper.io/db authors. All rights reserved. 
-// -// Permission is hereby granted, free of charge, to any person obtaining -// a copy of this software and associated documentation files (the -// "Software"), to deal in the Software without restriction, including -// without limitation the rights to use, copy, modify, merge, publish, -// distribute, sublicense, and/or sell copies of the Software, and to -// permit persons to whom the Software is furnished to do so, subject to -// the following conditions: -// -// The above copyright notice and this permission notice shall be -// included in all copies or substantial portions of the Software. -// -// THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, -// EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF -// MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND -// NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE -// LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION -// OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION -// WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE. - -package postgresql - -import ( - "net" - "sort" - "strings" -) - -// String reassembles the parsed PostgreSQL connection URL into a valid DSN. -func (c ConnectionURL) String() (s string) { - u := []string{} - - // TODO: This surely needs some sort of escaping. 
- if c.User != "" { - u = append(u, "user="+escaper.Replace(c.User)) - } - - if c.Password != "" { - u = append(u, "password="+escaper.Replace(c.Password)) - } - - if c.Host != "" { - host, port, err := net.SplitHostPort(c.Host) - if err == nil { - if host == "" { - host = "127.0.0.1" - } - if port == "" { - port = "5432" - } - u = append(u, "host="+escaper.Replace(host)) - u = append(u, "port="+escaper.Replace(port)) - } else { - u = append(u, "host="+escaper.Replace(c.Host)) - } - } - - if c.Socket != "" { - u = append(u, "host="+escaper.Replace(c.Socket)) - } - - if c.Database != "" { - u = append(u, "dbname="+escaper.Replace(c.Database)) - } - - // Is there actually any connection data? - if len(u) == 0 { - return "" - } - - if c.Options == nil { - c.Options = map[string]string{} - } - - // If not present, SSL mode is assumed "prefer". - if sslMode, ok := c.Options["sslmode"]; !ok || sslMode == "" { - c.Options["sslmode"] = "prefer" - } - - for k, v := range c.Options { - u = append(u, escaper.Replace(k)+"="+escaper.Replace(v)) - } - - sort.Strings(u) - - return strings.Join(u, " ") -} diff --git a/vendor/github.com/upper/db/v4/adapter/postgresql/custom_types.go b/vendor/github.com/upper/db/v4/adapter/postgresql/custom_types.go deleted file mode 100644 index d06ee209..00000000 --- a/vendor/github.com/upper/db/v4/adapter/postgresql/custom_types.go +++ /dev/null @@ -1,147 +0,0 @@ -// Copyright (c) 2012-present The upper.io/db authors. All rights reserved. 
-// -// Permission is hereby granted, free of charge, to any person obtaining -// a copy of this software and associated documentation files (the -// "Software"), to deal in the Software without restriction, including -// without limitation the rights to use, copy, modify, merge, publish, -// distribute, sublicense, and/or sell copies of the Software, and to -// permit persons to whom the Software is furnished to do so, subject to -// the following conditions: -// -// The above copyright notice and this permission notice shall be -// included in all copies or substantial portions of the Software. -// -// THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, -// EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF -// MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND -// NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE -// LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION -// OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION -// WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE. - -package postgresql - -import ( - "context" - "database/sql" - "database/sql/driver" - "time" - - "github.com/upper/db/v4/internal/sqlbuilder" -) - -// JSONBMap represents a map of interfaces with string keys -// (`map[string]interface{}`) that is compatible with PostgreSQL's JSONB type. -// JSONBMap satisfies sqlbuilder.ScannerValuer. -type JSONBMap map[string]interface{} - -// Value satisfies the driver.Valuer interface. -func (m JSONBMap) Value() (driver.Value, error) { - return JSONBValue(m) -} - -// Scan satisfies the sql.Scanner interface. -func (m *JSONBMap) Scan(src interface{}) error { - *m = map[string]interface{}(nil) - return ScanJSONB(m, src) -} - -// JSONBArray represents an array of any type (`[]interface{}`) that is -// compatible with PostgreSQL's JSONB type. JSONBArray satisfies -// sqlbuilder.ScannerValuer. 
-type JSONBArray []interface{} - -// Value satisfies the driver.Valuer interface. -func (a JSONBArray) Value() (driver.Value, error) { - return JSONBValue(a) -} - -// Scan satisfies the sql.Scanner interface. -func (a *JSONBArray) Scan(src interface{}) error { - return ScanJSONB(a, src) -} - -// JSONBValue takes an interface and provides a driver.Value that can be -// stored as a JSONB column. -func JSONBValue(i interface{}) (driver.Value, error) { - v := JSONB{i} - return v.Value() -} - -// ScanJSONB decodes a JSON byte stream into the passed dst value. -func ScanJSONB(dst interface{}, src interface{}) error { - v := JSONB{dst} - return v.Scan(src) -} - -type JSONBConverter struct { -} - -func (*JSONBConverter) ConvertValue(in interface{}) interface { - sql.Scanner - driver.Valuer -} { - return &JSONB{in} -} - -type timeWrapper struct { - v **time.Time - loc *time.Location -} - -func (t timeWrapper) Value() (driver.Value, error) { - if *t.v != nil { - return **t.v, nil - } - return nil, nil -} - -func (t *timeWrapper) Scan(src interface{}) error { - if src == nil { - nilTime := (*time.Time)(nil) - if t.v == nil { - t.v = &nilTime - } else { - *(t.v) = nilTime - } - return nil - } - tz := src.(time.Time) - if t.loc != nil && (tz.Location() == time.Local) { - tz = tz.In(t.loc) - } - if tz.Location().String() == "" { - tz = tz.In(time.UTC) - } - if *(t.v) == nil { - *(t.v) = &tz - } else { - **t.v = tz - } - return nil -} - -func (d *database) ConvertValueContext(ctx context.Context, in interface{}) interface{} { - tz, _ := ctx.Value("timezone").(*time.Location) - - switch v := in.(type) { - case *time.Time: - return &timeWrapper{&v, tz} - case **time.Time: - return &timeWrapper{v, tz} - } - - return d.ConvertValue(in) -} - -// Type checks. 
-var ( - _ sqlbuilder.ScannerValuer = &StringArray{} - _ sqlbuilder.ScannerValuer = &Int64Array{} - _ sqlbuilder.ScannerValuer = &Float64Array{} - _ sqlbuilder.ScannerValuer = &Float32Array{} - _ sqlbuilder.ScannerValuer = &BoolArray{} - _ sqlbuilder.ScannerValuer = &JSONBMap{} - _ sqlbuilder.ScannerValuer = &JSONBArray{} - _ sqlbuilder.ScannerValuer = &JSONB{} -) diff --git a/vendor/github.com/upper/db/v4/adapter/postgresql/custom_types_pgx.go b/vendor/github.com/upper/db/v4/adapter/postgresql/custom_types_pgx.go deleted file mode 100644 index 3559e6bf..00000000 --- a/vendor/github.com/upper/db/v4/adapter/postgresql/custom_types_pgx.go +++ /dev/null @@ -1,306 +0,0 @@ -// +build !pq - -// Copyright (c) 2012-present The upper.io/db authors. All rights reserved. -// -// Permission is hereby granted, free of charge, to any person obtaining -// a copy of this software and associated documentation files (the -// "Software"), to deal in the Software without restriction, including -// without limitation the rights to use, copy, modify, merge, publish, -// distribute, sublicense, and/or sell copies of the Software, and to -// permit persons to whom the Software is furnished to do so, subject to -// the following conditions: -// -// The above copyright notice and this permission notice shall be -// included in all copies or substantial portions of the Software. -// -// THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, -// EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF -// MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND -// NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE -// LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION -// OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION -// WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE. 
- -package postgresql - -import ( - "database/sql/driver" - - "github.com/jackc/pgtype" -) - -// JSONB represents a PostgreSQL's JSONB value: -// https://www.postgresql.org/docs/9.6/static/datatype-json.html. JSONB -// satisfies sqlbuilder.ScannerValuer. -type JSONB struct { - Data interface{} -} - -// MarshalJSON encodes the wrapper value as JSON. -func (j JSONB) MarshalJSON() ([]byte, error) { - t := &pgtype.JSONB{} - if err := t.Set(j.Data); err != nil { - return nil, err - } - return t.MarshalJSON() -} - -// UnmarshalJSON decodes the given JSON into the wrapped value. -func (j *JSONB) UnmarshalJSON(b []byte) error { - t := &pgtype.JSONB{} - if err := t.UnmarshalJSON(b); err != nil { - return err - } - if j.Data == nil { - j.Data = t.Get() - return nil - } - if err := t.AssignTo(&j.Data); err != nil { - return err - } - return nil -} - -// Scan satisfies the sql.Scanner interface. -func (j *JSONB) Scan(src interface{}) error { - t := &pgtype.JSONB{} - if err := t.Scan(src); err != nil { - return err - } - if j.Data == nil { - j.Data = t.Get() - return nil - } - if err := t.AssignTo(j.Data); err != nil { - return err - } - return nil -} - -// Value satisfies the driver.Valuer interface. -func (j JSONB) Value() (driver.Value, error) { - t := &pgtype.JSONB{} - if err := t.Set(j.Data); err != nil { - return nil, err - } - return t.Value() -} - -// StringArray represents a one-dimensional array of strings (`[]string{}`) -// that is compatible with PostgreSQL's text array (`text[]`). StringArray -// satisfies sqlbuilder.ScannerValuer. -type StringArray []string - -// Value satisfies the driver.Valuer interface. -func (a StringArray) Value() (driver.Value, error) { - t := pgtype.TextArray{} - if err := t.Set(a); err != nil { - return nil, err - } - return t.Value() -} - -// Scan satisfies the sql.Scanner interface. 
-func (sa *StringArray) Scan(src interface{}) error { - d := []string{} - t := pgtype.TextArray{} - if err := t.Scan(src); err != nil { - return err - } - if err := t.AssignTo(&d); err != nil { - return err - } - *sa = StringArray(d) - return nil -} - -type Bytea []byte - -func (b Bytea) Value() (driver.Value, error) { - t := pgtype.Bytea{Bytes: b} - if err := t.Set(b); err != nil { - return nil, err - } - return t.Value() -} - -func (b *Bytea) Scan(src interface{}) error { - d := []byte{} - t := pgtype.Bytea{} - if err := t.Scan(src); err != nil { - return err - } - if err := t.AssignTo(&d); err != nil { - return err - } - *b = Bytea(d) - return nil -} - -// ByteaArray represents a one-dimensional array of byte slices (`[][]byte{}`) -// that is compatible with PostgreSQL's bytea array (`bytea[]`). ByteaArray -// satisfies sqlbuilder.ScannerValuer. -type ByteaArray [][]byte - -// Value satisfies the driver.Valuer interface. -func (a ByteaArray) Value() (driver.Value, error) { - t := pgtype.ByteaArray{} - if err := t.Set(a); err != nil { - return nil, err - } - return t.Value() -} - -// Scan satisfies the sql.Scanner interface. -func (ba *ByteaArray) Scan(src interface{}) error { - d := [][]byte{} - t := pgtype.ByteaArray{} - if err := t.Scan(src); err != nil { - return err - } - if err := t.AssignTo(&d); err != nil { - return err - } - *ba = ByteaArray(d) - return nil -} - -// Int64Array represents a one-dimensional array of int64s (`[]int64{}`) that -// is compatible with PostgreSQL's integer array (`integer[]`). Int64Array -// satisfies sqlbuilder.ScannerValuer. -type Int64Array []int64 - -// Value satisfies the driver.Valuer interface. -func (i64a Int64Array) Value() (driver.Value, error) { - t := pgtype.Int8Array{} - if err := t.Set(i64a); err != nil { - return nil, err - } - return t.Value() -} - -// Scan satisfies the sql.Scanner interface. 
-func (i64a *Int64Array) Scan(src interface{}) error { - d := []int64{} - t := pgtype.Int8Array{} - if err := t.Scan(src); err != nil { - return err - } - if err := t.AssignTo(&d); err != nil { - return err - } - *i64a = Int64Array(d) - return nil -} - -// Int32Array represents a one-dimensional array of int32s (`[]int32{}`) that -// is compatible with PostgreSQL's integer array (`integer[]`). Int32Array -// satisfies sqlbuilder.ScannerValuer. -type Int32Array []int32 - -// Value satisfies the driver.Valuer interface. -func (i32a Int32Array) Value() (driver.Value, error) { - t := pgtype.Int4Array{} - if err := t.Set(i32a); err != nil { - return nil, err - } - return t.Value() -} - -// Scan satisfies the sql.Scanner interface. -func (i32a *Int32Array) Scan(src interface{}) error { - d := []int32{} - t := pgtype.Int4Array{} - if err := t.Scan(src); err != nil { - return err - } - if err := t.AssignTo(&d); err != nil { - return err - } - *i32a = Int32Array(d) - return nil -} - -// Float64Array represents a one-dimensional array of float64s (`[]float64{}`) -// that is compatible with PostgreSQL's double precision array (`double -// precision[]`). Float64Array satisfies sqlbuilder.ScannerValuer. -type Float64Array []float64 - -// Value satisfies the driver.Valuer interface. -func (f64a Float64Array) Value() (driver.Value, error) { - t := pgtype.Float8Array{} - if err := t.Set(f64a); err != nil { - return nil, err - } - return t.Value() -} - -// Scan satisfies the sql.Scanner interface. -func (f64a *Float64Array) Scan(src interface{}) error { - d := []float64{} - t := pgtype.Float8Array{} - if err := t.Scan(src); err != nil { - return err - } - if err := t.AssignTo(&d); err != nil { - return err - } - *f64a = Float64Array(d) - return nil -} - -// Float32Array represents a one-dimensional array of float32s (`[]float32{}`) -// that is compatible with PostgreSQL's double precision array (`double -// precision[]`). Float32Array satisfies sqlbuilder.ScannerValuer. 
-type Float32Array []float32 - -// Value satisfies the driver.Valuer interface. -func (f32a Float32Array) Value() (driver.Value, error) { - t := pgtype.Float8Array{} - if err := t.Set(f32a); err != nil { - return nil, err - } - return t.Value() -} - -// Scan satisfies the sql.Scanner interface. -func (f32a *Float32Array) Scan(src interface{}) error { - d := []float32{} - t := pgtype.Float8Array{} - if err := t.Scan(src); err != nil { - return err - } - if err := t.AssignTo(&d); err != nil { - return err - } - *f32a = Float32Array(d) - return nil -} - -// BoolArray represents a one-dimensional array of bools (`[]bool{}`) that -// is compatible with PostgreSQL's boolean array (`boolean[]`). BoolArray -// satisfies sqlbuilder.ScannerValuer. -type BoolArray []bool - -// Value satisfies the driver.Valuer interface. -func (ba BoolArray) Value() (driver.Value, error) { - t := pgtype.BoolArray{} - if err := t.Set(ba); err != nil { - return nil, err - } - return t.Value() -} - -// Scan satisfies the sql.Scanner interface. -func (ba *BoolArray) Scan(src interface{}) error { - d := []bool{} - t := pgtype.BoolArray{} - if err := t.Scan(src); err != nil { - return err - } - if err := t.AssignTo(&d); err != nil { - return err - } - *ba = BoolArray(d) - return nil -} diff --git a/vendor/github.com/upper/db/v4/adapter/postgresql/custom_types_pq.go b/vendor/github.com/upper/db/v4/adapter/postgresql/custom_types_pq.go deleted file mode 100644 index 20ef131a..00000000 --- a/vendor/github.com/upper/db/v4/adapter/postgresql/custom_types_pq.go +++ /dev/null @@ -1,269 +0,0 @@ -// +build pq - -package postgresql - -// Copyright (c) 2012-present The upper.io/db authors. All rights reserved. 
-// -// Permission is hereby granted, free of charge, to any person obtaining -// a copy of this software and associated documentation files (the -// "Software"), to deal in the Software without restriction, including -// without limitation the rights to use, copy, modify, merge, publish, -// distribute, sublicense, and/or sell copies of the Software, and to -// permit persons to whom the Software is furnished to do so, subject to -// the following conditions: -// -// The above copyright notice and this permission notice shall be -// included in all copies or substantial portions of the Software. -// -// THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, -// EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF -// MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND -// NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE -// LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION -// OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION -// WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE. - -import ( - "bytes" - "database/sql/driver" - "encoding/hex" - "encoding/json" - "errors" - "fmt" - "reflect" - "strconv" - "time" - - "github.com/lib/pq" -) - -// JSONB represents a PostgreSQL's JSONB value: -// https://www.postgresql.org/docs/9.6/static/datatype-json.html. JSONB -// satisfies sqlbuilder.ScannerValuer. -type JSONB struct { - Data interface{} -} - -// MarshalJSON encodes the wrapper value as JSON. -func (j JSONB) MarshalJSON() ([]byte, error) { - return json.Marshal(j.Data) -} - -// UnmarshalJSON decodes the given JSON into the wrapped value. -func (j *JSONB) UnmarshalJSON(b []byte) error { - var v interface{} - if err := json.Unmarshal(b, &v); err != nil { - return err - } - j.Data = v - return nil -} - -// Scan satisfies the sql.Scanner interface. 
-func (j *JSONB) Scan(src interface{}) error { - if j.Data == nil { - return nil - } - if src == nil { - dv := reflect.Indirect(reflect.ValueOf(j.Data)) - dv.Set(reflect.Zero(dv.Type())) - return nil - } - - b, ok := src.([]byte) - if !ok { - return errors.New("Scan source was not []byte") - } - - if err := json.Unmarshal(b, j.Data); err != nil { - return err - } - return nil -} - -// Value satisfies the driver.Valuer interface. -func (j JSONB) Value() (driver.Value, error) { - // See https://github.com/lib/pq/issues/528#issuecomment-257197239 on why we - // return string instead of []byte. - if j.Data == nil { - return nil, nil - } - if v, ok := j.Data.(json.RawMessage); ok { - return string(v), nil - } - b, err := json.Marshal(j.Data) - if err != nil { - return nil, err - } - return string(b), nil -} - -// StringArray represents a one-dimensional array of strings (`[]string{}`) -// that is compatible with PostgreSQL's text array (`text[]`). StringArray -// satisfies sqlbuilder.ScannerValuer. -type StringArray pq.StringArray - -// Value satisfies the driver.Valuer interface. -func (a StringArray) Value() (driver.Value, error) { - return pq.StringArray(a).Value() -} - -// Scan satisfies the sql.Scanner interface. -func (a *StringArray) Scan(src interface{}) error { - s := pq.StringArray(*a) - if err := s.Scan(src); err != nil { - return err - } - *a = StringArray(s) - return nil -} - -// Int64Array represents a one-dimensional array of int64s (`[]int64{}`) that -// is compatible with PostgreSQL's integer array (`integer[]`). Int64Array -// satisfies sqlbuilder.ScannerValuer. -type Int64Array pq.Int64Array - -// Value satisfies the driver.Valuer interface. -func (i Int64Array) Value() (driver.Value, error) { - return pq.Int64Array(i).Value() -} - -// Scan satisfies the sql.Scanner interface. 
-func (i *Int64Array) Scan(src interface{}) error {
-	s := pq.Int64Array(*i)
-	if err := s.Scan(src); err != nil {
-		return err
-	}
-	*i = Int64Array(s)
-	return nil
-}
-
-// Float64Array represents a one-dimensional array of float64s (`[]float64{}`)
-// that is compatible with PostgreSQL's double precision array (`double
-// precision[]`). Float64Array satisfies sqlbuilder.ScannerValuer.
-type Float64Array pq.Float64Array
-
-// Value satisfies the driver.Valuer interface.
-func (f Float64Array) Value() (driver.Value, error) {
-	return pq.Float64Array(f).Value()
-}
-
-// Scan satisfies the sql.Scanner interface.
-func (f *Float64Array) Scan(src interface{}) error {
-	s := pq.Float64Array(*f)
-	if err := s.Scan(src); err != nil {
-		return err
-	}
-	*f = Float64Array(s)
-	return nil
-}
-
-// Float32Array represents a one-dimensional array of float32s (`[]float32{}`)
-// that is compatible with PostgreSQL's double precision array (`double
-// precision[]`). Float32Array satisfies sqlbuilder.ScannerValuer.
-type Float32Array pq.Float32Array
-
-// Value satisfies the driver.Valuer interface.
-func (f Float32Array) Value() (driver.Value, error) {
-	return pq.Float32Array(f).Value()
-}
-
-// Scan satisfies the sql.Scanner interface.
-func (f *Float32Array) Scan(src interface{}) error {
-	s := pq.Float32Array(*f)
-	if err := s.Scan(src); err != nil {
-		return err
-	}
-	*f = Float32Array(s)
-	return nil
-}
-
-// BoolArray represents a one-dimensional array of bools (`[]bool{}`) that
-// is compatible with PostgreSQL's boolean array (`boolean[]`). BoolArray
-// satisfies sqlbuilder.ScannerValuer.
-type BoolArray pq.BoolArray
-
-// Value satisfies the driver.Valuer interface.
-func (b BoolArray) Value() (driver.Value, error) {
-	return pq.BoolArray(b).Value()
-}
-
-// Scan satisfies the sql.Scanner interface.
-func (b *BoolArray) Scan(src interface{}) error { - s := pq.BoolArray(*b) - if err := s.Scan(src); err != nil { - return err - } - *b = BoolArray(s) - return nil -} - -type Bytea []byte - -// Scan satisfies the sql.Scanner interface. -func (b *Bytea) Scan(src interface{}) error { - decoded, err := parseBytea(src.([]byte)) - if err != nil { - return err - } - if len(decoded) < 1 { - *b = nil - return nil - } - (*b) = make(Bytea, len(decoded)) - for i := range decoded { - (*b)[i] = decoded[i] - } - return nil -} - -type Time time.Time - -// Parse a bytea value received from the server. Both "hex" and the legacy -// "escape" format are supported. -func parseBytea(s []byte) (result []byte, err error) { - if len(s) >= 2 && bytes.Equal(s[:2], []byte("\\x")) { - // bytea_output = hex - s = s[2:] // trim off leading "\\x" - result = make([]byte, hex.DecodedLen(len(s))) - _, err := hex.Decode(result, s) - if err != nil { - return nil, err - } - } else { - // bytea_output = escape - for len(s) > 0 { - if s[0] == '\\' { - // escaped '\\' - if len(s) >= 2 && s[1] == '\\' { - result = append(result, '\\') - s = s[2:] - continue - } - - // '\\' followed by an octal number - if len(s) < 4 { - return nil, fmt.Errorf("invalid bytea sequence %v", s) - } - r, err := strconv.ParseInt(string(s[1:4]), 8, 9) - if err != nil { - return nil, fmt.Errorf("could not parse bytea value: %s", err.Error()) - } - result = append(result, byte(r)) - s = s[4:] - } else { - // We hit an unescaped, raw byte. Try to read in as many as - // possible in one go. - i := bytes.IndexByte(s, '\\') - if i == -1 { - result = append(result, s...) - break - } - result = append(result, s[:i]...) 
- s = s[i:] - } - } - } - - return result, nil -} diff --git a/vendor/github.com/upper/db/v4/adapter/postgresql/database.go b/vendor/github.com/upper/db/v4/adapter/postgresql/database.go deleted file mode 100644 index cea7da2e..00000000 --- a/vendor/github.com/upper/db/v4/adapter/postgresql/database.go +++ /dev/null @@ -1,201 +0,0 @@ -// Copyright (c) 2012-present The upper.io/db authors. All rights reserved. -// -// Permission is hereby granted, free of charge, to any person obtaining -// a copy of this software and associated documentation files (the -// "Software"), to deal in the Software without restriction, including -// without limitation the rights to use, copy, modify, merge, publish, -// distribute, sublicense, and/or sell copies of the Software, and to -// permit persons to whom the Software is furnished to do so, subject to -// the following conditions: -// -// The above copyright notice and this permission notice shall be -// included in all copies or substantial portions of the Software. -// -// THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, -// EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF -// MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND -// NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE -// LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION -// OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION -// WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE. - -// Package postgresql provides an adapter for PostgreSQL. -// See https://github.com/upper/db/adapter/postgresql for documentation, -// particularities and usage examples. 
-package postgresql
-
-import (
-	"fmt"
-	"strings"
-
-	db "github.com/upper/db/v4"
-	"github.com/upper/db/v4/internal/sqladapter"
-	"github.com/upper/db/v4/internal/sqladapter/exql"
-	"github.com/upper/db/v4/internal/sqlbuilder"
-)
-
-type database struct {
-}
-
-func (*database) Template() *exql.Template {
-	return template
-}
-
-func (*database) Collections(sess sqladapter.Session) (collections []string, err error) {
-	q := sess.SQL().
-		Select("table_name").
-		From("information_schema.tables").
-		Where("table_schema = ?", "public")
-
-	iter := q.Iterator()
-	defer iter.Close()
-
-	for iter.Next() {
-		var name string
-		if err := iter.Scan(&name); err != nil {
-			return nil, err
-		}
-		collections = append(collections, name)
-	}
-	if err := iter.Err(); err != nil {
-		return nil, err
-	}
-
-	return collections, nil
-}
-
-func (*database) ConvertValue(in interface{}) interface{} {
-	switch v := in.(type) {
-	case *[]int64:
-		return (*Int64Array)(v)
-	case *[]string:
-		return (*StringArray)(v)
-	case *[]float64:
-		return (*Float64Array)(v)
-	case *[]bool:
-		return (*BoolArray)(v)
-	case *map[string]interface{}:
-		return (*JSONBMap)(v)
-
-	case []int64:
-		return (*Int64Array)(&v)
-	case []string:
-		return (*StringArray)(&v)
-	case []float64:
-		return (*Float64Array)(&v)
-	case []bool:
-		return (*BoolArray)(&v)
-	case map[string]interface{}:
-		return (*JSONBMap)(&v)
-
-	}
-	return in
-}
-
-func (*database) CompileStatement(sess sqladapter.Session, stmt *exql.Statement, args []interface{}) (string, []interface{}, error) {
-	compiled, err := stmt.Compile(template)
-	if err != nil {
-		return "", nil, err
-	}
-
-	query, args := sqlbuilder.Preprocess(compiled, args)
-	query = string(sqladapter.ReplaceWithDollarSign([]byte(query)))
-	return query, args, nil
-}
-
-func (*database) Err(err error) error {
-	if err != nil {
-		s := err.Error()
-		// These errors are not exported so we have to check them by their string value.
- if strings.Contains(s, `too many clients`) || strings.Contains(s, `remaining connection slots are reserved`) || strings.Contains(s, `too many open`) { - return db.ErrTooManyClients - } - } - return err -} - -func (*database) NewCollection() sqladapter.CollectionAdapter { - return &collectionAdapter{} -} - -func (*database) LookupName(sess sqladapter.Session) (string, error) { - q := sess.SQL(). - Select(db.Raw("CURRENT_DATABASE() AS name")) - - iter := q.Iterator() - defer iter.Close() - - if iter.Next() { - var name string - if err := iter.Scan(&name); err != nil { - return "", err - } - return name, nil - } - - return "", iter.Err() -} - -func (*database) TableExists(sess sqladapter.Session, name string) error { - q := sess.SQL(). - Select("table_name"). - From("information_schema.tables"). - Where("table_catalog = ? AND table_name = ?", sess.Name(), name) - - iter := q.Iterator() - defer iter.Close() - - if iter.Next() { - var name string - if err := iter.Scan(&name); err != nil { - return err - } - return nil - } - if err := iter.Err(); err != nil { - return err - } - - return db.ErrCollectionDoesNotExist -} - -func (*database) PrimaryKeys(sess sqladapter.Session, tableName string) ([]string, error) { - q := sess.SQL(). - Select("pg_attribute.attname AS pkey"). - From("pg_index", "pg_class", "pg_attribute"). - Where(` - pg_class.oid = '` + quotedTableName(tableName) + `'::regclass - AND indrelid = pg_class.oid - AND pg_attribute.attrelid = pg_class.oid - AND pg_attribute.attnum = ANY(pg_index.indkey) - AND indisprimary - `).OrderBy("pkey") - - iter := q.Iterator() - defer iter.Close() - - pk := []string{} - - for iter.Next() { - var k string - if err := iter.Scan(&k); err != nil { - return nil, err - } - pk = append(pk, k) - } - if err := iter.Err(); err != nil { - return nil, err - } - - return pk, nil -} - -// quotedTableName returns a valid regclass name for both regular tables and -// for schemas. 
-func quotedTableName(s string) string { - chunks := strings.Split(s, ".") - for i := range chunks { - chunks[i] = fmt.Sprintf("%q", chunks[i]) - } - return strings.Join(chunks, ".") -} diff --git a/vendor/github.com/upper/db/v4/adapter/postgresql/database_pgx.go b/vendor/github.com/upper/db/v4/adapter/postgresql/database_pgx.go deleted file mode 100644 index 954a9382..00000000 --- a/vendor/github.com/upper/db/v4/adapter/postgresql/database_pgx.go +++ /dev/null @@ -1,46 +0,0 @@ -//go:build !pq -// +build !pq - -package postgresql - -// Copyright (c) 2012-present The upper.io/db authors. All rights reserved. -// -// Permission is hereby granted, free of charge, to any person obtaining -// a copy of this software and associated documentation files (the -// "Software"), to deal in the Software without restriction, including -// without limitation the rights to use, copy, modify, merge, publish, -// distribute, sublicense, and/or sell copies of the Software, and to -// permit persons to whom the Software is furnished to do so, subject to -// the following conditions: -// -// The above copyright notice and this permission notice shall be -// included in all copies or substantial portions of the Software. -// -// THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, -// EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF -// MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND -// NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE -// LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION -// OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION -// WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE. 
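quotedTableName above splits a possibly schema-qualified name on `.` and double-quotes each chunk with `%q`, so the result survives a `::regclass` cast. A standalone sketch of the same idea (quoteRegclass is an illustrative name, not the adapter's function):

```go
package main

import (
	"fmt"
	"strings"
)

// quoteRegclass double-quotes each dot-separated part of a table name, so
// `public.users` becomes `"public"."users"`, a form ::regclass accepts.
func quoteRegclass(s string) string {
	chunks := strings.Split(s, ".")
	for i := range chunks {
		chunks[i] = fmt.Sprintf("%q", chunks[i])
	}
	return strings.Join(chunks, ".")
}

func main() {
	fmt.Println(quoteRegclass("public.users")) // "public"."users"
}
```

One caveat, which applies to the original helper as well: Go's `%q` uses Go string escaping, not SQL identifier escaping, so an embedded double quote would be emitted as `\"` rather than the SQL-correct `""`.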
- -import ( - "context" - "database/sql" - _ "github.com/jackc/pgx/v4/stdlib" - "github.com/upper/db/v4/internal/sqladapter" - "time" -) - -func (*database) OpenDSN(sess sqladapter.Session, dsn string) (*sql.DB, error) { - connURL, err := ParseURL(dsn) - if err != nil { - return nil, err - } - if tz := connURL.Options["timezone"]; tz != "" { - loc, _ := time.LoadLocation(tz) - ctx := context.WithValue(sess.Context(), "timezone", loc) - sess.SetContext(ctx) - } - return sql.Open("pgx", dsn) -} diff --git a/vendor/github.com/upper/db/v4/adapter/postgresql/database_pq.go b/vendor/github.com/upper/db/v4/adapter/postgresql/database_pq.go deleted file mode 100644 index 7b0c9b76..00000000 --- a/vendor/github.com/upper/db/v4/adapter/postgresql/database_pq.go +++ /dev/null @@ -1,45 +0,0 @@ -// +build pq - -package postgresql - -// Copyright (c) 2012-present The upper.io/db authors. All rights reserved. -// -// Permission is hereby granted, free of charge, to any person obtaining -// a copy of this software and associated documentation files (the -// "Software"), to deal in the Software without restriction, including -// without limitation the rights to use, copy, modify, merge, publish, -// distribute, sublicense, and/or sell copies of the Software, and to -// permit persons to whom the Software is furnished to do so, subject to -// the following conditions: -// -// The above copyright notice and this permission notice shall be -// included in all copies or substantial portions of the Software. -// -// THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, -// EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF -// MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND -// NONINFRINGEMENT. 
IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE -// LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION -// OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION -// WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE. - -import ( - "context" - "database/sql" - _ "github.com/lib/pq" - "github.com/upper/db/v4/internal/sqladapter" - "time" -) - -func (*database) OpenDSN(sess sqladapter.Session, dsn string) (*sql.DB, error) { - connURL, err := ParseURL(dsn) - if err != nil { - return nil, err - } - if tz := connURL.Options["timezone"]; tz != "" { - loc, _ := time.LoadLocation(tz) - ctx := context.WithValue(sess.Context(), "timezone", loc) - sess.SetContext(ctx) - } - return sql.Open("postgres", dsn) -} diff --git a/vendor/github.com/upper/db/v4/adapter/postgresql/docker-compose.yml b/vendor/github.com/upper/db/v4/adapter/postgresql/docker-compose.yml deleted file mode 100644 index 4f4884a3..00000000 --- a/vendor/github.com/upper/db/v4/adapter/postgresql/docker-compose.yml +++ /dev/null @@ -1,13 +0,0 @@ -version: '3' - -services: - - server: - image: postgres:${POSTGRES_VERSION:-11} - environment: - POSTGRES_USER: ${DB_USERNAME:-upperio_user} - POSTGRES_PASSWORD: ${DB_PASSWORD:-upperio//s3cr37} - POSTGRES_DB: ${DB_NAME:-upperio} - ports: - - '${DB_HOST:-127.0.0.1}:${DB_PORT:-5432}:5432' - diff --git a/vendor/github.com/upper/db/v4/adapter/postgresql/postgresql.go b/vendor/github.com/upper/db/v4/adapter/postgresql/postgresql.go deleted file mode 100644 index 577f4af1..00000000 --- a/vendor/github.com/upper/db/v4/adapter/postgresql/postgresql.go +++ /dev/null @@ -1,51 +0,0 @@ -// Copyright (c) 2012-present The upper.io/db authors. All rights reserved. 
-//
-// Permission is hereby granted, free of charge, to any person obtaining
-// a copy of this software and associated documentation files (the
-// "Software"), to deal in the Software without restriction, including
-// without limitation the rights to use, copy, modify, merge, publish,
-// distribute, sublicense, and/or sell copies of the Software, and to
-// permit persons to whom the Software is furnished to do so, subject to
-// the following conditions:
-//
-// The above copyright notice and this permission notice shall be
-// included in all copies or substantial portions of the Software.
-//
-// THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND,
-// EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF
-// MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND
-// NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE
-// LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION
-// OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION
-// WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.
-
-package postgresql
-
-import (
-	"database/sql"
-
-	db "github.com/upper/db/v4"
-	"github.com/upper/db/v4/internal/sqladapter"
-	"github.com/upper/db/v4/internal/sqlbuilder"
-)
-
-// Adapter is the internal name of the adapter.
-const Adapter = "postgresql"
-
-var registeredAdapter = sqladapter.RegisterAdapter(Adapter, &database{})
-
-// Open establishes a connection to the database server and returns a
-// sqlbuilder.Session instance (which is compatible with db.Session).
-func Open(connURL db.ConnectionURL) (db.Session, error) {
-	return registeredAdapter.OpenDSN(connURL)
-}
-
-// NewTx creates a sqlbuilder.Tx instance by wrapping a *sql.Tx value.
-func NewTx(sqlTx *sql.Tx) (sqlbuilder.Tx, error) {
-	return registeredAdapter.NewTx(sqlTx)
-}
-
-// New creates a sqlbuilder.Session instance by wrapping a *sql.DB value.
-func New(sqlDB *sql.DB) (db.Session, error) { - return registeredAdapter.New(sqlDB) -} diff --git a/vendor/github.com/upper/db/v4/adapter/postgresql/template.go b/vendor/github.com/upper/db/v4/adapter/postgresql/template.go deleted file mode 100644 index 59898d3d..00000000 --- a/vendor/github.com/upper/db/v4/adapter/postgresql/template.go +++ /dev/null @@ -1,210 +0,0 @@ -// Copyright (c) 2012-present The upper.io/db authors. All rights reserved. -// -// Permission is hereby granted, free of charge, to any person obtaining -// a copy of this software and associated documentation files (the -// "Software"), to deal in the Software without restriction, including -// without limitation the rights to use, copy, modify, merge, publish, -// distribute, sublicense, and/or sell copies of the Software, and to -// permit persons to whom the Software is furnished to do so, subject to -// the following conditions: -// -// The above copyright notice and this permission notice shall be -// included in all copies or substantial portions of the Software. -// -// THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, -// EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF -// MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND -// NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE -// LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION -// OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION -// WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE. 
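The layout constants in template.go below are ordinary Go text/template fragments; exql executes them with helper functions such as `compile` and `defined` installed. A minimal sketch of the rendering mechanism using a simplified WHERE layout (whereLayout here is an illustration, not the adapter's actual constant, which also handles whitespace trimming):

```go
package main

import (
	"fmt"
	"strings"
	"text/template"
)

// whereLayout is a trimmed-down analogue of adapterWhereLayout below: the
// clause is emitted only when there are conditions to render.
const whereLayout = `{{if .Conds}}WHERE {{.Conds}}{{end}}`

// render executes the layout against a data map, the same way exql feeds
// clause data into its templates.
func render(conds string) string {
	t := template.Must(template.New("where").Parse(whereLayout))
	var b strings.Builder
	if err := t.Execute(&b, map[string]string{"Conds": conds}); err != nil {
		panic(err)
	}
	return b.String()
}

func main() {
	fmt.Println(render(`"id" = $1`)) // WHERE "id" = $1
	fmt.Printf("%q\n", render(""))   // "" (no conditions, no WHERE clause)
}
```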
- -package postgresql - -import ( - "github.com/upper/db/v4/internal/adapter" - "github.com/upper/db/v4/internal/cache" - "github.com/upper/db/v4/internal/sqladapter/exql" -) - -const ( - adapterColumnSeparator = `.` - adapterIdentifierSeparator = `, ` - adapterIdentifierQuote = `"{{.Value}}"` - adapterValueSeparator = `, ` - adapterValueQuote = `'{{.}}'` - adapterAndKeyword = `AND` - adapterOrKeyword = `OR` - adapterDescKeyword = `DESC` - adapterAscKeyword = `ASC` - adapterAssignmentOperator = `=` - adapterClauseGroup = `({{.}})` - adapterClauseOperator = ` {{.}} ` - adapterColumnValue = `{{.Column}} {{.Operator}} {{.Value}}` - adapterTableAliasLayout = `{{.Name}}{{if .Alias}} AS {{.Alias}}{{end}}` - adapterColumnAliasLayout = `{{.Name}}{{if .Alias}} AS {{.Alias}}{{end}}` - adapterSortByColumnLayout = `{{.Column}} {{.Order}}` - - adapterOrderByLayout = ` - {{if .SortColumns}} - ORDER BY {{.SortColumns}} - {{end}} - ` - - adapterWhereLayout = ` - {{if .Conds}} - WHERE {{.Conds}} - {{end}} - ` - - adapterUsingLayout = ` - {{if .Columns}} - USING ({{.Columns}}) - {{end}} - ` - - adapterJoinLayout = ` - {{if .Table}} - {{ if .On }} - {{.Type}} JOIN {{.Table}} - {{.On}} - {{ else if .Using }} - {{.Type}} JOIN {{.Table}} - {{.Using}} - {{ else if .Type | eq "CROSS" }} - {{.Type}} JOIN {{.Table}} - {{else}} - NATURAL {{.Type}} JOIN {{.Table}} - {{end}} - {{end}} - ` - - adapterOnLayout = ` - {{if .Conds}} - ON {{.Conds}} - {{end}} - ` - - adapterSelectLayout = ` - SELECT - {{if .Distinct}} - DISTINCT - {{end}} - - {{if defined .Columns}} - {{.Columns | compile}} - {{else}} - * - {{end}} - - {{if defined .Table}} - FROM {{.Table | compile}} - {{end}} - - {{.Joins | compile}} - - {{.Where | compile}} - - {{if defined .GroupBy}} - {{.GroupBy | compile}} - {{end}} - - {{.OrderBy | compile}} - - {{if .Limit}} - LIMIT {{.Limit}} - {{end}} - - {{if .Offset}} - OFFSET {{.Offset}} - {{end}} - ` - adapterDeleteLayout = ` - DELETE - FROM {{.Table | compile}} - {{.Where | compile}} 
- ` - adapterUpdateLayout = ` - UPDATE - {{.Table | compile}} - SET {{.ColumnValues | compile}} - {{.Where | compile}} - ` - - adapterSelectCountLayout = ` - SELECT - COUNT(1) AS _t - FROM {{.Table | compile}} - {{.Where | compile}} - ` - - adapterInsertLayout = ` - INSERT INTO {{.Table | compile}} - {{if defined .Columns}}({{.Columns | compile}}){{end}} - VALUES - {{if defined .Values}} - {{.Values | compile}} - {{else}} - (default) - {{end}} - {{if defined .Returning}} - RETURNING {{.Returning | compile}} - {{end}} - ` - - adapterTruncateLayout = ` - TRUNCATE TABLE {{.Table | compile}} RESTART IDENTITY - ` - - adapterDropDatabaseLayout = ` - DROP DATABASE {{.Database | compile}} - ` - - adapterDropTableLayout = ` - DROP TABLE {{.Table | compile}} - ` - - adapterGroupByLayout = ` - {{if .GroupColumns}} - GROUP BY {{.GroupColumns}} - {{end}} - ` -) - -var template = &exql.Template{ - ColumnSeparator: adapterColumnSeparator, - IdentifierSeparator: adapterIdentifierSeparator, - IdentifierQuote: adapterIdentifierQuote, - ValueSeparator: adapterValueSeparator, - ValueQuote: adapterValueQuote, - AndKeyword: adapterAndKeyword, - OrKeyword: adapterOrKeyword, - DescKeyword: adapterDescKeyword, - AscKeyword: adapterAscKeyword, - AssignmentOperator: adapterAssignmentOperator, - ClauseGroup: adapterClauseGroup, - ClauseOperator: adapterClauseOperator, - ColumnValue: adapterColumnValue, - TableAliasLayout: adapterTableAliasLayout, - ColumnAliasLayout: adapterColumnAliasLayout, - SortByColumnLayout: adapterSortByColumnLayout, - WhereLayout: adapterWhereLayout, - JoinLayout: adapterJoinLayout, - OnLayout: adapterOnLayout, - UsingLayout: adapterUsingLayout, - OrderByLayout: adapterOrderByLayout, - InsertLayout: adapterInsertLayout, - SelectLayout: adapterSelectLayout, - UpdateLayout: adapterUpdateLayout, - DeleteLayout: adapterDeleteLayout, - TruncateLayout: adapterTruncateLayout, - DropDatabaseLayout: adapterDropDatabaseLayout, - DropTableLayout: adapterDropTableLayout, - 
CountLayout: adapterSelectCountLayout,
-	GroupByLayout: adapterGroupByLayout,
-	Cache: cache.NewCache(),
-	ComparisonOperator: map[adapter.ComparisonOperator]string{
-		adapter.ComparisonOperatorRegExp:    "~",
-		adapter.ComparisonOperatorNotRegExp: "!~",
-	},
-}
diff --git a/vendor/github.com/upper/db/v4/clauses.go b/vendor/github.com/upper/db/v4/clauses.go
deleted file mode 100644
index b6936bdb..00000000
--- a/vendor/github.com/upper/db/v4/clauses.go
+++ /dev/null
@@ -1,489 +0,0 @@
-// Copyright (c) 2012-present The upper.io/db authors. All rights reserved.
-//
-// Permission is hereby granted, free of charge, to any person obtaining
-// a copy of this software and associated documentation files (the
-// "Software"), to deal in the Software without restriction, including
-// without limitation the rights to use, copy, modify, merge, publish,
-// distribute, sublicense, and/or sell copies of the Software, and to
-// permit persons to whom the Software is furnished to do so, subject to
-// the following conditions:
-//
-// The above copyright notice and this permission notice shall be
-// included in all copies or substantial portions of the Software.
-//
-// THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND,
-// EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF
-// MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND
-// NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE
-// LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION
-// OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION
-// WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.
-
-package db
-
-import (
-	"context"
-	"fmt"
-)
-
-// Selector represents a SELECT statement.
-type Selector interface {
-	// Columns defines which columns to retrieve.
-	//
-	// You should call From() after Columns() if you want to query data from a
-	// specific table.
-	//
-	//   s.Columns("name", "last_name").From(...)
-	//
-	// It is also possible to use an alias for the column; this could be handy if
-	// you plan to use the alias later, use the "AS" keyword to denote an alias.
-	//
-	//   s.Columns("name AS n")
-	//
-	// or the shortcut:
-	//
-	//   s.Columns("name n")
-	//
-	// If you don't want the column to be escaped use the db.Raw
-	// function.
-	//
-	//   s.Columns(db.Raw("MAX(id)"))
-	//
-	// The above statement is equivalent to:
-	//
-	//   s.Columns(db.Func("MAX", "id"))
-	Columns(columns ...interface{}) Selector
-
-	// From represents a FROM clause and is typically used after Columns().
-	//
-	// FROM defines from which table data is going to be retrieved.
-	//
-	//   s.Columns(...).From("people")
-	//
-	// It is also possible to use an alias for the table; this could be handy if
-	// you plan to use the alias later:
-	//
-	//   s.Columns(...).From("people AS p").Where("p.name = ?", ...)
-	//
-	// Or with the shortcut:
-	//
-	//   s.Columns(...).From("people p").Where("p.name = ?", ...)
-	From(tables ...interface{}) Selector
-
-	// Distinct represents a DISTINCT clause.
-	//
-	// DISTINCT is used to ask the database to return only values that are
-	// different.
-	Distinct(columns ...interface{}) Selector
-
-	// As defines an alias for a table.
-	As(string) Selector
-
-	// Where specifies the conditions that columns must match in order to be
-	// retrieved.
-	//
-	// Where accepts raw strings and fmt.Stringer to define conditions and
-	// interface{} to specify parameters. Be careful not to embed any parameters
-	// within the SQL part as that could lead to security problems. You can use
-	// the question mark (?) as placeholder for parameters.
-	//
-	//   s.Where("name = ?", "max")
-	//
-	//   s.Where("name = ? AND last_name = ?", "Mary", "Doe")
-	//
-	//   s.Where("last_name IS NULL")
-	//
-	// You can also use other types of parameters besides only strings, like:
-	//
-	//   s.Where("online = ?
AND last_logged <= ?", true, time.Now())
-	//
-	// and Where() will transform them into strings before feeding them to the
-	// database.
-	//
-	// When an unknown type is provided, Where() will first try to match it with
-	// the Marshaler interface, then with fmt.Stringer and finally, if the
-	// argument does not satisfy any of those interfaces Where() will use
-	// fmt.Sprintf("%v", arg) to transform the type into a string.
-	//
-	// Subsequent calls to Where() will overwrite previously set conditions; if
-	// you want these new conditions to be appended use And() instead.
-	Where(conds ...interface{}) Selector
-
-	// And appends more constraints to the WHERE clause without overwriting
-	// conditions that have been already set.
-	And(conds ...interface{}) Selector
-
-	// GroupBy represents a GROUP BY statement.
-	//
-	// GROUP BY defines which columns should be used to aggregate and group
-	// results.
-	//
-	//   s.GroupBy("country_id")
-	//
-	// GroupBy accepts more than one column:
-	//
-	//   s.GroupBy("country_id", "city_id")
-	GroupBy(columns ...interface{}) Selector
-
-	// Having(...interface{}) Selector
-
-	// OrderBy represents an ORDER BY statement.
-	//
-	// ORDER BY is used to define which columns are going to be used to sort
-	// results.
-	//
-	// Use the column name to sort results in ascending order.
-	//
-	//   // "last_name" ASC
-	//   s.OrderBy("last_name")
-	//
-	// Prefix the column name with the minus sign (-) to sort results in
-	// descending order.
-	//
-	//   // "last_name" DESC
-	//   s.OrderBy("-last_name")
-	//
-	// If you would rather be very explicit, you can also use ASC and DESC.
-	//
-	//   s.OrderBy("last_name ASC")
-	//
-	//   s.OrderBy("last_name DESC", "name ASC")
-	OrderBy(columns ...interface{}) Selector
-
-	// Join represents a JOIN statement.
-	//
-	// JOIN statements are used to define external tables that the user wants to
-	// include as part of the result.
-	//
-	// You can use the On() method after Join() to define the conditions of the
-	// join.
-	//
-	//   s.Join("author").On("author.id = book.author_id")
-	//
-	// If you don't specify conditions for the join, a NATURAL JOIN will be used.
-	//
-	// On() accepts the same arguments as Where()
-	//
-	// You can also use Using() after Join().
-	//
-	//   s.Join("employee").Using("department_id")
-	Join(table ...interface{}) Selector
-
-	// FullJoin is like Join() but with FULL JOIN.
-	FullJoin(...interface{}) Selector
-
-	// CrossJoin is like Join() but with CROSS JOIN.
-	CrossJoin(...interface{}) Selector
-
-	// RightJoin is like Join() but with RIGHT JOIN.
-	RightJoin(...interface{}) Selector
-
-	// LeftJoin is like Join() but with LEFT JOIN.
-	LeftJoin(...interface{}) Selector
-
-	// Using represents the USING clause.
-	//
-	// USING is used to specify columns to join results.
-	//
-	//   s.LeftJoin(...).Using("country_id")
-	Using(...interface{}) Selector
-
-	// On represents the ON clause.
-	//
-	// ON is used to define conditions on a join.
-	//
-	//   s.Join(...).On("b.author_id = a.id")
-	On(...interface{}) Selector
-
-	// Limit represents the LIMIT parameter.
-	//
-	// LIMIT defines the maximum number of rows to return from the table. A
-	// negative limit cancels any previous limit settings.
-	//
-	//   s.Limit(42)
-	Limit(int) Selector
-
-	// Offset represents the OFFSET parameter.
-	//
-	// OFFSET defines how many results are going to be skipped before starting to
-	// return results. A negative offset cancels any previous offset settings.
-	//
-	//   s.Offset(56)
-	Offset(int) Selector
-
-	// Amend lets you alter the query's text just before sending it to the
-	// database server.
-	Amend(func(queryIn string) (queryOut string)) Selector
-
-	// Paginate returns a paginator that can display a paginated list of items.
-	// Paginators ignore previous Offset and Limit settings. Page numbering
-	// starts at 1.
-	Paginate(uint) Paginator
-
-	// Iterator provides methods to iterate over the results returned by the
-	// Selector.
-	Iterator() Iterator
-
-	// IteratorContext provides methods to iterate over the results returned by
-	// the Selector.
-	IteratorContext(ctx context.Context) Iterator
-
-	// SQLPreparer provides methods for creating prepared statements.
-	SQLPreparer
-
-	// SQLGetter provides methods to compile and execute a query that returns
-	// results.
-	SQLGetter
-
-	// ResultMapper provides methods to retrieve and map results.
-	ResultMapper
-
-	// fmt.Stringer provides `String() string`, you can use `String()` to compile
-	// the `Selector` into a string.
-	fmt.Stringer
-
-	// Arguments returns the arguments that are prepared for this query.
-	Arguments() []interface{}
-}
-
-// Inserter represents an INSERT statement.
-type Inserter interface {
-	// Columns represents the COLUMNS clause.
-	//
-	// COLUMNS defines the columns that we are going to provide values for.
-	//
-	//   i.Columns("name", "last_name").Values(...)
-	Columns(...string) Inserter
-
-	// Values represents the VALUES clause.
-	//
-	// VALUES defines the values of the columns.
-	//
-	//   i.Columns(...).Values("María", "Méndez")
-	//
-	//   i.Values(map[string]string{"name": "María"})
-	Values(...interface{}) Inserter
-
-	// Arguments returns the arguments that are prepared for this query.
-	Arguments() []interface{}
-
-	// Returning represents a RETURNING clause.
-	//
-	// RETURNING specifies which columns should be returned after INSERT.
-	//
-	// RETURNING may not be supported by all SQL databases.
-	Returning(columns ...string) Inserter
-
-	// Iterator provides methods to iterate over the results returned by the
-	// Inserter. This is only possible when using Returning().
-	Iterator() Iterator
-
-	// IteratorContext provides methods to iterate over the results returned by
-	// the Inserter. This is only possible when using Returning().
-	IteratorContext(ctx context.Context) Iterator
-
-	// Amend lets you alter the query's text just before sending it to the
-	// database server.
-	Amend(func(queryIn string) (queryOut string)) Inserter
-
-	// Batch provides a BatchInserter that can be used to insert many elements at
-	// once by issuing several calls to Values(). It accepts a size parameter
-	// which defines the batch size. If size is < 1, the batch size is set to 1.
-	Batch(size int) BatchInserter
-
-	// SQLExecer provides the Exec method.
-	SQLExecer
-
-	// SQLPreparer provides methods for creating prepared statements.
-	SQLPreparer
-
-	// SQLGetter provides methods to return query results from INSERT statements
-	// that support such a feature (e.g.: queries with Returning).
-	SQLGetter
-
-	// fmt.Stringer provides `String() string`, you can use `String()` to compile
-	// the `Inserter` into a string.
-	fmt.Stringer
-}
-
-// Deleter represents a DELETE statement.
-type Deleter interface {
-	// Where represents the WHERE clause.
-	//
-	// See Selector.Where for documentation and usage examples.
-	Where(...interface{}) Deleter
-
-	// And appends more constraints to the WHERE clause without overwriting
-	// conditions that have been already set.
-	And(conds ...interface{}) Deleter
-
-	// Limit represents the LIMIT clause.
-	//
-	// See Selector.Limit for documentation and usage examples.
-	Limit(int) Deleter
-
-	// Amend lets you alter the query's text just before sending it to the
-	// database server.
-	Amend(func(queryIn string) (queryOut string)) Deleter
-
-	// SQLPreparer provides methods for creating prepared statements.
-	SQLPreparer
-
-	// SQLExecer provides the Exec method.
-	SQLExecer
-
-	// fmt.Stringer provides `String() string`, you can use `String()` to compile
-	// the `Deleter` into a string.
-	fmt.Stringer
-
-	// Arguments returns the arguments that are prepared for this query.
-	Arguments() []interface{}
-}
-
-// Updater represents an UPDATE statement.
-type Updater interface { - // Set represents the SET clause. - Set(...interface{}) Updater - - // Where represents the WHERE clause. - // - // See Selector.Where for documentation and usage examples. - Where(...interface{}) Updater - - // And appends more constraints to the WHERE clause without overwriting - // conditions that have been already set. - And(conds ...interface{}) Updater - - // Limit represents the LIMIT parameter. - // - // See Selector.Limit for documentation and usage examples. - Limit(int) Updater - - // SQLPreparer provides methods for creating prepared statements. - SQLPreparer - - // SQLExecer provides the Exec method. - SQLExecer - - // fmt.Stringer provides `String() string`, you can use `String()` to compile - // the `Updater` into a string. - fmt.Stringer - - // Arguments returns the arguments that are prepared for this query. - Arguments() []interface{} - - // Amend lets you alter the query's text just before sending it to the - // database server. - Amend(func(queryIn string) (queryOut string)) Updater -} - -// Paginator provides tools for splitting the results of a query into chunks -// containing a fixed number of items. -type Paginator interface { - // Page sets the page number. - Page(uint) Paginator - - // Cursor defines the column that is going to be taken as the basis for - // cursor-based pagination. - // - // Example: - // - // a = q.Paginate(10).Cursor("id") - // b = q.Paginate(12).Cursor("-id") - // - // You can set "" as cursorColumn to disable cursors. - Cursor(cursorColumn string) Paginator - - // NextPage returns the next page according to the cursor. It expects a - // cursorValue, which is the value the cursor column has on the last item of - // the current result set (lower bound). - // - // Example: - // - // p = q.NextPage(items[len(items)-1].ID) - NextPage(cursorValue interface{}) Paginator - - // PrevPage returns the previous page according to the cursor. 
It expects a - // cursorValue, which is the value the cursor column has on the first item of - // the current result set (upper bound). - // - // Example: - // - // p = q.PrevPage(items[0].ID) - PrevPage(cursorValue interface{}) Paginator - - // TotalPages returns the total number of pages in the query. - TotalPages() (uint, error) - - // TotalEntries returns the total number of entries in the query. - TotalEntries() (uint64, error) - - // SQLPreparer provides methods for creating prepared statements. - SQLPreparer - - // SQLGetter provides methods to compile and execute a query that returns - // results. - SQLGetter - - // Iterator provides methods to iterate over the results returned by the - // Selector. - Iterator() Iterator - - // IteratorContext provides methods to iterate over the results returned by - // the Selector. - IteratorContext(ctx context.Context) Iterator - - // ResultMapper provides methods to retrieve and map results. - ResultMapper - - // fmt.Stringer provides `String() string`, you can use `String()` to compile - // the `Paginator` into a string. - fmt.Stringer - - // Arguments returns the arguments that are prepared for this query. - Arguments() []interface{} -} - -// ResultMapper defines methods for a result mapper. -type ResultMapper interface { - // All dumps all the results into the given slice; All() expects a pointer to - // a slice of maps or structs. - // - // The behaviour of One() extends to each one of the results. - All(destSlice interface{}) error - - // One maps the row that is in the current query cursor into the - // given interface, which can be a pointer to either a map or a - // struct. - // - // If dest is a pointer to map, each one of the columns will create a new map - // key and the values of the result will be set as values for the keys. - // - // Depending on the type of map key and value, the results columns and values - // may need to be transformed. 
- // - // If dest is a pointer to struct, each one of the fields will be tested for - // a `db` tag which defines the column mapping. The value of the result will - // be set as the value of the field. - One(dest interface{}) error -} - -// BatchInserter provides an interface to do massive insertions in batches. -type BatchInserter interface { - // Values pushes column values to be inserted as part of the batch. - Values(...interface{}) BatchInserter - - // NextResult dumps the next slice of results to dst, which can mean having - // the IDs of all inserted elements in the batch. - NextResult(dst interface{}) bool - - // Done signals that no more elements are going to be added. - Done() - - // Wait blocks until the whole batch is executed. - Wait() error - - // Err returns the last error that happened while executing the batch (or nil - // if no error happened). - Err() error -} diff --git a/vendor/github.com/upper/db/v4/collection.go b/vendor/github.com/upper/db/v4/collection.go deleted file mode 100644 index 2957e419..00000000 --- a/vendor/github.com/upper/db/v4/collection.go +++ /dev/null @@ -1,66 +0,0 @@ -// Copyright (c) 2012-present The upper.io/db authors. All rights reserved. -// -// Permission is hereby granted, free of charge, to any person obtaining -// a copy of this software and associated documentation files (the -// "Software"), to deal in the Software without restriction, including -// without limitation the rights to use, copy, modify, merge, publish, -// distribute, sublicense, and/or sell copies of the Software, and to -// permit persons to whom the Software is furnished to do so, subject to -// the following conditions: -// -// The above copyright notice and this permission notice shall be -// included in all copies or substantial portions of the Software. 
-// -// THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, -// EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF -// MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND -// NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE -// LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION -// OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION -// WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE. - -package db - -// Collection defines methods to work with database tables or collections. -type Collection interface { - - // Name returns the name of the collection. - Name() string - - // Session returns the Session that was used to create the collection - // reference. - Session() Session - - // Find defines a new result set. - Find(...interface{}) Result - - Count() (uint64, error) - - // Insert inserts a new item into the collection, the type of this item could - // be a map, a struct or pointer to either of them. If the call succeeds and - // if the collection has a primary key, Insert returns the ID of the newly - // added element as an `interface{}`. The underlying type of this ID depends - // on both the database adapter and the column storing the ID. The ID - // returned by Insert() could be passed directly to Find() to retrieve the - // newly added element. - Insert(interface{}) (InsertResult, error) - - // InsertReturning is like Insert() but it takes a pointer to map or struct - // and, if the operation succeeds, updates it with data from the newly - // inserted row. If the database does not support transactions, this method - // returns db.ErrUnsupported. - InsertReturning(interface{}) error - - // UpdateReturning takes a pointer to a map or struct and tries to update the - // row the item is referring to. If the element is updated successfully, - // UpdateReturning will fetch the row and update the fields of the passed - // item. 
If the database does not support transactions, this method returns - // db.ErrUnsupported. - UpdateReturning(interface{}) error - - // Exists returns true if the collection exists, false otherwise. - Exists() (bool, error) - - // Truncate removes all elements from the collection. - Truncate() error -} diff --git a/vendor/github.com/upper/db/v4/comparison.go b/vendor/github.com/upper/db/v4/comparison.go deleted file mode 100644 index 64ba9913..00000000 --- a/vendor/github.com/upper/db/v4/comparison.go +++ /dev/null @@ -1,179 +0,0 @@ -// Copyright (c) 2012-present The upper.io/db authors. All rights reserved. -// -// Permission is hereby granted, free of charge, to any person obtaining -// a copy of this software and associated documentation files (the -// "Software"), to deal in the Software without restriction, including -// without limitation the rights to use, copy, modify, merge, publish, -// distribute, sublicense, and/or sell copies of the Software, and to -// permit persons to whom the Software is furnished to do so, subject to -// the following conditions: -// -// The above copyright notice and this permission notice shall be -// included in all copies or substantial portions of the Software. -// -// THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, -// EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF -// MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND -// NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE -// LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION -// OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION -// WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE. - -package db - -import ( - "reflect" - "time" - - "github.com/upper/db/v4/internal/adapter" -) - -// Comparison represents a relationship between values. -type Comparison struct { - *adapter.Comparison -} - -// Gte is a comparison that means: is greater than or equal to value. 
-func Gte(value interface{}) *Comparison { - return &Comparison{adapter.NewComparisonOperator(adapter.ComparisonOperatorGreaterThanOrEqualTo, value)} -} - -// Lte is a comparison that means: is less than or equal to value. -func Lte(value interface{}) *Comparison { - return &Comparison{adapter.NewComparisonOperator(adapter.ComparisonOperatorLessThanOrEqualTo, value)} -} - -// Eq is a comparison that means: is equal to value. -func Eq(value interface{}) *Comparison { - return &Comparison{adapter.NewComparisonOperator(adapter.ComparisonOperatorEqual, value)} -} - -// NotEq is a comparison that means: is not equal to value. -func NotEq(value interface{}) *Comparison { - return &Comparison{adapter.NewComparisonOperator(adapter.ComparisonOperatorNotEqual, value)} -} - -// Gt is a comparison that means: is greater than value. -func Gt(value interface{}) *Comparison { - return &Comparison{adapter.NewComparisonOperator(adapter.ComparisonOperatorGreaterThan, value)} -} - -// Lt is a comparison that means: is less than value. -func Lt(value interface{}) *Comparison { - return &Comparison{adapter.NewComparisonOperator(adapter.ComparisonOperatorLessThan, value)} -} - -// In is a comparison that means: is any of the values. -func In(value ...interface{}) *Comparison { - return &Comparison{adapter.NewComparisonOperator(adapter.ComparisonOperatorIn, toInterfaceArray(value))} -} - -// AnyOf is a comparison that means: is any of the values of the slice. -func AnyOf(value interface{}) *Comparison { - return &Comparison{adapter.NewComparisonOperator(adapter.ComparisonOperatorIn, toInterfaceArray(value))} -} - -// NotIn is a comparison that means: is none of the values. -func NotIn(value ...interface{}) *Comparison { - return &Comparison{adapter.NewComparisonOperator(adapter.ComparisonOperatorNotIn, toInterfaceArray(value))} -} - -// NotAnyOf is a comparison that means: is none of the values of the slice. 
-func NotAnyOf(value interface{}) *Comparison { - return &Comparison{adapter.NewComparisonOperator(adapter.ComparisonOperatorNotIn, toInterfaceArray(value))} -} - -// After is a comparison that means: is after the (time.Time) value. -func After(value time.Time) *Comparison { - return &Comparison{adapter.NewComparisonOperator(adapter.ComparisonOperatorGreaterThan, value)} -} - -// Before is a comparison that means: is before the (time.Time) value. -func Before(value time.Time) *Comparison { - return &Comparison{adapter.NewComparisonOperator(adapter.ComparisonOperatorLessThan, value)} -} - -// OnOrAfter is a comparison that means: is on or after the (time.Time) value. -func OnOrAfter(value time.Time) *Comparison { - return &Comparison{adapter.NewComparisonOperator(adapter.ComparisonOperatorGreaterThanOrEqualTo, value)} -} - -// OnOrBefore is a comparison that means: is on or before the (time.Time) value. -func OnOrBefore(value time.Time) *Comparison { - return &Comparison{adapter.NewComparisonOperator(adapter.ComparisonOperatorLessThanOrEqualTo, value)} -} - -// Between is a comparison that means: is between lowerBound and upperBound. -func Between(lowerBound interface{}, upperBound interface{}) *Comparison { - return &Comparison{adapter.NewComparisonOperator(adapter.ComparisonOperatorBetween, []interface{}{lowerBound, upperBound})} -} - -// NotBetween is a comparison that means: is not between lowerBound and upperBound. -func NotBetween(lowerBound interface{}, upperBound interface{}) *Comparison { - return &Comparison{adapter.NewComparisonOperator(adapter.ComparisonOperatorNotBetween, []interface{}{lowerBound, upperBound})} -} - -// Is is a comparison that means: is equivalent to nil, true or false. -func Is(value interface{}) *Comparison { - return &Comparison{adapter.NewComparisonOperator(adapter.ComparisonOperatorIs, value)} -} - -// IsNot is a comparison that means: is not equivalent to nil, true nor false. 
-func IsNot(value interface{}) *Comparison { - return &Comparison{adapter.NewComparisonOperator(adapter.ComparisonOperatorIsNot, value)} -} - -// IsNull is a comparison that means: is equivalent to nil. -func IsNull() *Comparison { - return &Comparison{adapter.NewComparisonOperator(adapter.ComparisonOperatorIs, nil)} -} - -// IsNotNull is a comparison that means: is not equivalent to nil. -func IsNotNull() *Comparison { - return &Comparison{adapter.NewComparisonOperator(adapter.ComparisonOperatorIsNot, nil)} -} - -// Like is a comparison that checks whether the reference matches the wildcard -// value. -func Like(value string) *Comparison { - return &Comparison{adapter.NewComparisonOperator(adapter.ComparisonOperatorLike, value)} -} - -// NotLike is a comparison that checks whether the reference does not match the -// wildcard value. -func NotLike(value string) *Comparison { - return &Comparison{adapter.NewComparisonOperator(adapter.ComparisonOperatorNotLike, value)} -} - -// RegExp is a comparison that checks whether the reference matches the regular -// expression. -func RegExp(value string) *Comparison { - return &Comparison{adapter.NewComparisonOperator(adapter.ComparisonOperatorRegExp, value)} -} - -// NotRegExp is a comparison that checks whether the reference does not match -// the regular expression. -func NotRegExp(value string) *Comparison { - return &Comparison{adapter.NewComparisonOperator(adapter.ComparisonOperatorNotRegExp, value)} -} - -// Op returns a custom comparison operator. 
-func Op(customOperator string, value interface{}) *Comparison { - return &Comparison{adapter.NewCustomComparisonOperator(customOperator, value)} -} - -func toInterfaceArray(value interface{}) []interface{} { - rv := reflect.ValueOf(value) - switch rv.Type().Kind() { - case reflect.Ptr: - return toInterfaceArray(rv.Elem().Interface()) - case reflect.Slice: - elems := rv.Len() - args := make([]interface{}, elems) - for i := 0; i < elems; i++ { - args[i] = rv.Index(i).Interface() - } - return args - } - return []interface{}{value} -} diff --git a/vendor/github.com/upper/db/v4/cond.go b/vendor/github.com/upper/db/v4/cond.go deleted file mode 100644 index cd8c070f..00000000 --- a/vendor/github.com/upper/db/v4/cond.go +++ /dev/null @@ -1,130 +0,0 @@ -// Copyright (c) 2012-present The upper.io/db authors. All rights reserved. -// -// Permission is hereby granted, free of charge, to any person obtaining -// a copy of this software and associated documentation files (the -// "Software"), to deal in the Software without restriction, including -// without limitation the rights to use, copy, modify, merge, publish, -// distribute, sublicense, and/or sell copies of the Software, and to -// permit persons to whom the Software is furnished to do so, subject to -// the following conditions: -// -// The above copyright notice and this permission notice shall be -// included in all copies or substantial portions of the Software. -// -// THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, -// EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF -// MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND -// NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE -// LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION -// OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION -// WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE. 
- -package db - -import ( - "fmt" - "sort" - - "github.com/upper/db/v4/internal/adapter" -) - -// LogicalExpr represents an expression to be used in logical statements. -type LogicalExpr = adapter.LogicalExpr - -// LogicalOperator represents a logical operation. -type LogicalOperator = adapter.LogicalOperator - -// Cond is a map that defines conditions for a query. -// -// Each entry of the map represents a condition (a column-value relation bound -// by a comparison Operator). The comparison can be specified after the column -// name; if no comparison operator is provided, the equality operator is used as -// the default. -// -// Examples: -// -// // Age equals 18. -// db.Cond{"age": 18} -// -// // Age is greater than or equal to 18. -// db.Cond{"age >=": 18} -// -// // id is any of the values 1, 2 or 3. -// db.Cond{"id IN": []int{1, 2, 3}} -// -// // Age is lower than 18 (MongoDB syntax) -// db.Cond{"age $lt": 18} -// -// // age > 32 and age < 35 -// db.Cond{"age >": 32, "age <": 35} -type Cond map[interface{}]interface{} - -// Empty returns true if there are no conditions. -func (c Cond) Empty() bool { - for range c { - return false - } - return true -} - -// Constraints returns each one of the Cond map entries as a constraint. -func (c Cond) Constraints() []adapter.Constraint { - z := make([]adapter.Constraint, 0, len(c)) - for _, k := range c.keys() { - z = append(z, adapter.NewConstraint(k, c[k])) - } - return z -} - -// Operator returns the default logical operator. -func (c Cond) Operator() LogicalOperator { - return adapter.DefaultLogicalOperator -} - -func (c Cond) keys() []interface{} { - keys := make(condKeys, 0, len(c)) - for k := range c { - keys = append(keys, k) - } - if len(c) > 1 { - sort.Sort(keys) - } - return keys -} - -// Expressions returns all the expressions contained in the condition. 
-func (c Cond) Expressions() []LogicalExpr { - z := make([]LogicalExpr, 0, len(c)) - for _, k := range c.keys() { - z = append(z, Cond{k: c[k]}) - } - return z -} - -type condKeys []interface{} - -func (ck condKeys) Len() int { - return len(ck) -} - -func (ck condKeys) Less(i, j int) bool { - return fmt.Sprintf("%v", ck[i]) < fmt.Sprintf("%v", ck[j]) -} - -func (ck condKeys) Swap(i, j int) { - ck[i], ck[j] = ck[j], ck[i] -} - -func defaultJoin(in ...adapter.LogicalExpr) []adapter.LogicalExpr { - for i := range in { - cond, ok := in[i].(Cond) - if ok && !cond.Empty() { - in[i] = And(cond) - } - } - return in -} - -var ( - _ = LogicalExpr(Cond{}) -) diff --git a/vendor/github.com/upper/db/v4/connection_url.go b/vendor/github.com/upper/db/v4/connection_url.go deleted file mode 100644 index 8dc68231..00000000 --- a/vendor/github.com/upper/db/v4/connection_url.go +++ /dev/null @@ -1,29 +0,0 @@ -// Copyright (c) 2012-present The upper.io/db authors. All rights reserved. -// -// Permission is hereby granted, free of charge, to any person obtaining -// a copy of this software and associated documentation files (the -// "Software"), to deal in the Software without restriction, including -// without limitation the rights to use, copy, modify, merge, publish, -// distribute, sublicense, and/or sell copies of the Software, and to -// permit persons to whom the Software is furnished to do so, subject to -// the following conditions: -// -// The above copyright notice and this permission notice shall be -// included in all copies or substantial portions of the Software. -// -// THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, -// EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF -// MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND -// NONINFRINGEMENT. 
IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE -// LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION -// OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION -// WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE. - -package db - -// ConnectionURL represents a data source name (DSN). -type ConnectionURL interface { - // String returns the connection string that is going to be passed to the - // adapter. - String() string -} diff --git a/vendor/github.com/upper/db/v4/db.go b/vendor/github.com/upper/db/v4/db.go deleted file mode 100644 index dc882b74..00000000 --- a/vendor/github.com/upper/db/v4/db.go +++ /dev/null @@ -1,71 +0,0 @@ -// Copyright (c) 2012-present The upper.io/db authors. All rights reserved. -// -// Permission is hereby granted, free of charge, to any person obtaining -// a copy of this software and associated documentation files (the -// "Software"), to deal in the Software without restriction, including -// without limitation the rights to use, copy, modify, merge, publish, -// distribute, sublicense, and/or sell copies of the Software, and to -// permit persons to whom the Software is furnished to do so, subject to -// the following conditions: -// -// The above copyright notice and this permission notice shall be -// included in all copies or substantial portions of the Software. -// -// THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, -// EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF -// MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND -// NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE -// LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION -// OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION -// WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE. - -// Package db (or upper/db) provides an agnostic data access layer to work with -// different databases. 
-// -// Install upper/db: -// -// go get github.com/upper/db/v4 -// -// Usage -// -// package main -// -// import ( -// "log" -// -// "github.com/upper/db/v4/adapter/postgresql" // Imports the postgresql adapter. -// ) -// -// var settings = postgresql.ConnectionURL{ -// Database: `booktown`, -// Host: `demo.upper.io`, -// User: `demouser`, -// Password: `demop4ss`, -// } -// -// // Book represents a book. -// type Book struct { -// ID uint `db:"id"` -// Title string `db:"title"` -// AuthorID uint `db:"author_id"` -// SubjectID uint `db:"subject_id"` -// } -// -// func main() { -// sess, err := postgresql.Open(settings) -// if err != nil { -// log.Fatal(err) -// } -// defer sess.Close() -// -// var books []Book -// if err := sess.Collection("books").Find().OrderBy("title").All(&books); err != nil { -// log.Fatal(err) -// } -// -// log.Println("Books:") -// for _, book := range books { -// log.Printf("%q (ID: %d)\n", book.Title, book.ID) -// } -// } -package db diff --git a/vendor/github.com/upper/db/v4/errors.go b/vendor/github.com/upper/db/v4/errors.go deleted file mode 100644 index ff4a6be4..00000000 --- a/vendor/github.com/upper/db/v4/errors.go +++ /dev/null @@ -1,63 +0,0 @@ -// Copyright (c) 2012-present The upper.io/db authors. All rights reserved. -// -// Permission is hereby granted, free of charge, to any person obtaining -// a copy of this software and associated documentation files (the -// "Software"), to deal in the Software without restriction, including -// without limitation the rights to use, copy, modify, merge, publish, -// distribute, sublicense, and/or sell copies of the Software, and to -// permit persons to whom the Software is furnished to do so, subject to -// the following conditions: -// -// The above copyright notice and this permission notice shall be -// included in all copies or substantial portions of the Software. 
-// -// THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, -// EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF -// MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND -// NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE -// LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION -// OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION -// WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE. - -package db - -import ( - "errors" -) - -// Error messages -var ( - ErrMissingAdapter = errors.New(`upper: missing adapter`) - ErrAlreadyWithinTransaction = errors.New(`upper: already within a transaction`) - ErrCollectionDoesNotExist = errors.New(`upper: collection does not exist`) - ErrExpectingNonNilModel = errors.New(`upper: expecting non nil model`) - ErrExpectingPointerToStruct = errors.New(`upper: expecting pointer to struct`) - ErrGivingUpTryingToConnect = errors.New(`upper: giving up trying to connect: too many clients`) - ErrInvalidCollection = errors.New(`upper: invalid collection`) - ErrMissingCollectionName = errors.New(`upper: missing collection name`) - ErrMissingConditions = errors.New(`upper: missing selector conditions`) - ErrMissingConnURL = errors.New(`upper: missing DSN`) - ErrMissingDatabaseName = errors.New(`upper: missing database name`) - ErrNoMoreRows = errors.New(`upper: no more rows in this result set`) - ErrNotConnected = errors.New(`upper: not connected to a database`) - ErrNotImplemented = errors.New(`upper: call not implemented`) - ErrQueryIsPending = errors.New(`upper: can't execute this instruction while the result set is still open`) - ErrQueryLimitParam = errors.New(`upper: a query can accept only one limit parameter`) - ErrQueryOffsetParam = errors.New(`upper: a query can accept only one offset parameter`) - ErrQuerySortParam = errors.New(`upper: a query can accept only one order-by parameter`) - ErrSockerOrHost = errors.New(`upper: you may 
connect either to a UNIX socket or a TCP address, but not both`) - ErrTooManyClients = errors.New(`upper: can't connect to database server: too many clients`) - ErrUndefined = errors.New(`upper: value is undefined`) - ErrUnknownConditionType = errors.New(`upper: arguments of type %T can't be used as constraints`) - ErrUnsupported = errors.New(`upper: action is not supported by the DBMS`) - ErrUnsupportedDestination = errors.New(`upper: unsupported destination type`) - ErrUnsupportedType = errors.New(`upper: type does not support marshaling`) - ErrUnsupportedValue = errors.New(`upper: value does not support unmarshaling`) - ErrNilRecord = errors.New(`upper: invalid item (nil)`) - ErrRecordIDIsZero = errors.New(`upper: item ID is not defined`) - ErrMissingPrimaryKeys = errors.New(`upper: collection %q has no primary keys`) - ErrWarnSlowQuery = errors.New(`upper: slow query`) - ErrTransactionAborted = errors.New(`upper: transaction was aborted`) - ErrNotWithinTransaction = errors.New(`upper: not within transaction`) - ErrNotSupportedByAdapter = errors.New(`upper: not supported by adapter`) -) diff --git a/vendor/github.com/upper/db/v4/function.go b/vendor/github.com/upper/db/v4/function.go deleted file mode 100644 index d0a11f9d..00000000 --- a/vendor/github.com/upper/db/v4/function.go +++ /dev/null @@ -1,48 +0,0 @@ -// Copyright (c) 2012-present The upper.io/db authors. All rights reserved. 
-// -// Permission is hereby granted, free of charge, to any person obtaining -// a copy of this software and associated documentation files (the -// "Software"), to deal in the Software without restriction, including -// without limitation the rights to use, copy, modify, merge, publish, -// distribute, sublicense, and/or sell copies of the Software, and to -// permit persons to whom the Software is furnished to do so, subject to -// the following conditions: -// -// The above copyright notice and this permission notice shall be -// included in all copies or substantial portions of the Software. -// -// THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, -// EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF -// MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND -// NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE -// LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION -// OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION -// WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE. - -package db - -import ( - "github.com/upper/db/v4/internal/adapter" -) - -// FuncExpr represents functions. -type FuncExpr = adapter.FuncExpr - -// Func returns a database function expression. -// -// Examples: -// -// // MOD(29, 9) -// db.Func("MOD", 29, 9) -// -// // CONCAT("foo", "bar") -// db.Func("CONCAT", "foo", "bar") -// -// // NOW() -// db.Func("NOW") -// -// // RTRIM("Hello ") -// db.Func("RTRIM", "Hello ") -func Func(name string, args ...interface{}) *FuncExpr { - return adapter.NewFuncExpr(name, args) -} diff --git a/vendor/github.com/upper/db/v4/internal/adapter/comparison.go b/vendor/github.com/upper/db/v4/internal/adapter/comparison.go deleted file mode 100644 index 1f63a204..00000000 --- a/vendor/github.com/upper/db/v4/internal/adapter/comparison.go +++ /dev/null @@ -1,60 +0,0 @@ -package adapter - -// ComparisonOperator is the base type for comparison operators. 
-type ComparisonOperator uint8 - -// Comparison operators -const ( - ComparisonOperatorNone ComparisonOperator = iota - ComparisonOperatorCustom - - ComparisonOperatorEqual - ComparisonOperatorNotEqual - - ComparisonOperatorLessThan - ComparisonOperatorGreaterThan - - ComparisonOperatorLessThanOrEqualTo - ComparisonOperatorGreaterThanOrEqualTo - - ComparisonOperatorBetween - ComparisonOperatorNotBetween - - ComparisonOperatorIn - ComparisonOperatorNotIn - - ComparisonOperatorIs - ComparisonOperatorIsNot - - ComparisonOperatorLike - ComparisonOperatorNotLike - - ComparisonOperatorRegExp - ComparisonOperatorNotRegExp -) - -type Comparison struct { - t ComparisonOperator - op string - v interface{} -} - -func (c *Comparison) CustomOperator() string { - return c.op -} - -func (c *Comparison) Operator() ComparisonOperator { - return c.t -} - -func (c *Comparison) Value() interface{} { - return c.v -} - -func NewComparisonOperator(t ComparisonOperator, v interface{}) *Comparison { - return &Comparison{t: t, v: v} -} - -func NewCustomComparisonOperator(op string, v interface{}) *Comparison { - return &Comparison{t: ComparisonOperatorCustom, op: op, v: v} -} diff --git a/vendor/github.com/upper/db/v4/internal/adapter/constraint.go b/vendor/github.com/upper/db/v4/internal/adapter/constraint.go deleted file mode 100644 index 8c44914b..00000000 --- a/vendor/github.com/upper/db/v4/internal/adapter/constraint.go +++ /dev/null @@ -1,72 +0,0 @@ -// Copyright (c) 2012-present The upper.io/db authors. All rights reserved. 
-// -// Permission is hereby granted, free of charge, to any person obtaining -// a copy of this software and associated documentation files (the -// "Software"), to deal in the Software without restriction, including -// without limitation the rights to use, copy, modify, merge, publish, -// distribute, sublicense, and/or sell copies of the Software, and to -// permit persons to whom the Software is furnished to do so, subject to -// the following conditions: -// -// The above copyright notice and this permission notice shall be -// included in all copies or substantial portions of the Software. -// -// THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, -// EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF -// MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND -// NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE -// LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION -// OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION -// WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE. - -package adapter - -// ConstraintValuer allows constraints to use specific values of their own. -type ConstraintValuer interface { - ConstraintValue() interface{} -} - -// Constraint interface represents a single condition, like "a = 1", where `a` -// is the key and `1` is the value. This is an exported interface but it's -// rarely used directly; you may want to use the `db.Cond{}` map instead. -type Constraint interface { - // Key is the leftmost part of the constraint and usually contains a column - // name. - Key() interface{} - - // Value is the rightmost part of the constraint and usually contains a - // column value. - Value() interface{} -} - -// Constraints interface represents an array of constraints, like "a = 1, b = -// 2, c = 3". -type Constraints interface { - // Constraints returns an array of constraints. 
-	Constraints() []Constraint
-}
-
-type constraint struct {
-	k interface{}
-	v interface{}
-}
-
-func (c constraint) Key() interface{} {
-	return c.k
-}
-
-func (c constraint) Value() interface{} {
-	if constraintValuer, ok := c.v.(ConstraintValuer); ok {
-		return constraintValuer.ConstraintValue()
-	}
-	return c.v
-}
-
-// NewConstraint creates a constraint.
-func NewConstraint(key interface{}, value interface{}) Constraint {
-	return &constraint{k: key, v: value}
-}
-
-var (
-	_ = Constraint(&constraint{})
-)
diff --git a/vendor/github.com/upper/db/v4/internal/adapter/func.go b/vendor/github.com/upper/db/v4/internal/adapter/func.go
deleted file mode 100644
index f5654ef2..00000000
--- a/vendor/github.com/upper/db/v4/internal/adapter/func.go
+++ /dev/null
@@ -1,39 +0,0 @@
-// Copyright (c) 2012-present The upper.io/db authors. All rights reserved.
-//
-// Permission is hereby granted, free of charge, to any person obtaining
-// a copy of this software and associated documentation files (the
-// "Software"), to deal in the Software without restriction, including
-// without limitation the rights to use, copy, modify, merge, publish,
-// distribute, sublicense, and/or sell copies of the Software, and to
-// permit persons to whom the Software is furnished to do so, subject to
-// the following conditions:
-//
-// The above copyright notice and this permission notice shall be
-// included in all copies or substantial portions of the Software.
-//
-// THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND,
-// EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF
-// MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND
-// NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE
-// LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION
-// OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION
-// WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.
-
-package adapter
-
-type FuncExpr struct {
-	name string
-	args []interface{}
-}
-
-func (f *FuncExpr) Arguments() []interface{} {
-	return f.args
-}
-
-func (f *FuncExpr) Name() string {
-	return f.name
-}
-
-func NewFuncExpr(name string, args []interface{}) *FuncExpr {
-	return &FuncExpr{name: name, args: args}
-}
diff --git a/vendor/github.com/upper/db/v4/internal/adapter/logical_expr.go b/vendor/github.com/upper/db/v4/internal/adapter/logical_expr.go
deleted file mode 100644
index 30f898c2..00000000
--- a/vendor/github.com/upper/db/v4/internal/adapter/logical_expr.go
+++ /dev/null
@@ -1,123 +0,0 @@
-// Copyright (c) 2012-present The upper.io/db authors. All rights reserved.
-//
-// Permission is hereby granted, free of charge, to any person obtaining
-// a copy of this software and associated documentation files (the
-// "Software"), to deal in the Software without restriction, including
-// without limitation the rights to use, copy, modify, merge, publish,
-// distribute, sublicense, and/or sell copies of the Software, and to
-// permit persons to whom the Software is furnished to do so, subject to
-// the following conditions:
-//
-// The above copyright notice and this permission notice shall be
-// included in all copies or substantial portions of the Software.
-//
-// THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND,
-// EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF
-// MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND
-// NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE
-// LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION
-// OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION
-// WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.
-
-package adapter
-
-import (
-	"github.com/upper/db/v4/internal/immutable"
-)
-
-// LogicalExpr represents a group formed by one or more sentences joined by
-// an Operator like "AND" or "OR".
-type LogicalExpr interface {
-	// Expressions returns child sentences.
-	Expressions() []LogicalExpr
-
-	// Operator returns the Operator that joins all the sentences in the group.
-	Operator() LogicalOperator
-
-	// Empty returns true if the compound has zero children, false otherwise.
-	Empty() bool
-}
-
-// LogicalOperator represents the operation on a compound statement.
-type LogicalOperator uint
-
-// LogicalExpr Operators.
-const (
-	LogicalOperatorNone LogicalOperator = iota
-	LogicalOperatorAnd
-	LogicalOperatorOr
-)
-
-const DefaultLogicalOperator = LogicalOperatorAnd
-
-type LogicalExprGroup struct {
-	op LogicalOperator
-
-	prev *LogicalExprGroup
-	fn   func(*[]LogicalExpr) error
-}
-
-func NewLogicalExprGroup(op LogicalOperator, conds ...LogicalExpr) *LogicalExprGroup {
-	group := &LogicalExprGroup{op: op}
-	if len(conds) == 0 {
-		return group
-	}
-	return group.Frame(func(in *[]LogicalExpr) error {
-		*in = append(*in, conds...)
-		return nil
-	})
-}
-
-// Expressions returns each one of the conditions as a compound.
-func (g *LogicalExprGroup) Expressions() []LogicalExpr {
-	conds, err := immutable.FastForward(g)
-	if err == nil {
-		return *(conds.(*[]LogicalExpr))
-	}
-	return nil
-}
-
-// Operator is undefined for a logical group.
-func (g *LogicalExprGroup) Operator() LogicalOperator {
-	if g.op == LogicalOperatorNone {
-		panic("operator is not defined")
-	}
-	return g.op
-}
-
-// Empty returns true if this condition has no elements. False otherwise.
-func (g *LogicalExprGroup) Empty() bool {
-	if g.fn != nil {
-		return false
-	}
-	if g.prev != nil {
-		return g.prev.Empty()
-	}
-	return true
-}
-
-func (g *LogicalExprGroup) Frame(fn func(*[]LogicalExpr) error) *LogicalExprGroup {
-	return &LogicalExprGroup{prev: g, op: g.op, fn: fn}
-}
-
-func (g *LogicalExprGroup) Prev() immutable.Immutable {
-	if g == nil {
-		return nil
-	}
-	return g.prev
-}
-
-func (g *LogicalExprGroup) Fn(in interface{}) error {
-	if g.fn == nil {
-		return nil
-	}
-	return g.fn(in.(*[]LogicalExpr))
-}
-
-func (g *LogicalExprGroup) Base() interface{} {
-	return &[]LogicalExpr{}
-}
-
-var (
-	_ = immutable.Immutable(&LogicalExprGroup{})
-)
diff --git a/vendor/github.com/upper/db/v4/internal/adapter/raw.go b/vendor/github.com/upper/db/v4/internal/adapter/raw.go
deleted file mode 100644
index 73e7551c..00000000
--- a/vendor/github.com/upper/db/v4/internal/adapter/raw.go
+++ /dev/null
@@ -1,70 +0,0 @@
-// Copyright (c) 2012-present The upper.io/db authors. All rights reserved.
-//
-// Permission is hereby granted, free of charge, to any person obtaining
-// a copy of this software and associated documentation files (the
-// "Software"), to deal in the Software without restriction, including
-// without limitation the rights to use, copy, modify, merge, publish,
-// distribute, sublicense, and/or sell copies of the Software, and to
-// permit persons to whom the Software is furnished to do so, subject to
-// the following conditions:
-//
-// The above copyright notice and this permission notice shall be
-// included in all copies or substantial portions of the Software.
-//
-// THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND,
-// EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF
-// MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND
-// NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE
-// LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION
-// OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION
-// WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.
-
-package adapter
-
-// RawExpr represents values that can bypass SQL filters. This is an
-// exported type, but it's rarely used directly; you may want to use the
-// `db.Raw()` function instead.
-type RawExpr struct {
-	value string
-	args  *[]interface{}
-}
-
-func (r *RawExpr) Arguments() []interface{} {
-	if r.args != nil {
-		return *r.args
-	}
-	return nil
-}
-
-func (r RawExpr) Raw() string {
-	return r.value
-}
-
-func (r RawExpr) String() string {
-	return r.Raw()
-}
-
-// Expressions returns a logical expression.
-func (r *RawExpr) Expressions() []LogicalExpr {
-	return []LogicalExpr{r}
-}
-
-// Operator returns the default compound operator.
-func (r RawExpr) Operator() LogicalOperator {
-	return LogicalOperatorNone
-}
-
-// Empty returns true if this struct has no value.
-func (r *RawExpr) Empty() bool {
-	return r.value == ""
-}
-
-func NewRawExpr(value string, args []interface{}) *RawExpr {
-	r := &RawExpr{value: value, args: nil}
-	if len(args) > 0 {
-		r.args = &args
-	}
-	return r
-}
-
-var _ = LogicalExpr(&RawExpr{})
diff --git a/vendor/github.com/upper/db/v4/internal/cache/cache.go b/vendor/github.com/upper/db/v4/internal/cache/cache.go
deleted file mode 100644
index 80dadac9..00000000
--- a/vendor/github.com/upper/db/v4/internal/cache/cache.go
+++ /dev/null
@@ -1,134 +0,0 @@
-// Copyright (c) 2014-present José Carlos Nieto, https://menteslibres.net/xiam
-//
-// Permission is hereby granted, free of charge, to any person obtaining
-// a copy of this software and associated documentation files (the
-// "Software"), to deal in the Software without restriction, including
-// without limitation the rights to use, copy, modify, merge, publish,
-// distribute, sublicense, and/or sell copies of the Software, and to
-// permit persons to whom the Software is furnished to do so, subject to
-// the following conditions:
-//
-// The above copyright notice and this permission notice shall be
-// included in all copies or substantial portions of the Software.
-//
-// THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND,
-// EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF
-// MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND
-// NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE
-// LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION
-// OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION
-// WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.
-
-package cache
-
-import (
-	"container/list"
-	"errors"
-	"sync"
-)
-
-const defaultCapacity = 128
-
-// Cache holds a map of volatile key -> values.
-type Cache struct {
-	keys     *list.List
-	items    map[uint64]*list.Element
-	mu       sync.RWMutex
-	capacity int
-}
-
-type cacheItem struct {
-	key   uint64
-	value interface{}
-}
-
-// NewCacheWithCapacity initializes a new caching space with the given
-// capacity.
-func NewCacheWithCapacity(capacity int) (*Cache, error) {
-	if capacity < 1 {
-		return nil, errors.New("Capacity must be greater than zero.")
-	}
-	c := &Cache{
-		capacity: capacity,
-	}
-	c.init()
-	return c, nil
-}
-
-// NewCache initializes a new caching space with default settings.
-func NewCache() *Cache {
-	c, err := NewCacheWithCapacity(defaultCapacity)
-	if err != nil {
-		panic(err.Error()) // Should never happen as we're not providing a negative defaultCapacity.
-	}
-	return c
-}
-
-func (c *Cache) init() {
-	c.items = make(map[uint64]*list.Element)
-	c.keys = list.New()
-}
-
-// Read attempts to retrieve a cached value as a string; if the value does
-// not exist, it returns an empty string and false.
-func (c *Cache) Read(h Hashable) (string, bool) {
-	if v, ok := c.ReadRaw(h); ok {
-		if s, ok := v.(string); ok {
-			return s, true
-		}
-	}
-	return "", false
-}
-
-// ReadRaw attempts to retrieve a cached value as an interface{}; if the value
-// does not exist, it returns nil and false.
-func (c *Cache) ReadRaw(h Hashable) (interface{}, bool) {
-	c.mu.RLock()
-	defer c.mu.RUnlock()
-
-	item, ok := c.items[h.Hash()]
-	if ok {
-		return item.Value.(*cacheItem).value, true
-	}
-
-	return nil, false
-}
-
-// Write stores a value in memory. If the value already exists, it is overwritten.
-func (c *Cache) Write(h Hashable, value interface{}) {
-	c.mu.Lock()
-	defer c.mu.Unlock()
-
-	key := h.Hash()
-
-	if item, ok := c.items[key]; ok {
-		item.Value.(*cacheItem).value = value
-		c.keys.MoveToFront(item)
-		return
-	}
-
-	c.items[key] = c.keys.PushFront(&cacheItem{key, value})
-
-	for c.keys.Len() > c.capacity {
-		item := c.keys.Remove(c.keys.Back()).(*cacheItem)
-		delete(c.items, item.key)
-		if p, ok := item.value.(HasOnEvict); ok {
-			p.OnEvict()
-		}
-	}
-}
-
-// Clear generates a new memory space, leaving the old memory unreferenced, so
-// it can be claimed by the garbage collector.
-func (c *Cache) Clear() {
-	c.mu.Lock()
-	defer c.mu.Unlock()
-
-	for _, item := range c.items {
-		if p, ok := item.Value.(*cacheItem).value.(HasOnEvict); ok {
-			p.OnEvict()
-		}
-	}
-
-	c.init()
-}
diff --git a/vendor/github.com/upper/db/v4/internal/cache/hash.go b/vendor/github.com/upper/db/v4/internal/cache/hash.go
deleted file mode 100644
index 4b866a9d..00000000
--- a/vendor/github.com/upper/db/v4/internal/cache/hash.go
+++ /dev/null
@@ -1,109 +0,0 @@
-package cache
-
-import (
-	"fmt"
-
-	"github.com/segmentio/fasthash/fnv1a"
-)
-
-const (
-	hashTypeInt uint64 = 1 << iota
-	hashTypeSignedInt
-	hashTypeBool
-	hashTypeString
-	hashTypeHashable
-	hashTypeNil
-)
-
-type hasher struct {
-	t uint64
-	v interface{}
-}
-
-func (h *hasher) Hash() uint64 {
-	return NewHash(h.t, h.v)
-}
-
-func NewHashable(t uint64, v interface{}) Hashable {
-	return &hasher{t: t, v: v}
-}
-
-func InitHash(t uint64) uint64 {
-	return fnv1a.AddUint64(fnv1a.Init64, t)
-}
-
-func NewHash(t uint64, in ...interface{}) uint64 {
-	return AddToHash(InitHash(t), in...)
-}
-
-func AddToHash(h uint64, in ...interface{}) uint64 {
-	for i := range in {
-		if in[i] == nil {
-			continue
-		}
-		h = addToHash(h, in[i])
-	}
-	return h
-}
-
-func addToHash(h uint64, in interface{}) uint64 {
-	switch v := in.(type) {
-	case uint64:
-		return fnv1a.AddUint64(fnv1a.AddUint64(h, hashTypeInt), v)
-	case uint32:
-		return fnv1a.AddUint64(fnv1a.AddUint64(h, hashTypeInt), uint64(v))
-	case uint16:
-		return fnv1a.AddUint64(fnv1a.AddUint64(h, hashTypeInt), uint64(v))
-	case uint8:
-		return fnv1a.AddUint64(fnv1a.AddUint64(h, hashTypeInt), uint64(v))
-	case uint:
-		return fnv1a.AddUint64(fnv1a.AddUint64(h, hashTypeInt), uint64(v))
-	case int64:
-		if v < 0 {
-			return fnv1a.AddUint64(fnv1a.AddUint64(h, hashTypeSignedInt), uint64(-v))
-		} else {
-			return fnv1a.AddUint64(fnv1a.AddUint64(h, hashTypeInt), uint64(v))
-		}
-	case int32:
-		if v < 0 {
-			return fnv1a.AddUint64(fnv1a.AddUint64(h, hashTypeSignedInt), uint64(-v))
-		} else {
-			return fnv1a.AddUint64(fnv1a.AddUint64(h, hashTypeInt), uint64(v))
-		}
-	case int16:
-		if v < 0 {
-			return fnv1a.AddUint64(fnv1a.AddUint64(h, hashTypeSignedInt), uint64(-v))
-		} else {
-			return fnv1a.AddUint64(fnv1a.AddUint64(h, hashTypeInt), uint64(v))
-		}
-	case int8:
-		if v < 0 {
-			return fnv1a.AddUint64(fnv1a.AddUint64(h, hashTypeSignedInt), uint64(-v))
-		} else {
-			return fnv1a.AddUint64(fnv1a.AddUint64(h, hashTypeInt), uint64(v))
-		}
-	case int:
-		if v < 0 {
-			return fnv1a.AddUint64(fnv1a.AddUint64(h, hashTypeSignedInt), uint64(-v))
-		} else {
-			return fnv1a.AddUint64(fnv1a.AddUint64(h, hashTypeInt), uint64(v))
-		}
-	case bool:
-		if v {
-			return fnv1a.AddUint64(fnv1a.AddUint64(h, hashTypeBool), 1)
-		} else {
-			return fnv1a.AddUint64(fnv1a.AddUint64(h, hashTypeBool), 2)
-		}
-	case string:
-		return fnv1a.AddString64(fnv1a.AddUint64(h, hashTypeString), v)
-	case Hashable:
-		if in == nil {
-			panic(fmt.Sprintf("could not hash nil element %T", in))
-		}
-		return fnv1a.AddUint64(fnv1a.AddUint64(h, hashTypeHashable), v.Hash())
-	case nil:
-		return fnv1a.AddUint64(fnv1a.AddUint64(h, hashTypeNil), 0)
-	default:
-		panic(fmt.Sprintf("unsupported value type %T", in))
-	}
-}
diff --git a/vendor/github.com/upper/db/v4/internal/cache/interface.go b/vendor/github.com/upper/db/v4/internal/cache/interface.go
deleted file mode 100644
index c63246af..00000000
--- a/vendor/github.com/upper/db/v4/internal/cache/interface.go
+++ /dev/null
@@ -1,34 +0,0 @@
-// Copyright (c) 2014-present José Carlos Nieto, https://menteslibres.net/xiam
-//
-// Permission is hereby granted, free of charge, to any person obtaining
-// a copy of this software and associated documentation files (the
-// "Software"), to deal in the Software without restriction, including
-// without limitation the rights to use, copy, modify, merge, publish,
-// distribute, sublicense, and/or sell copies of the Software, and to
-// permit persons to whom the Software is furnished to do so, subject to
-// the following conditions:
-//
-// The above copyright notice and this permission notice shall be
-// included in all copies or substantial portions of the Software.
-//
-// THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND,
-// EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF
-// MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND
-// NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE
-// LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION
-// OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION
-// WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.
-
-package cache
-
-// Hashable types must implement a method that returns a key. This key will be
-// associated with a cached value.
-type Hashable interface {
-	Hash() uint64
-}
-
-// HasOnEvict type is (optionally) implemented by cache objects to clean after
-// themselves.
-type HasOnEvict interface {
-	OnEvict()
-}
diff --git a/vendor/github.com/upper/db/v4/internal/immutable/immutable.go b/vendor/github.com/upper/db/v4/internal/immutable/immutable.go
deleted file mode 100644
index 57d29ce2..00000000
--- a/vendor/github.com/upper/db/v4/internal/immutable/immutable.go
+++ /dev/null
@@ -1,28 +0,0 @@
-package immutable
-
-// Immutable represents an immutable chain that, if passed to FastForward,
-// applies Fn() to every element of a chain; the first element of this chain is
-// represented by Base().
-type Immutable interface {
-	// Prev is the previous element on a chain.
-	Prev() Immutable
-	// Fn a function that is able to modify the passed element.
-	Fn(interface{}) error
-	// Base is the first element on a chain, there's no previous element before
-	// the Base element.
-	Base() interface{}
-}
-
-// FastForward applies all Fn methods in order on the given new Base.
-func FastForward(curr Immutable) (interface{}, error) {
-	prev := curr.Prev()
-	if prev == nil {
-		return curr.Base(), nil
-	}
-	in, err := FastForward(prev)
-	if err != nil {
-		return nil, err
-	}
-	err = curr.Fn(in)
-	return in, err
-}
diff --git a/vendor/github.com/upper/db/v4/internal/reflectx/LICENSE b/vendor/github.com/upper/db/v4/internal/reflectx/LICENSE
deleted file mode 100644
index 0d31edfa..00000000
--- a/vendor/github.com/upper/db/v4/internal/reflectx/LICENSE
+++ /dev/null
@@ -1,23 +0,0 @@
- Copyright (c) 2013, Jason Moiron
-
- Permission is hereby granted, free of charge, to any person
- obtaining a copy of this software and associated documentation
- files (the "Software"), to deal in the Software without
- restriction, including without limitation the rights to use,
- copy, modify, merge, publish, distribute, sublicense, and/or sell
- copies of the Software, and to permit persons to whom the
- Software is furnished to do so, subject to the following
- conditions:
-
- The above copyright notice and this permission notice shall be
- included in all copies or substantial portions of the Software.
-
- THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND,
- EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES
- OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND
- NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT
- HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY,
- WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
- FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR
- OTHER DEALINGS IN THE SOFTWARE.
-
diff --git a/vendor/github.com/upper/db/v4/internal/reflectx/README.md b/vendor/github.com/upper/db/v4/internal/reflectx/README.md
deleted file mode 100644
index 76f1b5df..00000000
--- a/vendor/github.com/upper/db/v4/internal/reflectx/README.md
+++ /dev/null
@@ -1,17 +0,0 @@
-# reflectx
-
-The sqlx package has special reflect needs. In particular, it needs to:
-
-* be able to map a name to a field
-* understand embedded structs
-* understand mapping names to fields by a particular tag
-* user specified name -> field mapping functions
-
-These behaviors mimic the behaviors of the standard library marshallers and also the
-behavior of standard Go accessors.
-
-The first two are amply taken care of by `Reflect.Value.FieldByName`, and the third is
-addressed by `Reflect.Value.FieldByNameFunc`, but these don't quite understand struct
-tags in the ways that are vital to most marshalers, and they are slow.
-
-This reflectx package extends reflect to achieve these goals.
diff --git a/vendor/github.com/upper/db/v4/internal/reflectx/reflect.go b/vendor/github.com/upper/db/v4/internal/reflectx/reflect.go
deleted file mode 100644
index 888edeb8..00000000
--- a/vendor/github.com/upper/db/v4/internal/reflectx/reflect.go
+++ /dev/null
@@ -1,405 +0,0 @@
-// Package reflectx implements extensions to the standard reflect lib suitable
-// for implementing marshaling and unmarshaling packages. The main Mapper type
-// allows for Go-compatible named attribute access, including accessing embedded
-// struct attributes and the ability to use functions and struct tags to
-// customize field names.
-//
-package reflectx
-
-import (
-	"fmt"
-	"reflect"
-	"runtime"
-	"strings"
-	"sync"
-)
-
-// A FieldInfo is a collection of metadata about a struct field.
-type FieldInfo struct {
-	Index    []int
-	Path     string
-	Field    reflect.StructField
-	Zero     reflect.Value
-	Name     string
-	Options  map[string]string
-	Embedded bool
-	Children []*FieldInfo
-	Parent   *FieldInfo
-}
-
-// A StructMap is an index of field metadata for a struct.
-type StructMap struct {
-	Tree  *FieldInfo
-	Index []*FieldInfo
-	Paths map[string]*FieldInfo
-	Names map[string]*FieldInfo
-}
-
-// GetByPath returns a *FieldInfo for a given string path.
-func (f StructMap) GetByPath(path string) *FieldInfo {
-	return f.Paths[path]
-}
-
-// GetByTraversal returns a *FieldInfo for a given integer path. It is
-// analogous to reflect.FieldByIndex.
-func (f StructMap) GetByTraversal(index []int) *FieldInfo {
-	if len(index) == 0 {
-		return nil
-	}
-
-	tree := f.Tree
-	for _, i := range index {
-		if i >= len(tree.Children) || tree.Children[i] == nil {
-			return nil
-		}
-		tree = tree.Children[i]
-	}
-	return tree
-}
-
-// Mapper is a general purpose mapper of names to struct fields. A Mapper
-// behaves like most marshallers, optionally obeying a field tag for name
-// mapping and a function to provide a basic mapping of fields to names.
-type Mapper struct {
-	cache      map[reflect.Type]*StructMap
-	tagName    string
-	tagMapFunc func(string) string
-	mapFunc    func(string) string
-	mutex      sync.Mutex
-}
-
-// NewMapper returns a new mapper which optionally obeys the field tag given
-// by tagName. If tagName is the empty string, it is ignored.
-func NewMapper(tagName string) *Mapper {
-	return &Mapper{
-		cache:   make(map[reflect.Type]*StructMap),
-		tagName: tagName,
-	}
-}
-
-// NewMapperTagFunc returns a new mapper which contains a mapper for field names
-// AND a mapper for tag values. This is useful for tags like json which can
-// have values like "name,omitempty".
-func NewMapperTagFunc(tagName string, mapFunc, tagMapFunc func(string) string) *Mapper {
-	return &Mapper{
-		cache:      make(map[reflect.Type]*StructMap),
-		tagName:    tagName,
-		mapFunc:    mapFunc,
-		tagMapFunc: tagMapFunc,
-	}
-}
-
-// NewMapperFunc returns a new mapper which optionally obeys a field tag and
-// a struct field name mapper func given by f. Tags will take precedence, but
-// for any other field, the mapped name will be f(field.Name).
-func NewMapperFunc(tagName string, f func(string) string) *Mapper {
-	return &Mapper{
-		cache:   make(map[reflect.Type]*StructMap),
-		tagName: tagName,
-		mapFunc: f,
-	}
-}
-
-// TypeMap returns a mapping of field strings to int slices representing
-// the traversal down the struct to reach the field.
-func (m *Mapper) TypeMap(t reflect.Type) *StructMap {
-	m.mutex.Lock()
-	mapping, ok := m.cache[t]
-	if !ok {
-		mapping = getMapping(t, m.tagName, m.mapFunc, m.tagMapFunc)
-		m.cache[t] = mapping
-	}
-	m.mutex.Unlock()
-	return mapping
-}
-
-// FieldMap returns the mapper's mapping of field names to reflect values. Panics
-// if v's Kind is not Struct, or v is not Indirectable to a struct kind.
-func (m *Mapper) FieldMap(v reflect.Value) map[string]reflect.Value {
-	v = reflect.Indirect(v)
-	mustBe(v, reflect.Struct)
-
-	r := map[string]reflect.Value{}
-	tm := m.TypeMap(v.Type())
-	for tagName, fi := range tm.Names {
-		r[tagName] = FieldByIndexes(v, fi.Index)
-	}
-	return r
-}
-
-// ValidFieldMap returns the mapper's mapping of field names to reflect valid
-// field values. Panics if v's Kind is not Struct, or v is not Indirectable to
-// a struct kind.
-func (m *Mapper) ValidFieldMap(v reflect.Value) map[string]reflect.Value {
-	v = reflect.Indirect(v)
-	mustBe(v, reflect.Struct)
-
-	r := map[string]reflect.Value{}
-	tm := m.TypeMap(v.Type())
-	for tagName, fi := range tm.Names {
-		v := ValidFieldByIndexes(v, fi.Index)
-		if v.IsValid() {
-			r[tagName] = v
-		}
-	}
-	return r
-}
-
-// FieldByName returns a field by its mapped name as a reflect.Value.
-// Panics if v's Kind is not Struct or v is not Indirectable to a struct Kind.
-// Returns the original value if the name is not found.
-func (m *Mapper) FieldByName(v reflect.Value, name string) reflect.Value {
-	v = reflect.Indirect(v)
-	mustBe(v, reflect.Struct)
-
-	tm := m.TypeMap(v.Type())
-	fi, ok := tm.Names[name]
-	if !ok {
-		return v
-	}
-	return FieldByIndexes(v, fi.Index)
-}
-
-// FieldsByName returns a slice of values corresponding to the slice of names
-// for the value. Panics if v's Kind is not Struct or v is not Indirectable
-// to a struct Kind. Returns zero Value for each name not found.
-func (m *Mapper) FieldsByName(v reflect.Value, names []string) []reflect.Value {
-	v = reflect.Indirect(v)
-	mustBe(v, reflect.Struct)
-
-	tm := m.TypeMap(v.Type())
-	vals := make([]reflect.Value, 0, len(names))
-	for _, name := range names {
-		fi, ok := tm.Names[name]
-		if !ok {
-			vals = append(vals, *new(reflect.Value))
-		} else {
-			vals = append(vals, FieldByIndexes(v, fi.Index))
-		}
-	}
-	return vals
-}
-
-// TraversalsByName returns a slice of int slices which represent the struct
-// traversals for each mapped name. Panics if t is not a struct or Indirectable
-// to a struct. Returns empty int slice for each name not found.
-func (m *Mapper) TraversalsByName(t reflect.Type, names []string) [][]int {
-	t = Deref(t)
-	mustBe(t, reflect.Struct)
-	tm := m.TypeMap(t)
-
-	r := make([][]int, 0, len(names))
-	for _, name := range names {
-		fi, ok := tm.Names[name]
-		if !ok {
-			r = append(r, []int{})
-		} else {
-			r = append(r, fi.Index)
-		}
-	}
-	return r
-}
-
-// FieldByIndexes returns a value for a particular struct traversal.
-func FieldByIndexes(v reflect.Value, indexes []int) reflect.Value {
-	for _, i := range indexes {
-		v = reflect.Indirect(v).Field(i)
-		// if this is a pointer, it's possible it is nil
-		if v.Kind() == reflect.Ptr && v.IsNil() {
-			alloc := reflect.New(Deref(v.Type()))
-			v.Set(alloc)
-		}
-		if v.Kind() == reflect.Map && v.IsNil() {
-			v.Set(reflect.MakeMap(v.Type()))
-		}
-	}
-	return v
-}
-
-// ValidFieldByIndexes returns a value for a particular struct traversal.
-func ValidFieldByIndexes(v reflect.Value, indexes []int) reflect.Value {
-
-	for _, i := range indexes {
-		v = reflect.Indirect(v)
-		if !v.IsValid() {
-			return reflect.Value{}
-		}
-		v = v.Field(i)
-		// if this is a pointer, it's possible it is nil
-		if (v.Kind() == reflect.Ptr || v.Kind() == reflect.Map) && v.IsNil() {
-			return reflect.Value{}
-		}
-	}
-
-	return v
-}
-
-// FieldByIndexesReadOnly returns a value for a particular struct traversal,
-// but is not concerned with allocating nil pointers because the value is
-// going to be used for reading and not setting.
-func FieldByIndexesReadOnly(v reflect.Value, indexes []int) reflect.Value {
-	for _, i := range indexes {
-		v = reflect.Indirect(v).Field(i)
-	}
-	return v
-}
-
-// Deref is Indirect for reflect.Types
-func Deref(t reflect.Type) reflect.Type {
-	if t.Kind() == reflect.Ptr {
-		t = t.Elem()
-	}
-	return t
-}
-
-// -- helpers & utilities --
-
-type kinder interface {
-	Kind() reflect.Kind
-}
-
-// mustBe checks a value against a kind, panicking with a reflect.ValueError
-// if the kind isn't that which is required.
-func mustBe(v kinder, expected reflect.Kind) {
-	k := v.Kind()
-	if k != expected {
-		panic(&reflect.ValueError{Method: methodName(), Kind: k})
-	}
-}
-
-// methodName returns the caller of the function calling methodName
-func methodName() string {
-	pc, _, _, _ := runtime.Caller(2)
-	f := runtime.FuncForPC(pc)
-	if f == nil {
-		return "unknown method"
-	}
-	return f.Name()
-}
-
-type typeQueue struct {
-	t  reflect.Type
-	fi *FieldInfo
-	pp string // Parent path
-}
-
-// A copying append that creates a new slice each time.
-func apnd(is []int, i int) []int {
-	x := make([]int, len(is)+1)
-	copy(x, is)
-	x[len(x)-1] = i
-	return x
-}
-
-// getMapping returns a mapping for the t type, using the tagName, mapFunc and
-// tagMapFunc to determine the canonical names of fields.
-func getMapping(t reflect.Type, tagName string, mapFunc, tagMapFunc func(string) string) *StructMap {
-	m := []*FieldInfo{}
-
-	root := &FieldInfo{}
-	queue := []typeQueue{}
-	queue = append(queue, typeQueue{Deref(t), root, ""})
-
-	for len(queue) != 0 {
-		// pop the first item off of the queue
-		tq := queue[0]
-		queue = queue[1:]
-		nChildren := 0
-		if tq.t.Kind() == reflect.Struct {
-			nChildren = tq.t.NumField()
-		}
-		tq.fi.Children = make([]*FieldInfo, nChildren)
-
-		// iterate through all of its fields
-		for fieldPos := 0; fieldPos < nChildren; fieldPos++ {
-			f := tq.t.Field(fieldPos)
-
-			fi := FieldInfo{}
-			fi.Field = f
-			fi.Zero = reflect.New(f.Type).Elem()
-			fi.Options = map[string]string{}
-
-			var tag, name string
-			if tagName != "" && strings.Contains(string(f.Tag), tagName+":") {
-				tag = f.Tag.Get(tagName)
-				name = tag
-			} else {
-				if mapFunc != nil {
-					name = mapFunc(f.Name)
-				}
-			}
-
-			parts := strings.Split(name, ",")
-			if len(parts) > 1 {
-				name = parts[0]
-				for _, opt := range parts[1:] {
-					kv := strings.Split(opt, "=")
-					if len(kv) > 1 {
-						fi.Options[kv[0]] = kv[1]
-					} else {
-						fi.Options[kv[0]] = ""
-					}
-				}
-			}
-
-			if tagMapFunc != nil {
-				tag = tagMapFunc(tag)
-			}
-
-			fi.Name = name
-
-			if tq.pp == "" || (tq.pp == "" && tag == "") {
-				fi.Path = fi.Name
-			} else {
-				fi.Path = fmt.Sprintf("%s.%s", tq.pp, fi.Name)
-			}
-
-			// if the name is "-", disabled via a tag, skip it
-			if name == "-" {
-				continue
-			}
-
-			// skip unexported fields
-			if len(f.PkgPath) != 0 && !f.Anonymous {
-				continue
-			}
-
-			// bfs search of anonymous embedded structs
-			if f.Anonymous {
-				pp := tq.pp
-				if tag != "" {
-					pp = fi.Path
-				}
-
-				fi.Embedded = true
-				fi.Index = apnd(tq.fi.Index, fieldPos)
-				nChildren := 0
-				ft := Deref(f.Type)
-				if ft.Kind() == reflect.Struct {
-					nChildren = ft.NumField()
-				}
-				fi.Children = make([]*FieldInfo, nChildren)
-				queue = append(queue, typeQueue{Deref(f.Type), &fi, pp})
-			} else if fi.Zero.Kind() == reflect.Struct || (fi.Zero.Kind() == reflect.Ptr && fi.Zero.Type().Elem().Kind() == reflect.Struct) {
-				fi.Index = apnd(tq.fi.Index, fieldPos)
-				fi.Children = make([]*FieldInfo, Deref(f.Type).NumField())
-				queue = append(queue, typeQueue{Deref(f.Type), &fi, fi.Path})
-			}
-
-			fi.Index = apnd(tq.fi.Index, fieldPos)
-			fi.Parent = tq.fi
-			tq.fi.Children[fieldPos] = &fi
-			m = append(m, &fi)
-		}
-	}
-
-	flds := &StructMap{Index: m, Tree: root, Paths: map[string]*FieldInfo{}, Names: map[string]*FieldInfo{}}
-	for _, fi := range flds.Index {
-		flds.Paths[fi.Path] = fi
-		if fi.Name != "" && !fi.Embedded {
-			flds.Names[fi.Path] = fi
-		}
-	}
-
-	return flds
-}
diff --git a/vendor/github.com/upper/db/v4/internal/sqladapter/collection.go b/vendor/github.com/upper/db/v4/internal/sqladapter/collection.go
deleted file mode 100644
index f70d0c93..00000000
--- a/vendor/github.com/upper/db/v4/internal/sqladapter/collection.go
+++ /dev/null
@@ -1,369 +0,0 @@
-package sqladapter
-
-import (
-	"fmt"
-	"reflect"
-
-	db "github.com/upper/db/v4"
-	"github.com/upper/db/v4/internal/sqladapter/exql"
-	"github.com/upper/db/v4/internal/sqlbuilder"
-)
-
-// CollectionAdapter defines methods to be implemented by SQL adapters.
-type CollectionAdapter interface {
-	// Insert prepares and executes an INSERT statement. When the item is
-	// successfully added, Insert returns a unique identifier of the newly added
-	// element (or nil if the unique identifier couldn't be determined).
-	Insert(Collection, interface{}) (interface{}, error)
-}
-
-// Collection satisfies db.Collection.
-type Collection interface {
-	// Insert inserts a new item into the collection.
-	Insert(interface{}) (db.InsertResult, error)
-
-	// Name returns the name of the collection.
-	Name() string
-
-	// Session returns the db.Session the collection belongs to.
-	Session() db.Session
-
-	// Exists returns true if the collection exists, false otherwise.
-	Exists() (bool, error)
-
-	// Find defines a new result set.
-	Find(conds ...interface{}) db.Result
-
-	Count() (uint64, error)
-
-	// Truncate removes all elements on the collection and resets the
-	// collection's IDs.
-	Truncate() error
-
-	// InsertReturning inserts a new item into the collection and refreshes the
-	// item with actual data from the database. This is useful to get automatic
-	// values, such as timestamps, or IDs.
-	InsertReturning(item interface{}) error
-
-	// UpdateReturning updates a record from the collection and refreshes the item
-	// with actual data from the database. This is useful to get automatic
-	// values, such as timestamps, or IDs.
-	UpdateReturning(item interface{}) error
-
-	// PrimaryKeys returns the names of all primary keys in the table.
-	PrimaryKeys() ([]string, error)
-
-	// SQL returns a db.SQL instance.
-	SQL() db.SQL
-}
-
-type finder interface {
-	Find(Collection, *Result, ...interface{}) db.Result
-}
-
-type condsFilter interface {
-	FilterConds(...interface{}) []interface{}
-}
-
-// collection is the implementation of Collection.
-type collection struct { - name string - adapter CollectionAdapter -} - -type collectionWithSession struct { - *collection - - session Session -} - -func newCollection(name string, adapter CollectionAdapter) *collection { - if adapter == nil { - panic("upper: nil adapter") - } - return &collection{ - name: name, - adapter: adapter, - } -} - -func (c *collectionWithSession) SQL() db.SQL { - return c.session.SQL() -} - -func (c *collectionWithSession) Session() db.Session { - return c.session -} - -func (c *collectionWithSession) Name() string { - return c.name -} - -func (c *collectionWithSession) Count() (uint64, error) { - return c.Find().Count() -} - -func (c *collectionWithSession) Insert(item interface{}) (db.InsertResult, error) { - id, err := c.adapter.Insert(c, item) - if err != nil { - return nil, err - } - - return db.NewInsertResult(id), nil -} - -func (c *collectionWithSession) PrimaryKeys() ([]string, error) { - return c.session.PrimaryKeys(c.Name()) -} - -func (c *collectionWithSession) filterConds(conds ...interface{}) ([]interface{}, error) { - pk, err := c.PrimaryKeys() - if err != nil { - return nil, err - } - if len(conds) == 1 && len(pk) == 1 { - if id := conds[0]; IsKeyValue(id) { - conds[0] = db.Cond{pk[0]: db.Eq(id)} - } - } - if tr, ok := c.adapter.(condsFilter); ok { - return tr.FilterConds(conds...), nil - } - return conds, nil -} - -func (c *collectionWithSession) Find(conds ...interface{}) db.Result { - filteredConds, err := c.filterConds(conds...) - if err != nil { - res := &Result{} - res.setErr(err) - return res - } - - res := NewResult( - c.session.SQL(), - c.Name(), - filteredConds, - ) - if f, ok := c.adapter.(finder); ok { - return f.Find(c, res, conds...) 
-	}
-	return res
-}
-
-func (c *collectionWithSession) Exists() (bool, error) {
-	if err := c.session.TableExists(c.Name()); err != nil {
-		return false, err
-	}
-	return true, nil
-}
-
-func (c *collectionWithSession) InsertReturning(item interface{}) error {
-	if item == nil || reflect.TypeOf(item).Kind() != reflect.Ptr {
-		return fmt.Errorf("Expecting a pointer but got %T", item)
-	}
-
-	// Grab primary keys
-	pks, err := c.PrimaryKeys()
-	if err != nil {
-		return err
-	}
-
-	if len(pks) == 0 {
-		if ok, err := c.Exists(); !ok {
-			return err
-		}
-		return fmt.Errorf(db.ErrMissingPrimaryKeys.Error(), c.Name())
-	}
-
-	var tx Session
-	isTransaction := c.session.IsTransaction()
-	if isTransaction {
-		tx = c.session
-	} else {
-		var err error
-		tx, err = c.session.NewTransaction(c.session.Context(), nil)
-		if err != nil {
-			return err
-		}
-		defer tx.Close()
-	}
-
-	// Allocate a clone of item.
-	newItem := reflect.New(reflect.ValueOf(item).Elem().Type()).Interface()
-	var newItemFieldMap map[string]reflect.Value
-
-	itemValue := reflect.ValueOf(item)
-
-	col := tx.Collection(c.Name())
-
-	// Insert item as is and grab the returning ID.
-	var newItemRes db.Result
-	id, err := col.Insert(item)
-	if err != nil {
-		goto cancel
-	}
-	if id == nil {
-		err = fmt.Errorf("InsertReturning: Could not get a valid ID after inserting. Does the %q table have a primary key?", c.Name())
-		goto cancel
-	}
-
-	if len(pks) > 1 {
-		newItemRes = col.Find(id)
-	} else {
-		// We have one primary key; build an explicit db.Cond with it to prevent
-		// string keys from being considered as raw conditions.
-		newItemRes = col.Find(db.Cond{pks[0]: id}) // We already checked that pks is not empty, so pks[0] is defined.
-	}
-
-	// Fetch the row that was just inserted into newItem.
-	err = newItemRes.One(newItem)
-	if err != nil {
-		goto cancel
-	}
-
-	switch reflect.ValueOf(newItem).Elem().Kind() {
-	case reflect.Struct:
-		// Get valid fields from newItem to overwrite those that are on item.
- newItemFieldMap = sqlbuilder.Mapper.ValidFieldMap(reflect.ValueOf(newItem)) - for fieldName := range newItemFieldMap { - sqlbuilder.Mapper.FieldByName(itemValue, fieldName).Set(newItemFieldMap[fieldName]) - } - case reflect.Map: - newItemV := reflect.ValueOf(newItem).Elem() - itemV := reflect.ValueOf(item) - if itemV.Kind() == reflect.Ptr { - itemV = itemV.Elem() - } - for _, keyV := range newItemV.MapKeys() { - itemV.SetMapIndex(keyV, newItemV.MapIndex(keyV)) - } - default: - err = fmt.Errorf("InsertReturning: expecting a pointer to map or struct, got %T", newItem) - goto cancel - } - - if !isTransaction { - // This is only executed if t.Session() was **not** a transaction and if - // sess was created with sess.NewTransaction(). - return tx.Commit() - } - - return err - -cancel: - // This goto label should only be used when we got an error within a - // transaction and we don't want to continue. - - if !isTransaction { - // This is only executed if t.Session() was **not** a transaction and if - // sess was created with sess.NewTransaction(). - _ = tx.Rollback() - } - return err -} - -func (c *collectionWithSession) UpdateReturning(item interface{}) error { - if item == nil || reflect.TypeOf(item).Kind() != reflect.Ptr { - return fmt.Errorf("Expecting a pointer but got %T", item) - } - - // Grab primary keys - pks, err := c.PrimaryKeys() - if err != nil { - return err - } - - if len(pks) == 0 { - if ok, err := c.Exists(); !ok { - return err - } - return fmt.Errorf(db.ErrMissingPrimaryKeys.Error(), c.Name()) - } - - var tx Session - isTransaction := c.session.IsTransaction() - - if isTransaction { - tx = c.session - } else { - // Not within a transaction, let's create one. - var err error - tx, err = c.session.NewTransaction(c.session.Context(), nil) - if err != nil { - return err - } - defer tx.Close() - } - - // Allocate a clone of item. 
- defaultItem := reflect.New(reflect.ValueOf(item).Elem().Type()).Interface() - var defaultItemFieldMap map[string]reflect.Value - - itemValue := reflect.ValueOf(item) - - conds := db.Cond{} - for _, pk := range pks { - conds[pk] = db.Eq(sqlbuilder.Mapper.FieldByName(itemValue, pk).Interface()) - } - - col := tx.(Session).Collection(c.Name()) - - err = col.Find(conds).Update(item) - if err != nil { - goto cancel - } - - if err = col.Find(conds).One(defaultItem); err != nil { - goto cancel - } - - switch reflect.ValueOf(defaultItem).Elem().Kind() { - case reflect.Struct: - // Get valid fields from defaultItem to overwrite those that are on item. - defaultItemFieldMap = sqlbuilder.Mapper.ValidFieldMap(reflect.ValueOf(defaultItem)) - for fieldName := range defaultItemFieldMap { - sqlbuilder.Mapper.FieldByName(itemValue, fieldName).Set(defaultItemFieldMap[fieldName]) - } - case reflect.Map: - defaultItemV := reflect.ValueOf(defaultItem).Elem() - itemV := reflect.ValueOf(item) - if itemV.Kind() == reflect.Ptr { - itemV = itemV.Elem() - } - for _, keyV := range defaultItemV.MapKeys() { - itemV.SetMapIndex(keyV, defaultItemV.MapIndex(keyV)) - } - default: - panic("default") - } - - if !isTransaction { - // This is only executed if t.Session() was **not** a transaction and if - // sess was created with sess.NewTransaction(). - return tx.Commit() - } - return err - -cancel: - // This goto label should only be used when we got an error within a - // transaction and we don't want to continue. - - if !isTransaction { - // This is only executed if t.Session() was **not** a transaction and if - // sess was created with sess.NewTransaction(). 
- _ = tx.Rollback() - } - return err -} - -func (c *collectionWithSession) Truncate() error { - stmt := exql.Statement{ - Type: exql.Truncate, - Table: exql.TableWithName(c.Name()), - } - if _, err := c.session.SQL().Exec(&stmt); err != nil { - return err - } - return nil -} diff --git a/vendor/github.com/upper/db/v4/internal/sqladapter/compat/query.go b/vendor/github.com/upper/db/v4/internal/sqladapter/compat/query.go deleted file mode 100644 index 93cb8fcf..00000000 --- a/vendor/github.com/upper/db/v4/internal/sqladapter/compat/query.go +++ /dev/null @@ -1,72 +0,0 @@ -// +build !go1.8 - -package compat - -import ( - "context" - "database/sql" -) - -type PreparedExecer interface { - Exec(...interface{}) (sql.Result, error) -} - -func PreparedExecContext(p PreparedExecer, ctx context.Context, args []interface{}) (sql.Result, error) { - return p.Exec(args...) -} - -type Execer interface { - Exec(string, ...interface{}) (sql.Result, error) -} - -func ExecContext(p Execer, ctx context.Context, query string, args []interface{}) (sql.Result, error) { - return p.Exec(query, args...) -} - -type PreparedQueryer interface { - Query(...interface{}) (*sql.Rows, error) -} - -func PreparedQueryContext(p PreparedQueryer, ctx context.Context, args []interface{}) (*sql.Rows, error) { - return p.Query(args...) -} - -type Queryer interface { - Query(string, ...interface{}) (*sql.Rows, error) -} - -func QueryContext(p Queryer, ctx context.Context, query string, args []interface{}) (*sql.Rows, error) { - return p.Query(query, args...) -} - -type PreparedRowQueryer interface { - QueryRow(...interface{}) *sql.Row -} - -func PreparedQueryRowContext(p PreparedRowQueryer, ctx context.Context, args []interface{}) *sql.Row { - return p.QueryRow(args...) -} - -type RowQueryer interface { - QueryRow(string, ...interface{}) *sql.Row -} - -func QueryRowContext(p RowQueryer, ctx context.Context, query string, args []interface{}) *sql.Row { - return p.QueryRow(query, args...) 
-} - -type Preparer interface { - Prepare(string) (*sql.Stmt, error) -} - -func PrepareContext(p Preparer, ctx context.Context, query string) (*sql.Stmt, error) { - return p.Prepare(query) -} - -type TxStarter interface { - Begin() (*sql.Tx, error) -} - -func BeginTx(p TxStarter, ctx context.Context, opts interface{}) (*sql.Tx, error) { - return p.Begin() -} diff --git a/vendor/github.com/upper/db/v4/internal/sqladapter/compat/query_go18.go b/vendor/github.com/upper/db/v4/internal/sqladapter/compat/query_go18.go deleted file mode 100644 index a3abbaf8..00000000 --- a/vendor/github.com/upper/db/v4/internal/sqladapter/compat/query_go18.go +++ /dev/null @@ -1,72 +0,0 @@ -// +build go1.8 - -package compat - -import ( - "context" - "database/sql" -) - -type PreparedExecer interface { - ExecContext(context.Context, ...interface{}) (sql.Result, error) -} - -func PreparedExecContext(p PreparedExecer, ctx context.Context, args []interface{}) (sql.Result, error) { - return p.ExecContext(ctx, args...) -} - -type Execer interface { - ExecContext(context.Context, string, ...interface{}) (sql.Result, error) -} - -func ExecContext(p Execer, ctx context.Context, query string, args []interface{}) (sql.Result, error) { - return p.ExecContext(ctx, query, args...) -} - -type PreparedQueryer interface { - QueryContext(context.Context, ...interface{}) (*sql.Rows, error) -} - -func PreparedQueryContext(p PreparedQueryer, ctx context.Context, args []interface{}) (*sql.Rows, error) { - return p.QueryContext(ctx, args...) -} - -type Queryer interface { - QueryContext(context.Context, string, ...interface{}) (*sql.Rows, error) -} - -func QueryContext(p Queryer, ctx context.Context, query string, args []interface{}) (*sql.Rows, error) { - return p.QueryContext(ctx, query, args...) 
-} - -type PreparedRowQueryer interface { - QueryRowContext(context.Context, ...interface{}) *sql.Row -} - -func PreparedQueryRowContext(p PreparedRowQueryer, ctx context.Context, args []interface{}) *sql.Row { - return p.QueryRowContext(ctx, args...) -} - -type RowQueryer interface { - QueryRowContext(context.Context, string, ...interface{}) *sql.Row -} - -func QueryRowContext(p RowQueryer, ctx context.Context, query string, args []interface{}) *sql.Row { - return p.QueryRowContext(ctx, query, args...) -} - -type Preparer interface { - PrepareContext(context.Context, string) (*sql.Stmt, error) -} - -func PrepareContext(p Preparer, ctx context.Context, query string) (*sql.Stmt, error) { - return p.PrepareContext(ctx, query) -} - -type TxStarter interface { - BeginTx(context.Context, *sql.TxOptions) (*sql.Tx, error) -} - -func BeginTx(p TxStarter, ctx context.Context, opts *sql.TxOptions) (*sql.Tx, error) { - return p.BeginTx(ctx, opts) -} diff --git a/vendor/github.com/upper/db/v4/internal/sqladapter/exql/column.go b/vendor/github.com/upper/db/v4/internal/sqladapter/exql/column.go deleted file mode 100644 index 5789317b..00000000 --- a/vendor/github.com/upper/db/v4/internal/sqladapter/exql/column.go +++ /dev/null @@ -1,83 +0,0 @@ -package exql - -import ( - "fmt" - "strings" - - "github.com/upper/db/v4/internal/cache" -) - -type columnWithAlias struct { - Name string - Alias string -} - -// Column represents a SQL column. -type Column struct { - Name interface{} -} - -var _ = Fragment(&Column{}) - -// ColumnWithName creates and returns a Column with the given name. -func ColumnWithName(name string) *Column { - return &Column{Name: name} -} - -// Hash returns a unique identifier for the struct. -func (c *Column) Hash() uint64 { - if c == nil { - return cache.NewHash(FragmentType_Column, nil) - } - return cache.NewHash(FragmentType_Column, c.Name) -} - -// Compile transforms the ColumnValue into an equivalent SQL representation. 
-func (c *Column) Compile(layout *Template) (compiled string, err error) { - if z, ok := layout.Read(c); ok { - return z, nil - } - - var alias string - switch value := c.Name.(type) { - case string: - value = trimString(value) - - chunks := separateByAS(value) - if len(chunks) == 1 { - chunks = separateBySpace(value) - } - - name := chunks[0] - nameChunks := strings.SplitN(name, layout.ColumnSeparator, 2) - - for i := range nameChunks { - nameChunks[i] = trimString(nameChunks[i]) - if nameChunks[i] == "*" { - continue - } - nameChunks[i] = layout.MustCompile(layout.IdentifierQuote, Raw{Value: nameChunks[i]}) - } - - compiled = strings.Join(nameChunks, layout.ColumnSeparator) - - if len(chunks) > 1 { - alias = trimString(chunks[1]) - alias = layout.MustCompile(layout.IdentifierQuote, Raw{Value: alias}) - } - case compilable: - compiled, err = value.Compile(layout) - if err != nil { - return "", err - } - default: - return "", fmt.Errorf(errExpectingHashableFmt, c.Name) - } - - if alias != "" { - compiled = layout.MustCompile(layout.ColumnAliasLayout, columnWithAlias{compiled, alias}) - } - - layout.Write(c, compiled) - return -} diff --git a/vendor/github.com/upper/db/v4/internal/sqladapter/exql/column_value.go b/vendor/github.com/upper/db/v4/internal/sqladapter/exql/column_value.go deleted file mode 100644 index 49296114..00000000 --- a/vendor/github.com/upper/db/v4/internal/sqladapter/exql/column_value.go +++ /dev/null @@ -1,111 +0,0 @@ -package exql - -import ( - "github.com/upper/db/v4/internal/cache" - "strings" -) - -// ColumnValue represents a bundle between a column and a corresponding value. -type ColumnValue struct { - Column Fragment - Operator string - Value Fragment -} - -var _ = Fragment(&ColumnValue{}) - -type columnValueT struct { - Column string - Operator string - Value string -} - -// Hash returns a unique identifier for the struct. 
-func (c *ColumnValue) Hash() uint64 { - if c == nil { - return cache.NewHash(FragmentType_ColumnValue, nil) - } - return cache.NewHash(FragmentType_ColumnValue, c.Column, c.Operator, c.Value) -} - -// Compile transforms the ColumnValue into an equivalent SQL representation. -func (c *ColumnValue) Compile(layout *Template) (compiled string, err error) { - if z, ok := layout.Read(c); ok { - return z, nil - } - - column, err := c.Column.Compile(layout) - if err != nil { - return "", err - } - - data := columnValueT{ - Column: column, - Operator: c.Operator, - } - - if c.Value != nil { - data.Value, err = c.Value.Compile(layout) - if err != nil { - return "", err - } - } - - compiled = strings.TrimSpace(layout.MustCompile(layout.ColumnValue, data)) - - layout.Write(c, compiled) - - return -} - -// ColumnValues represents an array of ColumnValue -type ColumnValues struct { - ColumnValues []Fragment -} - -var _ = Fragment(&ColumnValues{}) - -// JoinColumnValues returns an array of ColumnValue -func JoinColumnValues(values ...Fragment) *ColumnValues { - return &ColumnValues{ColumnValues: values} -} - -// Insert adds a column to the columns array. -func (c *ColumnValues) Insert(values ...Fragment) *ColumnValues { - c.ColumnValues = append(c.ColumnValues, values...) - return c -} - -// Hash returns a unique identifier for the struct. -func (c *ColumnValues) Hash() uint64 { - h := cache.InitHash(FragmentType_ColumnValues) - for i := range c.ColumnValues { - h = cache.AddToHash(h, c.ColumnValues[i]) - } - return h -} - -// Compile transforms the ColumnValues into its SQL representation. 
-func (c *ColumnValues) Compile(layout *Template) (compiled string, err error) {
-
-	if z, ok := layout.Read(c); ok {
-		return z, nil
-	}
-
-	l := len(c.ColumnValues)
-
-	out := make([]string, l)
-
-	for i := range c.ColumnValues {
-		out[i], err = c.ColumnValues[i].Compile(layout)
-		if err != nil {
-			return "", err
-		}
-	}
-
-	compiled = strings.TrimSpace(strings.Join(out, layout.IdentifierSeparator))
-
-	layout.Write(c, compiled)
-
-	return
-}
diff --git a/vendor/github.com/upper/db/v4/internal/sqladapter/exql/columns.go b/vendor/github.com/upper/db/v4/internal/sqladapter/exql/columns.go
deleted file mode 100644
index c59f73bf..00000000
--- a/vendor/github.com/upper/db/v4/internal/sqladapter/exql/columns.go
+++ /dev/null
@@ -1,83 +0,0 @@
-package exql
-
-import (
-	"strings"
-
-	"github.com/upper/db/v4/internal/cache"
-)
-
-// Columns represents an array of Column.
-type Columns struct {
-	Columns []Fragment
-}
-
-var _ = Fragment(&Columns{})
-
-// Hash returns a unique identifier.
-func (c *Columns) Hash() uint64 {
-	if c == nil {
-		return cache.NewHash(FragmentType_Columns, nil)
-	}
-	h := cache.InitHash(FragmentType_Columns)
-	for i := range c.Columns {
-		h = cache.AddToHash(h, c.Columns[i])
-	}
-	return h
-}
-
-// JoinColumns creates and returns an array of Column.
-func JoinColumns(columns ...Fragment) *Columns {
-	return &Columns{Columns: columns}
-}
-
-// OnConditions creates and returns a new On.
-func OnConditions(conditions ...Fragment) *On {
-	return &On{Conditions: conditions}
-}
-
-// UsingColumns builds a Using from the given columns.
-func UsingColumns(columns ...Fragment) *Using {
-	return &Using{Columns: columns}
-}
-
-// Append appends the given columns to the list.
-func (c *Columns) Append(a *Columns) *Columns {
-	c.Columns = append(c.Columns, a.Columns...)
-	return c
-}
-
-// IsEmpty reports whether the list of columns is empty.
-func (c *Columns) IsEmpty() bool {
-	if c == nil || len(c.Columns) < 1 {
-		return true
-	}
-	return false
-}
-
-// Compile transforms the Columns into an equivalent SQL representation.
-func (c *Columns) Compile(layout *Template) (compiled string, err error) { - if z, ok := layout.Read(c); ok { - return z, nil - } - - l := len(c.Columns) - - if l > 0 { - out := make([]string, l) - - for i := 0; i < l; i++ { - out[i], err = c.Columns[i].Compile(layout) - if err != nil { - return "", err - } - } - - compiled = strings.Join(out, layout.IdentifierSeparator) - } else { - compiled = "*" - } - - layout.Write(c, compiled) - - return -} diff --git a/vendor/github.com/upper/db/v4/internal/sqladapter/exql/database.go b/vendor/github.com/upper/db/v4/internal/sqladapter/exql/database.go deleted file mode 100644 index abdfc1d1..00000000 --- a/vendor/github.com/upper/db/v4/internal/sqladapter/exql/database.go +++ /dev/null @@ -1,37 +0,0 @@ -package exql - -import ( - "github.com/upper/db/v4/internal/cache" -) - -// Database represents a SQL database. -type Database struct { - Name string -} - -var _ = Fragment(&Database{}) - -// DatabaseWithName returns a Database with the given name. -func DatabaseWithName(name string) *Database { - return &Database{Name: name} -} - -// Hash returns a unique identifier for the struct. -func (d *Database) Hash() uint64 { - if d == nil { - return cache.NewHash(FragmentType_Database, nil) - } - return cache.NewHash(FragmentType_Database, d.Name) -} - -// Compile transforms the Database into an equivalent SQL representation. 
-func (d *Database) Compile(layout *Template) (compiled string, err error) { - if c, ok := layout.Read(d); ok { - return c, nil - } - - compiled = layout.MustCompile(layout.IdentifierQuote, Raw{Value: d.Name}) - - layout.Write(d, compiled) - return -} diff --git a/vendor/github.com/upper/db/v4/internal/sqladapter/exql/default.go b/vendor/github.com/upper/db/v4/internal/sqladapter/exql/default.go deleted file mode 100644 index 8d3a001f..00000000 --- a/vendor/github.com/upper/db/v4/internal/sqladapter/exql/default.go +++ /dev/null @@ -1,192 +0,0 @@ -package exql - -import ( - "github.com/upper/db/v4/internal/cache" -) - -const ( - defaultColumnSeparator = `.` - defaultIdentifierSeparator = `, ` - defaultIdentifierQuote = `"{{.Value}}"` - defaultValueSeparator = `, ` - defaultValueQuote = `'{{.}}'` - defaultAndKeyword = `AND` - defaultOrKeyword = `OR` - defaultDescKeyword = `DESC` - defaultAscKeyword = `ASC` - defaultAssignmentOperator = `=` - defaultClauseGroup = `({{.}})` - defaultClauseOperator = ` {{.}} ` - defaultColumnValue = `{{.Column}} {{.Operator}} {{.Value}}` - defaultTableAliasLayout = `{{.Name}}{{if .Alias}} AS {{.Alias}}{{end}}` - defaultColumnAliasLayout = `{{.Name}}{{if .Alias}} AS {{.Alias}}{{end}}` - defaultSortByColumnLayout = `{{.Column}} {{.Order}}` - - defaultOrderByLayout = ` - {{if .SortColumns}} - ORDER BY {{.SortColumns}} - {{end}} - ` - - defaultWhereLayout = ` - {{if .Conds}} - WHERE {{.Conds}} - {{end}} - ` - - defaultUsingLayout = ` - {{if .Columns}} - USING ({{.Columns}}) - {{end}} - ` - - defaultJoinLayout = ` - {{if .Table}} - {{ if .On }} - {{.Type}} JOIN {{.Table}} - {{.On}} - {{ else if .Using }} - {{.Type}} JOIN {{.Table}} - {{.Using}} - {{ else if .Type | eq "CROSS" }} - {{.Type}} JOIN {{.Table}} - {{else}} - NATURAL {{.Type}} JOIN {{.Table}} - {{end}} - {{end}} - ` - - defaultOnLayout = ` - {{if .Conds}} - ON {{.Conds}} - {{end}} - ` - - defaultSelectLayout = ` - SELECT - {{if .Distinct}} - DISTINCT - {{end}} - - {{if .Columns}} 
- {{.Columns | compile}} - {{else}} - * - {{end}} - - {{if defined .Table}} - FROM {{.Table | compile}} - {{end}} - - {{.Joins | compile}} - - {{.Where | compile}} - - {{.GroupBy | compile}} - - {{.OrderBy | compile}} - - {{if .Limit}} - LIMIT {{.Limit}} - {{end}} - - {{if .Offset}} - OFFSET {{.Offset}} - {{end}} - ` - defaultDeleteLayout = ` - DELETE - FROM {{.Table | compile}} - {{.Where | compile}} - {{if .Limit}} - LIMIT {{.Limit}} - {{end}} - {{if .Offset}} - OFFSET {{.Offset}} - {{end}} - ` - defaultUpdateLayout = ` - UPDATE - {{.Table | compile}} - SET {{.ColumnValues | compile}} - {{.Where | compile}} - ` - - defaultCountLayout = ` - SELECT - COUNT(1) AS _t - FROM {{.Table | compile}} - {{.Where | compile}} - - {{if .Limit}} - LIMIT {{.Limit | compile}} - {{end}} - - {{if .Offset}} - OFFSET {{.Offset}} - {{end}} - ` - - defaultInsertLayout = ` - INSERT INTO {{.Table | compile}} - {{if .Columns }}({{.Columns | compile}}){{end}} - VALUES - {{.Values | compile}} - {{if .Returning}} - RETURNING {{.Returning | compile}} - {{end}} - ` - - defaultTruncateLayout = ` - TRUNCATE TABLE {{.Table | compile}} - ` - - defaultDropDatabaseLayout = ` - DROP DATABASE {{.Database | compile}} - ` - - defaultDropTableLayout = ` - DROP TABLE {{.Table | compile}} - ` - - defaultGroupByLayout = ` - {{if .GroupColumns}} - GROUP BY {{.GroupColumns}} - {{end}} - ` -) - -var defaultTemplate = &Template{ - AndKeyword: defaultAndKeyword, - AscKeyword: defaultAscKeyword, - AssignmentOperator: defaultAssignmentOperator, - ClauseGroup: defaultClauseGroup, - ClauseOperator: defaultClauseOperator, - ColumnAliasLayout: defaultColumnAliasLayout, - ColumnSeparator: defaultColumnSeparator, - ColumnValue: defaultColumnValue, - CountLayout: defaultCountLayout, - DeleteLayout: defaultDeleteLayout, - DescKeyword: defaultDescKeyword, - DropDatabaseLayout: defaultDropDatabaseLayout, - DropTableLayout: defaultDropTableLayout, - GroupByLayout: defaultGroupByLayout, - IdentifierQuote: 
defaultIdentifierQuote, - IdentifierSeparator: defaultIdentifierSeparator, - InsertLayout: defaultInsertLayout, - JoinLayout: defaultJoinLayout, - OnLayout: defaultOnLayout, - OrKeyword: defaultOrKeyword, - OrderByLayout: defaultOrderByLayout, - SelectLayout: defaultSelectLayout, - SortByColumnLayout: defaultSortByColumnLayout, - TableAliasLayout: defaultTableAliasLayout, - TruncateLayout: defaultTruncateLayout, - UpdateLayout: defaultUpdateLayout, - UsingLayout: defaultUsingLayout, - ValueQuote: defaultValueQuote, - ValueSeparator: defaultValueSeparator, - WhereLayout: defaultWhereLayout, - - Cache: cache.NewCache(), -} diff --git a/vendor/github.com/upper/db/v4/internal/sqladapter/exql/errors.go b/vendor/github.com/upper/db/v4/internal/sqladapter/exql/errors.go deleted file mode 100644 index b9c8b85e..00000000 --- a/vendor/github.com/upper/db/v4/internal/sqladapter/exql/errors.go +++ /dev/null @@ -1,5 +0,0 @@ -package exql - -const ( - errExpectingHashableFmt = "expecting hashable value, got %T" -) diff --git a/vendor/github.com/upper/db/v4/internal/sqladapter/exql/group_by.go b/vendor/github.com/upper/db/v4/internal/sqladapter/exql/group_by.go deleted file mode 100644 index 0cb09245..00000000 --- a/vendor/github.com/upper/db/v4/internal/sqladapter/exql/group_by.go +++ /dev/null @@ -1,60 +0,0 @@ -package exql - -import ( - "github.com/upper/db/v4/internal/cache" -) - -// GroupBy represents a SQL's "group by" statement. -type GroupBy struct { - Columns Fragment -} - -var _ = Fragment(&GroupBy{}) - -type groupByT struct { - GroupColumns string -} - -// Hash returns a unique identifier. -func (g *GroupBy) Hash() uint64 { - if g == nil { - return cache.NewHash(FragmentType_GroupBy, nil) - } - return cache.NewHash(FragmentType_GroupBy, g.Columns) -} - -// GroupByColumns creates and returns a GroupBy with the given column. 
-func GroupByColumns(columns ...Fragment) *GroupBy { - return &GroupBy{Columns: JoinColumns(columns...)} -} - -func (g *GroupBy) IsEmpty() bool { - if g == nil || g.Columns == nil { - return true - } - return g.Columns.(hasIsEmpty).IsEmpty() -} - -// Compile transforms the GroupBy into an equivalent SQL representation. -func (g *GroupBy) Compile(layout *Template) (compiled string, err error) { - - if c, ok := layout.Read(g); ok { - return c, nil - } - - if g.Columns != nil { - columns, err := g.Columns.Compile(layout) - if err != nil { - return "", err - } - - data := groupByT{ - GroupColumns: columns, - } - compiled = layout.MustCompile(layout.GroupByLayout, data) - } - - layout.Write(g, compiled) - - return -} diff --git a/vendor/github.com/upper/db/v4/internal/sqladapter/exql/interfaces.go b/vendor/github.com/upper/db/v4/internal/sqladapter/exql/interfaces.go deleted file mode 100644 index 1f38d862..00000000 --- a/vendor/github.com/upper/db/v4/internal/sqladapter/exql/interfaces.go +++ /dev/null @@ -1,20 +0,0 @@ -package exql - -import ( - "github.com/upper/db/v4/internal/cache" -) - -// Fragment is any interface that can be both cached and compiled. -type Fragment interface { - cache.Hashable - - compilable -} - -type compilable interface { - Compile(*Template) (string, error) -} - -type hasIsEmpty interface { - IsEmpty() bool -} diff --git a/vendor/github.com/upper/db/v4/internal/sqladapter/exql/join.go b/vendor/github.com/upper/db/v4/internal/sqladapter/exql/join.go deleted file mode 100644 index c09005a9..00000000 --- a/vendor/github.com/upper/db/v4/internal/sqladapter/exql/join.go +++ /dev/null @@ -1,195 +0,0 @@ -package exql - -import ( - "strings" - - "github.com/upper/db/v4/internal/cache" -) - -type innerJoinT struct { - Type string - Table string - On string - Using string -} - -// Joins represents the union of different join conditions. 
-type Joins struct {
-	Conditions []Fragment
-}
-
-var _ = Fragment(&Joins{})
-
-// Hash returns a unique identifier for the struct.
-func (j *Joins) Hash() uint64 {
-	if j == nil {
-		return cache.NewHash(FragmentType_Joins, nil)
-	}
-	h := cache.InitHash(FragmentType_Joins)
-	for i := range j.Conditions {
-		h = cache.AddToHash(h, j.Conditions[i])
-	}
-	return h
-}
-
-// Compile transforms the Joins into an equivalent SQL representation.
-func (j *Joins) Compile(layout *Template) (compiled string, err error) {
-	if c, ok := layout.Read(j); ok {
-		return c, nil
-	}
-
-	l := len(j.Conditions)
-
-	chunks := make([]string, 0, l)
-
-	if l > 0 {
-		for i := 0; i < l; i++ {
-			chunk, err := j.Conditions[i].Compile(layout)
-			if err != nil {
-				return "", err
-			}
-			chunks = append(chunks, chunk)
-		}
-	}
-
-	compiled = strings.Join(chunks, " ")
-
-	layout.Write(j, compiled)
-
-	return
-}
-
-// JoinConditions creates a Joins object.
-func JoinConditions(joins ...*Join) *Joins {
-	fragments := make([]Fragment, len(joins))
-	for i := range fragments {
-		fragments[i] = joins[i]
-	}
-	return &Joins{Conditions: fragments}
-}
-
-// Join represents a generic JOIN statement.
-type Join struct {
-	Type  string
-	Table Fragment
-	On    Fragment
-	Using Fragment
-}
-
-var _ = Fragment(&Join{})
-
-// Hash returns a unique identifier for the struct.
-func (j *Join) Hash() uint64 {
-	if j == nil {
-		return cache.NewHash(FragmentType_Join, nil)
-	}
-	return cache.NewHash(FragmentType_Join, j.Type, j.Table, j.On, j.Using)
-}
-
-// Compile transforms the Join into its equivalent SQL representation.
-func (j *Join) Compile(layout *Template) (compiled string, err error) { - if c, ok := layout.Read(j); ok { - return c, nil - } - - if j.Table == nil { - return "", nil - } - - table, err := j.Table.Compile(layout) - if err != nil { - return "", err - } - - on, err := layout.doCompile(j.On) - if err != nil { - return "", err - } - - using, err := layout.doCompile(j.Using) - if err != nil { - return "", err - } - - data := innerJoinT{ - Type: j.Type, - Table: table, - On: on, - Using: using, - } - - compiled = layout.MustCompile(layout.JoinLayout, data) - layout.Write(j, compiled) - return -} - -// On represents JOIN conditions. -type On Where - -var _ = Fragment(&On{}) - -func (o *On) Hash() uint64 { - if o == nil { - return cache.NewHash(FragmentType_On, nil) - } - return cache.NewHash(FragmentType_On, (*Where)(o)) -} - -// Compile transforms the On into an equivalent SQL representation. -func (o *On) Compile(layout *Template) (compiled string, err error) { - if c, ok := layout.Read(o); ok { - return c, nil - } - - grouped, err := groupCondition(layout, o.Conditions, layout.MustCompile(layout.ClauseOperator, layout.AndKeyword)) - if err != nil { - return "", err - } - - if grouped != "" { - compiled = layout.MustCompile(layout.OnLayout, conds{grouped}) - } - - layout.Write(o, compiled) - return -} - -// Using represents a USING function. -type Using Columns - -var _ = Fragment(&Using{}) - -type usingT struct { - Columns string -} - -func (u *Using) Hash() uint64 { - if u == nil { - return cache.NewHash(FragmentType_Using, nil) - } - return cache.NewHash(FragmentType_Using, (*Columns)(u)) -} - -// Compile transforms the Using into an equivalent SQL representation. 
-func (u *Using) Compile(layout *Template) (compiled string, err error) { - if u == nil { - return "", nil - } - - if c, ok := layout.Read(u); ok { - return c, nil - } - - if len(u.Columns) > 0 { - c := Columns(*u) - columns, err := c.Compile(layout) - if err != nil { - return "", err - } - data := usingT{Columns: columns} - compiled = layout.MustCompile(layout.UsingLayout, data) - } - - layout.Write(u, compiled) - return -} diff --git a/vendor/github.com/upper/db/v4/internal/sqladapter/exql/order_by.go b/vendor/github.com/upper/db/v4/internal/sqladapter/exql/order_by.go deleted file mode 100644 index ab35507f..00000000 --- a/vendor/github.com/upper/db/v4/internal/sqladapter/exql/order_by.go +++ /dev/null @@ -1,175 +0,0 @@ -package exql - -import ( - "strings" - - "github.com/upper/db/v4/internal/cache" -) - -// Order represents the order in which SQL results are sorted. -type Order uint8 - -// Possible values for Order -const ( - Order_Default Order = iota - - Order_Ascendent - Order_Descendent -) - -func (o Order) Hash() uint64 { - return cache.NewHash(FragmentType_Order, uint8(o)) -} - -// SortColumn represents the column-order relation in an ORDER BY clause. -type SortColumn struct { - Column Fragment - Order -} - -var _ = Fragment(&SortColumn{}) - -type sortColumnT struct { - Column string - Order string -} - -var _ = Fragment(&SortColumn{}) - -// SortColumns represents the columns in an ORDER BY clause. -type SortColumns struct { - Columns []Fragment -} - -var _ = Fragment(&SortColumns{}) - -// OrderBy represents an ORDER BY clause. -type OrderBy struct { - SortColumns Fragment -} - -var _ = Fragment(&OrderBy{}) - -type orderByT struct { - SortColumns string -} - -// JoinSortColumns creates and returns an array of column-order relations. -func JoinSortColumns(values ...Fragment) *SortColumns { - return &SortColumns{Columns: values} -} - -// JoinWithOrderBy creates and returns an OrderBy using the given SortColumns.
-func JoinWithOrderBy(sc *SortColumns) *OrderBy { - return &OrderBy{SortColumns: sc} -} - -// Hash returns a unique identifier for the struct. -func (s *SortColumn) Hash() uint64 { - if s == nil { - return cache.NewHash(FragmentType_SortColumn, nil) - } - return cache.NewHash(FragmentType_SortColumn, s.Column, s.Order) -} - -// Compile transforms the SortColumn into an equivalent SQL representation. -func (s *SortColumn) Compile(layout *Template) (compiled string, err error) { - - if c, ok := layout.Read(s); ok { - return c, nil - } - - column, err := s.Column.Compile(layout) - if err != nil { - return "", err - } - - orderBy, err := s.Order.Compile(layout) - if err != nil { - return "", err - } - - data := sortColumnT{Column: column, Order: orderBy} - - compiled = layout.MustCompile(layout.SortByColumnLayout, data) - - layout.Write(s, compiled) - - return -} - -// Hash returns a unique identifier for the struct. -func (s *SortColumns) Hash() uint64 { - if s == nil { - return cache.NewHash(FragmentType_SortColumns, nil) - } - h := cache.InitHash(FragmentType_SortColumns) - for i := range s.Columns { - h = cache.AddToHash(h, s.Columns[i]) - } - return h -} - -// Compile transforms the SortColumns into an equivalent SQL representation. -func (s *SortColumns) Compile(layout *Template) (compiled string, err error) { - if z, ok := layout.Read(s); ok { - return z, nil - } - - z := make([]string, len(s.Columns)) - - for i := range s.Columns { - z[i], err = s.Columns[i].Compile(layout) - if err != nil { - return "", err - } - } - - compiled = strings.Join(z, layout.IdentifierSeparator) - - layout.Write(s, compiled) - - return -} - -// Hash returns a unique identifier for the struct. -func (s *OrderBy) Hash() uint64 { - if s == nil { - return cache.NewHash(FragmentType_OrderBy, nil) - } - return cache.NewHash(FragmentType_OrderBy, s.SortColumns) -} - -// Compile transforms the OrderBy into an equivalent SQL representation.
-func (s *OrderBy) Compile(layout *Template) (compiled string, err error) { - if z, ok := layout.Read(s); ok { - return z, nil - } - - if s.SortColumns != nil { - sortColumns, err := s.SortColumns.Compile(layout) - if err != nil { - return "", err - } - - data := orderByT{ - SortColumns: sortColumns, - } - compiled = layout.MustCompile(layout.OrderByLayout, data) - } - - layout.Write(s, compiled) - - return -} - -// Compile transforms the Order into an equivalent SQL representation. -func (s Order) Compile(layout *Template) (string, error) { - switch s { - case Order_Ascendent: - return layout.AscKeyword, nil - case Order_Descendent: - return layout.DescKeyword, nil - } - return "", nil -} diff --git a/vendor/github.com/upper/db/v4/internal/sqladapter/exql/raw.go b/vendor/github.com/upper/db/v4/internal/sqladapter/exql/raw.go deleted file mode 100644 index 54dc97a1..00000000 --- a/vendor/github.com/upper/db/v4/internal/sqladapter/exql/raw.go +++ /dev/null @@ -1,48 +0,0 @@ -package exql - -import ( - "fmt" - - "github.com/upper/db/v4/internal/cache" -) - -var ( - _ = fmt.Stringer(&Raw{}) -) - -// Raw represents a value that is meant to be used in a query without escaping. -type Raw struct { - Value string -} - -func NewRawValue(v interface{}) (*Raw, error) { - switch t := v.(type) { - case string: - return &Raw{Value: t}, nil - case int, uint, int64, uint64, int32, uint32, int16, uint16: - return &Raw{Value: fmt.Sprintf("%d", t)}, nil - case fmt.Stringer: - return &Raw{Value: t.String()}, nil - } - return nil, fmt.Errorf("unexpected type: %T", v) -} - -// Hash returns a unique identifier for the struct. -func (r *Raw) Hash() uint64 { - if r == nil { - return cache.NewHash(FragmentType_Raw, nil) - } - return cache.NewHash(FragmentType_Raw, r.Value) -} - -// Compile returns the raw value. -func (r *Raw) Compile(*Template) (string, error) { - return r.Value, nil -} - -// String returns the raw value.
-func (r *Raw) String() string { - return r.Value -} - -var _ = Fragment(&Raw{}) diff --git a/vendor/github.com/upper/db/v4/internal/sqladapter/exql/returning.go b/vendor/github.com/upper/db/v4/internal/sqladapter/exql/returning.go deleted file mode 100644 index 6e28f0a5..00000000 --- a/vendor/github.com/upper/db/v4/internal/sqladapter/exql/returning.go +++ /dev/null @@ -1,41 +0,0 @@ -package exql - -import ( - "github.com/upper/db/v4/internal/cache" -) - -// Returning represents a RETURNING clause. -type Returning struct { - *Columns -} - -// Hash returns a unique identifier for the struct. -func (r *Returning) Hash() uint64 { - if r == nil { - return cache.NewHash(FragmentType_Returning, nil) - } - return cache.NewHash(FragmentType_Returning, r.Columns) -} - -var _ = Fragment(&Returning{}) - -// ReturningColumns creates and returns an array of Column. -func ReturningColumns(columns ...Fragment) *Returning { - return &Returning{Columns: &Columns{Columns: columns}} -} - -// Compile transforms the clause into its equivalent SQL representation. -func (r *Returning) Compile(layout *Template) (compiled string, err error) { - if z, ok := layout.Read(r); ok { - return z, nil - } - - compiled, err = r.Columns.Compile(layout) - if err != nil { - return "", err - } - - layout.Write(r, compiled) - - return -} diff --git a/vendor/github.com/upper/db/v4/internal/sqladapter/exql/statement.go b/vendor/github.com/upper/db/v4/internal/sqladapter/exql/statement.go deleted file mode 100644 index 9b9fd480..00000000 --- a/vendor/github.com/upper/db/v4/internal/sqladapter/exql/statement.go +++ /dev/null @@ -1,132 +0,0 @@ -package exql - -import ( - "errors" - "reflect" - "strings" - - "github.com/upper/db/v4/internal/cache" -) - -var errUnknownTemplateType = errors.New("Unknown template type") - -// Statement represents different kinds of SQL statements.
-type Statement struct { - Type - Table Fragment - Database Fragment - Columns Fragment - Values Fragment - Distinct bool - ColumnValues Fragment - OrderBy Fragment - GroupBy Fragment - Joins Fragment - Where Fragment - Returning Fragment - - Limit - Offset - - SQL string - - amendFn func(string) string -} - -func (layout *Template) doCompile(c Fragment) (string, error) { - if c != nil && !reflect.ValueOf(c).IsNil() { - return c.Compile(layout) - } - return "", nil -} - -// Hash returns a unique identifier for the struct. -func (s *Statement) Hash() uint64 { - if s == nil { - return cache.NewHash(FragmentType_Statement, nil) - } - return cache.NewHash( - FragmentType_Statement, - s.Type, - s.Table, - s.Database, - s.Columns, - s.Values, - s.Distinct, - s.ColumnValues, - s.OrderBy, - s.GroupBy, - s.Joins, - s.Where, - s.Returning, - s.Limit, - s.Offset, - s.SQL, - ) -} - -func (s *Statement) SetAmendment(amendFn func(string) string) { - s.amendFn = amendFn -} - -func (s *Statement) Amend(in string) string { - if s.amendFn == nil { - return in - } - return s.amendFn(in) -} - -func (s *Statement) template(layout *Template) (string, error) { - switch s.Type { - case Truncate: - return layout.TruncateLayout, nil - case DropTable: - return layout.DropTableLayout, nil - case DropDatabase: - return layout.DropDatabaseLayout, nil - case Count: - return layout.CountLayout, nil - case Select: - return layout.SelectLayout, nil - case Delete: - return layout.DeleteLayout, nil - case Update: - return layout.UpdateLayout, nil - case Insert: - return layout.InsertLayout, nil - default: - return "", errUnknownTemplateType - } -} - -// Compile transforms the Statement into an equivalent SQL query. -func (s *Statement) Compile(layout *Template) (compiled string, err error) { - if s.Type == SQL { - // No need to hit the cache. 
- return s.SQL, nil - } - - if z, ok := layout.Read(s); ok { - return s.Amend(z), nil - } - - tpl, err := s.template(layout) - if err != nil { - return "", err - } - - compiled = layout.MustCompile(tpl, s) - - compiled = strings.TrimSpace(compiled) - layout.Write(s, compiled) - - return s.Amend(compiled), nil -} - -// RawSQL creates a Statement from a raw SQL string. -func RawSQL(s string) *Statement { - return &Statement{ - Type: SQL, - SQL: s, - } -} diff --git a/vendor/github.com/upper/db/v4/internal/sqladapter/exql/table.go b/vendor/github.com/upper/db/v4/internal/sqladapter/exql/table.go deleted file mode 100644 index 8b5f9edc..00000000 --- a/vendor/github.com/upper/db/v4/internal/sqladapter/exql/table.go +++ /dev/null @@ -1,98 +0,0 @@ -package exql - -import ( - "strings" - - "github.com/upper/db/v4/internal/cache" -) - -type tableT struct { - Name string - Alias string -} - -// Table struct represents a SQL table. -type Table struct { - Name interface{} -} - -var _ = Fragment(&Table{}) - -func quotedTableName(layout *Template, input string) string { - input = trimString(input) - - // chunks := reAliasSeparator.Split(input, 2) - chunks := separateByAS(input) - - if len(chunks) == 1 { - // chunks = reSpaceSeparator.Split(input, 2) - chunks = separateBySpace(input) - } - - name := chunks[0] - - nameChunks := strings.SplitN(name, layout.ColumnSeparator, 2) - - for i := range nameChunks { - // nameChunks[i] = strings.TrimSpace(nameChunks[i]) - nameChunks[i] = trimString(nameChunks[i]) - nameChunks[i] = layout.MustCompile(layout.IdentifierQuote, Raw{Value: nameChunks[i]}) - } - - name = strings.Join(nameChunks, layout.ColumnSeparator) - - var alias string - - if len(chunks) > 1 { - // alias = strings.TrimSpace(chunks[1]) - alias = trimString(chunks[1]) - alias = layout.MustCompile(layout.IdentifierQuote, Raw{Value: alias}) - } - - return layout.MustCompile(layout.TableAliasLayout, tableT{name, alias}) -} - -// TableWithName creates and returns a Table with the given name.
-func TableWithName(name string) *Table { - return &Table{Name: name} -} - -// Hash returns a unique identifier for the struct. -func (t *Table) Hash() uint64 { - if t == nil { - return cache.NewHash(FragmentType_Table, nil) - } - return cache.NewHash(FragmentType_Table, t.Name) -} - -// Compile transforms a table struct into a SQL chunk. -func (t *Table) Compile(layout *Template) (compiled string, err error) { - - if z, ok := layout.Read(t); ok { - return z, nil - } - - switch value := t.Name.(type) { - case string: - if t.Name == "" { - return - } - - // Splitting tables by a comma - parts := separateByComma(value) - - l := len(parts) - - for i := 0; i < l; i++ { - parts[i] = quotedTableName(layout, parts[i]) - } - - compiled = strings.Join(parts, layout.IdentifierSeparator) - case Raw: - compiled = value.String() - } - - layout.Write(t, compiled) - - return -} diff --git a/vendor/github.com/upper/db/v4/internal/sqladapter/exql/template.go b/vendor/github.com/upper/db/v4/internal/sqladapter/exql/template.go deleted file mode 100644 index 9aef852a..00000000 --- a/vendor/github.com/upper/db/v4/internal/sqladapter/exql/template.go +++ /dev/null @@ -1,148 +0,0 @@ -package exql - -import ( - "bytes" - "reflect" - "sync" - "text/template" - - "github.com/upper/db/v4/internal/adapter" - "github.com/upper/db/v4/internal/cache" -) - -// Type is the type of SQL query the statement represents. -type Type uint8 - -// Values for Type. -const ( - NoOp Type = iota - - Truncate - DropTable - DropDatabase - Count - Insert - Select - Update - Delete - - SQL -) - -func (t Type) Hash() uint64 { - return cache.NewHash(FragmentType_StatementType, uint8(t)) -} - -type ( - // Limit represents the SQL limit in a query. - Limit int64 - // Offset represents the SQL offset in a query.
- Offset int64 -) - -func (t Limit) Hash() uint64 { - return cache.NewHash(FragmentType_Limit, uint64(t)) -} - -func (t Offset) Hash() uint64 { - return cache.NewHash(FragmentType_Offset, uint64(t)) -} - -// Template is an SQL template. -type Template struct { - AndKeyword string - AscKeyword string - AssignmentOperator string - ClauseGroup string - ClauseOperator string - ColumnAliasLayout string - ColumnSeparator string - ColumnValue string - CountLayout string - DeleteLayout string - DescKeyword string - DropDatabaseLayout string - DropTableLayout string - GroupByLayout string - IdentifierQuote string - IdentifierSeparator string - InsertLayout string - JoinLayout string - OnLayout string - OrKeyword string - OrderByLayout string - SelectLayout string - SortByColumnLayout string - TableAliasLayout string - TruncateLayout string - UpdateLayout string - UsingLayout string - ValueQuote string - ValueSeparator string - WhereLayout string - - ComparisonOperator map[adapter.ComparisonOperator]string - - templateMutex sync.RWMutex - templateMap map[string]*template.Template - - *cache.Cache -} - -func (layout *Template) MustCompile(templateText string, data interface{}) string { - var b bytes.Buffer - - v, ok := layout.getTemplate(templateText) - if !ok { - v = template. - Must(template.New(""). - Funcs(map[string]interface{}{ - "defined": func(in Fragment) bool { - if in == nil || reflect.ValueOf(in).IsNil() { - return false - } - if check, ok := in.(hasIsEmpty); ok { - if check.IsEmpty() { - return false - } - } - return true - }, - "compile": func(in Fragment) (string, error) { - s, err := layout.doCompile(in) - if err != nil { - return "", err - } - return s, nil - }, - }). 
- Parse(templateText)) - - layout.setTemplate(templateText, v) - } - - if err := v.Execute(&b, data); err != nil { - panic("There was an error compiling the following template:\n" + templateText + "\nError was: " + err.Error()) - } - - return b.String() -} - -func (t *Template) getTemplate(k string) (*template.Template, bool) { - t.templateMutex.RLock() - defer t.templateMutex.RUnlock() - - if t.templateMap == nil { - t.templateMap = make(map[string]*template.Template) - } - - v, ok := t.templateMap[k] - return v, ok -} - -func (t *Template) setTemplate(k string, v *template.Template) { - t.templateMutex.Lock() - defer t.templateMutex.Unlock() - - t.templateMap[k] = v -} diff --git a/vendor/github.com/upper/db/v4/internal/sqladapter/exql/types.go b/vendor/github.com/upper/db/v4/internal/sqladapter/exql/types.go deleted file mode 100644 index d6ecca96..00000000 --- a/vendor/github.com/upper/db/v4/internal/sqladapter/exql/types.go +++ /dev/null @@ -1,35 +0,0 @@ -package exql - -const ( - FragmentType_None uint64 = iota + 713910251627 - - FragmentType_And - FragmentType_Column - FragmentType_ColumnValue - FragmentType_ColumnValues - FragmentType_Columns - FragmentType_Database - FragmentType_GroupBy - FragmentType_Join - FragmentType_Joins - FragmentType_Nil - FragmentType_Or - FragmentType_Limit - FragmentType_Offset - FragmentType_OrderBy - FragmentType_Order - FragmentType_Raw - FragmentType_Returning - FragmentType_SortBy - FragmentType_SortColumn - FragmentType_SortColumns - FragmentType_Statement - FragmentType_StatementType - FragmentType_Table - FragmentType_Value - FragmentType_On - FragmentType_Using - FragmentType_ValueGroups - FragmentType_Values - FragmentType_Where -) diff --git a/vendor/github.com/upper/db/v4/internal/sqladapter/exql/utilities.go b/vendor/github.com/upper/db/v4/internal/sqladapter/exql/utilities.go deleted file mode 100644 index 972ebb47..00000000 --- a/vendor/github.com/upper/db/v4/internal/sqladapter/exql/utilities.go +++ /dev/null @@ 
-1,151 +0,0 @@ -package exql - -import ( - "strings" -) - -// isBlankSymbol returns true if the given byte is either space, tab, carriage -// return or newline. -func isBlankSymbol(in byte) bool { - return in == ' ' || in == '\t' || in == '\r' || in == '\n' -} - -// trimString returns a slice of s with leading and trailing blank symbols -// (as defined by isBlankSymbol) removed. -func trimString(s string) string { - - // This conversion is rather slow. - // return string(trimBytes([]byte(s))) - - start, end := 0, len(s)-1 - - if end < start { - return "" - } - - for isBlankSymbol(s[start]) { - start++ - if start >= end { - return "" - } - } - - for isBlankSymbol(s[end]) { - end-- - } - - return s[start : end+1] -} - -// trimBytes returns a slice of s with leading and trailing blank symbols (as -// defined by isBlankSymbol) removed. -func trimBytes(s []byte) []byte { - - start, end := 0, len(s)-1 - - if end < start { - return []byte{} - } - - for isBlankSymbol(s[start]) { - start++ - if start >= end { - return []byte{} - } - } - - for isBlankSymbol(s[end]) { - end-- - } - - return s[start : end+1] -} - -/* -// Separates by a comma, ignoring spaces too. -// This was slower than strings.Split. -func separateByComma(in string) (out []string) { - - out = []string{} - - start, lim := 0, len(in)-1 - - for start < lim { - var end int - - for end = start; end <= lim; end++ { - // Is a comma? - if in[end] == ',' { - break - } - } - - out = append(out, trimString(in[start:end])) - - start = end + 1 - } - - return -} -*/ - -// Separates by a comma, ignoring spaces too. -func separateByComma(in string) (out []string) { - out = strings.Split(in, ",") - for i := range out { - out[i] = trimString(out[i]) - } - return -} - -// Separates by spaces, ignoring extra spaces.
-func separateBySpace(in string) (out []string) { - if len(in) == 0 { - return []string{""} - } - - pre := strings.Split(in, " ") - out = make([]string, 0, len(pre)) - - for i := range pre { - pre[i] = trimString(pre[i]) - if pre[i] != "" { - out = append(out, pre[i]) - } - } - - return -} - -func separateByAS(in string) (out []string) { - out = []string{} - - if len(in) < 6 { - // The minimum expression with the AS keyword is "x AS y", 6 chars. - return []string{in} - } - - start, lim := 0, len(in)-1 - - for start <= lim { - var end int - - for end = start; end <= lim; end++ { - if end > 3 && isBlankSymbol(in[end]) && isBlankSymbol(in[end-3]) { - if (in[end-1] == 's' || in[end-1] == 'S') && (in[end-2] == 'a' || in[end-2] == 'A') { - break - } - } - } - - if end < lim { - out = append(out, trimString(in[start:end-3])) - } else { - out = append(out, trimString(in[start:end])) - } - - start = end + 1 - } - - return -} diff --git a/vendor/github.com/upper/db/v4/internal/sqladapter/exql/value.go b/vendor/github.com/upper/db/v4/internal/sqladapter/exql/value.go deleted file mode 100644 index 6c628287..00000000 --- a/vendor/github.com/upper/db/v4/internal/sqladapter/exql/value.go +++ /dev/null @@ -1,166 +0,0 @@ -package exql - -import ( - "strings" - - "github.com/upper/db/v4/internal/cache" -) - -// ValueGroups represents an array of value groups. -type ValueGroups struct { - Values []*Values -} - -func (vg *ValueGroups) IsEmpty() bool { - if vg == nil || len(vg.Values) < 1 { - return true - } - for i := range vg.Values { - if !vg.Values[i].IsEmpty() { - return false - } - } - return true -} - -var _ = Fragment(&ValueGroups{}) - -// Values represents an array of Value. -type Values struct { - Values []Fragment -} - -func (vs *Values) IsEmpty() bool { - if vs == nil || len(vs.Values) < 1 { - return true - } - return false -} - -// NewValueGroup creates and returns an array of values. 
-func NewValueGroup(v ...Fragment) *Values { - return &Values{Values: v} -} - -var _ = Fragment(&Values{}) - -// Value represents an escaped SQL value. -type Value struct { - V interface{} -} - -var _ = Fragment(&Value{}) - -// NewValue creates and returns a Value. -func NewValue(v interface{}) *Value { - return &Value{V: v} -} - -// Hash returns a unique identifier for the struct. -func (v *Value) Hash() uint64 { - if v == nil { - return cache.NewHash(FragmentType_Value, nil) - } - return cache.NewHash(FragmentType_Value, v.V) -} - -// Compile transforms the Value into an equivalent SQL representation. -func (v *Value) Compile(layout *Template) (compiled string, err error) { - if z, ok := layout.Read(v); ok { - return z, nil - } - - switch value := v.V.(type) { - case compilable: - compiled, err = value.Compile(layout) - if err != nil { - return "", err - } - default: - value, err := NewRawValue(v.V) - if err != nil { - return "", err - } - compiled = layout.MustCompile( - layout.ValueQuote, - value, - ) - } - - layout.Write(v, compiled) - return -} - -// Hash returns a unique identifier for the struct. -func (vs *Values) Hash() uint64 { - if vs == nil { - return cache.NewHash(FragmentType_Values, nil) - } - h := cache.InitHash(FragmentType_Values) - for i := range vs.Values { - h = cache.AddToHash(h, vs.Values[i]) - } - return h -} - -// Compile transforms the Values into an equivalent SQL representation. -func (vs *Values) Compile(layout *Template) (compiled string, err error) { - if c, ok := layout.Read(vs); ok { - return c, nil - } - - l := len(vs.Values) - if l > 0 { - chunks := make([]string, 0, l) - for i := 0; i < l; i++ { - chunk, err := vs.Values[i].Compile(layout) - if err != nil { - return "", err - } - chunks = append(chunks, chunk) - } - compiled = layout.MustCompile(layout.ClauseGroup, strings.Join(chunks, layout.ValueSeparator)) - } - layout.Write(vs, compiled) - return -} - -// Hash returns a unique identifier for the struct. 
-func (vg *ValueGroups) Hash() uint64 { - if vg == nil { - return cache.NewHash(FragmentType_ValueGroups, nil) - } - h := cache.InitHash(FragmentType_ValueGroups) - for i := range vg.Values { - h = cache.AddToHash(h, vg.Values[i]) - } - return h -} - -// Compile transforms the ValueGroups into an equivalent SQL representation. -func (vg *ValueGroups) Compile(layout *Template) (compiled string, err error) { - if c, ok := layout.Read(vg); ok { - return c, nil - } - - l := len(vg.Values) - if l > 0 { - chunks := make([]string, 0, l) - for i := 0; i < l; i++ { - chunk, err := vg.Values[i].Compile(layout) - if err != nil { - return "", err - } - chunks = append(chunks, chunk) - } - compiled = strings.Join(chunks, layout.ValueSeparator) - } - - layout.Write(vg, compiled) - return -} - -// JoinValueGroups creates a new *ValueGroups object. -func JoinValueGroups(values ...*Values) *ValueGroups { - return &ValueGroups{Values: values} -} diff --git a/vendor/github.com/upper/db/v4/internal/sqladapter/exql/where.go b/vendor/github.com/upper/db/v4/internal/sqladapter/exql/where.go deleted file mode 100644 index 37b51d6f..00000000 --- a/vendor/github.com/upper/db/v4/internal/sqladapter/exql/where.go +++ /dev/null @@ -1,149 +0,0 @@ -package exql - -import ( - "strings" - - "github.com/upper/db/v4/internal/cache" -) - -// Or represents an SQL OR operator. -type Or Where - -// And represents an SQL AND operator. -type And Where - -// Where represents an SQL WHERE clause. -type Where struct { - Conditions []Fragment -} - -var _ = Fragment(&Where{}) - -type conds struct { - Conds string -} - -// WhereConditions creates and returns a new Where. -func WhereConditions(conditions ...Fragment) *Where { - return &Where{Conditions: conditions} -} - -// JoinWithOr creates and returns a new Or. -func JoinWithOr(conditions ...Fragment) *Or { - return &Or{Conditions: conditions} -} - -// JoinWithAnd creates and returns a new And.
-func JoinWithAnd(conditions ...Fragment) *And { - return &And{Conditions: conditions} -} - -// Hash returns a unique identifier for the struct. -func (w *Where) Hash() uint64 { - if w == nil { - return cache.NewHash(FragmentType_Where, nil) - } - h := cache.InitHash(FragmentType_Where) - for i := range w.Conditions { - h = cache.AddToHash(h, w.Conditions[i]) - } - return h -} - -// Append adds the conditions to the ones that already exist. -func (w *Where) Append(a *Where) *Where { - if a != nil { - w.Conditions = append(w.Conditions, a.Conditions...) - } - return w -} - -// Hash returns a unique identifier. -func (o *Or) Hash() uint64 { - if o == nil { - return cache.NewHash(FragmentType_Or, nil) - } - return cache.NewHash(FragmentType_Or, (*Where)(o)) -} - -// Hash returns a unique identifier. -func (a *And) Hash() uint64 { - if a == nil { - return cache.NewHash(FragmentType_And, nil) - } - return cache.NewHash(FragmentType_And, (*Where)(a)) -} - -// Compile transforms the Or into an equivalent SQL representation. -func (o *Or) Compile(layout *Template) (compiled string, err error) { - if z, ok := layout.Read(o); ok { - return z, nil - } - - compiled, err = groupCondition(layout, o.Conditions, layout.MustCompile(layout.ClauseOperator, layout.OrKeyword)) - if err != nil { - return "", err - } - - layout.Write(o, compiled) - - return -} - -// Compile transforms the And into an equivalent SQL representation. -func (a *And) Compile(layout *Template) (compiled string, err error) { - if c, ok := layout.Read(a); ok { - return c, nil - } - - compiled, err = groupCondition(layout, a.Conditions, layout.MustCompile(layout.ClauseOperator, layout.AndKeyword)) - if err != nil { - return "", err - } - - layout.Write(a, compiled) - - return -} - -// Compile transforms the Where into an equivalent SQL representation.
-func (w *Where) Compile(layout *Template) (compiled string, err error) { - if c, ok := layout.Read(w); ok { - return c, nil - } - - grouped, err := groupCondition(layout, w.Conditions, layout.MustCompile(layout.ClauseOperator, layout.AndKeyword)) - if err != nil { - return "", err - } - - if grouped != "" { - compiled = layout.MustCompile(layout.WhereLayout, conds{grouped}) - } - - layout.Write(w, compiled) - - return -} - -func groupCondition(layout *Template, terms []Fragment, joinKeyword string) (string, error) { - l := len(terms) - - chunks := make([]string, 0, l) - - if l > 0 { - for i := 0; i < l; i++ { - chunk, err := terms[i].Compile(layout) - if err != nil { - return "", err - } - chunks = append(chunks, chunk) - } - } - - if len(chunks) > 0 { - return layout.MustCompile(layout.ClauseGroup, strings.Join(chunks, joinKeyword)), nil - } - - return "", nil -} diff --git a/vendor/github.com/upper/db/v4/internal/sqladapter/hash.go b/vendor/github.com/upper/db/v4/internal/sqladapter/hash.go deleted file mode 100644 index 4d754914..00000000 --- a/vendor/github.com/upper/db/v4/internal/sqladapter/hash.go +++ /dev/null @@ -1,8 +0,0 @@ -package sqladapter - -const ( - hashTypeNone = iota + 345065139389 - - hashTypeCollection - hashTypePrimaryKeys -) diff --git a/vendor/github.com/upper/db/v4/internal/sqladapter/record.go b/vendor/github.com/upper/db/v4/internal/sqladapter/record.go deleted file mode 100644 index 189e94af..00000000 --- a/vendor/github.com/upper/db/v4/internal/sqladapter/record.go +++ /dev/null @@ -1,122 +0,0 @@ -package sqladapter - -import ( - "reflect" - - db "github.com/upper/db/v4" - "github.com/upper/db/v4/internal/sqlbuilder" -) - -func recordID(store db.Store, record db.Record) (db.Cond, error) { - if record == nil { - return nil, db.ErrNilRecord - } - - if hasConstraints, ok := record.(db.HasConstraints); ok { - return hasConstraints.Constraints(), nil - } - - id := db.Cond{} - - keys, fields, err := recordPrimaryKeyFieldValues(store, record) 
- if err != nil { - return nil, err - } - for i := range fields { - if fields[i] == reflect.Zero(reflect.TypeOf(fields[i])).Interface() { - return nil, db.ErrRecordIDIsZero - } - id[keys[i]] = fields[i] - } - if len(id) < 1 { - return nil, db.ErrRecordIDIsZero - } - - return id, nil -} - -func recordPrimaryKeyFieldValues(store db.Store, record db.Record) ([]string, []interface{}, error) { - sess := store.Session() - - pKeys, err := sess.(Session).PrimaryKeys(store.Name()) - if err != nil { - return nil, nil, err - } - - fields := sqlbuilder.Mapper.FieldsByName(reflect.ValueOf(record), pKeys) - - values := make([]interface{}, 0, len(fields)) - for i := range fields { - if fields[i].IsValid() { - values = append(values, fields[i].Interface()) - } - } - - return pKeys, values, nil -} - -func recordCreate(store db.Store, record db.Record) error { - sess := store.Session() - - if validator, ok := record.(db.Validator); ok { - if err := validator.Validate(); err != nil { - return err - } - } - - if hook, ok := record.(db.BeforeCreateHook); ok { - if err := hook.BeforeCreate(sess); err != nil { - return err - } - } - - if creator, ok := store.(db.StoreCreator); ok { - if err := creator.Create(record); err != nil { - return err - } - } else { - if err := store.InsertReturning(record); err != nil { - return err - } - } - - if hook, ok := record.(db.AfterCreateHook); ok { - if err := hook.AfterCreate(sess); err != nil { - return err - } - } - return nil -} - -func recordUpdate(store db.Store, record db.Record) error { - sess := store.Session() - - if validator, ok := record.(db.Validator); ok { - if err := validator.Validate(); err != nil { - return err - } - } - - if hook, ok := record.(db.BeforeUpdateHook); ok { - if err := hook.BeforeUpdate(sess); err != nil { - return err - } - } - - if updater, ok := store.(db.StoreUpdater); ok { - if err := updater.Update(record); err != nil { - return err - } - } else { - if err := record.Store(sess).UpdateReturning(record); err != 
nil { - return err - } - } - - if hook, ok := record.(db.AfterUpdateHook); ok { - if err := hook.AfterUpdate(sess); err != nil { - return err - } - } - return nil -} diff --git a/vendor/github.com/upper/db/v4/internal/sqladapter/result.go b/vendor/github.com/upper/db/v4/internal/sqladapter/result.go deleted file mode 100644 index 3a7e9392..00000000 --- a/vendor/github.com/upper/db/v4/internal/sqladapter/result.go +++ /dev/null @@ -1,519 +0,0 @@ -// Copyright (c) 2012-present The upper.io/db authors. All rights reserved. -// -// Permission is hereby granted, free of charge, to any person obtaining -// a copy of this software and associated documentation files (the -// "Software"), to deal in the Software without restriction, including -// without limitation the rights to use, copy, modify, merge, publish, -// distribute, sublicense, and/or sell copies of the Software, and to -// permit persons to whom the Software is furnished to do so, subject to -// the following conditions: -// -// The above copyright notice and this permission notice shall be -// included in all copies or substantial portions of the Software. -// -// THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, -// EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF -// MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND -// NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE -// LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION -// OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION -// WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE. - -package sqladapter - -import ( - "errors" - "sync" - "sync/atomic" - - db "github.com/upper/db/v4" - "github.com/upper/db/v4/internal/immutable" -) - -type Result struct { - builder db.SQL - - err atomic.Value - - iter db.Iterator - iterMu sync.Mutex - - prev *Result - fn func(*result) error -} - -// result represents a delimited set of items bound by a condition. 
-type result struct { - table string - limit int - offset int - - pageSize uint - pageNumber uint - - cursorColumn string - nextPageCursorValue interface{} - prevPageCursorValue interface{} - - fields []interface{} - orderBy []interface{} - groupBy []interface{} - conds [][]interface{} -} - -func filter(conds []interface{}) []interface{} { - return conds -} - -// NewResult creates and returns a new Result set on the given table, this set -is limited by the given exql.Where conditions. -func NewResult(builder db.SQL, table string, conds []interface{}) *Result { - r := &Result{ - builder: builder, - } - return r.from(table).where(conds) -} - -func (r *Result) frame(fn func(*result) error) *Result { - return &Result{err: r.err, prev: r, fn: fn} -} - -func (r *Result) SQL() db.SQL { - if r.prev == nil { - return r.builder - } - return r.prev.SQL() -} - -func (r *Result) from(table string) *Result { - return r.frame(func(res *result) error { - res.table = table - return nil - }) -} - -func (r *Result) where(conds []interface{}) *Result { - return r.frame(func(res *result) error { - res.conds = [][]interface{}{conds} - return nil - }) -} - -func (r *Result) setErr(err error) { - if err == nil { - return - } - r.err.Store(err) -} - -// Err returns the last error that has happened with the result set, -nil otherwise -func (r *Result) Err() error { - if errV := r.err.Load(); errV != nil { - return errV.(error) - } - return nil -} - -// Where sets conditions for the result set. -func (r *Result) Where(conds ...interface{}) db.Result { - return r.where(conds) -} - -// And adds more conditions on top of the existing ones. -func (r *Result) And(conds ...interface{}) db.Result { - return r.frame(func(res *result) error { - res.conds = append(res.conds, conds) - return nil - }) -} - -// Limit determines the maximum number of Results to be returned.
-func (r *Result) Limit(n int) db.Result { - return r.frame(func(res *result) error { - res.limit = n - return nil - }) -} - -func (r *Result) Paginate(pageSize uint) db.Result { - return r.frame(func(res *result) error { - res.pageSize = pageSize - return nil - }) -} - -func (r *Result) Page(pageNumber uint) db.Result { - return r.frame(func(res *result) error { - res.pageNumber = pageNumber - res.nextPageCursorValue = nil - res.prevPageCursorValue = nil - return nil - }) -} - -func (r *Result) Cursor(cursorColumn string) db.Result { - return r.frame(func(res *result) error { - res.cursorColumn = cursorColumn - return nil - }) -} - -func (r *Result) NextPage(cursorValue interface{}) db.Result { - return r.frame(func(res *result) error { - res.nextPageCursorValue = cursorValue - res.prevPageCursorValue = nil - return nil - }) -} - -func (r *Result) PrevPage(cursorValue interface{}) db.Result { - return r.frame(func(res *result) error { - res.nextPageCursorValue = nil - res.prevPageCursorValue = cursorValue - return nil - }) -} - -// Offset determines how many documents will be skipped before starting to grab -// Results. -func (r *Result) Offset(n int) db.Result { - return r.frame(func(res *result) error { - res.offset = n - return nil - }) -} - -// GroupBy is used to group Results that have the same value in the same column -// or columns. -func (r *Result) GroupBy(fields ...interface{}) db.Result { - return r.frame(func(res *result) error { - res.groupBy = fields - return nil - }) -} - -// OrderBy determines sorting of Results according to the provided names. Fields -// may be prefixed by - (minus) which means descending order, ascending order -// would be used otherwise. -func (r *Result) OrderBy(fields ...interface{}) db.Result { - return r.frame(func(res *result) error { - res.orderBy = fields - return nil - }) -} - -// Select determines which fields to return. 
-func (r *Result) Select(fields ...interface{}) db.Result { - return r.frame(func(res *result) error { - res.fields = fields - return nil - }) -} - -// String satisfies fmt.Stringer -func (r *Result) String() string { - query, err := r.Paginator() - if err != nil { - panic(err.Error()) - } - return query.String() -} - -// All dumps all Results into a pointer to a slice of structs or maps. -func (r *Result) All(dst interface{}) error { - query, err := r.Paginator() - if err != nil { - r.setErr(err) - return err - } - err = query.Iterator().All(dst) - r.setErr(err) - return err -} - -// One fetches only one Result from the set. -func (r *Result) One(dst interface{}) error { - one := r.Limit(1).(*Result) - query, err := one.Paginator() - if err != nil { - r.setErr(err) - return err - } - - err = query.Iterator().One(dst) - r.setErr(err) - return err -} - -// Next fetches the next Result from the set. -func (r *Result) Next(dst interface{}) bool { - r.iterMu.Lock() - defer r.iterMu.Unlock() - - if r.iter == nil { - query, err := r.Paginator() - if err != nil { - r.setErr(err) - return false - } - r.iter = query.Iterator() - } - - if r.iter.Next(dst) { - return true - } - - if err := r.iter.Err(); !errors.Is(err, db.ErrNoMoreRows) { - r.setErr(err) - return false - } - - return false -} - -// Delete deletes all matching items from the collection. -func (r *Result) Delete() error { - query, err := r.buildDelete() - if err != nil { - r.setErr(err) - return err - } - - _, err = query.Exec() - r.setErr(err) - return err -} - -// Close closes the Result set. -func (r *Result) Close() error { - if r.iter != nil { - err := r.iter.Close() - r.setErr(err) - return err - } - return nil -} - -// Update updates matching items from the collection with values of the given -map or struct.
-func (r *Result) Update(values interface{}) error { - query, err := r.buildUpdate(values) - if err != nil { - r.setErr(err) - return err - } - - _, err = query.Exec() - r.setErr(err) - return err -} - -func (r *Result) TotalPages() (uint, error) { - query, err := r.Paginator() - if err != nil { - r.setErr(err) - return 0, err - } - - total, err := query.TotalPages() - if err != nil { - r.setErr(err) - return 0, err - } - - return total, nil -} - -func (r *Result) TotalEntries() (uint64, error) { - query, err := r.Paginator() - if err != nil { - r.setErr(err) - return 0, err - } - - total, err := query.TotalEntries() - if err != nil { - r.setErr(err) - return 0, err - } - - return total, nil -} - -// Exists returns true if at least one item on the collection exists. -func (r *Result) Exists() (bool, error) { - query, err := r.buildCount() - if err != nil { - r.setErr(err) - return false, err - } - - query = query.Limit(1) - - value := struct { - Exists uint64 `db:"_t"` - }{} - - if err := query.One(&value); err != nil { - if errors.Is(err, db.ErrNoMoreRows) { - return false, nil - } - r.setErr(err) - return false, err - } - - if value.Exists > 0 { - return true, nil - } - - return false, nil -} - -// Count counts the elements on the set. -func (r *Result) Count() (uint64, error) { - query, err := r.buildCount() - if err != nil { - r.setErr(err) - return 0, err - } - - counter := struct { - Count uint64 `db:"_t"` - }{} - if err := query.One(&counter); err != nil { - if errors.Is(err, db.ErrNoMoreRows) { - return 0, nil - } - r.setErr(err) - return 0, err - } - - return counter.Count, nil -} - -func (r *Result) Paginator() (db.Paginator, error) { - if err := r.Err(); err != nil { - return nil, err - } - - res, err := r.fastForward() - if err != nil { - return nil, err - } - - sel := r.SQL().Select(res.fields...). - From(res.table). - Limit(res.limit). - Offset(res.offset). - GroupBy(res.groupBy...). - OrderBy(res.orderBy...) 
- - for i := range res.conds { - sel = sel.And(filter(res.conds[i])...) - } - - pag := sel.Paginate(res.pageSize). - Page(res.pageNumber). - Cursor(res.cursorColumn) - - if res.nextPageCursorValue != nil { - pag = pag.NextPage(res.nextPageCursorValue) - } - - if res.prevPageCursorValue != nil { - pag = pag.PrevPage(res.prevPageCursorValue) - } - - return pag, nil -} - -func (r *Result) buildDelete() (db.Deleter, error) { - if err := r.Err(); err != nil { - return nil, err - } - - res, err := r.fastForward() - if err != nil { - return nil, err - } - - del := r.SQL().DeleteFrom(res.table). - Limit(res.limit) - - for i := range res.conds { - del = del.And(filter(res.conds[i])...) - } - - return del, nil -} - -func (r *Result) buildUpdate(values interface{}) (db.Updater, error) { - if err := r.Err(); err != nil { - return nil, err - } - - res, err := r.fastForward() - if err != nil { - return nil, err - } - - upd := r.SQL().Update(res.table). - Set(values). - Limit(res.limit) - - for i := range res.conds { - upd = upd.And(filter(res.conds[i])...) - } - - return upd, nil -} - -func (r *Result) buildCount() (db.Selector, error) { - if err := r.Err(); err != nil { - return nil, err - } - - res, err := r.fastForward() - if err != nil { - return nil, err - } - - sel := r.SQL().Select(db.Raw("count(1) AS _t")). - From(res.table). - GroupBy(res.groupBy...) - - for i := range res.conds { - sel = sel.And(filter(res.conds[i])...) 
- } - - return sel, nil -} - -func (r *Result) Prev() immutable.Immutable { - if r == nil { - return nil - } - return r.prev -} - -func (r *Result) Fn(in interface{}) error { - if r.fn == nil { - return nil - } - return r.fn(in.(*result)) -} - -func (r *Result) Base() interface{} { - return &result{} -} - -func (r *Result) fastForward() (*result, error) { - ff, err := immutable.FastForward(r) - if err != nil { - return nil, err - } - return ff.(*result), nil -} - -var _ = immutable.Immutable(&Result{}) diff --git a/vendor/github.com/upper/db/v4/internal/sqladapter/session.go b/vendor/github.com/upper/db/v4/internal/sqladapter/session.go deleted file mode 100644 index 0978205a..00000000 --- a/vendor/github.com/upper/db/v4/internal/sqladapter/session.go +++ /dev/null @@ -1,1106 +0,0 @@ -package sqladapter - -import ( - "bytes" - "context" - "database/sql" - "database/sql/driver" - "errors" - "fmt" - "math" - "reflect" - "strconv" - "sync" - "sync/atomic" - "time" - - db "github.com/upper/db/v4" - "github.com/upper/db/v4/internal/cache" - "github.com/upper/db/v4/internal/sqladapter/compat" - "github.com/upper/db/v4/internal/sqladapter/exql" - "github.com/upper/db/v4/internal/sqlbuilder" -) - -var ( - lastSessID uint64 - lastTxID uint64 -) - -var ( - slowQueryThreshold = time.Millisecond * 200 - retryTransactionWaitTime = time.Millisecond * 10 - retryTransactionMaxWaitTime = time.Second * 1 -) - -// hasCleanUp is implemented by structs that have a clean up routine that needs -// to be called before Close(). -type hasCleanUp interface { - CleanUp() error -} - -// statementExecer allows the adapter to have its own exec statement. -type statementExecer interface { - StatementExec(sess Session, ctx context.Context, query string, args ...interface{}) (sql.Result, error) -} - -// statementCompiler transforms an internal statement into a format -// database/sql can understand. 
-type statementCompiler interface { - CompileStatement(sess Session, stmt *exql.Statement, args []interface{}) (string, []interface{}, error) -} - -// sessValueConverter converts values before being passed to the underlying driver. -type sessValueConverter interface { - ConvertValue(in interface{}) interface{} -} - -// sessValueConverterContext converts values before being passed to the underlying driver. -type sessValueConverterContext interface { - ConvertValueContext(ctx context.Context, in interface{}) interface{} -} - -// valueConverter converts values before being passed to the underlying driver. -type valueConverter interface { - ConvertValue(in interface{}) interface { - sql.Scanner - driver.Valuer - } -} - -// errorConverter converts an error value from the underlying driver into -// something different. -type errorConverter interface { - Err(errIn error) (errOut error) -} - -// AdapterSession defines methods to be implemented by SQL adapters. -type AdapterSession interface { - Template() *exql.Template - - NewCollection() CollectionAdapter - - // Open opens a new connection - OpenDSN(sess Session, dsn string) (*sql.DB, error) - - // Collections returns a list of non-system tables from the database. - Collections(sess Session) ([]string, error) - - // TableExists returns an error if the given table does not exist. - TableExists(sess Session, name string) error - - // LookupName returns the name of the database. - LookupName(sess Session) (string, error) - - // PrimaryKeys returns all primary keys on the table. - PrimaryKeys(sess Session, name string) ([]string, error) -} - -// Session satisfies db.Session. -type Session interface { - SQL() db.SQL - - // PrimaryKeys returns all primary keys on the table. - PrimaryKeys(tableName string) ([]string, error) - - // Collections returns a list of references to all collections in the - // database. - Collections() ([]db.Collection, error) - - // Name returns the name of the database. 
- Name() string - - // Close closes the database session - Close() error - - // Ping checks if the database server is reachable. - Ping() error - - // Reset clears all caches the session is using - Reset() - - // Collection returns a new collection. - Collection(string) db.Collection - - // ConnectionURL returns the ConnectionURL that was used to create the - // Session. - ConnectionURL() db.ConnectionURL - - // Open attempts to establish a connection to the database server. - Open() error - - // TableExists returns an error if the table doesn't exist. - TableExists(name string) error - - // Driver returns the underlying driver the session is using - Driver() interface{} - - Save(db.Record) error - - Get(db.Record, interface{}) error - - Delete(db.Record) error - - // WaitForConnection attempts to run the given connection function a fixed - // number of times before failing. - WaitForConnection(func() error) error - - // BindDB sets the *sql.DB the session will use. - BindDB(*sql.DB) error - - // DB returns the *sql.DB the session is using. - DB() *sql.DB - - // BindTx binds a transaction to the current session. - BindTx(context.Context, *sql.Tx) error - - // Transaction returns the current transaction the session is using. - Transaction() *sql.Tx - - // NewClone clones the database using the given AdapterSession as base. - NewClone(AdapterSession, bool) (Session, error) - - // Context returns the default context the session is using. - Context() context.Context - - // SetContext sets the default context for the session. - SetContext(context.Context) - - NewTransaction(ctx context.Context, opts *sql.TxOptions) (Session, error) - - Tx(fn func(sess db.Session) error) error - - TxContext(ctx context.Context, fn func(sess db.Session) error, opts *sql.TxOptions) error - - WithContext(context.Context) db.Session - - IsTransaction() bool - - Commit() error - - Rollback() error - - db.Settings -} - -// NewTx wraps a *sql.Tx and returns a Tx.
-func NewTx(adapter AdapterSession, tx *sql.Tx) (Session, error) { - sessTx := &sessionWithContext{ - session: &session{ - Settings: db.DefaultSettings, - - sqlTx: tx, - adapter: adapter, - cachedPKs: cache.NewCache(), - cachedCollections: cache.NewCache(), - cachedStatements: cache.NewCache(), - }, - ctx: context.Background(), - } - return sessTx, nil -} - -// NewSession creates a new Session. -func NewSession(connURL db.ConnectionURL, adapter AdapterSession) Session { - sess := &sessionWithContext{ - session: &session{ - Settings: db.DefaultSettings, - - connURL: connURL, - adapter: adapter, - cachedPKs: cache.NewCache(), - cachedCollections: cache.NewCache(), - cachedStatements: cache.NewCache(), - }, - ctx: context.Background(), - } - return sess -} - -type session struct { - db.Settings - - adapter AdapterSession - - connURL db.ConnectionURL - - builder db.SQL - - lookupNameOnce sync.Once - name string - - mu sync.Mutex // guards ctx, txOptions - txOptions *sql.TxOptions - - sqlDBMu sync.Mutex // guards sess, baseTx - - sqlDB *sql.DB - sqlTx *sql.Tx - - sessID uint64 - txID uint64 - - cacheMu sync.Mutex // guards cachedStatements and cachedCollections - cachedPKs *cache.Cache - cachedStatements *cache.Cache - cachedCollections *cache.Cache - - template *exql.Template -} - -type sessionWithContext struct { - *session - - ctx context.Context -} - -func (sess *sessionWithContext) WithContext(ctx context.Context) db.Session { - if ctx == nil { - panic("nil context") - } - newSess := &sessionWithContext{ - session: sess.session, - ctx: ctx, - } - return newSess -} - -func (sess *sessionWithContext) Tx(fn func(sess db.Session) error) error { - return TxContext(sess.Context(), sess, fn, nil) -} - -func (sess *sessionWithContext) TxContext(ctx context.Context, fn func(sess db.Session) error, opts *sql.TxOptions) error { - return TxContext(ctx, sess, fn, opts) -} - -func (sess *sessionWithContext) SQL() db.SQL { - return sqlbuilder.WithSession( - sess, - 
sess.adapter.Template(), - ) -} - -func (sess *sessionWithContext) Err(errIn error) (errOur error) { - if convertError, ok := sess.adapter.(errorConverter); ok { - return convertError.Err(errIn) - } - return errIn -} - -func (sess *sessionWithContext) PrimaryKeys(tableName string) ([]string, error) { - h := cache.NewHashable(hashTypePrimaryKeys, tableName) - - cachedPK, ok := sess.cachedPKs.ReadRaw(h) - if ok { - return cachedPK.([]string), nil - } - - pk, err := sess.adapter.PrimaryKeys(sess, tableName) - if err != nil { - return nil, err - } - - sess.cachedPKs.Write(h, pk) - return pk, nil -} - -func (sess *sessionWithContext) TableExists(name string) error { - return sess.adapter.TableExists(sess, name) -} - -func (sess *sessionWithContext) NewTransaction(ctx context.Context, opts *sql.TxOptions) (Session, error) { - if ctx == nil { - ctx = context.Background() - } - clone, err := sess.NewClone(sess.adapter, false) - if err != nil { - return nil, err - } - - connFn := func() error { - sqlTx, err := compat.BeginTx(clone.DB(), clone.Context(), opts) - if err == nil { - return clone.BindTx(ctx, sqlTx) - } - return err - } - - if err := clone.WaitForConnection(connFn); err != nil { - return nil, err - } - - return clone, nil -} - -func (sess *sessionWithContext) Collections() ([]db.Collection, error) { - names, err := sess.adapter.Collections(sess) - if err != nil { - return nil, err - } - - collections := make([]db.Collection, 0, len(names)) - for i := range names { - collections = append(collections, sess.Collection(names[i])) - } - - return collections, nil -} - -func (sess *sessionWithContext) ConnectionURL() db.ConnectionURL { - return sess.connURL -} - -func (sess *sessionWithContext) Open() error { - var sqlDB *sql.DB - var err error - - connFn := func() error { - sqlDB, err = sess.adapter.OpenDSN(sess, sess.connURL.String()) - if err != nil { - return err - } - - sqlDB.SetConnMaxLifetime(sess.ConnMaxLifetime()) - 
sqlDB.SetConnMaxIdleTime(sess.ConnMaxIdleTime()) - sqlDB.SetMaxIdleConns(sess.MaxIdleConns()) - sqlDB.SetMaxOpenConns(sess.MaxOpenConns()) - return nil - } - - if err := sess.WaitForConnection(connFn); err != nil { - return err - } - - return sess.BindDB(sqlDB) -} - -func (sess *sessionWithContext) Get(record db.Record, id interface{}) error { - store := record.Store(sess) - if getter, ok := store.(db.StoreGetter); ok { - return getter.Get(record, id) - } - return store.Find(id).One(record) -} - -func (sess *sessionWithContext) Save(record db.Record) error { - if record == nil { - return db.ErrNilRecord - } - - if reflect.TypeOf(record).Kind() != reflect.Ptr { - return db.ErrExpectingPointerToStruct - } - - store := record.Store(sess) - if saver, ok := store.(db.StoreSaver); ok { - return saver.Save(record) - } - - id := db.Cond{} - keys, values, err := recordPrimaryKeyFieldValues(store, record) - if err != nil { - return err - } - for i := range values { - if values[i] != reflect.Zero(reflect.TypeOf(values[i])).Interface() { - id[keys[i]] = values[i] - } - } - - if len(id) > 0 && len(id) == len(values) { - // check if record exists before updating it - exists, _ := store.Find(id).Count() - if exists > 0 { - return recordUpdate(store, record) - } - } - - return recordCreate(store, record) -} - -func (sess *sessionWithContext) Delete(record db.Record) error { - if record == nil { - return db.ErrNilRecord - } - - if reflect.TypeOf(record).Kind() != reflect.Ptr { - return db.ErrExpectingPointerToStruct - } - - store := record.Store(sess) - - if hook, ok := record.(db.BeforeDeleteHook); ok { - if err := hook.BeforeDelete(sess); err != nil { - return err - } - } - - if deleter, ok := store.(db.StoreDeleter); ok { - if err := deleter.Delete(record); err != nil { - return err - } - } else { - conds, err := recordID(store, record) - if err != nil { - return err - } - if err := store.Find(conds).Delete(); err != nil { - return err - } - } - - if hook, ok := 
record.(db.AfterDeleteHook); ok { - if err := hook.AfterDelete(sess); err != nil { - return err - } - } - - return nil -} - -func (sess *sessionWithContext) DB() *sql.DB { - return sess.sqlDB -} - -func (sess *sessionWithContext) SetContext(ctx context.Context) { - sess.mu.Lock() - sess.ctx = ctx - sess.mu.Unlock() -} - -func (sess *sessionWithContext) Context() context.Context { - return sess.ctx -} - -func (sess *sessionWithContext) SetTxOptions(txOptions sql.TxOptions) { - sess.mu.Lock() - sess.txOptions = &txOptions - sess.mu.Unlock() -} - -func (sess *sessionWithContext) TxOptions() *sql.TxOptions { - sess.mu.Lock() - defer sess.mu.Unlock() - if sess.txOptions == nil { - return nil - } - return sess.txOptions -} - -func (sess *sessionWithContext) BindTx(ctx context.Context, tx *sql.Tx) error { - sess.sqlDBMu.Lock() - defer sess.sqlDBMu.Unlock() - - sess.sqlTx = tx - sess.SetContext(ctx) - - sess.txID = newBaseTxID() - - return nil -} - -func (sess *sessionWithContext) Commit() error { - if sess.sqlTx != nil { - return sess.sqlTx.Commit() - } - return db.ErrNotWithinTransaction -} - -func (sess *sessionWithContext) Rollback() error { - if sess.sqlTx != nil { - return sess.sqlTx.Rollback() - } - return db.ErrNotWithinTransaction -} - -func (sess *sessionWithContext) IsTransaction() bool { - return sess.sqlTx != nil -} - -func (sess *sessionWithContext) Transaction() *sql.Tx { - return sess.sqlTx -} - -func (sess *sessionWithContext) Name() string { - sess.lookupNameOnce.Do(func() { - if sess.name == "" { - sess.name, _ = sess.adapter.LookupName(sess) - } - }) - - return sess.name -} - -func (sess *sessionWithContext) BindDB(sqlDB *sql.DB) error { - - sess.sqlDBMu.Lock() - sess.sqlDB = sqlDB - sess.sqlDBMu.Unlock() - - if err := sess.Ping(); err != nil { - return err - } - - sess.sessID = newSessionID() - name, err := sess.adapter.LookupName(sess) - if err != nil { - return err - } - sess.name = name - - return nil -} - -func (sess *sessionWithContext) Ping() 
error { - if sess.sqlDB != nil { - return sess.sqlDB.Ping() - } - return db.ErrNotConnected -} - -func (sess *sessionWithContext) SetConnMaxLifetime(t time.Duration) { - sess.Settings.SetConnMaxLifetime(t) - if sessDB := sess.DB(); sessDB != nil { - sessDB.SetConnMaxLifetime(sess.Settings.ConnMaxLifetime()) - } -} - -func (sess *sessionWithContext) SetConnMaxIdleTime(t time.Duration) { - sess.Settings.SetConnMaxIdleTime(t) - if sessDB := sess.DB(); sessDB != nil { - sessDB.SetConnMaxIdleTime(sess.Settings.ConnMaxIdleTime()) - } -} - -func (sess *sessionWithContext) SetMaxIdleConns(n int) { - sess.Settings.SetMaxIdleConns(n) - if sessDB := sess.DB(); sessDB != nil { - sessDB.SetMaxIdleConns(sess.Settings.MaxIdleConns()) - } -} - -func (sess *sessionWithContext) SetMaxOpenConns(n int) { - sess.Settings.SetMaxOpenConns(n) - if sessDB := sess.DB(); sessDB != nil { - sessDB.SetMaxOpenConns(sess.Settings.MaxOpenConns()) - } -} - -// Reset removes all caches. -func (sess *sessionWithContext) Reset() { - sess.cacheMu.Lock() - defer sess.cacheMu.Unlock() - - sess.cachedPKs.Clear() - sess.cachedCollections.Clear() - sess.cachedStatements.Clear() - - if sess.template != nil { - sess.template.Cache.Clear() - } -} - -func (sess *sessionWithContext) NewClone(adapter AdapterSession, checkConn bool) (Session, error) { - - newSess := NewSession(sess.connURL, adapter).(*sessionWithContext) - - newSess.name = sess.name - newSess.sqlDB = sess.sqlDB - newSess.cachedPKs = sess.cachedPKs - - if checkConn { - if err := newSess.Ping(); err != nil { - // Retry once if ping fails. 
- return sess.NewClone(adapter, false) - } - } - - newSess.sessID = newSessionID() - - // New transaction should inherit parent settings - copySettings(sess, newSess) - - return newSess, nil -} - -func (sess *sessionWithContext) Close() error { - defer func() { - sess.sqlDBMu.Lock() - sess.sqlDB = nil - sess.sqlTx = nil - sess.sqlDBMu.Unlock() - }() - - if sess.sqlDB == nil { - return nil - } - - sess.cachedCollections.Clear() - sess.cachedStatements.Clear() // Closes prepared statements as well. - - if !sess.IsTransaction() { - if cleaner, ok := sess.adapter.(hasCleanUp); ok { - if err := cleaner.CleanUp(); err != nil { - return err - } - } - // Not within a transaction. - return sess.sqlDB.Close() - } - - return nil -} - -func (sess *sessionWithContext) Collection(name string) db.Collection { - sess.cacheMu.Lock() - defer sess.cacheMu.Unlock() - - h := cache.NewHashable(hashTypeCollection, name) - - col, ok := sess.cachedCollections.ReadRaw(h) - if !ok { - col = newCollection(name, sess.adapter.NewCollection()) - sess.cachedCollections.Write(h, col) - } - - return &collectionWithSession{ - collection: col.(*collection), - session: sess, - } -} - -func queryLog(status *db.QueryStatus) { - diff := status.End.Sub(status.Start) - - slowQuery := false - if diff >= slowQueryThreshold { - status.Err = db.ErrWarnSlowQuery - slowQuery = true - } - - if status.Err != nil || slowQuery { - db.LC().Warn(status) - return - } - - db.LC().Debug(status) -} - -func (sess *sessionWithContext) StatementPrepare(ctx context.Context, stmt *exql.Statement) (sqlStmt *sql.Stmt, err error) { - var query string - - defer func(start time.Time) { - queryLog(&db.QueryStatus{ - TxID: sess.txID, - SessID: sess.sessID, - RawQuery: query, - Err: err, - Start: start, - End: time.Now(), - Context: ctx, - }) - }(time.Now()) - - query, _, err = sess.compileStatement(stmt, nil) - if err != nil { - return nil, err - } - - tx := sess.Transaction() - if tx != nil { - sqlStmt, err = 
compat.PrepareContext(tx, ctx, query) - return - } - - sqlStmt, err = compat.PrepareContext(sess.sqlDB, ctx, query) - return -} - -func (sess *sessionWithContext) ConvertValue(value interface{}) interface{} { - if scannerValuer, ok := value.(sqlbuilder.ScannerValuer); ok { - return scannerValuer - } - - dv := reflect.Indirect(reflect.ValueOf(value)) - if dv.IsValid() { - if converter, ok := dv.Interface().(valueConverter); ok { - return converter.ConvertValue(dv.Interface()) - } - } - - if converter, ok := sess.adapter.(sessValueConverterContext); ok { - return converter.ConvertValueContext(sess.Context(), value) - } - - if converter, ok := sess.adapter.(sessValueConverter); ok { - return converter.ConvertValue(value) - } - - return value -} - -func (sess *sessionWithContext) StatementExec(ctx context.Context, stmt *exql.Statement, args ...interface{}) (res sql.Result, err error) { - var query string - - defer func(start time.Time) { - status := db.QueryStatus{ - TxID: sess.txID, - SessID: sess.sessID, - RawQuery: query, - Args: args, - Err: err, - Start: start, - End: time.Now(), - Context: ctx, - } - - if res != nil { - if rowsAffected, err := res.RowsAffected(); err == nil { - status.RowsAffected = &rowsAffected - } - - if lastInsertID, err := res.LastInsertId(); err == nil { - status.LastInsertID = &lastInsertID - } - } - - queryLog(&status) - }(time.Now()) - - if execer, ok := sess.adapter.(statementExecer); ok { - query, args, err = sess.compileStatement(stmt, args) - if err != nil { - return nil, err - } - res, err = execer.StatementExec(sess, ctx, query, args...) 
- return - } - - tx := sess.Transaction() - if sess.Settings.PreparedStatementCacheEnabled() && tx == nil { - var p *Stmt - if p, query, args, err = sess.prepareStatement(ctx, stmt, args); err != nil { - return nil, err - } - defer p.Close() - - res, err = compat.PreparedExecContext(p, ctx, args) - return - } - - query, args, err = sess.compileStatement(stmt, args) - if err != nil { - return nil, err - } - - if tx != nil { - res, err = compat.ExecContext(tx, ctx, query, args) - return - } - - res, err = compat.ExecContext(sess.sqlDB, ctx, query, args) - return -} - -// StatementQuery compiles and executes a statement that returns rows. -func (sess *sessionWithContext) StatementQuery(ctx context.Context, stmt *exql.Statement, args ...interface{}) (rows *sql.Rows, err error) { - var query string - - defer func(start time.Time) { - status := db.QueryStatus{ - TxID: sess.txID, - SessID: sess.sessID, - RawQuery: query, - Args: args, - Err: err, - Start: start, - End: time.Now(), - Context: ctx, - } - queryLog(&status) - }(time.Now()) - - tx := sess.Transaction() - - if sess.Settings.PreparedStatementCacheEnabled() && tx == nil { - var p *Stmt - if p, query, args, err = sess.prepareStatement(ctx, stmt, args); err != nil { - return nil, err - } - defer p.Close() - - rows, err = compat.PreparedQueryContext(p, ctx, args) - return - } - - query, args, err = sess.compileStatement(stmt, args) - if err != nil { - return nil, err - } - if tx != nil { - rows, err = compat.QueryContext(tx, ctx, query, args) - return - } - - rows, err = compat.QueryContext(sess.sqlDB, ctx, query, args) - return -} - -// StatementQueryRow compiles and executes a statement that returns at most one -// row. 
-func (sess *sessionWithContext) StatementQueryRow(ctx context.Context, stmt *exql.Statement, args ...interface{}) (row *sql.Row, err error) { - var query string - - defer func(start time.Time) { - status := db.QueryStatus{ - TxID: sess.txID, - SessID: sess.sessID, - RawQuery: query, - Args: args, - Err: err, - Start: start, - End: time.Now(), - Context: ctx, - } - queryLog(&status) - }(time.Now()) - - tx := sess.Transaction() - - if sess.Settings.PreparedStatementCacheEnabled() && tx == nil { - var p *Stmt - if p, query, args, err = sess.prepareStatement(ctx, stmt, args); err != nil { - return nil, err - } - defer p.Close() - - row = compat.PreparedQueryRowContext(p, ctx, args) - return - } - - query, args, err = sess.compileStatement(stmt, args) - if err != nil { - return nil, err - } - if tx != nil { - row = compat.QueryRowContext(tx, ctx, query, args) - return - } - - row = compat.QueryRowContext(sess.sqlDB, ctx, query, args) - return -} - -// Driver returns the underlying *sql.DB or *sql.Tx instance. -func (sess *sessionWithContext) Driver() interface{} { - if sess.sqlTx != nil { - return sess.sqlTx - } - return sess.sqlDB -} - -// compileStatement compiles the given statement into a string. -func (sess *sessionWithContext) compileStatement(stmt *exql.Statement, args []interface{}) (string, []interface{}, error) { - for i := range args { - args[i] = sess.ConvertValue(args[i]) - } - if statementCompiler, ok := sess.adapter.(statementCompiler); ok { - return statementCompiler.CompileStatement(sess, stmt, args) - } - - compiled, err := stmt.Compile(sess.adapter.Template()) - if err != nil { - return "", nil, err - } - query, args := sqlbuilder.Preprocess(compiled, args) - return query, args, nil -} - -// prepareStatement compiles a query and tries to use previously generated -// statement. 
-func (sess *sessionWithContext) prepareStatement(ctx context.Context, stmt *exql.Statement, args []interface{}) (*Stmt, string, []interface{}, error) { - sess.sqlDBMu.Lock() - defer sess.sqlDBMu.Unlock() - - sqlDB, tx := sess.sqlDB, sess.Transaction() - if sqlDB == nil && tx == nil { - return nil, "", nil, db.ErrNotConnected - } - - pc, ok := sess.cachedStatements.ReadRaw(stmt) - if ok { - // The statement was cached. - ps, err := pc.(*Stmt).Open() - if err == nil { - _, args, err = sess.compileStatement(stmt, args) - if err != nil { - return nil, "", nil, err - } - return ps, ps.query, args, nil - } - } - - query, args, err := sess.compileStatement(stmt, args) - if err != nil { - return nil, "", nil, err - } - sqlStmt, err := func(query *string) (*sql.Stmt, error) { - if tx != nil { - return compat.PrepareContext(tx, ctx, *query) - } - return compat.PrepareContext(sess.sqlDB, ctx, *query) - }(&query) - if err != nil { - return nil, "", nil, err - } - - p, err := NewStatement(sqlStmt, query).Open() - if err != nil { - return nil, query, args, err - } - sess.cachedStatements.Write(stmt, p) - return p, p.query, args, nil -} - -var waitForConnMu sync.Mutex - -// WaitForConnection tries to execute the given connectFn function, if -connectFn returns an error, then WaitForConnection will keep trying until -connectFn returns nil. Maximum waiting time is 5s after having acquired the -lock. -func (sess *sessionWithContext) WaitForConnection(connectFn func() error) error { - // This lock ensures first-come, first-served and prevents opening too many - // file descriptors. - waitForConnMu.Lock() - defer waitForConnMu.Unlock() - - // Minimum waiting time. - waitTime := time.Millisecond * 10 - - // Waiting 5 seconds for a successful connection. - for timeStart := time.Now(); time.Since(timeStart) < time.Second*5; { - err := connectFn() - if err == nil { - return nil // Connected! - } - - // Only attempt to reconnect if the error is too many clients.
-		if sess.Err(err) == db.ErrTooManyClients {
-			// Sleep and try again if, and only if, the server replied with a "too
-			// many clients" error.
-			time.Sleep(waitTime)
-			if waitTime < time.Millisecond*500 {
-				// Wait a bit more next time.
-				waitTime = waitTime * 2
-			}
-			continue
-		}
-
-		// Return any other error immediately.
-		return err
-	}
-
-	return db.ErrGivingUpTryingToConnect
-}
-
-// ReplaceWithDollarSign turns a SQL statement with '?' placeholders into
-// dollar placeholders, like $1, $2, ..., $n
-func ReplaceWithDollarSign(buf []byte) []byte {
-	z := bytes.Count(buf, []byte{'?'})
-	// the capacity is a quick estimation of the total memory required; this
-	// reduces reallocations
-	out := make([]byte, 0, len(buf)+z*3)
-
-	var i, k = 0, 1
-	for i < len(buf) {
-		if buf[i] == '?' {
-			out = append(out, buf[:i]...)
-			buf = buf[i+1:]
-			i = 0
-
-			if len(buf) > 0 && buf[0] == '?' {
-				out = append(out, '?')
-				buf = buf[1:]
-				continue
-			}
-
-			out = append(out, '$')
-			out = append(out, []byte(strconv.Itoa(k))...)
-			k = k + 1
-			continue
-		}
-		i = i + 1
-	}
-
-	out = append(out, buf[:len(buf)]...)
-	buf = nil
-
-	return out
-}
-
-func copySettings(from Session, into Session) {
-	into.SetPreparedStatementCache(from.PreparedStatementCacheEnabled())
-	into.SetConnMaxLifetime(from.ConnMaxLifetime())
-	into.SetConnMaxIdleTime(from.ConnMaxIdleTime())
-	into.SetMaxIdleConns(from.MaxIdleConns())
-	into.SetMaxOpenConns(from.MaxOpenConns())
-}
-
-func newSessionID() uint64 {
-	if atomic.LoadUint64(&lastSessID) == math.MaxUint64 {
-		atomic.StoreUint64(&lastSessID, 0)
-		return 0
-	}
-	return atomic.AddUint64(&lastSessID, 1)
-}
-
-func newBaseTxID() uint64 {
-	if atomic.LoadUint64(&lastTxID) == math.MaxUint64 {
-		atomic.StoreUint64(&lastTxID, 0)
-		return 0
-	}
-	return atomic.AddUint64(&lastTxID, 1)
-}
-
-// TxContext creates a transaction context and runs fn within it.
-func TxContext(ctx context.Context, sess db.Session, fn func(tx db.Session) error, opts *sql.TxOptions) error { - txFn := func(sess db.Session) error { - tx, err := sess.(Session).NewTransaction(ctx, opts) - if err != nil { - return err - } - defer tx.Close() - - if err := fn(tx); err != nil { - if rollbackErr := tx.Rollback(); rollbackErr != nil { - return fmt.Errorf("%v: %w", rollbackErr, err) - } - return err - } - return tx.Commit() - } - - retryTime := retryTransactionWaitTime - - var txErr error - for i := 0; i < sess.MaxTransactionRetries(); i++ { - txErr = sess.(*sessionWithContext).Err(txFn(sess)) - if txErr == nil { - return nil - } - if errors.Is(txErr, db.ErrTransactionAborted) { - time.Sleep(retryTime) - - retryTime = retryTime * 2 - if retryTime > retryTransactionMaxWaitTime { - retryTime = retryTransactionMaxWaitTime - } - - continue - } - return txErr - } - - return fmt.Errorf("db: giving up trying to commit transaction: %w", txErr) -} - -var _ = db.Session(&sessionWithContext{}) diff --git a/vendor/github.com/upper/db/v4/internal/sqladapter/sqladapter.go b/vendor/github.com/upper/db/v4/internal/sqladapter/sqladapter.go deleted file mode 100644 index aaeb2c40..00000000 --- a/vendor/github.com/upper/db/v4/internal/sqladapter/sqladapter.go +++ /dev/null @@ -1,83 +0,0 @@ -// Copyright (c) 2012-present The upper.io/db authors. All rights reserved. -// -// Permission is hereby granted, free of charge, to any person obtaining -// a copy of this software and associated documentation files (the -// "Software"), to deal in the Software without restriction, including -// without limitation the rights to use, copy, modify, merge, publish, -// distribute, sublicense, and/or sell copies of the Software, and to -// permit persons to whom the Software is furnished to do so, subject to -// the following conditions: -// -// The above copyright notice and this permission notice shall be -// included in all copies or substantial portions of the Software. 
-// -// THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, -// EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF -// MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND -// NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE -// LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION -// OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION -// WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE. - -// Package sqladapter provides common logic for SQL adapters. -package sqladapter - -import ( - "database/sql" - "database/sql/driver" - - "github.com/upper/db/v4" - "github.com/upper/db/v4/internal/sqlbuilder" -) - -// IsKeyValue reports whether v is a valid value for a primary key that can be -// used with Find(pKey). -func IsKeyValue(v interface{}) bool { - if v == nil { - return true - } - switch v.(type) { - case int64, int, uint, uint64, - []int64, []int, []uint, []uint64, - []byte, []string, - []interface{}, - driver.Valuer: - return true - } - return false -} - -type sqlAdapterWrapper struct { - adapter AdapterSession -} - -func (w *sqlAdapterWrapper) OpenDSN(dsn db.ConnectionURL) (db.Session, error) { - sess := NewSession(dsn, w.adapter) - if err := sess.Open(); err != nil { - return nil, err - } - return sess, nil -} - -func (w *sqlAdapterWrapper) NewTx(sqlTx *sql.Tx) (sqlbuilder.Tx, error) { - tx, err := NewTx(w.adapter, sqlTx) - if err != nil { - return nil, err - } - return tx, nil -} - -func (w *sqlAdapterWrapper) New(sqlDB *sql.DB) (db.Session, error) { - sess := NewSession(nil, w.adapter) - if err := sess.BindDB(sqlDB); err != nil { - return nil, err - } - return sess, nil -} - -// RegisterAdapter registers a new SQL adapter. 
-func RegisterAdapter(name string, adapter AdapterSession) sqlbuilder.Adapter {
-	z := &sqlAdapterWrapper{adapter}
-	db.RegisterAdapter(name, sqlbuilder.NewCompatAdapter(z))
-	return z
-}
diff --git a/vendor/github.com/upper/db/v4/internal/sqladapter/statement.go b/vendor/github.com/upper/db/v4/internal/sqladapter/statement.go
deleted file mode 100644
index 0b18ebd1..00000000
--- a/vendor/github.com/upper/db/v4/internal/sqladapter/statement.go
+++ /dev/null
@@ -1,85 +0,0 @@
-package sqladapter
-
-import (
-	"database/sql"
-	"errors"
-	"sync"
-	"sync/atomic"
-)
-
-var (
-	activeStatements int64
-)
-
-// Stmt represents a *sql.Stmt that is cached and provides the
-// OnEvict method to allow it to clean after itself.
-type Stmt struct {
-	*sql.Stmt
-
-	query string
-	mu    sync.Mutex
-
-	count int64
-	dead  bool
-}
-
-// NewStatement creates and returns an opened statement
-func NewStatement(stmt *sql.Stmt, query string) *Stmt {
-	s := &Stmt{
-		Stmt:  stmt,
-		query: query,
-	}
-	atomic.AddInt64(&activeStatements, 1)
-	return s
-}
-
-// Open marks the statement as in-use
-func (c *Stmt) Open() (*Stmt, error) {
-	c.mu.Lock()
-	defer c.mu.Unlock()
-
-	if c.dead {
-		return nil, errors.New("statement is dead")
-	}
-
-	c.count++
-	return c, nil
-}
-
-// Close closes the underlying statement if no other go-routine is using it.
-func (c *Stmt) Close() error {
-	c.mu.Lock()
-	defer c.mu.Unlock()
-
-	c.count--
-
-	return c.checkClose()
-}
-
-func (c *Stmt) checkClose() error {
-	if c.dead && c.count == 0 {
-		// Statement is dead and we can close it for real.
-		err := c.Stmt.Close()
-		if err != nil {
-			return err
-		}
-		// Reduce active statements counter.
-		atomic.AddInt64(&activeStatements, -1)
-	}
-	return nil
-}
-
-// OnEvict marks the statement as ready to be cleaned up.
-func (c *Stmt) OnEvict() {
-	c.mu.Lock()
-	defer c.mu.Unlock()
-
-	c.dead = true
-	c.checkClose()
-}
-
-// NumActiveStatements returns the global number of prepared statements in use
-// at any point.
-func NumActiveStatements() int64 { - return atomic.LoadInt64(&activeStatements) -} diff --git a/vendor/github.com/upper/db/v4/internal/sqlbuilder/batch.go b/vendor/github.com/upper/db/v4/internal/sqlbuilder/batch.go deleted file mode 100644 index c9465e9a..00000000 --- a/vendor/github.com/upper/db/v4/internal/sqlbuilder/batch.go +++ /dev/null @@ -1,86 +0,0 @@ -package sqlbuilder - -import ( - "github.com/upper/db/v4" -) - -// BatchInserter provides a helper that can be used to do massive insertions in -// batches. -type BatchInserter struct { - inserter *inserter - size int - values chan []interface{} - err error -} - -func newBatchInserter(inserter *inserter, size int) *BatchInserter { - if size < 1 { - size = 1 - } - b := &BatchInserter{ - inserter: inserter, - size: size, - values: make(chan []interface{}, size), - } - return b -} - -// Values pushes column values to be inserted as part of the batch. -func (b *BatchInserter) Values(values ...interface{}) db.BatchInserter { - b.values <- values - return b -} - -func (b *BatchInserter) nextQuery() *inserter { - ins := &inserter{} - *ins = *b.inserter - i := 0 - for values := range b.values { - i++ - ins = ins.Values(values...).(*inserter) - if i == b.size { - break - } - } - if i == 0 { - return nil - } - return ins -} - -// NextResult is useful when using PostgreSQL and Returning(), it dumps the -// next slice of results to dst, which can mean having the IDs of all inserted -// elements in the batch. -func (b *BatchInserter) NextResult(dst interface{}) bool { - clone := b.nextQuery() - if clone == nil { - return false - } - b.err = clone.Iterator().All(dst) - return (b.err == nil) -} - -// Done means that no more elements are going to be added. -func (b *BatchInserter) Done() { - close(b.values) -} - -// Wait blocks until the whole batch is executed. 
-func (b *BatchInserter) Wait() error { - for { - q := b.nextQuery() - if q == nil { - break - } - if _, err := q.Exec(); err != nil { - b.err = err - break - } - } - return b.Err() -} - -// Err returns any error while executing the batch. -func (b *BatchInserter) Err() error { - return b.err -} diff --git a/vendor/github.com/upper/db/v4/internal/sqlbuilder/builder.go b/vendor/github.com/upper/db/v4/internal/sqlbuilder/builder.go deleted file mode 100644 index b9cf5799..00000000 --- a/vendor/github.com/upper/db/v4/internal/sqlbuilder/builder.go +++ /dev/null @@ -1,632 +0,0 @@ -// Copyright (c) 2012-present The upper.io/db authors. All rights reserved. -// -// Permission is hereby granted, free of charge, to any person obtaining -// a copy of this software and associated documentation files (the -// "Software"), to deal in the Software without restriction, including -// without limitation the rights to use, copy, modify, merge, publish, -// distribute, sublicense, and/or sell copies of the Software, and to -// permit persons to whom the Software is furnished to do so, subject to -// the following conditions: -// -// The above copyright notice and this permission notice shall be -// included in all copies or substantial portions of the Software. -// -// THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, -// EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF -// MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND -// NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE -// LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION -// OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION -// WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE. - -// Package sqlbuilder provides tools for building custom SQL queries. 
-package sqlbuilder - -import ( - "context" - "database/sql" - "errors" - "fmt" - "reflect" - "sort" - "strconv" - "strings" - - db "github.com/upper/db/v4" - "github.com/upper/db/v4/internal/adapter" - "github.com/upper/db/v4/internal/reflectx" - "github.com/upper/db/v4/internal/sqladapter/compat" - "github.com/upper/db/v4/internal/sqladapter/exql" -) - -// MapOptions represents options for the mapper. -type MapOptions struct { - IncludeZeroed bool - IncludeNil bool -} - -var defaultMapOptions = MapOptions{ - IncludeZeroed: false, - IncludeNil: false, -} - -type hasPaginator interface { - Paginator() (db.Paginator, error) -} - -type isCompilable interface { - Compile() (string, error) - Arguments() []interface{} -} - -type hasIsZero interface { - IsZero() bool -} - -type iterator struct { - sess exprDB - cursor *sql.Rows // This is the main query cursor. It starts as a nil value. - err error -} - -type fieldValue struct { - fields []string - values []interface{} -} - -var ( - sqlPlaceholder = &exql.Raw{Value: `?`} -) - -var ( - errDeprecatedJSONBTag = errors.New(`Tag "jsonb" is deprecated. See "PostgreSQL: jsonb tag" at https://github.com/upper/db/releases/tag/v3.4.0`) -) - -type exprDB interface { - StatementExec(ctx context.Context, stmt *exql.Statement, args ...interface{}) (sql.Result, error) - StatementPrepare(ctx context.Context, stmt *exql.Statement) (*sql.Stmt, error) - StatementQuery(ctx context.Context, stmt *exql.Statement, args ...interface{}) (*sql.Rows, error) - StatementQueryRow(ctx context.Context, stmt *exql.Statement, args ...interface{}) (*sql.Row, error) - - Context() context.Context -} - -type sqlBuilder struct { - sess exprDB - t *templateWithUtils -} - -// WithSession returns a query builder that is bound to the given database session. 
-func WithSession(sess interface{}, t *exql.Template) db.SQL { - if sqlDB, ok := sess.(*sql.DB); ok { - sess = sqlDB - } - return &sqlBuilder{ - sess: sess.(exprDB), // Let it panic, it will show the developer an informative error. - t: newTemplateWithUtils(t), - } -} - -// WithTemplate returns a builder that is based on the given template. -func WithTemplate(t *exql.Template) db.SQL { - return &sqlBuilder{ - t: newTemplateWithUtils(t), - } -} - -func (b *sqlBuilder) NewIteratorContext(ctx context.Context, rows *sql.Rows) db.Iterator { - return &iterator{b.sess, rows, nil} -} - -func (b *sqlBuilder) NewIterator(rows *sql.Rows) db.Iterator { - return b.NewIteratorContext(b.sess.Context(), rows) -} - -func (b *sqlBuilder) Iterator(query interface{}, args ...interface{}) db.Iterator { - return b.IteratorContext(b.sess.Context(), query, args...) -} - -func (b *sqlBuilder) IteratorContext(ctx context.Context, query interface{}, args ...interface{}) db.Iterator { - rows, err := b.QueryContext(ctx, query, args...) - return &iterator{b.sess, rows, err} -} - -func (b *sqlBuilder) Prepare(query interface{}) (*sql.Stmt, error) { - return b.PrepareContext(b.sess.Context(), query) -} - -func (b *sqlBuilder) PrepareContext(ctx context.Context, query interface{}) (*sql.Stmt, error) { - switch q := query.(type) { - case *exql.Statement: - return b.sess.StatementPrepare(ctx, q) - case string: - return b.sess.StatementPrepare(ctx, exql.RawSQL(q)) - case *adapter.RawExpr: - return b.PrepareContext(ctx, q.Raw()) - default: - return nil, fmt.Errorf("unsupported query type %T", query) - } -} - -func (b *sqlBuilder) Exec(query interface{}, args ...interface{}) (sql.Result, error) { - return b.ExecContext(b.sess.Context(), query, args...) -} - -func (b *sqlBuilder) ExecContext(ctx context.Context, query interface{}, args ...interface{}) (sql.Result, error) { - switch q := query.(type) { - case *exql.Statement: - return b.sess.StatementExec(ctx, q, args...) 
- case string: - return b.sess.StatementExec(ctx, exql.RawSQL(q), args...) - case *adapter.RawExpr: - return b.ExecContext(ctx, q.Raw(), q.Arguments()...) - default: - return nil, fmt.Errorf("unsupported query type %T", query) - } -} - -func (b *sqlBuilder) Query(query interface{}, args ...interface{}) (*sql.Rows, error) { - return b.QueryContext(b.sess.Context(), query, args...) -} - -func (b *sqlBuilder) QueryContext(ctx context.Context, query interface{}, args ...interface{}) (*sql.Rows, error) { - switch q := query.(type) { - case *exql.Statement: - return b.sess.StatementQuery(ctx, q, args...) - case string: - return b.sess.StatementQuery(ctx, exql.RawSQL(q), args...) - case *adapter.RawExpr: - return b.QueryContext(ctx, q.Raw(), q.Arguments()...) - default: - return nil, fmt.Errorf("unsupported query type %T", query) - } -} - -func (b *sqlBuilder) QueryRow(query interface{}, args ...interface{}) (*sql.Row, error) { - return b.QueryRowContext(b.sess.Context(), query, args...) -} - -func (b *sqlBuilder) QueryRowContext(ctx context.Context, query interface{}, args ...interface{}) (*sql.Row, error) { - switch q := query.(type) { - case *exql.Statement: - return b.sess.StatementQueryRow(ctx, q, args...) - case string: - return b.sess.StatementQueryRow(ctx, exql.RawSQL(q), args...) - case *adapter.RawExpr: - return b.QueryRowContext(ctx, q.Raw(), q.Arguments()...) - default: - return nil, fmt.Errorf("unsupported query type %T", query) - } -} - -func (b *sqlBuilder) SelectFrom(table ...interface{}) db.Selector { - qs := &selector{ - builder: b, - } - return qs.From(table...) -} - -func (b *sqlBuilder) Select(columns ...interface{}) db.Selector { - qs := &selector{ - builder: b, - } - return qs.Columns(columns...) 
-} - -func (b *sqlBuilder) InsertInto(table string) db.Inserter { - qi := &inserter{ - builder: b, - } - return qi.Into(table) -} - -func (b *sqlBuilder) DeleteFrom(table string) db.Deleter { - qd := &deleter{ - builder: b, - } - return qd.setTable(table) -} - -func (b *sqlBuilder) Update(table string) db.Updater { - qu := &updater{ - builder: b, - } - return qu.setTable(table) -} - -// Map receives a pointer to map or struct and maps it to columns and values. -func Map(item interface{}, options *MapOptions) ([]string, []interface{}, error) { - var fv fieldValue - if options == nil { - options = &defaultMapOptions - } - - itemV := reflect.ValueOf(item) - if !itemV.IsValid() { - return nil, nil, nil - } - - itemT := itemV.Type() - - if itemT.Kind() == reflect.Ptr { - // Single dereference. Just in case the user passes a pointer to struct - // instead of a struct. - item = itemV.Elem().Interface() - itemV = reflect.ValueOf(item) - itemT = itemV.Type() - } - - switch itemT.Kind() { - case reflect.Struct: - fieldMap := Mapper.TypeMap(itemT).Names - nfields := len(fieldMap) - - fv.values = make([]interface{}, 0, nfields) - fv.fields = make([]string, 0, nfields) - - for _, fi := range fieldMap { - - // Check for deprecated JSONB tag - if _, hasJSONBTag := fi.Options["jsonb"]; hasJSONBTag { - return nil, nil, errDeprecatedJSONBTag - } - - // Field options - _, tagOmitEmpty := fi.Options["omitempty"] - - fld := reflectx.FieldByIndexesReadOnly(itemV, fi.Index) - if fld.Kind() == reflect.Ptr && fld.IsNil() { - if tagOmitEmpty && !options.IncludeNil { - continue - } - fv.fields = append(fv.fields, fi.Name) - if tagOmitEmpty { - fv.values = append(fv.values, sqlDefault) - } else { - fv.values = append(fv.values, nil) - } - continue - } - - value := fld.Interface() - - isZero := false - if t, ok := fld.Interface().(hasIsZero); ok { - if t.IsZero() { - isZero = true - } - } else if fld.Kind() == reflect.Array || fld.Kind() == reflect.Slice { - if fld.Len() == 0 { - isZero = true 
- } - } else if reflect.DeepEqual(fi.Zero.Interface(), value) { - isZero = true - } - - if isZero && tagOmitEmpty && !options.IncludeZeroed { - continue - } - - fv.fields = append(fv.fields, fi.Name) - v, err := marshal(value) - if err != nil { - return nil, nil, err - } - if isZero && tagOmitEmpty { - v = sqlDefault - } - fv.values = append(fv.values, v) - } - - case reflect.Map: - nfields := itemV.Len() - fv.values = make([]interface{}, nfields) - fv.fields = make([]string, nfields) - mkeys := itemV.MapKeys() - - for i, keyV := range mkeys { - valv := itemV.MapIndex(keyV) - fv.fields[i] = fmt.Sprintf("%v", keyV.Interface()) - - v, err := marshal(valv.Interface()) - if err != nil { - return nil, nil, err - } - - fv.values[i] = v - } - default: - return nil, nil, ErrExpectingPointerToEitherMapOrStruct - } - - sort.Sort(&fv) - - return fv.fields, fv.values, nil -} - -func columnFragments(columns []interface{}) ([]exql.Fragment, []interface{}, error) { - f := make([]exql.Fragment, len(columns)) - args := []interface{}{} - - for i := range columns { - switch v := columns[i].(type) { - case hasPaginator: - p, err := v.Paginator() - if err != nil { - return nil, nil, err - } - - q, a := Preprocess(p.String(), p.Arguments()) - - f[i] = &exql.Raw{Value: "(" + q + ")"} - args = append(args, a...) - case isCompilable: - c, err := v.Compile() - if err != nil { - return nil, nil, err - } - q, a := Preprocess(c, v.Arguments()) - if _, ok := v.(db.Selector); ok { - q = "(" + q + ")" - } - f[i] = &exql.Raw{Value: q} - args = append(args, a...) - case *adapter.FuncExpr: - fnName, fnArgs := v.Name(), v.Arguments() - if len(fnArgs) == 0 { - fnName = fnName + "()" - } else { - fnName = fnName + "(?" + strings.Repeat("?, ", len(fnArgs)-1) + ")" - } - fnName, fnArgs = Preprocess(fnName, fnArgs) - f[i] = &exql.Raw{Value: fnName} - args = append(args, fnArgs...) - case *adapter.RawExpr: - q, a := Preprocess(v.Raw(), v.Arguments()) - f[i] = &exql.Raw{Value: q} - args = append(args, a...) 
- case exql.Fragment: - f[i] = v - case string: - f[i] = exql.ColumnWithName(v) - case fmt.Stringer: - f[i] = exql.ColumnWithName(v.String()) - default: - var err error - f[i], err = exql.NewRawValue(columns[i]) - if err != nil { - return nil, nil, fmt.Errorf("unexpected argument type %T for Select() argument: %w", v, err) - } - } - } - return f, args, nil -} - -func prepareQueryForDisplay(in string) string { - out := make([]byte, 0, len(in)) - - offset := 0 - whitespace := true - placeholders := 1 - - for i := 0; i < len(in); i++ { - if in[i] == ' ' || in[i] == '\r' || in[i] == '\n' || in[i] == '\t' { - if whitespace { - offset = i - } else { - whitespace = true - out = append(out, in[offset:i]...) - offset = i - } - continue - } - if whitespace { - whitespace = false - if len(out) > 0 { - out = append(out, ' ') - } - offset = i - } - if in[i] == '?' { - out = append(out, in[offset:i]...) - offset = i + 1 - - out = append(out, '$') - out = append(out, strconv.Itoa(placeholders)...) - placeholders++ - } - } - if !whitespace { - out = append(out, in[offset:len(in)]...) - } - return string(out) -} - -func (iter *iterator) NextScan(dst ...interface{}) error { - if ok := iter.Next(); ok { - return iter.Scan(dst...) - } - if err := iter.Err(); err != nil { - return err - } - return db.ErrNoMoreRows -} - -func (iter *iterator) ScanOne(dst ...interface{}) error { - defer iter.Close() - return iter.NextScan(dst...) -} - -func (iter *iterator) Scan(dst ...interface{}) error { - if err := iter.Err(); err != nil { - return err - } - return iter.cursor.Scan(dst...) 
-} - -func (iter *iterator) setErr(err error) error { - iter.err = err - return iter.err -} - -func (iter *iterator) One(dst interface{}) error { - if err := iter.Err(); err != nil { - return err - } - defer iter.Close() - return iter.setErr(iter.next(dst)) -} - -func (iter *iterator) All(dst interface{}) error { - if err := iter.Err(); err != nil { - return err - } - defer iter.Close() - - // Fetching all results within the cursor. - if err := fetchRows(iter, dst); err != nil { - return iter.setErr(err) - } - - return nil -} - -func (iter *iterator) Err() (err error) { - return iter.err -} - -func (iter *iterator) Next(dst ...interface{}) bool { - if err := iter.Err(); err != nil { - return false - } - - if err := iter.next(dst...); err != nil { - // ignore db.ErrNoMoreRows, just break. - if !errors.Is(err, db.ErrNoMoreRows) { - _ = iter.setErr(err) - } - return false - } - - return true -} - -func (iter *iterator) next(dst ...interface{}) error { - if iter.cursor == nil { - return iter.setErr(db.ErrNoMoreRows) - } - - switch len(dst) { - case 0: - if ok := iter.cursor.Next(); !ok { - defer iter.Close() - err := iter.cursor.Err() - if err == nil { - err = db.ErrNoMoreRows - } - return err - } - return nil - case 1: - if err := fetchRow(iter, dst[0]); err != nil { - defer iter.Close() - return err - } - return nil - } - - return errors.New("Next does not currently supports more than one parameters") -} - -func (iter *iterator) Close() (err error) { - if iter.cursor != nil { - err = iter.cursor.Close() - iter.cursor = nil - } - return err -} - -func marshal(v interface{}) (interface{}, error) { - if m, isMarshaler := v.(db.Marshaler); isMarshaler { - var err error - if v, err = m.MarshalDB(); err != nil { - return nil, err - } - } - return v, nil -} - -func (fv *fieldValue) Len() int { - return len(fv.fields) -} - -func (fv *fieldValue) Swap(i, j int) { - fv.fields[i], fv.fields[j] = fv.fields[j], fv.fields[i] - fv.values[i], fv.values[j] = fv.values[j], 
fv.values[i] -} - -func (fv *fieldValue) Less(i, j int) bool { - return fv.fields[i] < fv.fields[j] -} - -type exprProxy struct { - db *sql.DB - t *exql.Template -} - -func (p *exprProxy) Context() context.Context { - return context.Background() -} - -func (p *exprProxy) StatementExec(ctx context.Context, stmt *exql.Statement, args ...interface{}) (sql.Result, error) { - s, err := stmt.Compile(p.t) - if err != nil { - return nil, err - } - return compat.ExecContext(p.db, ctx, s, args) -} - -func (p *exprProxy) StatementPrepare(ctx context.Context, stmt *exql.Statement) (*sql.Stmt, error) { - s, err := stmt.Compile(p.t) - if err != nil { - return nil, err - } - return compat.PrepareContext(p.db, ctx, s) -} - -func (p *exprProxy) StatementQuery(ctx context.Context, stmt *exql.Statement, args ...interface{}) (*sql.Rows, error) { - s, err := stmt.Compile(p.t) - if err != nil { - return nil, err - } - return compat.QueryContext(p.db, ctx, s, args) -} - -func (p *exprProxy) StatementQueryRow(ctx context.Context, stmt *exql.Statement, args ...interface{}) (*sql.Row, error) { - s, err := stmt.Compile(p.t) - if err != nil { - return nil, err - } - return compat.QueryRowContext(p.db, ctx, s, args), nil -} - -var ( - _ = db.SQL(&sqlBuilder{}) - _ = exprDB(&exprProxy{}) -) - -func joinArguments(args ...[]interface{}) []interface{} { - total := 0 - for i := range args { - total += len(args[i]) - } - if total == 0 { - return nil - } - - flatten := make([]interface{}, 0, total) - for i := range args { - flatten = append(flatten, args[i]...) 
- } - return flatten -} diff --git a/vendor/github.com/upper/db/v4/internal/sqlbuilder/comparison.go b/vendor/github.com/upper/db/v4/internal/sqlbuilder/comparison.go deleted file mode 100644 index ae5f001c..00000000 --- a/vendor/github.com/upper/db/v4/internal/sqlbuilder/comparison.go +++ /dev/null @@ -1,122 +0,0 @@ -package sqlbuilder - -import ( - "fmt" - "strings" - - db "github.com/upper/db/v4" - "github.com/upper/db/v4/internal/adapter" - "github.com/upper/db/v4/internal/sqladapter/exql" -) - -var comparisonOperators = map[adapter.ComparisonOperator]string{ - adapter.ComparisonOperatorEqual: "=", - adapter.ComparisonOperatorNotEqual: "!=", - - adapter.ComparisonOperatorLessThan: "<", - adapter.ComparisonOperatorGreaterThan: ">", - - adapter.ComparisonOperatorLessThanOrEqualTo: "<=", - adapter.ComparisonOperatorGreaterThanOrEqualTo: ">=", - - adapter.ComparisonOperatorBetween: "BETWEEN", - adapter.ComparisonOperatorNotBetween: "NOT BETWEEN", - - adapter.ComparisonOperatorIn: "IN", - adapter.ComparisonOperatorNotIn: "NOT IN", - - adapter.ComparisonOperatorIs: "IS", - adapter.ComparisonOperatorIsNot: "IS NOT", - - adapter.ComparisonOperatorLike: "LIKE", - adapter.ComparisonOperatorNotLike: "NOT LIKE", - - adapter.ComparisonOperatorRegExp: "REGEXP", - adapter.ComparisonOperatorNotRegExp: "NOT REGEXP", -} - -type operatorWrapper struct { - tu *templateWithUtils - cv *exql.ColumnValue - - op *adapter.Comparison - v interface{} -} - -func (ow *operatorWrapper) cmp() *adapter.Comparison { - if ow.op != nil { - return ow.op - } - - if ow.cv.Operator != "" { - return db.Op(ow.cv.Operator, ow.v).Comparison - } - - if ow.v == nil { - return db.Is(nil).Comparison - } - - args, isSlice := toInterfaceArguments(ow.v) - if isSlice { - return db.In(args...).Comparison - } - - return db.Eq(ow.v).Comparison -} - -func (ow *operatorWrapper) preprocess() (string, []interface{}) { - placeholder := "?" 
- - column, err := ow.cv.Column.Compile(ow.tu.Template) - if err != nil { - panic(fmt.Sprintf("could not compile column: %v", err.Error())) - } - - c := ow.cmp() - - op := ow.tu.comparisonOperatorMapper(c.Operator()) - - var args []interface{} - - switch c.Operator() { - case adapter.ComparisonOperatorNone: - panic("no operator given") - case adapter.ComparisonOperatorCustom: - op = c.CustomOperator() - case adapter.ComparisonOperatorIn, adapter.ComparisonOperatorNotIn: - values := c.Value().([]interface{}) - if len(values) < 1 { - placeholder, args = "(NULL)", []interface{}{} - break - } - placeholder, args = "(?"+strings.Repeat(", ?", len(values)-1)+")", values - case adapter.ComparisonOperatorIs, adapter.ComparisonOperatorIsNot: - switch c.Value() { - case nil: - placeholder, args = "NULL", []interface{}{} - case false: - placeholder, args = "FALSE", []interface{}{} - case true: - placeholder, args = "TRUE", []interface{}{} - } - case adapter.ComparisonOperatorBetween, adapter.ComparisonOperatorNotBetween: - values := c.Value().([]interface{}) - placeholder, args = "? 
AND ?", []interface{}{values[0], values[1]} - case adapter.ComparisonOperatorEqual: - v := c.Value() - if b, ok := v.([]byte); ok { - v = string(b) - } - args = []interface{}{v} - } - - if args == nil { - args = []interface{}{c.Value()} - } - - if strings.Contains(op, ":column") { - return strings.Replace(op, ":column", column, -1), args - } - - return column + " " + op + " " + placeholder, args -} diff --git a/vendor/github.com/upper/db/v4/internal/sqlbuilder/convert.go b/vendor/github.com/upper/db/v4/internal/sqlbuilder/convert.go deleted file mode 100644 index 21e161fb..00000000 --- a/vendor/github.com/upper/db/v4/internal/sqlbuilder/convert.go +++ /dev/null @@ -1,166 +0,0 @@ -package sqlbuilder - -import ( - "bytes" - "database/sql/driver" - "reflect" - - "github.com/upper/db/v4/internal/adapter" - "github.com/upper/db/v4/internal/sqladapter/exql" -) - -var ( - sqlDefault = &exql.Raw{Value: "DEFAULT"} -) - -func expandQuery(in []byte, inArgs []interface{}) ([]byte, []interface{}) { - out := make([]byte, 0, len(in)) - outArgs := make([]interface{}, 0, len(inArgs)) - - i := 0 - for i < len(in) && len(inArgs) > 0 { - if in[i] == '?' { - out = append(out, in[:i]...) - in = in[i+1:] - i = 0 - - replace, replaceArgs := expandArgument(inArgs[0]) - inArgs = inArgs[1:] - - if len(replace) > 0 { - replace, replaceArgs = expandQuery(replace, replaceArgs) - out = append(out, replace...) - } else { - out = append(out, '?') - } - - outArgs = append(outArgs, replaceArgs...) - continue - } - i = i + 1 - } - - if len(out) < 1 { - return in, inArgs - } - - out = append(out, in[:len(in)]...) - in = nil - - outArgs = append(outArgs, inArgs[:len(inArgs)]...) 
- inArgs = nil - - return out, outArgs -} - -func expandArgument(arg interface{}) ([]byte, []interface{}) { - values, isSlice := toInterfaceArguments(arg) - - if isSlice { - if len(values) == 0 { - return []byte("(NULL)"), nil - } - buf := bytes.Repeat([]byte(" ?,"), len(values)) - buf[0] = '(' - buf[len(buf)-1] = ')' - return buf, values - } - - if len(values) == 1 { - switch t := arg.(type) { - case *adapter.RawExpr: - return expandQuery([]byte(t.Raw()), t.Arguments()) - case hasPaginator: - p, err := t.Paginator() - if err == nil { - return append([]byte{'('}, append([]byte(p.String()), ')')...), p.Arguments() - } - panic(err.Error()) - case isCompilable: - s, err := t.Compile() - if err == nil { - return append([]byte{'('}, append([]byte(s), ')')...), t.Arguments() - } - panic(err.Error()) - } - } else if len(values) == 0 { - return []byte("NULL"), nil - } - - return nil, []interface{}{arg} -} - -// toInterfaceArguments converts the given value into an array of interfaces. -func toInterfaceArguments(value interface{}) (args []interface{}, isSlice bool) { - if value == nil { - return nil, false - } - - switch t := value.(type) { - case driver.Valuer: - return []interface{}{t}, false - } - - v := reflect.ValueOf(value) - if v.Type().Kind() == reflect.Slice { - var i, total int - - // Byte slice gets transformed into a string. - if v.Type().Elem().Kind() == reflect.Uint8 { - return []interface{}{string(v.Bytes())}, false - } - - total = v.Len() - args = make([]interface{}, total) - for i = 0; i < total; i++ { - args[i] = v.Index(i).Interface() - } - return args, true - } - - return []interface{}{value}, false -} - -// toColumnsValuesAndArguments maps the given columnNames and columnValues into -// expr's Columns and Values, it also extracts and returns query arguments. 
-func toColumnsValuesAndArguments(columnNames []string, columnValues []interface{}) (*exql.Columns, *exql.Values, []interface{}, error) { - var arguments []interface{} - - columns := new(exql.Columns) - - columns.Columns = make([]exql.Fragment, 0, len(columnNames)) - for i := range columnNames { - columns.Columns = append(columns.Columns, exql.ColumnWithName(columnNames[i])) - } - - values := new(exql.Values) - - arguments = make([]interface{}, 0, len(columnValues)) - values.Values = make([]exql.Fragment, 0, len(columnValues)) - - for i := range columnValues { - switch v := columnValues[i].(type) { - case *exql.Raw, exql.Raw: - values.Values = append(values.Values, sqlDefault) - case *exql.Value: - // Adding value. - values.Values = append(values.Values, v) - case exql.Value: - // Adding value. - values.Values = append(values.Values, &v) - default: - // Adding both value and placeholder. - values.Values = append(values.Values, sqlPlaceholder) - arguments = append(arguments, v) - } - } - - return columns, values, arguments, nil -} - -// Preprocess expands arguments that needs to be expanded and compiles a query -// into a single string. 
-func Preprocess(in string, args []interface{}) (string, []interface{}) { - b, args := expandQuery([]byte(in), args) - return string(b), args -} diff --git a/vendor/github.com/upper/db/v4/internal/sqlbuilder/custom_types.go b/vendor/github.com/upper/db/v4/internal/sqlbuilder/custom_types.go deleted file mode 100644 index 9b5e0cf0..00000000 --- a/vendor/github.com/upper/db/v4/internal/sqlbuilder/custom_types.go +++ /dev/null @@ -1,11 +0,0 @@ -package sqlbuilder - -import ( - "database/sql" - "database/sql/driver" -) - -type ScannerValuer interface { - sql.Scanner - driver.Valuer -} diff --git a/vendor/github.com/upper/db/v4/internal/sqlbuilder/delete.go b/vendor/github.com/upper/db/v4/internal/sqlbuilder/delete.go deleted file mode 100644 index 1cae2df5..00000000 --- a/vendor/github.com/upper/db/v4/internal/sqlbuilder/delete.go +++ /dev/null @@ -1,195 +0,0 @@ -package sqlbuilder - -import ( - "context" - "database/sql" - - "github.com/upper/db/v4" - "github.com/upper/db/v4/internal/immutable" - "github.com/upper/db/v4/internal/sqladapter/exql" -) - -type deleterQuery struct { - table string - limit int - - where *exql.Where - whereArgs []interface{} - - amendFn func(string) string -} - -func (dq *deleterQuery) and(b *sqlBuilder, terms ...interface{}) error { - where, whereArgs := b.t.toWhereWithArguments(terms) - - if dq.where == nil { - dq.where, dq.whereArgs = &exql.Where{}, []interface{}{} - } - dq.where.Append(&where) - dq.whereArgs = append(dq.whereArgs, whereArgs...) 
- - return nil -} - -func (dq *deleterQuery) statement() *exql.Statement { - stmt := &exql.Statement{ - Type: exql.Delete, - Table: exql.TableWithName(dq.table), - } - - if dq.where != nil { - stmt.Where = dq.where - } - - if dq.limit != 0 { - stmt.Limit = exql.Limit(dq.limit) - } - - stmt.SetAmendment(dq.amendFn) - - return stmt -} - -type deleter struct { - builder *sqlBuilder - - fn func(*deleterQuery) error - prev *deleter -} - -var _ = immutable.Immutable(&deleter{}) - -func (del *deleter) SQL() *sqlBuilder { - if del.prev == nil { - return del.builder - } - return del.prev.SQL() -} - -func (del *deleter) template() *exql.Template { - return del.SQL().t.Template -} - -func (del *deleter) String() string { - s, err := del.Compile() - if err != nil { - panic(err.Error()) - } - return prepareQueryForDisplay(s) -} - -func (del *deleter) setTable(table string) *deleter { - return del.frame(func(uq *deleterQuery) error { - uq.table = table - return nil - }) -} - -func (del *deleter) frame(fn func(*deleterQuery) error) *deleter { - return &deleter{prev: del, fn: fn} -} - -func (del *deleter) Where(terms ...interface{}) db.Deleter { - return del.frame(func(dq *deleterQuery) error { - dq.where, dq.whereArgs = &exql.Where{}, []interface{}{} - return dq.and(del.SQL(), terms...) - }) -} - -func (del *deleter) And(terms ...interface{}) db.Deleter { - return del.frame(func(dq *deleterQuery) error { - return dq.and(del.SQL(), terms...) 
- }) -} - -func (del *deleter) Limit(limit int) db.Deleter { - return del.frame(func(dq *deleterQuery) error { - dq.limit = limit - return nil - }) -} - -func (del *deleter) Amend(fn func(string) string) db.Deleter { - return del.frame(func(dq *deleterQuery) error { - dq.amendFn = fn - return nil - }) -} - -func (dq *deleterQuery) arguments() []interface{} { - return joinArguments(dq.whereArgs) -} - -func (del *deleter) Arguments() []interface{} { - dq, err := del.build() - if err != nil { - return nil - } - return dq.arguments() -} - -func (del *deleter) Prepare() (*sql.Stmt, error) { - return del.PrepareContext(del.SQL().sess.Context()) -} - -func (del *deleter) PrepareContext(ctx context.Context) (*sql.Stmt, error) { - dq, err := del.build() - if err != nil { - return nil, err - } - return del.SQL().sess.StatementPrepare(ctx, dq.statement()) -} - -func (del *deleter) Exec() (sql.Result, error) { - return del.ExecContext(del.SQL().sess.Context()) -} - -func (del *deleter) ExecContext(ctx context.Context) (sql.Result, error) { - dq, err := del.build() - if err != nil { - return nil, err - } - return del.SQL().sess.StatementExec(ctx, dq.statement(), dq.arguments()...) 
-} - -func (del *deleter) statement() (*exql.Statement, error) { - iq, err := del.build() - if err != nil { - return nil, err - } - return iq.statement(), nil -} - -func (del *deleter) build() (*deleterQuery, error) { - dq, err := immutable.FastForward(del) - if err != nil { - return nil, err - } - return dq.(*deleterQuery), nil -} - -func (del *deleter) Compile() (string, error) { - s, err := del.statement() - if err != nil { - return "", err - } - return s.Compile(del.template()) -} - -func (del *deleter) Prev() immutable.Immutable { - if del == nil { - return nil - } - return del.prev -} - -func (del *deleter) Fn(in interface{}) error { - if del.fn == nil { - return nil - } - return del.fn(in.(*deleterQuery)) -} - -func (del *deleter) Base() interface{} { - return &deleterQuery{} -} diff --git a/vendor/github.com/upper/db/v4/internal/sqlbuilder/errors.go b/vendor/github.com/upper/db/v4/internal/sqlbuilder/errors.go deleted file mode 100644 index 5c5a723d..00000000 --- a/vendor/github.com/upper/db/v4/internal/sqlbuilder/errors.go +++ /dev/null @@ -1,14 +0,0 @@ -package sqlbuilder - -import ( - "errors" -) - -// Common error messages. -var ( - ErrExpectingPointer = errors.New(`argument must be an address`) - ErrExpectingSlicePointer = errors.New(`argument must be a slice address`) - ErrExpectingSliceMapStruct = errors.New(`argument must be a slice address of maps or structs`) - ErrExpectingMapOrStruct = errors.New(`argument must be either a map or a struct`) - ErrExpectingPointerToEitherMapOrStruct = errors.New(`expecting a pointer to either a map or a struct`) -) diff --git a/vendor/github.com/upper/db/v4/internal/sqlbuilder/fetch.go b/vendor/github.com/upper/db/v4/internal/sqlbuilder/fetch.go deleted file mode 100644 index fe35dd89..00000000 --- a/vendor/github.com/upper/db/v4/internal/sqlbuilder/fetch.go +++ /dev/null @@ -1,254 +0,0 @@ -// Copyright (c) 2012-present The upper.io/db authors. All rights reserved. 
-// -// Permission is hereby granted, free of charge, to any person obtaining -// a copy of this software and associated documentation files (the -// "Software"), to deal in the Software without restriction, including -// without limitation the rights to use, copy, modify, merge, publish, -// distribute, sublicense, and/or sell copies of the Software, and to -// permit persons to whom the Software is furnished to do so, subject to -// the following conditions: -// -// The above copyright notice and this permission notice shall be -// included in all copies or substantial portions of the Software. -// -// THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, -// EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF -// MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND -// NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE -// LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION -// OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION -// WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE. - -package sqlbuilder - -import ( - "reflect" - - "database/sql" - "database/sql/driver" - db "github.com/upper/db/v4" - "github.com/upper/db/v4/internal/reflectx" -) - -type sessValueConverter interface { - ConvertValue(interface{}) interface{} -} - -type valueConverter interface { - ConvertValue(in interface{}) (out interface { - sql.Scanner - driver.Valuer - }) -} - -var Mapper = reflectx.NewMapper("db") - -// fetchRow receives a *sql.Rows value and tries to map all the rows into a -// single struct given by the pointer `dst`. 
-func fetchRow(iter *iterator, dst interface{}) error { - var columns []string - var err error - - rows := iter.cursor - - dstv := reflect.ValueOf(dst) - - if dstv.IsNil() || dstv.Kind() != reflect.Ptr { - return ErrExpectingPointer - } - - itemV := dstv.Elem() - - if columns, err = rows.Columns(); err != nil { - return err - } - - reset(dst) - - next := rows.Next() - - if !next { - if err = rows.Err(); err != nil { - return err - } - return db.ErrNoMoreRows - } - - itemT := itemV.Type() - item, err := fetchResult(iter, itemT, columns) - if err != nil { - return err - } - - if itemT.Kind() == reflect.Ptr { - itemV.Set(item) - } else { - itemV.Set(reflect.Indirect(item)) - } - - return nil -} - -// fetchRows receives a *sql.Rows value and tries to map all the rows into a -// slice of structs given by the pointer `dst`. -func fetchRows(iter *iterator, dst interface{}) error { - var err error - rows := iter.cursor - defer rows.Close() - - // Destination. - dstv := reflect.ValueOf(dst) - - if dstv.IsNil() || dstv.Kind() != reflect.Ptr { - return ErrExpectingPointer - } - - if dstv.Elem().Kind() != reflect.Slice { - return ErrExpectingSlicePointer - } - - if dstv.Kind() != reflect.Ptr || dstv.Elem().Kind() != reflect.Slice || dstv.IsNil() { - return ErrExpectingSliceMapStruct - } - - var columns []string - if columns, err = rows.Columns(); err != nil { - return err - } - - slicev := dstv.Elem() - itemT := slicev.Type().Elem() - - reset(dst) - - for rows.Next() { - item, err := fetchResult(iter, itemT, columns) - if err != nil { - return err - } - if itemT.Kind() == reflect.Ptr { - slicev = reflect.Append(slicev, item) - } else { - slicev = reflect.Append(slicev, reflect.Indirect(item)) - } - } - - dstv.Elem().Set(slicev) - - return rows.Err() -} - -func fetchResult(iter *iterator, itemT reflect.Type, columns []string) (reflect.Value, error) { - - var item reflect.Value - var err error - rows := iter.cursor - - objT := itemT - - switch objT.Kind() { - case reflect.Map: - 
item = reflect.MakeMap(objT) - case reflect.Struct: - item = reflect.New(objT) - case reflect.Ptr: - objT = itemT.Elem() - if objT.Kind() != reflect.Struct { - return item, ErrExpectingMapOrStruct - } - item = reflect.New(objT) - default: - return item, ErrExpectingMapOrStruct - } - - switch objT.Kind() { - case reflect.Struct: - - values := make([]interface{}, len(columns)) - typeMap := Mapper.TypeMap(itemT) - fieldMap := typeMap.Names - - for i, k := range columns { - fi, ok := fieldMap[k] - if !ok { - values[i] = new(interface{}) - continue - } - - // Check for deprecated jsonb tag. - if _, hasJSONBTag := fi.Options["jsonb"]; hasJSONBTag { - return item, errDeprecatedJSONBTag - } - - f := reflectx.FieldByIndexes(item, fi.Index) - - // TODO: type switch + scanner - - if w, ok := f.Interface().(valueConverter); ok { - wrapper := w.ConvertValue(f.Addr().Interface()) - z := reflect.ValueOf(wrapper) - values[i] = z.Interface() - continue - } else { - values[i] = f.Addr().Interface() - } - - if unmarshaler, ok := values[i].(db.Unmarshaler); ok { - values[i] = scanner{unmarshaler} - continue - } - - if converter, ok := iter.sess.(sessValueConverter); ok { - values[i] = converter.ConvertValue(values[i]) - continue - } - } - - if err = rows.Scan(values...); err != nil { - return item, err - } - - case reflect.Map: - - columns, err := rows.Columns() - if err != nil { - return item, err - } - - values := make([]interface{}, len(columns)) - for i := range values { - if itemT.Elem().Kind() == reflect.Interface { - values[i] = new(interface{}) - } else { - values[i] = reflect.New(itemT.Elem()).Interface() - } - } - - if err = rows.Scan(values...); err != nil { - return item, err - } - - for i, column := range columns { - item.SetMapIndex(reflect.ValueOf(column), reflect.Indirect(reflect.ValueOf(values[i]))) - } - } - - return item, nil -} - -func reset(data interface{}) { - // Resetting element. 
- v := reflect.ValueOf(data).Elem() - t := v.Type() - - var z reflect.Value - - switch v.Kind() { - case reflect.Slice: - z = reflect.MakeSlice(t, 0, v.Cap()) - default: - z = reflect.Zero(t) - } - - v.Set(z) -} diff --git a/vendor/github.com/upper/db/v4/internal/sqlbuilder/insert.go b/vendor/github.com/upper/db/v4/internal/sqlbuilder/insert.go deleted file mode 100644 index 80e26d4c..00000000 --- a/vendor/github.com/upper/db/v4/internal/sqlbuilder/insert.go +++ /dev/null @@ -1,285 +0,0 @@ -package sqlbuilder - -import ( - "context" - "database/sql" - "errors" - - "github.com/upper/db/v4" - "github.com/upper/db/v4/internal/immutable" - "github.com/upper/db/v4/internal/sqladapter/exql" -) - -type inserterQuery struct { - table string - enqueuedValues [][]interface{} - returning []exql.Fragment - columns []exql.Fragment - values []*exql.Values - arguments []interface{} - amendFn func(string) string -} - -func (iq *inserterQuery) processValues() ([]*exql.Values, []interface{}, error) { - var values []*exql.Values - var arguments []interface{} - - var mapOptions *MapOptions - if len(iq.enqueuedValues) > 1 { - mapOptions = &MapOptions{IncludeZeroed: true, IncludeNil: true} - } - - for _, enqueuedValue := range iq.enqueuedValues { - if len(enqueuedValue) == 1 { - // If and only if we passed one argument to Values. - ff, vv, err := Map(enqueuedValue[0], mapOptions) - - if err == nil { - // If we didn't have any problem with mapping we can convert it into - // columns and values. - columns, vals, args, _ := toColumnsValuesAndArguments(ff, vv) - - values, arguments = append(values, vals), append(arguments, args...) - - if len(iq.columns) == 0 { - iq.columns = append(iq.columns, columns.Columns...) - } - continue - } - - // The only error we can expect without exiting is this argument not - // being a map or struct, in which case we can continue. 
- if !errors.Is(err, ErrExpectingPointerToEitherMapOrStruct) { - return nil, nil, err - } - } - - if len(iq.columns) == 0 || len(enqueuedValue) == len(iq.columns) { - arguments = append(arguments, enqueuedValue...) - - l := len(enqueuedValue) - placeholders := make([]exql.Fragment, l) - for i := 0; i < l; i++ { - placeholders[i] = sqlPlaceholder - } - values = append(values, exql.NewValueGroup(placeholders...)) - } - } - - return values, arguments, nil -} - -func (iq *inserterQuery) statement() *exql.Statement { - stmt := &exql.Statement{ - Type: exql.Insert, - Table: exql.TableWithName(iq.table), - } - - if len(iq.values) > 0 { - stmt.Values = exql.JoinValueGroups(iq.values...) - } - - if len(iq.columns) > 0 { - stmt.Columns = exql.JoinColumns(iq.columns...) - } - - if len(iq.returning) > 0 { - stmt.Returning = exql.ReturningColumns(iq.returning...) - } - - stmt.SetAmendment(iq.amendFn) - - return stmt -} - -type inserter struct { - builder *sqlBuilder - - fn func(*inserterQuery) error - prev *inserter -} - -var _ = immutable.Immutable(&inserter{}) - -func (ins *inserter) SQL() *sqlBuilder { - if ins.prev == nil { - return ins.builder - } - return ins.prev.SQL() -} - -func (ins *inserter) template() *exql.Template { - return ins.SQL().t.Template -} - -func (ins *inserter) String() string { - s, err := ins.Compile() - if err != nil { - panic(err.Error()) - } - return prepareQueryForDisplay(s) -} - -func (ins *inserter) frame(fn func(*inserterQuery) error) *inserter { - return &inserter{prev: ins, fn: fn} -} - -func (ins *inserter) Batch(n int) db.BatchInserter { - return newBatchInserter(ins, n) -} - -func (ins *inserter) Amend(fn func(string) string) db.Inserter { - return ins.frame(func(iq *inserterQuery) error { - iq.amendFn = fn - return nil - }) -} - -func (ins *inserter) Arguments() []interface{} { - iq, err := ins.build() - if err != nil { - return nil - } - return iq.arguments -} - -func (ins *inserter) Returning(columns ...string) db.Inserter { - return 
ins.frame(func(iq *inserterQuery) error { - columnsToFragments(&iq.returning, columns) - return nil - }) -} - -func (ins *inserter) Exec() (sql.Result, error) { - return ins.ExecContext(ins.SQL().sess.Context()) -} - -func (ins *inserter) ExecContext(ctx context.Context) (sql.Result, error) { - iq, err := ins.build() - if err != nil { - return nil, err - } - return ins.SQL().sess.StatementExec(ctx, iq.statement(), iq.arguments...) -} - -func (ins *inserter) Prepare() (*sql.Stmt, error) { - return ins.PrepareContext(ins.SQL().sess.Context()) -} - -func (ins *inserter) PrepareContext(ctx context.Context) (*sql.Stmt, error) { - iq, err := ins.build() - if err != nil { - return nil, err - } - return ins.SQL().sess.StatementPrepare(ctx, iq.statement()) -} - -func (ins *inserter) Query() (*sql.Rows, error) { - return ins.QueryContext(ins.SQL().sess.Context()) -} - -func (ins *inserter) QueryContext(ctx context.Context) (*sql.Rows, error) { - iq, err := ins.build() - if err != nil { - return nil, err - } - return ins.SQL().sess.StatementQuery(ctx, iq.statement(), iq.arguments...) -} - -func (ins *inserter) QueryRow() (*sql.Row, error) { - return ins.QueryRowContext(ins.SQL().sess.Context()) -} - -func (ins *inserter) QueryRowContext(ctx context.Context) (*sql.Row, error) { - iq, err := ins.build() - if err != nil { - return nil, err - } - return ins.SQL().sess.StatementQueryRow(ctx, iq.statement(), iq.arguments...) 
-} - -func (ins *inserter) Iterator() db.Iterator { - return ins.IteratorContext(ins.SQL().sess.Context()) -} - -func (ins *inserter) IteratorContext(ctx context.Context) db.Iterator { - rows, err := ins.QueryContext(ctx) - return &iterator{ins.SQL().sess, rows, err} -} - -func (ins *inserter) Into(table string) db.Inserter { - return ins.frame(func(iq *inserterQuery) error { - iq.table = table - return nil - }) -} - -func (ins *inserter) Columns(columns ...string) db.Inserter { - return ins.frame(func(iq *inserterQuery) error { - columnsToFragments(&iq.columns, columns) - return nil - }) -} - -func (ins *inserter) Values(values ...interface{}) db.Inserter { - return ins.frame(func(iq *inserterQuery) error { - iq.enqueuedValues = append(iq.enqueuedValues, values) - return nil - }) -} - -func (ins *inserter) statement() (*exql.Statement, error) { - iq, err := ins.build() - if err != nil { - return nil, err - } - return iq.statement(), nil -} - -func (ins *inserter) build() (*inserterQuery, error) { - iq, err := immutable.FastForward(ins) - if err != nil { - return nil, err - } - ret := iq.(*inserterQuery) - ret.values, ret.arguments, err = ret.processValues() - if err != nil { - return nil, err - } - return ret, nil -} - -func (ins *inserter) Compile() (string, error) { - s, err := ins.statement() - if err != nil { - return "", err - } - return s.Compile(ins.template()) -} - -func (ins *inserter) Prev() immutable.Immutable { - if ins == nil { - return nil - } - return ins.prev -} - -func (ins *inserter) Fn(in interface{}) error { - if ins.fn == nil { - return nil - } - return ins.fn(in.(*inserterQuery)) -} - -func (ins *inserter) Base() interface{} { - return &inserterQuery{} -} - -func columnsToFragments(dst *[]exql.Fragment, columns []string) { - l := len(columns) - f := make([]exql.Fragment, l) - for i := 0; i < l; i++ { - f[i] = exql.ColumnWithName(columns[i]) - } - *dst = append(*dst, f...) 
-} diff --git a/vendor/github.com/upper/db/v4/internal/sqlbuilder/paginate.go b/vendor/github.com/upper/db/v4/internal/sqlbuilder/paginate.go deleted file mode 100644 index 906b5276..00000000 --- a/vendor/github.com/upper/db/v4/internal/sqlbuilder/paginate.go +++ /dev/null @@ -1,340 +0,0 @@ -package sqlbuilder - -import ( - "context" - "database/sql" - "errors" - "math" - "strings" - - db "github.com/upper/db/v4" - "github.com/upper/db/v4/internal/immutable" -) - -var ( - errMissingCursorColumn = errors.New("Missing cursor column") -) - -type paginatorQuery struct { - sel db.Selector - - cursorColumn string - cursorValue interface{} - cursorCond db.Cond - cursorReverseOrder bool - - pageSize uint - pageNumber uint -} - -func newPaginator(sel db.Selector, pageSize uint) db.Paginator { - pag := &paginator{} - return pag.frame(func(pq *paginatorQuery) error { - pq.pageSize = pageSize - pq.sel = sel - return nil - }).Page(1) -} - -func (pq *paginatorQuery) count() (uint64, error) { - var count uint64 - - row, err := pq.sel.(*selector).setColumns(db.Raw("count(1) AS _t")). - Limit(0). - Offset(0). - OrderBy(nil). 
- QueryRow() - if err != nil { - return 0, err - } - - err = row.Scan(&count) - if err != nil { - return 0, err - } - - return count, nil -} - -type paginator struct { - fn func(*paginatorQuery) error - prev *paginator -} - -var _ = immutable.Immutable(&paginator{}) - -func (pag *paginator) frame(fn func(*paginatorQuery) error) *paginator { - return &paginator{prev: pag, fn: fn} -} - -func (pag *paginator) Page(pageNumber uint) db.Paginator { - return pag.frame(func(pq *paginatorQuery) error { - if pageNumber < 1 { - pageNumber = 1 - } - pq.pageNumber = pageNumber - return nil - }) -} - -func (pag *paginator) Cursor(column string) db.Paginator { - return pag.frame(func(pq *paginatorQuery) error { - pq.cursorColumn = column - pq.cursorValue = nil - return nil - }) -} - -func (pag *paginator) NextPage(cursorValue interface{}) db.Paginator { - return pag.frame(func(pq *paginatorQuery) error { - if pq.cursorValue != nil && pq.cursorColumn == "" { - return errMissingCursorColumn - } - pq.cursorValue = cursorValue - pq.cursorReverseOrder = false - if strings.HasPrefix(pq.cursorColumn, "-") { - pq.cursorCond = db.Cond{ - pq.cursorColumn[1:]: db.Lt(cursorValue), - } - } else { - pq.cursorCond = db.Cond{ - pq.cursorColumn: db.Gt(cursorValue), - } - } - return nil - }) -} - -func (pag *paginator) PrevPage(cursorValue interface{}) db.Paginator { - return pag.frame(func(pq *paginatorQuery) error { - if pq.cursorValue != nil && pq.cursorColumn == "" { - return errMissingCursorColumn - } - pq.cursorValue = cursorValue - pq.cursorReverseOrder = true - if strings.HasPrefix(pq.cursorColumn, "-") { - pq.cursorCond = db.Cond{ - pq.cursorColumn[1:]: db.Gt(cursorValue), - } - } else { - pq.cursorCond = db.Cond{ - pq.cursorColumn: db.Lt(cursorValue), - } - } - return nil - }) -} - -func (pag *paginator) TotalPages() (uint, error) { - pq, err := pag.build() - if err != nil { - return 0, err - } - - count, err := pq.count() - if err != nil { - return 0, err - } - if count < 1 { - return 
0, nil - } - - if pq.pageSize < 1 { - return 1, nil - } - - pages := uint(math.Ceil(float64(count) / float64(pq.pageSize))) - return pages, nil -} - -func (pag *paginator) All(dest interface{}) error { - pq, err := pag.buildWithCursor() - if err != nil { - return err - } - err = pq.sel.All(dest) - if err != nil { - return err - } - return nil -} - -func (pag *paginator) One(dest interface{}) error { - pq, err := pag.buildWithCursor() - if err != nil { - return err - } - return pq.sel.One(dest) -} - -func (pag *paginator) Iterator() db.Iterator { - pq, err := pag.buildWithCursor() - if err != nil { - sess := pq.sel.(*selector).SQL().sess - return &iterator{sess, nil, err} - } - return pq.sel.Iterator() -} - -func (pag *paginator) IteratorContext(ctx context.Context) db.Iterator { - pq, err := pag.buildWithCursor() - if err != nil { - sess := pq.sel.(*selector).SQL().sess - return &iterator{sess, nil, err} - } - return pq.sel.IteratorContext(ctx) -} - -func (pag *paginator) String() string { - pq, err := pag.buildWithCursor() - if err != nil { - panic(err.Error()) - } - return pq.sel.String() -} - -func (pag *paginator) Arguments() []interface{} { - pq, err := pag.buildWithCursor() - if err != nil { - return nil - } - return pq.sel.Arguments() -} - -func (pag *paginator) Compile() (string, error) { - pq, err := pag.buildWithCursor() - if err != nil { - return "", err - } - return pq.sel.(*selector).Compile() -} - -func (pag *paginator) Query() (*sql.Rows, error) { - pq, err := pag.buildWithCursor() - if err != nil { - return nil, err - } - return pq.sel.Query() -} - -func (pag *paginator) QueryContext(ctx context.Context) (*sql.Rows, error) { - pq, err := pag.buildWithCursor() - if err != nil { - return nil, err - } - return pq.sel.QueryContext(ctx) -} - -func (pag *paginator) QueryRow() (*sql.Row, error) { - pq, err := pag.buildWithCursor() - if err != nil { - return nil, err - } - return pq.sel.QueryRow() -} - -func (pag *paginator) QueryRowContext(ctx 
context.Context) (*sql.Row, error) { - pq, err := pag.buildWithCursor() - if err != nil { - return nil, err - } - return pq.sel.QueryRowContext(ctx) -} - -func (pag *paginator) Prepare() (*sql.Stmt, error) { - pq, err := pag.buildWithCursor() - if err != nil { - return nil, err - } - return pq.sel.Prepare() -} - -func (pag *paginator) PrepareContext(ctx context.Context) (*sql.Stmt, error) { - pq, err := pag.buildWithCursor() - if err != nil { - return nil, err - } - return pq.sel.PrepareContext(ctx) -} - -func (pag *paginator) TotalEntries() (uint64, error) { - pq, err := pag.build() - if err != nil { - return 0, err - } - return pq.count() -} - -func (pag *paginator) build() (*paginatorQuery, error) { - pq, err := immutable.FastForward(pag) - if err != nil { - return nil, err - } - return pq.(*paginatorQuery), nil -} - -func (pag *paginator) buildWithCursor() (*paginatorQuery, error) { - pq, err := immutable.FastForward(pag) - if err != nil { - return nil, err - } - - pqq := pq.(*paginatorQuery) - - if pqq.cursorReverseOrder { - orderBy := pqq.cursorColumn - - if orderBy == "" { - return nil, errMissingCursorColumn - } - - if strings.HasPrefix(orderBy, "-") { - orderBy = orderBy[1:] - } else { - orderBy = "-" + orderBy - } - - pqq.sel = pqq.sel.OrderBy(orderBy) - } - - if pqq.pageSize > 0 { - pqq.sel = pqq.sel.Limit(int(pqq.pageSize)) - if pqq.pageNumber > 1 { - pqq.sel = pqq.sel.Offset(int(pqq.pageSize * (pqq.pageNumber - 1))) - } - } - - if pqq.cursorCond != nil { - pqq.sel = pqq.sel.Where(pqq.cursorCond).Offset(0) - } - - if pqq.cursorColumn != "" { - if pqq.cursorReverseOrder { - pqq.sel = pqq.sel.(*selector).SQL(). - SelectFrom(db.Raw("? AS p0", pqq.sel)). 
- OrderBy(pqq.cursorColumn) - } else { - pqq.sel = pqq.sel.OrderBy(pqq.cursorColumn) - } - } - - return pqq, nil -} - -func (pag *paginator) Prev() immutable.Immutable { - if pag == nil { - return nil - } - return pag.prev -} - -func (pag *paginator) Fn(in interface{}) error { - if pag.fn == nil { - return nil - } - return pag.fn(in.(*paginatorQuery)) -} - -func (pag *paginator) Base() interface{} { - return &paginatorQuery{} -} diff --git a/vendor/github.com/upper/db/v4/internal/sqlbuilder/scanner.go b/vendor/github.com/upper/db/v4/internal/sqlbuilder/scanner.go deleted file mode 100644 index 8f592b8f..00000000 --- a/vendor/github.com/upper/db/v4/internal/sqlbuilder/scanner.go +++ /dev/null @@ -1,38 +0,0 @@ -// Copyright (c) 2012-present The upper.io/db authors. All rights reserved. -// -// Permission is hereby granted, free of charge, to any person obtaining -// a copy of this software and associated documentation files (the -// "Software"), to deal in the Software without restriction, including -// without limitation the rights to use, copy, modify, merge, publish, -// distribute, sublicense, and/or sell copies of the Software, and to -// permit persons to whom the Software is furnished to do so, subject to -// the following conditions: -// -// The above copyright notice and this permission notice shall be -// included in all copies or substantial portions of the Software. -// -// THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, -// EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF -// MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND -// NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE -// LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION -// OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION -// WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE. 
- -package sqlbuilder - -import ( - "database/sql" - - db "github.com/upper/db/v4" -) - -type scanner struct { - v db.Unmarshaler -} - -func (u scanner) Scan(v interface{}) error { - return u.v.UnmarshalDB(v) -} - -var _ sql.Scanner = scanner{} diff --git a/vendor/github.com/upper/db/v4/internal/sqlbuilder/select.go b/vendor/github.com/upper/db/v4/internal/sqlbuilder/select.go deleted file mode 100644 index 93772405..00000000 --- a/vendor/github.com/upper/db/v4/internal/sqlbuilder/select.go +++ /dev/null @@ -1,524 +0,0 @@ -package sqlbuilder - -import ( - "context" - "database/sql" - "errors" - "fmt" - "strings" - - "github.com/upper/db/v4" - "github.com/upper/db/v4/internal/adapter" - "github.com/upper/db/v4/internal/immutable" - "github.com/upper/db/v4/internal/sqladapter/exql" -) - -type selectorQuery struct { - table *exql.Columns - tableArgs []interface{} - - distinct bool - - where *exql.Where - whereArgs []interface{} - - groupBy *exql.GroupBy - groupByArgs []interface{} - - orderBy *exql.OrderBy - orderByArgs []interface{} - - limit exql.Limit - offset exql.Offset - - columns *exql.Columns - columnsArgs []interface{} - - joins []*exql.Join - joinsArgs []interface{} - - amendFn func(string) string -} - -func (sq *selectorQuery) and(b *sqlBuilder, terms ...interface{}) error { - where, whereArgs := b.t.toWhereWithArguments(terms) - - if sq.where == nil { - sq.where, sq.whereArgs = &exql.Where{}, []interface{}{} - } - sq.where.Append(&where) - sq.whereArgs = append(sq.whereArgs, whereArgs...) 
- - return nil -} - -func (sq *selectorQuery) arguments() []interface{} { - return joinArguments( - sq.columnsArgs, - sq.tableArgs, - sq.joinsArgs, - sq.whereArgs, - sq.groupByArgs, - sq.orderByArgs, - ) -} - -func (sq *selectorQuery) statement() *exql.Statement { - stmt := &exql.Statement{ - Type: exql.Select, - Table: sq.table, - Columns: sq.columns, - Distinct: sq.distinct, - Limit: sq.limit, - Offset: sq.offset, - Where: sq.where, - OrderBy: sq.orderBy, - GroupBy: sq.groupBy, - } - - if len(sq.joins) > 0 { - stmt.Joins = exql.JoinConditions(sq.joins...) - } - - stmt.SetAmendment(sq.amendFn) - - return stmt -} - -func (sq *selectorQuery) pushJoin(t string, tables []interface{}) error { - fragments, args, err := columnFragments(tables) - if err != nil { - return err - } - - if sq.joins == nil { - sq.joins = []*exql.Join{} - } - sq.joins = append(sq.joins, - &exql.Join{ - Type: t, - Table: exql.JoinColumns(fragments...), - }, - ) - - sq.joinsArgs = append(sq.joinsArgs, args...) - - return nil -} - -type selector struct { - builder *sqlBuilder - - fn func(*selectorQuery) error - prev *selector -} - -var _ = immutable.Immutable(&selector{}) - -func (sel *selector) SQL() *sqlBuilder { - if sel.prev == nil { - return sel.builder - } - return sel.prev.SQL() -} - -func (sel *selector) String() string { - s, err := sel.Compile() - if err != nil { - panic(err.Error()) - } - return prepareQueryForDisplay(s) -} - -func (sel *selector) frame(fn func(*selectorQuery) error) *selector { - return &selector{prev: sel, fn: fn} -} - -func (sel *selector) clone() db.Selector { - return sel.frame(func(*selectorQuery) error { - return nil - }) -} - -func (sel *selector) From(tables ...interface{}) db.Selector { - return sel.frame( - func(sq *selectorQuery) error { - fragments, args, err := columnFragments(tables) - if err != nil { - return err - } - sq.table = exql.JoinColumns(fragments...) 
- sq.tableArgs = args - return nil - }, - ) -} - -func (sel *selector) setColumns(columns ...interface{}) db.Selector { - return sel.frame(func(sq *selectorQuery) error { - sq.columns = nil - return sq.pushColumns(columns...) - }) -} - -func (sel *selector) Columns(columns ...interface{}) db.Selector { - return sel.frame(func(sq *selectorQuery) error { - return sq.pushColumns(columns...) - }) -} - -func (sq *selectorQuery) pushColumns(columns ...interface{}) error { - f, args, err := columnFragments(columns) - if err != nil { - return err - } - - c := exql.JoinColumns(f...) - - if sq.columns != nil { - sq.columns.Append(c) - } else { - sq.columns = c - } - - sq.columnsArgs = append(sq.columnsArgs, args...) - return nil -} - -func (sel *selector) Distinct(exps ...interface{}) db.Selector { - return sel.frame(func(sq *selectorQuery) error { - sq.distinct = true - return sq.pushColumns(exps...) - }) -} - -func (sel *selector) Where(terms ...interface{}) db.Selector { - return sel.frame(func(sq *selectorQuery) error { - if len(terms) == 1 && terms[0] == nil { - sq.where, sq.whereArgs = &exql.Where{}, []interface{}{} - return nil - } - return sq.and(sel.SQL(), terms...) - }) -} - -func (sel *selector) And(terms ...interface{}) db.Selector { - return sel.frame(func(sq *selectorQuery) error { - return sq.and(sel.SQL(), terms...) - }) -} - -func (sel *selector) Amend(fn func(string) string) db.Selector { - return sel.frame(func(sq *selectorQuery) error { - sq.amendFn = fn - return nil - }) -} - -func (sel *selector) Arguments() []interface{} { - sq, err := sel.build() - if err != nil { - return nil - } - return sq.arguments() -} - -func (sel *selector) GroupBy(columns ...interface{}) db.Selector { - return sel.frame(func(sq *selectorQuery) error { - fragments, args, err := columnFragments(columns) - if err != nil { - return err - } - - if fragments != nil { - sq.groupBy = exql.GroupByColumns(fragments...) 
- } - sq.groupByArgs = args - - return nil - }) -} - -func (sel *selector) OrderBy(columns ...interface{}) db.Selector { - return sel.frame(func(sq *selectorQuery) error { - - if len(columns) == 1 && columns[0] == nil { - sq.orderBy = nil - sq.orderByArgs = nil - return nil - } - - var sortColumns exql.SortColumns - - for i := range columns { - var sort *exql.SortColumn - - switch value := columns[i].(type) { - case *adapter.RawExpr: - query, args := Preprocess(value.Raw(), value.Arguments()) - sort = &exql.SortColumn{ - Column: &exql.Raw{Value: query}, - } - sq.orderByArgs = append(sq.orderByArgs, args...) - case *adapter.FuncExpr: - fnName, fnArgs := value.Name(), value.Arguments() - if len(fnArgs) == 0 { - fnName = fnName + "()" - } else { - fnName = fnName + "(?" + strings.Repeat("?, ", len(fnArgs)-1) + ")" - } - fnName, fnArgs = Preprocess(fnName, fnArgs) - sort = &exql.SortColumn{ - Column: &exql.Raw{Value: fnName}, - } - sq.orderByArgs = append(sq.orderByArgs, fnArgs...) - case string: - if strings.HasPrefix(value, "-") { - sort = &exql.SortColumn{ - Column: exql.ColumnWithName(value[1:]), - Order: exql.Order_Descendent, - } - } else { - chunks := strings.SplitN(value, " ", 2) - - order := exql.Order_Ascendent - if len(chunks) > 1 && strings.ToUpper(chunks[1]) == "DESC" { - order = exql.Order_Descendent - } - - sort = &exql.SortColumn{ - Column: exql.ColumnWithName(chunks[0]), - Order: order, - } - } - default: - return fmt.Errorf("Can't sort by type %T", value) - } - sortColumns.Columns = append(sortColumns.Columns, sort) - } - - sq.orderBy = &exql.OrderBy{ - SortColumns: &sortColumns, - } - return nil - }) -} - -func (sel *selector) Using(columns ...interface{}) db.Selector { - return sel.frame(func(sq *selectorQuery) error { - - joins := len(sq.joins) - if joins == 0 { - return errors.New(`cannot use Using() without a preceding Join() expression`) - } - - lastJoin := sq.joins[joins-1] - if lastJoin.On != nil { - return errors.New(`cannot use Using() and 
On() with the same Join() expression`) - } - - fragments, args, err := columnFragments(columns) - if err != nil { - return err - } - - sq.joinsArgs = append(sq.joinsArgs, args...) - lastJoin.Using = exql.UsingColumns(fragments...) - - return nil - }) -} - -func (sel *selector) FullJoin(tables ...interface{}) db.Selector { - return sel.frame(func(sq *selectorQuery) error { - return sq.pushJoin("FULL", tables) - }) -} - -func (sel *selector) CrossJoin(tables ...interface{}) db.Selector { - return sel.frame(func(sq *selectorQuery) error { - return sq.pushJoin("CROSS", tables) - }) -} - -func (sel *selector) RightJoin(tables ...interface{}) db.Selector { - return sel.frame(func(sq *selectorQuery) error { - return sq.pushJoin("RIGHT", tables) - }) -} - -func (sel *selector) LeftJoin(tables ...interface{}) db.Selector { - return sel.frame(func(sq *selectorQuery) error { - return sq.pushJoin("LEFT", tables) - }) -} - -func (sel *selector) Join(tables ...interface{}) db.Selector { - return sel.frame(func(sq *selectorQuery) error { - return sq.pushJoin("", tables) - }) -} - -func (sel *selector) On(terms ...interface{}) db.Selector { - return sel.frame(func(sq *selectorQuery) error { - joins := len(sq.joins) - - if joins == 0 { - return errors.New(`cannot use On() without a preceding Join() expression`) - } - - lastJoin := sq.joins[joins-1] - if lastJoin.On != nil { - return errors.New(`cannot use Using() and On() with the same Join() expression`) - } - - w, a := sel.SQL().t.toWhereWithArguments(terms) - o := exql.On(w) - - lastJoin.On = &o - - sq.joinsArgs = append(sq.joinsArgs, a...) 
- - return nil - }) -} - -func (sel *selector) Limit(n int) db.Selector { - return sel.frame(func(sq *selectorQuery) error { - if n < 0 { - n = 0 - } - sq.limit = exql.Limit(n) - return nil - }) -} - -func (sel *selector) Offset(n int) db.Selector { - return sel.frame(func(sq *selectorQuery) error { - if n < 0 { - n = 0 - } - sq.offset = exql.Offset(n) - return nil - }) -} - -func (sel *selector) template() *exql.Template { - return sel.SQL().t.Template -} - -func (sel *selector) As(alias string) db.Selector { - return sel.frame(func(sq *selectorQuery) error { - if sq.table == nil { - return errors.New("Cannot use As() without a preceding From() expression") - } - last := len(sq.table.Columns) - 1 - if raw, ok := sq.table.Columns[last].(*exql.Raw); ok { - compiled, err := exql.ColumnWithName(alias).Compile(sel.template()) - if err != nil { - return err - } - sq.table.Columns[last] = &exql.Raw{Value: raw.Value + " AS " + compiled} - } - return nil - }) -} - -func (sel *selector) statement() *exql.Statement { - sq, _ := sel.build() - return sq.statement() -} - -func (sel *selector) QueryRow() (*sql.Row, error) { - return sel.QueryRowContext(sel.SQL().sess.Context()) -} - -func (sel *selector) QueryRowContext(ctx context.Context) (*sql.Row, error) { - sq, err := sel.build() - if err != nil { - return nil, err - } - - return sel.SQL().sess.StatementQueryRow(ctx, sq.statement(), sq.arguments()...) 
-} - -func (sel *selector) Prepare() (*sql.Stmt, error) { - return sel.PrepareContext(sel.SQL().sess.Context()) -} - -func (sel *selector) PrepareContext(ctx context.Context) (*sql.Stmt, error) { - sq, err := sel.build() - if err != nil { - return nil, err - } - return sel.SQL().sess.StatementPrepare(ctx, sq.statement()) -} - -func (sel *selector) Query() (*sql.Rows, error) { - return sel.QueryContext(sel.SQL().sess.Context()) -} - -func (sel *selector) QueryContext(ctx context.Context) (*sql.Rows, error) { - sq, err := sel.build() - if err != nil { - return nil, err - } - return sel.SQL().sess.StatementQuery(ctx, sq.statement(), sq.arguments()...) -} - -func (sel *selector) Iterator() db.Iterator { - return sel.IteratorContext(sel.SQL().sess.Context()) -} - -func (sel *selector) IteratorContext(ctx context.Context) db.Iterator { - sess := sel.SQL().sess - sq, err := sel.build() - if err != nil { - return &iterator{sess, nil, err} - } - - rows, err := sess.StatementQuery(ctx, sq.statement(), sq.arguments()...) 
- return &iterator{sess, rows, err} -} - -func (sel *selector) Paginate(pageSize uint) db.Paginator { - return newPaginator(sel.clone(), pageSize) -} - -func (sel *selector) All(destSlice interface{}) error { - return sel.Iterator().All(destSlice) -} - -func (sel *selector) One(dest interface{}) error { - return sel.Iterator().One(dest) -} - -func (sel *selector) build() (*selectorQuery, error) { - sq, err := immutable.FastForward(sel) - if err != nil { - return nil, err - } - return sq.(*selectorQuery), nil -} - -func (sel *selector) Compile() (string, error) { - return sel.statement().Compile(sel.template()) -} - -func (sel *selector) Prev() immutable.Immutable { - if sel == nil { - return nil - } - return sel.prev -} - -func (sel *selector) Fn(in interface{}) error { - if sel.fn == nil { - return nil - } - return sel.fn(in.(*selectorQuery)) -} - -func (sel *selector) Base() interface{} { - return &selectorQuery{} -} diff --git a/vendor/github.com/upper/db/v4/internal/sqlbuilder/sqlbuilder.go b/vendor/github.com/upper/db/v4/internal/sqlbuilder/sqlbuilder.go deleted file mode 100644 index af90a57c..00000000 --- a/vendor/github.com/upper/db/v4/internal/sqlbuilder/sqlbuilder.go +++ /dev/null @@ -1,40 +0,0 @@ -package sqlbuilder - -import ( - "database/sql" - "fmt" - - "github.com/upper/db/v4" -) - -// Engine represents a SQL database engine. -type Engine interface { - db.Session - - db.SQL -} - -func lookupAdapter(adapterName string) (Adapter, error) { - adapter := db.LookupAdapter(adapterName) - if sqlAdapter, ok := adapter.(Adapter); ok { - return sqlAdapter, nil - } - return nil, fmt.Errorf("%w %q", db.ErrMissingAdapter, adapterName) -} - -func BindTx(adapterName string, tx *sql.Tx) (Tx, error) { - adapter, err := lookupAdapter(adapterName) - if err != nil { - return nil, err - } - return adapter.NewTx(tx) -} - -// Bind creates a binding between an adapter and a *sql.Tx or a *sql.DB. 
-func BindDB(adapterName string, sess *sql.DB) (db.Session, error) { - adapter, err := lookupAdapter(adapterName) - if err != nil { - return nil, err - } - return adapter.New(sess) -} diff --git a/vendor/github.com/upper/db/v4/internal/sqlbuilder/template.go b/vendor/github.com/upper/db/v4/internal/sqlbuilder/template.go deleted file mode 100644 index eca2382d..00000000 --- a/vendor/github.com/upper/db/v4/internal/sqlbuilder/template.go +++ /dev/null @@ -1,323 +0,0 @@ -package sqlbuilder - -import ( - "database/sql/driver" - "fmt" - "strings" - - db "github.com/upper/db/v4" - "github.com/upper/db/v4/internal/adapter" - "github.com/upper/db/v4/internal/sqladapter/exql" -) - -type templateWithUtils struct { - *exql.Template -} - -func newTemplateWithUtils(template *exql.Template) *templateWithUtils { - return &templateWithUtils{template} -} - -func (tu *templateWithUtils) PlaceholderValue(in interface{}) (exql.Fragment, []interface{}) { - switch t := in.(type) { - case *adapter.RawExpr: - return &exql.Raw{Value: t.Raw()}, t.Arguments() - case *adapter.FuncExpr: - fnName := t.Name() - fnArgs := []interface{}{} - args, _ := toInterfaceArguments(t.Arguments()) - fragments := []string{} - for i := range args { - frag, args := tu.PlaceholderValue(args[i]) - fragment, err := frag.Compile(tu.Template) - if err == nil { - fragments = append(fragments, fragment) - fnArgs = append(fnArgs, args...) - } - } - return &exql.Raw{Value: fnName + `(` + strings.Join(fragments, `, `) + `)`}, fnArgs - default: - return sqlPlaceholder, []interface{}{in} - } -} - -// toWhereWithArguments converts the given parameters into a exql.Where value. 
-func (tu *templateWithUtils) toWhereWithArguments(term interface{}) (where exql.Where, args []interface{}) { - args = []interface{}{} - - switch t := term.(type) { - case []interface{}: - if len(t) > 0 { - if s, ok := t[0].(string); ok { - if strings.ContainsAny(s, "?") || len(t) == 1 { - s, args = Preprocess(s, t[1:]) - where.Conditions = []exql.Fragment{&exql.Raw{Value: s}} - } else { - var val interface{} - key := s - if len(t) > 2 { - val = t[1:] - } else { - val = t[1] - } - cv, v := tu.toColumnValues(adapter.NewConstraint(key, val)) - args = append(args, v...) - for i := range cv.ColumnValues { - where.Conditions = append(where.Conditions, cv.ColumnValues[i]) - } - } - return - } - } - for i := range t { - w, v := tu.toWhereWithArguments(t[i]) - if len(w.Conditions) == 0 { - continue - } - args = append(args, v...) - where.Conditions = append(where.Conditions, w.Conditions...) - } - return - case *adapter.RawExpr: - r, v := Preprocess(t.Raw(), t.Arguments()) - where.Conditions = []exql.Fragment{&exql.Raw{Value: r}} - args = append(args, v...) - return - case adapter.Constraints: - for _, c := range t.Constraints() { - w, v := tu.toWhereWithArguments(c) - if len(w.Conditions) == 0 { - continue - } - args = append(args, v...) - where.Conditions = append(where.Conditions, w.Conditions...) - } - return - case adapter.LogicalExpr: - var cond exql.Where - - expressions := t.Expressions() - for i := range expressions { - w, v := tu.toWhereWithArguments(expressions[i]) - if len(w.Conditions) == 0 { - continue - } - args = append(args, v...) - cond.Conditions = append(cond.Conditions, w.Conditions...) - } - if len(cond.Conditions) < 1 { - return - } - - if len(cond.Conditions) <= 1 { - where.Conditions = append(where.Conditions, cond.Conditions...) 
- return where, args - } - - var frag exql.Fragment - switch t.Operator() { - case adapter.LogicalOperatorNone, adapter.LogicalOperatorAnd: - q := exql.And(cond) - frag = &q - case adapter.LogicalOperatorOr: - q := exql.Or(cond) - frag = &q - default: - panic(fmt.Sprintf("Unknown type %T", t)) - } - where.Conditions = append(where.Conditions, frag) - return - - case db.InsertResult: - return tu.toWhereWithArguments(t.ID()) - - case adapter.Constraint: - cv, v := tu.toColumnValues(t) - args = append(args, v...) - where.Conditions = append(where.Conditions, cv.ColumnValues...) - return where, args - } - - panic(fmt.Sprintf("Unknown condition type %T", term)) -} - -func (tu *templateWithUtils) comparisonOperatorMapper(t adapter.ComparisonOperator) string { - if t == adapter.ComparisonOperatorCustom { - return "" - } - if tu.ComparisonOperator != nil { - if op, ok := tu.ComparisonOperator[t]; ok { - return op - } - } - if op, ok := comparisonOperators[t]; ok { - return op - } - panic(fmt.Sprintf("unsupported comparison operator %v", t)) -} - -func (tu *templateWithUtils) toColumnValues(term interface{}) (cv exql.ColumnValues, args []interface{}) { - args = []interface{}{} - - switch t := term.(type) { - case adapter.Constraint: - columnValue := exql.ColumnValue{} - - // Getting column and operator. - if column, ok := t.Key().(string); ok { - chunks := strings.SplitN(strings.TrimSpace(column), " ", 2) - columnValue.Column = exql.ColumnWithName(chunks[0]) - if len(chunks) > 1 { - columnValue.Operator = chunks[1] - } - } else { - if rawValue, ok := t.Key().(*adapter.RawExpr); ok { - columnValue.Column = &exql.Raw{Value: rawValue.Raw()} - args = append(args, rawValue.Arguments()...) - } else { - columnValue.Column = &exql.Raw{Value: fmt.Sprintf("%v", t.Key())} - } - } - - switch value := t.Value().(type) { - case *db.FuncExpr: - fnName, fnArgs := value.Name(), value.Arguments() - if len(fnArgs) == 0 { - // A function with no arguments. 
- fnName = fnName + "()" - } else { - // A function with one or more arguments. - fnName = fnName + "(?" + strings.Repeat("?, ", len(fnArgs)-1) + ")" - } - fnName, fnArgs = Preprocess(fnName, fnArgs) - columnValue.Value = &exql.Raw{Value: fnName} - args = append(args, fnArgs...) - case *db.RawExpr: - q, a := Preprocess(value.Raw(), value.Arguments()) - columnValue.Value = &exql.Raw{Value: q} - args = append(args, a...) - case driver.Valuer: - columnValue.Value = sqlPlaceholder - args = append(args, value) - case *db.Comparison: - wrapper := &operatorWrapper{ - tu: tu, - cv: &columnValue, - op: value.Comparison, - } - - q, a := wrapper.preprocess() - q, a = Preprocess(q, a) - - columnValue = exql.ColumnValue{ - Column: &exql.Raw{Value: q}, - } - if a != nil { - args = append(args, a...) - } - - cv.ColumnValues = append(cv.ColumnValues, &columnValue) - return cv, args - default: - wrapper := &operatorWrapper{ - tu: tu, - cv: &columnValue, - v: value, - } - - q, a := wrapper.preprocess() - q, a = Preprocess(q, a) - - columnValue = exql.ColumnValue{ - Column: &exql.Raw{Value: q}, - } - if a != nil { - args = append(args, a...) - } - - cv.ColumnValues = append(cv.ColumnValues, &columnValue) - return cv, args - } - - if columnValue.Operator == "" { - columnValue.Operator = tu.comparisonOperatorMapper(adapter.ComparisonOperatorEqual) - } - - cv.ColumnValues = append(cv.ColumnValues, &columnValue) - return cv, args - - case *adapter.RawExpr: - columnValue := exql.ColumnValue{} - p, q := Preprocess(t.Raw(), t.Arguments()) - columnValue.Column = &exql.Raw{Value: p} - cv.ColumnValues = append(cv.ColumnValues, &columnValue) - args = append(args, q...) - return cv, args - - case adapter.Constraints: - for _, constraint := range t.Constraints() { - p, q := tu.toColumnValues(constraint) - cv.ColumnValues = append(cv.ColumnValues, p.ColumnValues...) - args = append(args, q...) 
- } - return cv, args - } - - panic(fmt.Sprintf("Unknown term type %T.", term)) -} - -func (tu *templateWithUtils) setColumnValues(term interface{}) (cv exql.ColumnValues, args []interface{}) { - args = []interface{}{} - - switch t := term.(type) { - case []interface{}: - l := len(t) - for i := 0; i < l; i++ { - column, isString := t[i].(string) - - if !isString { - p, q := tu.setColumnValues(t[i]) - cv.ColumnValues = append(cv.ColumnValues, p.ColumnValues...) - args = append(args, q...) - continue - } - - if !strings.ContainsAny(column, tu.AssignmentOperator) { - column = column + " " + tu.AssignmentOperator + " ?" - } - - chunks := strings.SplitN(column, tu.AssignmentOperator, 2) - - column = chunks[0] - format := strings.TrimSpace(chunks[1]) - - columnValue := exql.ColumnValue{ - Column: exql.ColumnWithName(column), - Operator: tu.AssignmentOperator, - Value: &exql.Raw{Value: format}, - } - - ps := strings.Count(format, "?") - if i+ps < l { - for j := 0; j < ps; j++ { - args = append(args, t[i+j+1]) - } - i = i + ps - } else { - panic(fmt.Sprintf("Format string %q has more placeholders than given arguments.", format)) - } - - cv.ColumnValues = append(cv.ColumnValues, &columnValue) - } - return cv, args - case *adapter.RawExpr: - columnValue := exql.ColumnValue{} - p, q := Preprocess(t.Raw(), t.Arguments()) - columnValue.Column = &exql.Raw{Value: p} - cv.ColumnValues = append(cv.ColumnValues, &columnValue) - args = append(args, q...) 
- return cv, args - } - - panic(fmt.Sprintf("Unknown term type %T.", term)) -} diff --git a/vendor/github.com/upper/db/v4/internal/sqlbuilder/update.go b/vendor/github.com/upper/db/v4/internal/sqlbuilder/update.go deleted file mode 100644 index 8f433a18..00000000 --- a/vendor/github.com/upper/db/v4/internal/sqlbuilder/update.go +++ /dev/null @@ -1,242 +0,0 @@ -package sqlbuilder - -import ( - "context" - "database/sql" - - "github.com/upper/db/v4" - "github.com/upper/db/v4/internal/immutable" - "github.com/upper/db/v4/internal/sqladapter/exql" -) - -type updaterQuery struct { - table string - - columnValues *exql.ColumnValues - columnValuesArgs []interface{} - - limit int - - where *exql.Where - whereArgs []interface{} - - amendFn func(string) string -} - -func (uq *updaterQuery) and(b *sqlBuilder, terms ...interface{}) error { - where, whereArgs := b.t.toWhereWithArguments(terms) - - if uq.where == nil { - uq.where, uq.whereArgs = &exql.Where{}, []interface{}{} - } - uq.where.Append(&where) - uq.whereArgs = append(uq.whereArgs, whereArgs...) 
- - return nil -} - -func (uq *updaterQuery) statement() *exql.Statement { - stmt := &exql.Statement{ - Type: exql.Update, - Table: exql.TableWithName(uq.table), - ColumnValues: uq.columnValues, - } - - if uq.where != nil { - stmt.Where = uq.where - } - - if uq.limit != 0 { - stmt.Limit = exql.Limit(uq.limit) - } - - stmt.SetAmendment(uq.amendFn) - - return stmt -} - -func (uq *updaterQuery) arguments() []interface{} { - return joinArguments( - uq.columnValuesArgs, - uq.whereArgs, - ) -} - -type updater struct { - builder *sqlBuilder - - fn func(*updaterQuery) error - prev *updater -} - -var _ = immutable.Immutable(&updater{}) - -func (upd *updater) SQL() *sqlBuilder { - if upd.prev == nil { - return upd.builder - } - return upd.prev.SQL() -} - -func (upd *updater) template() *exql.Template { - return upd.SQL().t.Template -} - -func (upd *updater) String() string { - s, err := upd.Compile() - if err != nil { - panic(err.Error()) - } - return prepareQueryForDisplay(s) -} - -func (upd *updater) setTable(table string) *updater { - return upd.frame(func(uq *updaterQuery) error { - uq.table = table - return nil - }) -} - -func (upd *updater) frame(fn func(*updaterQuery) error) *updater { - return &updater{prev: upd, fn: fn} -} - -func (upd *updater) Set(terms ...interface{}) db.Updater { - return upd.frame(func(uq *updaterQuery) error { - if uq.columnValues == nil { - uq.columnValues = &exql.ColumnValues{} - } - - if len(terms) == 1 { - ff, vv, err := Map(terms[0], nil) - if err == nil && len(ff) > 0 { - cvs := make([]exql.Fragment, 0, len(ff)) - args := make([]interface{}, 0, len(vv)) - - for i := range ff { - cv := &exql.ColumnValue{ - Column: exql.ColumnWithName(ff[i]), - Operator: upd.SQL().t.AssignmentOperator, - } - - var localArgs []interface{} - cv.Value, localArgs = upd.SQL().t.PlaceholderValue(vv[i]) - - args = append(args, localArgs...) - cvs = append(cvs, cv) - } - - uq.columnValues.Insert(cvs...) - uq.columnValuesArgs = append(uq.columnValuesArgs, args...) 
- - return nil - } - } - - cv, arguments := upd.SQL().t.setColumnValues(terms) - uq.columnValues.Insert(cv.ColumnValues...) - uq.columnValuesArgs = append(uq.columnValuesArgs, arguments...) - return nil - }) -} - -func (upd *updater) Amend(fn func(string) string) db.Updater { - return upd.frame(func(uq *updaterQuery) error { - uq.amendFn = fn - return nil - }) -} - -func (upd *updater) Arguments() []interface{} { - uq, err := upd.build() - if err != nil { - return nil - } - return uq.arguments() -} - -func (upd *updater) Where(terms ...interface{}) db.Updater { - return upd.frame(func(uq *updaterQuery) error { - uq.where, uq.whereArgs = &exql.Where{}, []interface{}{} - return uq.and(upd.SQL(), terms...) - }) -} - -func (upd *updater) And(terms ...interface{}) db.Updater { - return upd.frame(func(uq *updaterQuery) error { - return uq.and(upd.SQL(), terms...) - }) -} - -func (upd *updater) Prepare() (*sql.Stmt, error) { - return upd.PrepareContext(upd.SQL().sess.Context()) -} - -func (upd *updater) PrepareContext(ctx context.Context) (*sql.Stmt, error) { - uq, err := upd.build() - if err != nil { - return nil, err - } - return upd.SQL().sess.StatementPrepare(ctx, uq.statement()) -} - -func (upd *updater) Exec() (sql.Result, error) { - return upd.ExecContext(upd.SQL().sess.Context()) -} - -func (upd *updater) ExecContext(ctx context.Context) (sql.Result, error) { - uq, err := upd.build() - if err != nil { - return nil, err - } - return upd.SQL().sess.StatementExec(ctx, uq.statement(), uq.arguments()...) 
-} - -func (upd *updater) Limit(limit int) db.Updater { - return upd.frame(func(uq *updaterQuery) error { - uq.limit = limit - return nil - }) -} - -func (upd *updater) statement() (*exql.Statement, error) { - iq, err := upd.build() - if err != nil { - return nil, err - } - return iq.statement(), nil -} - -func (upd *updater) build() (*updaterQuery, error) { - uq, err := immutable.FastForward(upd) - if err != nil { - return nil, err - } - return uq.(*updaterQuery), nil -} - -func (upd *updater) Compile() (string, error) { - s, err := upd.statement() - if err != nil { - return "", err - } - return s.Compile(upd.template()) -} - -func (upd *updater) Prev() immutable.Immutable { - if upd == nil { - return nil - } - return upd.prev -} - -func (upd *updater) Fn(in interface{}) error { - if upd.fn == nil { - return nil - } - return upd.fn(in.(*updaterQuery)) -} - -func (upd *updater) Base() interface{} { - return &updaterQuery{} -} diff --git a/vendor/github.com/upper/db/v4/internal/sqlbuilder/wrapper.go b/vendor/github.com/upper/db/v4/internal/sqlbuilder/wrapper.go deleted file mode 100644 index a16c3984..00000000 --- a/vendor/github.com/upper/db/v4/internal/sqlbuilder/wrapper.go +++ /dev/null @@ -1,85 +0,0 @@ -// Copyright (c) 2012-present The upper.io/db authors. All rights reserved. -// -// Permission is hereby granted, free of charge, to any person obtaining -// a copy of this software and associated documentation files (the -// "Software"), to deal in the Software without restriction, including -// without limitation the rights to use, copy, modify, merge, publish, -// distribute, sublicense, and/or sell copies of the Software, and to -// permit persons to whom the Software is furnished to do so, subject to -// the following conditions: -// -// The above copyright notice and this permission notice shall be -// included in all copies or substantial portions of the Software. 
-// -// THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, -// EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF -// MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND -// NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE -// LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION -// OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION -// WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE. - -package sqlbuilder - -import ( - "database/sql" - - db "github.com/upper/db/v4" -) - -// Tx represents a transaction on a SQL database. A transaction is like a -// regular Session except it has two extra methods: Commit and Rollback. -// -// A transaction needs to be committed (with Commit) to make changes permanent, -// changes can be discarded before committing by rolling back (with Rollback). -// After either committing or rolling back a transaction it can not longer be -// used and it's automatically closed. -type Tx interface { - // All db.Session methods are available on transaction sessions. They will - // run on the same transaction. - db.Session - - Commit() error - - Rollback() error -} - -// Adapter represents a SQL adapter. -type Adapter interface { - // New wraps an active *sql.DB session and returns a SQLBuilder database. The - // adapter needs to be imported to the blank namespace in order for it to be - // used here. - // - // This method is internally used by upper-db to create a builder backed by the - // given database. You may want to use your adapter's New function instead of - // this one. - New(*sql.DB) (db.Session, error) - - // NewTx wraps an active *sql.Tx transation and returns a SQLBuilder - // transaction. The adapter needs to be imported to the blank namespace in - // order for it to be used. - // - // This method is internally used by upper-db to create a builder backed by the - // given transaction. 
You may want to use your adapter's NewTx function - // instead of this one. - NewTx(*sql.Tx) (Tx, error) - - // Open opens a SQL database. - OpenDSN(db.ConnectionURL) (db.Session, error) -} - -type dbAdapter struct { - Adapter -} - -func (d *dbAdapter) Open(conn db.ConnectionURL) (db.Session, error) { - sess, err := d.Adapter.OpenDSN(conn) - if err != nil { - return nil, err - } - return sess.(db.Session), nil -} - -func NewCompatAdapter(adapter Adapter) db.Adapter { - return &dbAdapter{adapter} -} diff --git a/vendor/github.com/upper/db/v4/intersection.go b/vendor/github.com/upper/db/v4/intersection.go deleted file mode 100644 index 3478ba07..00000000 --- a/vendor/github.com/upper/db/v4/intersection.go +++ /dev/null @@ -1,73 +0,0 @@ -// Copyright (c) 2012-present The upper.io/db authors. All rights reserved. -// -// Permission is hereby granted, free of charge, to any person obtaining -// a copy of this software and associated documentation files (the -// "Software"), to deal in the Software without restriction, including -// without limitation the rights to use, copy, modify, merge, publish, -// distribute, sublicense, and/or sell copies of the Software, and to -// permit persons to whom the Software is furnished to do so, subject to -// the following conditions: -// -// The above copyright notice and this permission notice shall be -// included in all copies or substantial portions of the Software. -// -// THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, -// EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF -// MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND -// NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE -// LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION -// OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION -// WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE. 
- -package db - -import ( - "github.com/upper/db/v4/internal/adapter" -) - -// AndExpr represents an expression joined by a logical conjuction (AND). -type AndExpr struct { - *adapter.LogicalExprGroup -} - -// And adds more expressions to the group. -func (a *AndExpr) And(andConds ...LogicalExpr) *AndExpr { - var fn func(*[]LogicalExpr) error - if len(andConds) > 0 { - fn = func(in *[]LogicalExpr) error { - *in = append(*in, andConds...) - return nil - } - } - return &AndExpr{a.LogicalExprGroup.Frame(fn)} -} - -// Empty returns false if the expressions has zero conditions. -func (a *AndExpr) Empty() bool { - return a.LogicalExprGroup.Empty() -} - -// And joins conditions under logical conjunction. Conditions can be -// represented by `db.Cond{}`, `db.Or()` or `db.And()`. -// -// Examples: -// -// // name = "Peter" AND last_name = "Parker" -// db.And( -// db.Cond{"name": "Peter"}, -// db.Cond{"last_name": "Parker "}, -// ) -// -// // (name = "Peter" OR name = "Mickey") AND last_name = "Mouse" -// db.And( -// db.Or( -// db.Cond{"name": "Peter"}, -// db.Cond{"name": "Mickey"}, -// ), -// db.Cond{"last_name": "Mouse"}, -// ) -func And(conds ...LogicalExpr) *AndExpr { - return &AndExpr{adapter.NewLogicalExprGroup(adapter.LogicalOperatorAnd, conds...)} -} - -var _ = adapter.LogicalExpr(&AndExpr{}) diff --git a/vendor/github.com/upper/db/v4/iterator.go b/vendor/github.com/upper/db/v4/iterator.go deleted file mode 100644 index 4536e46c..00000000 --- a/vendor/github.com/upper/db/v4/iterator.go +++ /dev/null @@ -1,47 +0,0 @@ -// Copyright (c) 2012-present The upper.io/db authors. All rights reserved. 
-// -// Permission is hereby granted, free of charge, to any person obtaining -// a copy of this software and associated documentation files (the -// "Software"), to deal in the Software without restriction, including -// without limitation the rights to use, copy, modify, merge, publish, -// distribute, sublicense, and/or sell copies of the Software, and to -// permit persons to whom the Software is furnished to do so, subject to -// the following conditions: -// -// The above copyright notice and this permission notice shall be -// included in all copies or substantial portions of the Software. -// -// THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, -// EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF -// MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND -// NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE -// LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION -// OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION -// WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE. - -package db - -// Iterator provides methods for iterating over query results. -type Iterator interface { - // ResultMapper provides methods to retrieve and map results. - ResultMapper - - // Scan dumps the current result into the given pointer variable pointers. - Scan(dest ...interface{}) error - - // NextScan advances the iterator and performs Scan. - NextScan(dest ...interface{}) error - - // ScanOne advances the iterator, performs Scan and closes the iterator. - ScanOne(dest ...interface{}) error - - // Next dumps the current element into the given destination, which could be - // a pointer to either a map or a struct. - Next(dest ...interface{}) bool - - // Err returns the last error produced by the cursor. - Err() error - - // Close closes the iterator and frees up the cursor. 
- Close() error -} diff --git a/vendor/github.com/upper/db/v4/logger.go b/vendor/github.com/upper/db/v4/logger.go deleted file mode 100644 index 929243a2..00000000 --- a/vendor/github.com/upper/db/v4/logger.go +++ /dev/null @@ -1,370 +0,0 @@ -// Copyright (c) 2012-present The upper.io/db authors. All rights reserved. -// -// Permission is hereby granted, free of charge, to any person obtaining -// a copy of this software and associated documentation files (the -// "Software"), to deal in the Software without restriction, including -// without limitation the rights to use, copy, modify, merge, publish, -// distribute, sublicense, and/or sell copies of the Software, and to -// permit persons to whom the Software is furnished to do so, subject to -// the following conditions: -// -// The above copyright notice and this permission notice shall be -// included in all copies or substantial portions of the Software. -// -// THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, -// EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF -// MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND -// NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE -// LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION -// OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION -// WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE. 
- -package db - -import ( - "context" - "fmt" - "log" - "os" - "regexp" - "runtime" - "strings" - "time" -) - -const ( - fmtLogSessID = `Session ID: %05d` - fmtLogTxID = `Transaction ID: %05d` - fmtLogQuery = `Query: %s` - fmtLogArgs = `Arguments: %#v` - fmtLogRowsAffected = `Rows affected: %d` - fmtLogLastInsertID = `Last insert ID: %d` - fmtLogError = `Error: %v` - fmtLogStack = `Stack: %v` - fmtLogTimeTaken = `Time taken: %0.5fs` - fmtLogContext = `Context: %v` -) - -const ( - maxFrames = 30 - skipFrames = 3 -) - -var ( - reInvisibleChars = regexp.MustCompile(`[\s\r\n\t]+`) -) - -// LogLevel represents a verbosity level for logs -type LogLevel int8 - -// Log levels -const ( - LogLevelTrace LogLevel = -1 - - LogLevelDebug LogLevel = iota - LogLevelInfo - LogLevelWarn - LogLevelError - LogLevelFatal - LogLevelPanic -) - -var logLevels = map[LogLevel]string{ - LogLevelTrace: "TRACE", - LogLevelDebug: "DEBUG", - LogLevelInfo: "INFO", - LogLevelWarn: "WARNING", - LogLevelError: "ERROR", - LogLevelFatal: "FATAL", - LogLevelPanic: "PANIC", -} - -func (ll LogLevel) String() string { - return logLevels[ll] -} - -const ( - defaultLogLevel LogLevel = LogLevelWarn -) - -var defaultLogger Logger = log.New(os.Stdout, "", log.LstdFlags) - -// Logger represents a logging interface that is compatible with the standard -// "log" and with many other logging libraries. -type Logger interface { - Fatal(v ...interface{}) - Fatalf(format string, v ...interface{}) - - Print(v ...interface{}) - Printf(format string, v ...interface{}) - - Panic(v ...interface{}) - Panicf(format string, v ...interface{}) -} - -// LoggingCollector provides different methods for collecting and classifying -// log messages. 
-type LoggingCollector interface { - Enabled(LogLevel) bool - - Level() LogLevel - - SetLogger(Logger) - SetLevel(LogLevel) - - Trace(v ...interface{}) - Tracef(format string, v ...interface{}) - - Debug(v ...interface{}) - Debugf(format string, v ...interface{}) - - Info(v ...interface{}) - Infof(format string, v ...interface{}) - - Warn(v ...interface{}) - Warnf(format string, v ...interface{}) - - Error(v ...interface{}) - Errorf(format string, v ...interface{}) - - Fatal(v ...interface{}) - Fatalf(format string, v ...interface{}) - - Panic(v ...interface{}) - Panicf(format string, v ...interface{}) -} - -type loggingCollector struct { - level LogLevel - logger Logger -} - -func (c *loggingCollector) Enabled(level LogLevel) bool { - return level >= c.level -} - -func (c *loggingCollector) SetLevel(level LogLevel) { - c.level = level -} - -func (c *loggingCollector) Level() LogLevel { - return c.level -} - -func (c *loggingCollector) Logger() Logger { - if c.logger == nil { - return defaultLogger - } - return c.logger -} - -func (c *loggingCollector) SetLogger(logger Logger) { - c.logger = logger -} - -func (c *loggingCollector) logf(level LogLevel, f string, v ...interface{}) { - if level >= LogLevelPanic { - c.Logger().Panicf(f, v...) - } - if level >= LogLevelFatal { - c.Logger().Fatalf(f, v...) - } - if c.Enabled(level) { - c.Logger().Printf(f, v...) - } -} - -func (c *loggingCollector) log(level LogLevel, v ...interface{}) { - if level >= LogLevelPanic { - c.Logger().Panic(v...) - } - if level >= LogLevelFatal { - c.Logger().Fatal(v...) - } - if c.Enabled(level) { - c.Logger().Print(v...) - } -} - -func (c *loggingCollector) Debugf(format string, v ...interface{}) { - c.logf(LogLevelDebug, format, v...) -} -func (c *loggingCollector) Debug(v ...interface{}) { - c.log(LogLevelDebug, v...) -} - -func (c *loggingCollector) Tracef(format string, v ...interface{}) { - c.logf(LogLevelTrace, format, v...) 
-} -func (c *loggingCollector) Trace(v ...interface{}) { - c.log(LogLevelTrace, v...) -} - -func (c *loggingCollector) Infof(format string, v ...interface{}) { - c.logf(LogLevelInfo, format, v...) -} -func (c *loggingCollector) Info(v ...interface{}) { - c.log(LogLevelInfo, v...) -} - -func (c *loggingCollector) Warnf(format string, v ...interface{}) { - c.logf(LogLevelWarn, format, v...) -} -func (c *loggingCollector) Warn(v ...interface{}) { - c.log(LogLevelWarn, v...) -} - -func (c *loggingCollector) Errorf(format string, v ...interface{}) { - c.logf(LogLevelError, format, v...) -} -func (c *loggingCollector) Error(v ...interface{}) { - c.log(LogLevelError, v...) -} - -func (c *loggingCollector) Fatalf(format string, v ...interface{}) { - c.logf(LogLevelFatal, format, v...) -} -func (c *loggingCollector) Fatal(v ...interface{}) { - c.log(LogLevelFatal, v...) -} - -func (c *loggingCollector) Panicf(format string, v ...interface{}) { - c.logf(LogLevelPanic, format, v...) -} -func (c *loggingCollector) Panic(v ...interface{}) { - c.log(LogLevelPanic, v...) -} - -var defaultLoggingCollector LoggingCollector = &loggingCollector{ - level: defaultLogLevel, - logger: defaultLogger, -} - -// QueryStatus represents the status of a query after being executed. -type QueryStatus struct { - SessID uint64 - TxID uint64 - - RowsAffected *int64 - LastInsertID *int64 - - RawQuery string - Args []interface{} - - Err error - - Start time.Time - End time.Time - - Context context.Context -} - -func (q *QueryStatus) Query() string { - query := reInvisibleChars.ReplaceAllString(q.RawQuery, " ") - query = strings.TrimSpace(query) - return query -} - -func (q *QueryStatus) Stack() []string { - frames := collectFrames() - lines := make([]string, 0, len(frames)) - - for _, frame := range frames { - lines = append(lines, fmt.Sprintf("%s@%s:%d", frame.Function, frame.File, frame.Line)) - } - return lines -} - -// String returns a formatted log message.
-func (q *QueryStatus) String() string { - lines := make([]string, 0, 8) - - if q.SessID > 0 { - lines = append(lines, fmt.Sprintf(fmtLogSessID, q.SessID)) - } - - if q.TxID > 0 { - lines = append(lines, fmt.Sprintf(fmtLogTxID, q.TxID)) - } - - if query := q.RawQuery; query != "" { - lines = append(lines, fmt.Sprintf(fmtLogQuery, q.Query())) - } - - if len(q.Args) > 0 { - lines = append(lines, fmt.Sprintf(fmtLogArgs, q.Args)) - } - - if stack := q.Stack(); len(stack) > 0 { - lines = append(lines, fmt.Sprintf(fmtLogStack, "\n\t"+strings.Join(stack, "\n\t"))) - } - - if q.RowsAffected != nil { - lines = append(lines, fmt.Sprintf(fmtLogRowsAffected, *q.RowsAffected)) - } - if q.LastInsertID != nil { - lines = append(lines, fmt.Sprintf(fmtLogLastInsertID, *q.LastInsertID)) - } - - if q.Err != nil { - lines = append(lines, fmt.Sprintf(fmtLogError, q.Err)) - } - - lines = append(lines, fmt.Sprintf(fmtLogTimeTaken, float64(q.End.UnixNano()-q.Start.UnixNano())/float64(1e9))) - - if q.Context != nil { - lines = append(lines, fmt.Sprintf(fmtLogContext, q.Context)) - } - - return "\t" + strings.Replace(strings.Join(lines, "\n"), "\n", "\n\t", -1) + "\n\n" -} - -// LC returns the logging collector. 
-func LC() LoggingCollector { - return defaultLoggingCollector -} - -func init() { - if logLevel := strings.ToUpper(os.Getenv("UPPER_DB_LOG")); logLevel != "" { - for ll := range logLevels { - if ll.String() == logLevel { - LC().SetLevel(ll) - break - } - } - } -} - -func collectFrames() []runtime.Frame { - pc := make([]uintptr, maxFrames) - n := runtime.Callers(skipFrames, pc) - if n == 0 { - return nil - } - - pc = pc[:n] - frames := runtime.CallersFrames(pc) - - collectedFrames := make([]runtime.Frame, 0, maxFrames) - discardedFrames := make([]runtime.Frame, 0, maxFrames) - for { - frame, more := frames.Next() - - // collect all frames except those from upper/db and runtime stack - if (strings.Contains(frame.Function, "upper/db") || strings.Contains(frame.Function, "/go/src/")) && !strings.Contains(frame.Function, "test") { - discardedFrames = append(discardedFrames, frame) - } else { - collectedFrames = append(collectedFrames, frame) - } - - if !more { - break - } - } - - if len(collectedFrames) < 1 { - return discardedFrames - } - - return collectedFrames -} diff --git a/vendor/github.com/upper/db/v4/marshal.go b/vendor/github.com/upper/db/v4/marshal.go deleted file mode 100644 index 64c705db..00000000 --- a/vendor/github.com/upper/db/v4/marshal.go +++ /dev/null @@ -1,37 +0,0 @@ -// Copyright (c) 2012-present The upper.io/db authors. All rights reserved. -// -// Permission is hereby granted, free of charge, to any person obtaining -// a copy of this software and associated documentation files (the -// "Software"), to deal in the Software without restriction, including -// without limitation the rights to use, copy, modify, merge, publish, -// distribute, sublicense, and/or sell copies of the Software, and to -// permit persons to whom the Software is furnished to do so, subject to -// the following conditions: -// -// The above copyright notice and this permission notice shall be -// included in all copies or substantial portions of the Software. 
-// -// THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, -// EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF -// MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND -// NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE -// LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION -// OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION -// WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE. - -package db - -// Marshaler is the interface implemented by struct fields that can transform -// themselves into values to be stored in a database. -type Marshaler interface { - // MarshalDB returns the internal database representation of the Go value. - MarshalDB() (interface{}, error) -} - -// Unmarshaler is the interface implemented by struct fields that can transform -// themselves from database values into Go values. -type Unmarshaler interface { - // UnmarshalDB receives an internal database representation of a value and - // transforms it into a Go value. - UnmarshalDB(interface{}) error -} diff --git a/vendor/github.com/upper/db/v4/raw.go b/vendor/github.com/upper/db/v4/raw.go deleted file mode 100644 index b7e268e0..00000000 --- a/vendor/github.com/upper/db/v4/raw.go +++ /dev/null @@ -1,40 +0,0 @@ -// Copyright (c) 2012-present The upper.io/db authors. All rights reserved. -// -// Permission is hereby granted, free of charge, to any person obtaining -// a copy of this software and associated documentation files (the -// "Software"), to deal in the Software without restriction, including -// without limitation the rights to use, copy, modify, merge, publish, -// distribute, sublicense, and/or sell copies of the Software, and to -// permit persons to whom the Software is furnished to do so, subject to -// the following conditions: -// -// The above copyright notice and this permission notice shall be -// included in all copies or substantial portions of the Software. 
-// -// THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, -// EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF -// MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND -// NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE -// LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION -// OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION -// WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE. - -package db - -import ( - "github.com/upper/db/v4/internal/adapter" -) - -// RawExpr represents a raw (non-filtered) expression. -type RawExpr = adapter.RawExpr - -// Raw marks chunks of data as protected, so they pass directly to the query -// without any filtering. Use with care. -// -// Example: -// -// // SOUNDEX('Hello') -// Raw("SOUNDEX('Hello')") -func Raw(value string, args ...interface{}) *RawExpr { - return adapter.NewRawExpr(value, args) -} diff --git a/vendor/github.com/upper/db/v4/record.go b/vendor/github.com/upper/db/v4/record.go deleted file mode 100644 index 803f95c0..00000000 --- a/vendor/github.com/upper/db/v4/record.go +++ /dev/null @@ -1,82 +0,0 @@ -// Copyright (c) 2012-present The upper.io/db authors. All rights reserved. -// -// Permission is hereby granted, free of charge, to any person obtaining -// a copy of this software and associated documentation files (the -// "Software"), to deal in the Software without restriction, including -// without limitation the rights to use, copy, modify, merge, publish, -// distribute, sublicense, and/or sell copies of the Software, and to -// permit persons to whom the Software is furnished to do so, subject to -// the following conditions: -// -// The above copyright notice and this permission notice shall be -// included in all copies or substantial portions of the Software. 
-// -// THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, -// EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF -// MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND -// NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE -// LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION -// OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION -// WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE. - -package db - -// Record is the equivalence between concrete database schemas and Go values. -type Record interface { - Store(sess Session) Store -} - -// HasConstraints is an interface for records that defines a Constraints method -// that returns the record's own constraints. -type HasConstraints interface { - Constraints() Cond -} - -// Validator is an interface for records that defines an (optional) Validate -// method that is called before persisting a record (creating or updating). If -// Validate returns an error the current operation is cancelled and rolled -// back. -type Validator interface { - Validate() error -} - -// BeforeCreateHook is an interface for records that defines a BeforeCreate -// method that is called before creating a record. If BeforeCreate returns an -// error the create process is cancelled and rolled back. -type BeforeCreateHook interface { - BeforeCreate(Session) error -} - -// AfterCreateHook is an interface for records that defines an AfterCreate -// method that is called after creating a record. If AfterCreate returns an -// error the create process is cancelled and rolled back. -type AfterCreateHook interface { - AfterCreate(Session) error -} - -// BeforeUpdateHook is an interface for records that defines a BeforeUpdate -// method that is called before updating a record. If BeforeUpdate returns an -// error the update process is cancelled and rolled back.
-type BeforeUpdateHook interface { - BeforeUpdate(Session) error -} - -// AfterUpdateHook is an interface for records that defines an AfterUpdate -// method that is called after updating a record. If AfterUpdate returns an -// error the update process is cancelled and rolled back. -type AfterUpdateHook interface { - AfterUpdate(Session) error -} - -// BeforeDeleteHook is an interface for records that defines a BeforeDelete -// method that is called before removing a record. If BeforeDelete returns an -// error the delete process is cancelled and rolled back. -type BeforeDeleteHook interface { - BeforeDelete(Session) error -} - -// AfterDeleteHook is an interface for records that defines an AfterDelete -// method that is called after removing a record. If AfterDelete returns an -// error the delete process is cancelled and rolled back. -type AfterDeleteHook interface { - AfterDelete(Session) error -} diff --git a/vendor/github.com/upper/db/v4/result.go b/vendor/github.com/upper/db/v4/result.go deleted file mode 100644 index 32e748ed..00000000 --- a/vendor/github.com/upper/db/v4/result.go +++ /dev/null @@ -1,214 +0,0 @@ -// Copyright (c) 2012-present The upper.io/db authors. All rights reserved. -// -// Permission is hereby granted, free of charge, to any person obtaining -// a copy of this software and associated documentation files (the -// "Software"), to deal in the Software without restriction, including -// without limitation the rights to use, copy, modify, merge, publish, -// distribute, sublicense, and/or sell copies of the Software, and to -// permit persons to whom the Software is furnished to do so, subject to -// the following conditions: -// -// The above copyright notice and this permission notice shall be -// included in all copies or substantial portions of the Software.
-// -// THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, -// EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF -// MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND -// NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE -// LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION -// OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION -// WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE. - -package db - -import ( - "database/sql/driver" -) - -// Result is an interface that defines methods for result sets. -type Result interface { - - // String returns the SQL statement to be used in the query. - String() string - - // Limit defines the maximum number of results for this set. It only has - // effect on `One()`, `All()` and `Next()`. A negative limit cancels any - // previous limit settings. - Limit(int) Result - - // Offset ignores the first n results. It only has effect on `One()`, `All()` - // and `Next()`. A negative offset cancels any previous offset settings. - Offset(int) Result - - // OrderBy receives one or more field names that define the order in which - // elements will be returned in a query, field names may be prefixed with a - // minus sign (-) indicating descending order, ascending order will be used - // otherwise. - OrderBy(...interface{}) Result - - // Select defines specific columns to be fetched on every column in the - // result set. - Select(...interface{}) Result - - // And adds more filtering conditions on top of the existing constraints. - // - // res := col.Find(...).And(...) - And(...interface{}) Result - - // GroupBy is used to group results that have the same value in the same column - // or columns. - GroupBy(...interface{}) Result - - // Delete deletes all items within the result set. `Offset()` and `Limit()` - // are not honoured by `Delete()`. - Delete() error - - // Update modifies all items within the result set. 
`Offset()` and `Limit()` - // are not honoured by `Update()`. - Update(interface{}) error - - // Count returns the number of items that match the set conditions. - // `Offset()` and `Limit()` are not honoured by `Count()`. - Count() (uint64, error) - - // Exists returns true if at least one item on the collection exists. False - // otherwise. - Exists() (bool, error) - - // Next fetches the next result within the result set and dumps it into the - // given pointer to struct or pointer to map. You must call - // `Close()` after finishing using `Next()`. - Next(ptrToStruct interface{}) bool - - // Err returns the last error that has happened with the result set, nil - // otherwise. - Err() error - - // One fetches the first result within the result set and dumps it into the - // given pointer to struct or pointer to map. The result set is automatically - // closed after picking the element, so there is no need to call Close() - // after using One(). - One(ptrToStruct interface{}) error - - // All fetches all results within the result set and dumps them into the - // given pointer to slice of maps or structs. The result set is - // automatically closed, so there is no need to call Close() after - // using All(). - All(sliceOfStructs interface{}) error - - // Paginate splits the results of the query into pages containing pageSize - // items. When using pagination previous settings for `Limit()` and - // `Offset()` are ignored. Page numbering starts at 1. - // - // Use `Page()` to define the specific page to get results from. - // - // Example: - // - // r = q.Paginate(12) - - // - // You can provide constraints and order settings when using pagination: - // - // Example: - // - // res := q.Where(conds).OrderBy("-id").Paginate(12) - // err := res.Page(4).All(&items) - Paginate(pageSize uint) Result - - // Page makes the result set return results only from the page identified by - // pageNumber. Page numbering starts from 1.
- // - // Example: - // - // r = q.Paginate(12).Page(4) - Page(pageNumber uint) Result - - // Cursor defines the column that is going to be taken as basis for - // cursor-based pagination. - // - // Example: - // - // a = q.Paginate(10).Cursor("id") - // b = q.Paginate(12).Cursor("-id") - // - // You can set "" as cursorColumn to disable cursors. - Cursor(cursorColumn string) Result - - // NextPage returns the next results page according to the cursor. It expects - // a cursorValue, which is the value the cursor column had on the last item - // of the current result set (lower bound). - // - // Example: - // - // cursor = q.Paginate(12).Cursor("id") - // res = cursor.NextPage(items[len(items)-1].ID) - - // - // Note that `NextPage()` requires a cursor, any column with an absolute - // order (given two values one always precedes the other) can be a cursor. - // - // You can define the pagination order and add constraints to your result: - // - // cursor = q.Where(...).OrderBy("id").Paginate(10).Cursor("id") - // res = cursor.NextPage(lowerBound) - NextPage(cursorValue interface{}) Result - - // PrevPage returns the previous results page according to the cursor. It - // expects a cursorValue, which is the value the cursor column had on the - // first item of the current result set. - // - // Example: - // - // current = current.PrevPage(items[0].ID) - // - // Note that PrevPage requires a cursor, any column with an absolute order - // (given two values one always precedes the other) can be a cursor. - // - // You can define the pagination order and add constraints to your result: - // - // cursor = q.Where(...).OrderBy("id").Paginate(10).Cursor("id") - // res = cursor.PrevPage(upperBound) - PrevPage(cursorValue interface{}) Result - - // TotalPages returns the total number of pages the result set could produce. - // If no pagination parameters have been set this value equals 1.
- TotalPages() (uint, error) - - // TotalEntries returns the total number of matching items in the result set. - TotalEntries() (uint64, error) - - // Close closes the result set and frees all locked resources. - Close() error -} - -// InsertResult provides information about an insert operation. -type InsertResult interface { - // ID returns the ID of the newly inserted record. - ID() ID -} - -type insertResult struct { - id interface{} -} - -func (r *insertResult) ID() ID { - return r.id -} - -// ConstraintValue satisfies adapter.ConstraintValuer -func (r *insertResult) ConstraintValue() interface{} { - return r.id -} - -// Value satisfies driver.Valuer -func (r *insertResult) Value() (driver.Value, error) { - return r.id, nil -} - -// NewInsertResult creates an InsertResult -func NewInsertResult(id interface{}) InsertResult { - return &insertResult{id: id} -} - -// ID represents a record ID -type ID interface{} - -var _ = driver.Valuer(&insertResult{}) diff --git a/vendor/github.com/upper/db/v4/session.go b/vendor/github.com/upper/db/v4/session.go deleted file mode 100644 index 80b8105c..00000000 --- a/vendor/github.com/upper/db/v4/session.go +++ /dev/null @@ -1,99 +0,0 @@ -// Copyright (c) 2012-present The upper.io/db authors. All rights reserved. -// -// Permission is hereby granted, free of charge, to any person obtaining -// a copy of this software and associated documentation files (the -// "Software"), to deal in the Software without restriction, including -// without limitation the rights to use, copy, modify, merge, publish, -// distribute, sublicense, and/or sell copies of the Software, and to -// permit persons to whom the Software is furnished to do so, subject to -// the following conditions: -// -// The above copyright notice and this permission notice shall be -// included in all copies or substantial portions of the Software.
-// -// THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, -// EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF -// MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND -// NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE -// LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION -// OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION -// WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE. - -package db - -import ( - "context" - "database/sql" -) - -// Session is an interface that defines methods for database adapters. -type Session interface { - // ConnectionURL returns the DSN that was used to set up the adapter. - ConnectionURL() ConnectionURL - - // Name returns the name of the database. - Name() string - - // Ping returns an error if the DBMS could not be reached. - Ping() error - - // Collection receives a table name and returns a collection reference. The - // information retrieved from a collection is cached. - Collection(name string) Collection - - // Collections returns a collection reference of all non-system tables on the - // database. - Collections() ([]Collection, error) - - // Save creates or updates a record. - Save(record Record) error - - // Get retrieves a record that matches the given condition. - Get(record Record, cond interface{}) error - - // Delete deletes a record. - Delete(record Record) error - - // Reset resets all the caching mechanisms the adapter is using. - Reset() - - // Close terminates the currently active connection to the DBMS and clears - // all caches. - Close() error - - // Driver returns the underlying driver of the adapter as an interface. - - // - // In order to actually use the driver, the `interface{}` value needs to be - // cast to the appropriate type. - // - // Example: - // internalSQLDriver := sess.Driver().(*sql.DB) - Driver() interface{} - - // SQL returns a special interface for SQL databases.
- SQL() SQL - - // Tx creates a transaction block on the default database context and passes - // it to the function fn. If fn returns no error the transaction is committed, - // else the transaction is rolled back. After being committed or rolled back - // the transaction is closed automatically. - Tx(fn func(sess Session) error) error - - // TxContext creates a transaction block on the given context and passes it to - // the function fn. If fn returns no error the transaction is committed, else - // the transaction is rolled back. After being committed or rolled back the - // transaction is closed automatically. - TxContext(ctx context.Context, fn func(sess Session) error, opts *sql.TxOptions) error - - // Context returns the context used as default for queries on this session - // and for new transactions. If no context has been set, a default - // context.Background() is returned. - Context() context.Context - - // WithContext returns the same session on a different default context. The - // session is identical to the original one in all ways except for the - // context. - WithContext(ctx context.Context) Session - - Settings -} diff --git a/vendor/github.com/upper/db/v4/settings.go b/vendor/github.com/upper/db/v4/settings.go deleted file mode 100644 index d9b4177f..00000000 --- a/vendor/github.com/upper/db/v4/settings.go +++ /dev/null @@ -1,200 +0,0 @@ -// Copyright (c) 2012-present The upper.io/db authors. All rights reserved.
-// -// Permission is hereby granted, free of charge, to any person obtaining -// a copy of this software and associated documentation files (the -// "Software"), to deal in the Software without restriction, including -// without limitation the rights to use, copy, modify, merge, publish, -// distribute, sublicense, and/or sell copies of the Software, and to -// permit persons to whom the Software is furnished to do so, subject to -// the following conditions: -// -// The above copyright notice and this permission notice shall be -// included in all copies or substantial portions of the Software. -// -// THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, -// EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF -// MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND -// NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE -// LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION -// OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION -// WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE. - -package db - -import ( - "sync" - "sync/atomic" - "time" -) - -// Settings defines methods to get or set configuration values. -type Settings interface { - // SetPreparedStatementCache enables or disables the prepared statement - // cache. - SetPreparedStatementCache(bool) - - // PreparedStatementCacheEnabled returns true if the prepared statement cache - // is enabled, false otherwise. - PreparedStatementCacheEnabled() bool - - // SetConnMaxLifetime sets the default maximum amount of time a connection - // may be reused. - SetConnMaxLifetime(time.Duration) - - // ConnMaxLifetime returns the default maximum amount of time a connection - // may be reused. - ConnMaxLifetime() time.Duration - - // SetConnMaxIdleTime sets the default maximum amount of time a connection - // may remain idle. 
- SetConnMaxIdleTime(time.Duration) - - // ConnMaxIdleTime returns the default maximum amount of time a connection - // may remain idle. - ConnMaxIdleTime() time.Duration - - // SetMaxIdleConns sets the default maximum number of connections in the idle - // connection pool. - SetMaxIdleConns(int) - - // MaxIdleConns returns the default maximum number of connections in the idle - // connection pool. - MaxIdleConns() int - - // SetMaxOpenConns sets the default maximum number of open connections to the - // database. - SetMaxOpenConns(int) - - // MaxOpenConns returns the default maximum number of open connections to the - // database. - MaxOpenConns() int - - // SetMaxTransactionRetries sets the number of times a transaction can - // be retried. - SetMaxTransactionRetries(int) - - // MaxTransactionRetries returns the maximum number of times a - // transaction can be retried. - MaxTransactionRetries() int -} - -type settings struct { - sync.RWMutex - - preparedStatementCacheEnabled uint32 - - connMaxLifetime time.Duration - connMaxIdleTime time.Duration - maxOpenConns int - maxIdleConns int - - maxTransactionRetries int -} - -func (c *settings) binaryOption(opt *uint32) bool { - return atomic.LoadUint32(opt) == 1 -} - -func (c *settings) setBinaryOption(opt *uint32, value bool) { - if value { - atomic.StoreUint32(opt, 1) - return - } - atomic.StoreUint32(opt, 0) -} - -func (c *settings) SetPreparedStatementCache(value bool) { - c.setBinaryOption(&c.preparedStatementCacheEnabled, value) -} - -func (c *settings) PreparedStatementCacheEnabled() bool { - return c.binaryOption(&c.preparedStatementCacheEnabled) -} - -func (c *settings) SetConnMaxLifetime(t time.Duration) { - c.Lock() - c.connMaxLifetime = t - c.Unlock() -} - -func (c *settings) ConnMaxLifetime() time.Duration { - c.RLock() - defer c.RUnlock() - return c.connMaxLifetime -} - -func (c *settings) SetConnMaxIdleTime(t time.Duration) { - c.Lock() - c.connMaxIdleTime = t - c.Unlock() -} - -func (c *settings) 
ConnMaxIdleTime() time.Duration { - c.RLock() - defer c.RUnlock() - return c.connMaxIdleTime -} - -func (c *settings) SetMaxIdleConns(n int) { - c.Lock() - c.maxIdleConns = n - c.Unlock() -} - -func (c *settings) MaxIdleConns() int { - c.RLock() - defer c.RUnlock() - return c.maxIdleConns -} - -func (c *settings) SetMaxTransactionRetries(n int) { - c.Lock() - c.maxTransactionRetries = n - c.Unlock() -} - -func (c *settings) MaxTransactionRetries() int { - c.RLock() - defer c.RUnlock() - if c.maxTransactionRetries < 1 { - return 1 - } - return c.maxTransactionRetries -} - -func (c *settings) SetMaxOpenConns(n int) { - c.Lock() - c.maxOpenConns = n - c.Unlock() -} - -func (c *settings) MaxOpenConns() int { - c.RLock() - defer c.RUnlock() - return c.maxOpenConns -} - -// NewSettings returns a new settings value prefilled with the current default -// settings. -func NewSettings() Settings { - def := DefaultSettings.(*settings) - return &settings{ - preparedStatementCacheEnabled: def.preparedStatementCacheEnabled, - connMaxLifetime: def.connMaxLifetime, - connMaxIdleTime: def.connMaxIdleTime, - maxIdleConns: def.maxIdleConns, - maxOpenConns: def.maxOpenConns, - maxTransactionRetries: def.maxTransactionRetries, - } -} - -// DefaultSettings provides default global configuration settings for database -// sessions. -var DefaultSettings Settings = &settings{ - preparedStatementCacheEnabled: 0, - connMaxLifetime: time.Duration(0), - connMaxIdleTime: time.Duration(0), - maxIdleConns: 10, - maxOpenConns: 0, - maxTransactionRetries: 1, -} diff --git a/vendor/github.com/upper/db/v4/sql.go b/vendor/github.com/upper/db/v4/sql.go deleted file mode 100644 index a4bc18b9..00000000 --- a/vendor/github.com/upper/db/v4/sql.go +++ /dev/null @@ -1,212 +0,0 @@ -// Copyright (c) 2012-present The upper.io/db authors. All rights reserved. 
-// - // Permission is hereby granted, free of charge, to any person obtaining - // a copy of this software and associated documentation files (the - // "Software"), to deal in the Software without restriction, including - // without limitation the rights to use, copy, modify, merge, publish, - // distribute, sublicense, and/or sell copies of the Software, and to - // permit persons to whom the Software is furnished to do so, subject to - // the following conditions: - // - // The above copyright notice and this permission notice shall be - // included in all copies or substantial portions of the Software. - // - // THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, - // EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF - // MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND - // NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE - // LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION - // OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION - // WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE. - -package db - -import ( - "context" - "database/sql" -) - -// SQL defines methods that can be used to build a SQL query with chainable -// method calls. -// -// Queries are immutable, so every call to any method will return a new -// pointer. If you want to build a query using variables, you need to reassign -// them, like this: -// -// a = builder.Select("name").From("foo") // "a" is created -// -// a.Where(...) // No effect, the value returned from Where is ignored. -// -// a = a.Where(...) // "a" is reassigned and points to a different address. -// -type SQL interface { - - // Select initializes and returns a Selector; it accepts column names as - // parameters. - // - // The returned Selector does not initially point to any table; a call to - // From() is required after Select() to complete a valid query. 
- // - // Example: - // - // q := sqlbuilder.Select("first_name", "last_name").From("people").Where(...) - Select(columns ...interface{}) Selector - - // SelectFrom creates a Selector that selects all columns (like SELECT *) - // from the given table. - // - // Example: - // - // q := sqlbuilder.SelectFrom("people").Where(...) - SelectFrom(table ...interface{}) Selector - - // InsertInto prepares and returns an Inserter targeted at the given table. - // - // Example: - // - // q := sqlbuilder.InsertInto("books").Columns(...).Values(...) - InsertInto(table string) Inserter - - // DeleteFrom prepares a Deleter targeted at the given table. - // - // Example: - // - // q := sqlbuilder.DeleteFrom("tasks").Where(...) - DeleteFrom(table string) Deleter - - // Update prepares and returns an Updater targeted at the given table. - // - // Example: - // - // q := sqlbuilder.Update("profile").Set(...).Where(...) - Update(table string) Updater - - // Exec executes a SQL query that does not return any rows, like sql.Exec. - // Queries can be either strings or upper-db statements. - // - // Example: - // - // sqlbuilder.Exec(`INSERT INTO books (title) VALUES("La Ciudad y los Perros")`) - Exec(query interface{}, args ...interface{}) (sql.Result, error) - - // ExecContext executes a SQL query that does not return any rows, like sql.ExecContext. - // Queries can be either strings or upper-db statements. - // - // Example: - // - // sqlbuilder.ExecContext(ctx, `INSERT INTO books (title) VALUES(?)`, "La Ciudad y los Perros") - ExecContext(ctx context.Context, query interface{}, args ...interface{}) (sql.Result, error) - - // Prepare creates a prepared statement for later queries or executions. The - // caller must call the statement's Close method when the statement is no - // longer needed. - Prepare(query interface{}) (*sql.Stmt, error) - - // PrepareContext creates a prepared statement on the given context for later - // queries or executions. 
The caller must call the statement's Close method - // when the statement is no longer needed. - PrepareContext(ctx context.Context, query interface{}) (*sql.Stmt, error) - - // Query executes a SQL query that returns rows, like sql.Query. Queries can - // be either strings or upper-db statements. - // - // Example: - // - // sqlbuilder.Query(`SELECT * FROM people WHERE name = "Mateo"`) - Query(query interface{}, args ...interface{}) (*sql.Rows, error) - - // QueryContext executes a SQL query that returns rows, like - // sql.QueryContext. Queries can be either strings or upper-db statements. - // - // Example: - // - // sqlbuilder.QueryContext(ctx, `SELECT * FROM people WHERE name = ?`, "Mateo") - QueryContext(ctx context.Context, query interface{}, args ...interface{}) (*sql.Rows, error) - - // QueryRow executes a SQL query that returns one row, like sql.QueryRow. - // Queries can be either strings or upper-db statements. - // - // Example: - // - // sqlbuilder.QueryRow(`SELECT * FROM people WHERE name = "Haruki" AND last_name = "Murakami" LIMIT 1`) - QueryRow(query interface{}, args ...interface{}) (*sql.Row, error) - - // QueryRowContext executes a SQL query that returns one row, like - // sql.QueryRowContext. Queries can be either strings or upper-db statements. - // - // Example: - // - // sqlbuilder.QueryRowContext(ctx, `SELECT * FROM people WHERE name = "Haruki" AND last_name = "Murakami" LIMIT 1`) - QueryRowContext(ctx context.Context, query interface{}, args ...interface{}) (*sql.Row, error) - - // Iterator executes a SQL query that returns rows and creates an Iterator - // with it. - // - // Example: - // - // sqlbuilder.Iterator(`SELECT * FROM people WHERE name LIKE "M%"`) - Iterator(query interface{}, args ...interface{}) Iterator - - // IteratorContext executes a SQL query that returns rows and creates an Iterator - // with it. 
- // - // Example: - // - // sqlbuilder.IteratorContext(ctx, `SELECT * FROM people WHERE name LIKE "M%"`) - IteratorContext(ctx context.Context, query interface{}, args ...interface{}) Iterator - - // NewIterator converts a *sql.Rows value into an Iterator. - NewIterator(rows *sql.Rows) Iterator - - // NewIteratorContext converts a *sql.Rows value into an Iterator. - NewIteratorContext(ctx context.Context, rows *sql.Rows) Iterator -} - -// SQLExecer provides methods for executing statements that do not return -// results. -type SQLExecer interface { - // Exec executes a statement and returns sql.Result. - Exec() (sql.Result, error) - - // ExecContext executes a statement and returns sql.Result. - ExecContext(context.Context) (sql.Result, error) -} - -// SQLPreparer provides the Prepare and PrepareContext methods for creating -// prepared statements. -type SQLPreparer interface { - // Prepare creates a prepared statement. - Prepare() (*sql.Stmt, error) - - // PrepareContext creates a prepared statement. - PrepareContext(context.Context) (*sql.Stmt, error) -} - -// SQLGetter provides methods for executing statements that return results. -type SQLGetter interface { - // Query returns *sql.Rows. - Query() (*sql.Rows, error) - - // QueryContext returns *sql.Rows. - QueryContext(context.Context) (*sql.Rows, error) - - // QueryRow returns only one row. - QueryRow() (*sql.Row, error) - - // QueryRowContext returns only one row. - QueryRowContext(ctx context.Context) (*sql.Row, error) -} - -// SQLEngine represents a SQL engine that can execute SQL queries. This is -// compatible with *sql.DB. 
-type SQLEngine interface { - Exec(string, ...interface{}) (sql.Result, error) - Prepare(string) (*sql.Stmt, error) - Query(string, ...interface{}) (*sql.Rows, error) - QueryRow(string, ...interface{}) *sql.Row - - ExecContext(context.Context, string, ...interface{}) (sql.Result, error) - PrepareContext(context.Context, string) (*sql.Stmt, error) - QueryContext(context.Context, string, ...interface{}) (*sql.Rows, error) - QueryRowContext(context.Context, string, ...interface{}) *sql.Row -} diff --git a/vendor/github.com/upper/db/v4/store.go b/vendor/github.com/upper/db/v4/store.go deleted file mode 100644 index 7227e15f..00000000 --- a/vendor/github.com/upper/db/v4/store.go +++ /dev/null @@ -1,57 +0,0 @@ -// Copyright (c) 2012-present The upper.io/db authors. All rights reserved. -// -// Permission is hereby granted, free of charge, to any person obtaining -// a copy of this software and associated documentation files (the -// "Software"), to deal in the Software without restriction, including -// without limitation the rights to use, copy, modify, merge, publish, -// distribute, sublicense, and/or sell copies of the Software, and to -// permit persons to whom the Software is furnished to do so, subject to -// the following conditions: -// -// The above copyright notice and this permission notice shall be -// included in all copies or substantial portions of the Software. -// -// THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, -// EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF -// MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND -// NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE -// LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION -// OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION -// WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE. - -package db - -// Store represents a data store. 
-type Store interface { - Collection -} - -// StoreSaver is an interface for data stores that defines a Save method that -// has the task of persisting a record. -type StoreSaver interface { - Save(record Record) error -} - -// StoreCreator is an interface for data stores that defines a Create method -// that has the task of creating a new record. -type StoreCreator interface { - Create(record Record) error -} - -// StoreDeleter is an interface for data stores that defines a Delete method -// that has the task of removing a record. -type StoreDeleter interface { - Delete(record Record) error -} - -// StoreUpdater is an interface for data stores that defines an Update method -// that has the task of updating a record. -type StoreUpdater interface { - Update(record Record) error -} - -// StoreGetter is an interface for data stores that defines a Get method that -// has the task of retrieving a record. -type StoreGetter interface { - Get(record Record, id interface{}) error -} diff --git a/vendor/github.com/upper/db/v4/union.go b/vendor/github.com/upper/db/v4/union.go deleted file mode 100644 index 0216ab1e..00000000 --- a/vendor/github.com/upper/db/v4/union.go +++ /dev/null @@ -1,64 +0,0 @@ -// Copyright (c) 2012-present The upper.io/db authors. All rights reserved. -// -// Permission is hereby granted, free of charge, to any person obtaining -// a copy of this software and associated documentation files (the -// "Software"), to deal in the Software without restriction, including -// without limitation the rights to use, copy, modify, merge, publish, -// distribute, sublicense, and/or sell copies of the Software, and to -// permit persons to whom the Software is furnished to do so, subject to -// the following conditions: -// -// The above copyright notice and this permission notice shall be -// included in all copies or substantial portions of the Software. 
-// -// THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, -// EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF -// MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND -// NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE -// LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION -// OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION -// WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE. - -package db - -import ( - "github.com/upper/db/v4/internal/adapter" -) - -// OrExpr represents a logical expression joined by logical disjunction (OR). -type OrExpr struct { - *adapter.LogicalExprGroup -} - -// Or adds more expressions to the group. -func (o *OrExpr) Or(orConds ...LogicalExpr) *OrExpr { - var fn func(*[]LogicalExpr) error - if len(orConds) > 0 { - fn = func(in *[]LogicalExpr) error { - *in = append(*in, orConds...) - return nil - } - } - return &OrExpr{o.LogicalExprGroup.Frame(fn)} -} - -// Empty returns true if the expression group has zero conditions. -func (o *OrExpr) Empty() bool { - return o.LogicalExprGroup.Empty() -} - -// Or joins conditions under logical disjunction. Conditions can be represented -// by `db.Cond{}`, `db.Or()` or `db.And()`. -// -// Example: -// -// // year = 2012 OR year = 1987 -// db.Or( -// db.Cond{"year": 2012}, -// db.Cond{"year": 1987}, -// ) -func Or(conds ...LogicalExpr) *OrExpr { - return &OrExpr{adapter.NewLogicalExprGroup(adapter.LogicalOperatorOr, defaultJoin(conds...)...)} -} - -var _ = adapter.LogicalExpr(&OrExpr{}) diff --git a/vendor/github.com/uptrace/bun/.gitignore b/vendor/github.com/uptrace/bun/.gitignore deleted file mode 100644 index 174474c5..00000000 --- a/vendor/github.com/uptrace/bun/.gitignore +++ /dev/null @@ -1,3 +0,0 @@ -# Patterns for files created by this project. -# For other files, use global gitignore. 
-*.s3db diff --git a/vendor/github.com/uptrace/bun/.prettierrc.yml b/vendor/github.com/uptrace/bun/.prettierrc.yml deleted file mode 100644 index decea563..00000000 --- a/vendor/github.com/uptrace/bun/.prettierrc.yml +++ /dev/null @@ -1,6 +0,0 @@ -trailingComma: all -tabWidth: 2 -semi: false -singleQuote: true -proseWrap: always -printWidth: 100 diff --git a/vendor/github.com/uptrace/bun/CHANGELOG.md b/vendor/github.com/uptrace/bun/CHANGELOG.md deleted file mode 100644 index 7350d15e..00000000 --- a/vendor/github.com/uptrace/bun/CHANGELOG.md +++ /dev/null @@ -1,687 +0,0 @@ -## [1.1.12](https://github.com/uptrace/bun/compare/v1.1.11...v1.1.12) (2023-02-20) - - - -## [1.1.11](https://github.com/uptrace/bun/compare/v1.1.10...v1.1.11) (2023-02-01) - - -### Bug Fixes - -* add support for inserting values with unicode encoding for mssql dialect ([e98c6c0](https://github.com/uptrace/bun/commit/e98c6c0f033b553bea3bbc783aa56c2eaa17718f)) -* fix relation tag ([a3eedff](https://github.com/uptrace/bun/commit/a3eedff49700490d4998dcdcdc04f554d8f17166)) - - - -## [1.1.10](https://github.com/uptrace/bun/compare/v1.1.9...v1.1.10) (2023-01-16) - - -### Bug Fixes - -* allow QueryEvent to better detect operations in raw queries ([8e44735](https://github.com/uptrace/bun/commit/8e4473538364bae6562055d35e94c3e9c0b77691)) -* append default VARCHAR length instead of hardcoding it in the type definition ([e5079c7](https://github.com/uptrace/bun/commit/e5079c70343ba8c8b410aed23ac1d1ae5a2c9ff6)) -* prevent panic when use pg array with custom database type ([67e4412](https://github.com/uptrace/bun/commit/67e4412a972a9ed5f3a1d07c66957beedbc8a8a3)) -* properly return sql.ErrNoRows when scanning []byte ([996fead](https://github.com/uptrace/bun/commit/996fead2595fbcaff4878b77befe6709a54b3a4d)) - - -### Features - -* mssql output support for update or delete query ([#718](https://github.com/uptrace/bun/issues/718)) 
([08876b4](https://github.com/uptrace/bun/commit/08876b4d420e761cbfa658aa6bb89b3f7c62c240)) -* add Err method to query builder ([c722c90](https://github.com/uptrace/bun/commit/c722c90f3dce2642ca4f4c2ab3f9a35cd496b557)) -* add support for time.Time array in Postgres ([3dd6f3b](https://github.com/uptrace/bun/commit/3dd6f3b2ac1bfbcda08240dc1676647b61715a9c)) -* mssql and pg merge query ([#723](https://github.com/uptrace/bun/issues/723)) ([deea764](https://github.com/uptrace/bun/commit/deea764d9380b16aad34228aa32717d10f2a4bab)) -* setError on attempt to set non-positive .Varchar() ([3335e0b](https://github.com/uptrace/bun/commit/3335e0b9d6d3f424145e1f715223a0fffe773d9a)) - - -### Reverts - -* go 1.18 ([67a4488](https://github.com/uptrace/bun/commit/67a448897eaaf1ebc54d629dfd3b2509b35da352)) - - - -## [1.1.9](https://github.com/uptrace/bun/compare/v1.1.8...v1.1.9) (2022-11-23) - - -### Bug Fixes - -* addng dialect override for append-bool ([#695](https://github.com/uptrace/bun/issues/695)) ([338f2f0](https://github.com/uptrace/bun/commit/338f2f04105ad89e64530db86aeb387e2ad4789e)) -* don't call hooks twice for whereExists ([9057857](https://github.com/uptrace/bun/commit/90578578e717f248e4b6eb114c5b495fd8d4ed41)) -* don't lock migrations when running Migrate and Rollback ([69a7354](https://github.com/uptrace/bun/commit/69a7354d987ff2ed5338c9ef5f4ce320724299ab)) -* **query:** make WhereDeleted compatible with ForceDelete ([299c3fd](https://github.com/uptrace/bun/commit/299c3fd57866aaecd127a8f219c95332898475db)), closes [#673](https://github.com/uptrace/bun/issues/673) -* relation join soft delete SQL generate ([a98f4e9](https://github.com/uptrace/bun/commit/a98f4e9f2bbdbc2b81cd13aa228a1a91eb905ba2)) - - -### Features - -* add migrate.Exec ([d368bbe](https://github.com/uptrace/bun/commit/d368bbe52bb1ee3dabf0aada190bf967eec10255)) -* **update:** "skipupdate" while bulk ([1a32b2f](https://github.com/uptrace/bun/commit/1a32b2ffbd5bc9a8d8b5978dd0f16c9fb79242ee)) -* **zerolog:** 
added zerolog hook ([9d2267d](https://github.com/uptrace/bun/commit/9d2267d414b47164ab6ceada55bf311ad548a6b0)) - - - -## [1.1.8](https://github.com/uptrace/bun/compare/v1.1.7...v1.1.8) (2022-08-29) - - -### Bug Fixes - -* **bunotel:** handle option attributes ([#656](https://github.com/uptrace/bun/issues/656)) ([9f1e0bd](https://github.com/uptrace/bun/commit/9f1e0bd19fc0300f12996b3e6595f093024e06b6)) -* driver.Valuer returns itself causes stackoverflow ([c9f51d3](https://github.com/uptrace/bun/commit/c9f51d3e2dabed0c29c26a4221abbc426a7206f3)), closes [#657](https://github.com/uptrace/bun/issues/657) -* **pgdriver:** return FATAL and PANIC errors immediately ([4595e38](https://github.com/uptrace/bun/commit/4595e385d3706116e47bf9dc295186ec7a2ab0f9)) -* quote m2m table name fixes [#649](https://github.com/uptrace/bun/issues/649) ([61a634e](https://github.com/uptrace/bun/commit/61a634e4cd5c18df4b75f756d4b0f06ea94bc3c8)) -* support multi-level embed column ([177ec4c](https://github.com/uptrace/bun/commit/177ec4c6e04f92957614ad4724bc82c422649a4b)), closes [#643](https://github.com/uptrace/bun/issues/643) - - -### Features - -* conditions not supporting composite in ([e5d78d4](https://github.com/uptrace/bun/commit/e5d78d464b94b78438cf275b4c35f713d129961d)) -* **idb:** support raw query ([be4e688](https://github.com/uptrace/bun/commit/be4e6886ad94b4b6ca42f24f73d79a15b1ac3188)) -* **migrate:** add MissingMigrations ([42567d0](https://github.com/uptrace/bun/commit/42567d052280f2c412d4796df7178915e537e6d9)) -* **pgdriver:** implement database/sql/driver.SessionResetter ([bda298a](https://github.com/uptrace/bun/commit/bda298ac66305e5b00ba67d72d3973625930c6b9)) -* **pgdriver:** provide access to the underlying net.Conn ([d07ea0e](https://github.com/uptrace/bun/commit/d07ea0ed1541225b5f08e59a4c87383811f7f051)) - - - -## [1.1.7](https://github.com/uptrace/bun/compare/v1.1.6...v1.1.7) (2022-07-29) - - -### Bug Fixes - -* change ScanAndCount without a limit to select all rows 
([de5c570](https://github.com/uptrace/bun/commit/de5c5704166563aea41a82f7863f2db88ff108e2)) - - - -## [1.1.6](https://github.com/uptrace/bun/compare/v1.1.5...v1.1.6) (2022-07-10) - - -### Bug Fixes - -* bunotel add set attributes to query metrics ([dae82cc](https://github.com/uptrace/bun/commit/dae82cc0e3af49be1e474027b55c34364676985d)) -* **db.ScanRows:** ensure rows.Close is called ([9ffbc6a](https://github.com/uptrace/bun/commit/9ffbc6a46e24b908742b6973f33ef8e5b17cc12b)) -* merge apply ([3081849](https://github.com/uptrace/bun/commit/30818499eacddd3b1a3e749091ba6a1468125641)) -* **migrate:** close conn/tx on error ([7b168ea](https://github.com/uptrace/bun/commit/7b168eabfe0f844bcbf8dc89629d04c385b9f58c)) -* **migrate:** type Migration should be used as a value rather than a pointer ([fb43935](https://github.com/uptrace/bun/commit/fb4393582b49fe528800a66aac5fb1c9a6033048)) -* **migrate:** type MigrationGroup should be used as a value rather than a pointer ([649da1b](https://github.com/uptrace/bun/commit/649da1b3c158060add9b61b32c289260daafa65a)) -* mssql cursor pagination ([#589](https://github.com/uptrace/bun/issues/589)) ([b34ec97](https://github.com/uptrace/bun/commit/b34ec97ddda95629f73762721d60fd3e00e7e99f)) - - -### Features - -* "skipupdate" model field tag ([#565](https://github.com/uptrace/bun/issues/565)) ([9288294](https://github.com/uptrace/bun/commit/928829482c718a0c215aa4f4adfa6f3fb3ed4302)) -* add pgdriver write error to log ([5ddda3d](https://github.com/uptrace/bun/commit/5ddda3de31cd08ceee4bdea64ceae8d15eace07b)) -* add query string representation ([520da7e](https://github.com/uptrace/bun/commit/520da7e1d6dbf7b06846f6b39a7f99e8753c1466)) -* add relation condition with tag ([fe5bbf6](https://github.com/uptrace/bun/commit/fe5bbf64f33d25b310e5510ece7d705b9eb3bfea)) -* add support for ON UPDATE and ON DELETE rules on belongs-to relationships from struct tags ([#533](https://github.com/uptrace/bun/issues/533)) 
([a327b2a](https://github.com/uptrace/bun/commit/a327b2ae216abb55a705626296c0cdbf8d648697)) -* add tx methods to IDB ([#587](https://github.com/uptrace/bun/issues/587)) ([feab313](https://github.com/uptrace/bun/commit/feab313c0358200b6e270ac70f4551b011ab5276)) -* added raw query calls ([#596](https://github.com/uptrace/bun/issues/596)) ([127644d](https://github.com/uptrace/bun/commit/127644d2eea443736fbd6bed3417595d439e4639)) -* **bunotel:** add option to enable formatting of queries ([#547](https://github.com/uptrace/bun/issues/547)) ([b9c768c](https://github.com/uptrace/bun/commit/b9c768cec3b5dea36c3c9c344d1e76e0ffad1369)) -* **config.go:** add sslrootcert support to DSN parameters ([3bd5d69](https://github.com/uptrace/bun/commit/3bd5d692d7df4f30d07b835d6a46fc7af382489a)) -* create an extra module for newrelic ([#599](https://github.com/uptrace/bun/issues/599)) ([6c676ce](https://github.com/uptrace/bun/commit/6c676ce13f05fe763471fbec2d5a2db48bc88650)) -* **migrate:** add WithMarkAppliedOnSuccess ([31b2cc4](https://github.com/uptrace/bun/commit/31b2cc4f5ccd794a436d081073d4974835d3780d)) -* **pgdialect:** add hstore support ([66b44f7](https://github.com/uptrace/bun/commit/66b44f7c0edc205927fb8be96aaf263b31828fa1)) -* **pgdialect:** add identity support ([646251e](https://github.com/uptrace/bun/commit/646251ec02a1e2ec717e907e6f128d8b51f17c6d)) -* **pgdriver:** expose pgdriver.ParseTime ([405a7d7](https://github.com/uptrace/bun/commit/405a7d78d8f60cf27e8f175deaf95db5877d84be)) - - - -## [1.1.5](https://github.com/uptrace/bun/compare/v1.1.4...v1.1.5) (2022-05-12) - - -### Bug Fixes - -* **driver/sqliteshim:** make it work with recent version of modernc sqlite ([2360584](https://github.com/uptrace/bun/commit/23605846c20684e39bf1eaac50a2147a1b68a729)) - - - -## [1.1.4](https://github.com/uptrace/bun/compare/v1.1.3...v1.1.4) (2022-04-20) - - -### Bug Fixes - -* automatically set nullzero when there is default:value option 
([72c44ae](https://github.com/uptrace/bun/commit/72c44aebbeec3a83ed97ea25a3262174d744df65)) -* fix ForceDelete on live/undeleted rows ([1a33250](https://github.com/uptrace/bun/commit/1a33250f27f00e752a735ce10311ac95dcb0c968)) -* fix OmitZero and value overriding ([087ea07](https://github.com/uptrace/bun/commit/087ea0730551f1e841bacb6ad2fa3afd512a1df8)) -* rename Query to QueryBuilder ([98d111b](https://github.com/uptrace/bun/commit/98d111b7cc00fa61b6b2cec147f43285f4baadb4)) - - -### Features - -* add ApplyQueryBuilder ([582eca0](https://github.com/uptrace/bun/commit/582eca09cf2b59e67c2e4a2ad24f1a74cb53addd)) -* **config.go:** add connect_timeout to DSN parsable params ([998b04d](https://github.com/uptrace/bun/commit/998b04d51a9a4f182ac3458f90db8dbf9185c4ba)), closes [#505](https://github.com/uptrace/bun/issues/505) - - - -# [1.1.3](https://github.com/uptrace/bun/compare/v1.1.2...v) (2022-03-29) - -### Bug Fixes - -- fix panic message when has-many encounter an error - ([cfd2747](https://github.com/uptrace/bun/commit/cfd27475fac89a1c8cf798bfa64898bd77bbba79)) -- **migrate:** change rollback to match migrate behavior - ([df5af9c](https://github.com/uptrace/bun/commit/df5af9c9cbdf54ce243e037bbb2c7b154f8422b3)) - -### Features - -- added QueryBuilder interface for SelectQuery, UpdateQuery, DeleteQuery - ([#499](https://github.com/uptrace/bun/issues/499)) - ([59fef48](https://github.com/uptrace/bun/commit/59fef48f6b3ec7f32bdda779b6693c333ff1dfdb)) - -# [1.1.2](https://github.com/uptrace/bun/compare/v1.1.2...v) (2022-03-22) - -### Bug Fixes - -- correctly handle bun.In([][]byte{...}) - ([800616e](https://github.com/uptrace/bun/commit/800616ed28ca600ad676319a10adb970b2b4daf6)) - -### Features - -- accept extend option to allow extending existing models - ([48b80e4](https://github.com/uptrace/bun/commit/48b80e4f7e3ed8a28fd305f7853ebe7ab984a497)) - -# [1.1.0](https://github.com/uptrace/bun/compare/v1.1.0-beta.1...v1.1.0) (2022-02-28) - -### Features - -- Added 
[MSSQL](https://bun.uptrace.dev/guide/drivers.html#mssql) support as a 4th fully supported - DBMS. -- Added `SetColumn("col_name", "upper(?)", "hello")` in addition to - `Set("col_name = upper(?)", "hello")` which works for all 4 supported DBMS. - -* improve nil ptr values handling - ([b398e6b](https://github.com/uptrace/bun/commit/b398e6bea840ea2fd3e001b7879c0b00b6dcd6f7)) - -### Breaking changes - -- Bun no longer automatically marks some fields like `ID int64` as `pk` and `autoincrement`. You - need to manually add those options: - -```diff -type Model struct { -- ID int64 -+ ID int64 `bun:",pk,autoincrement"` -} -``` - -Bun [v1.0.25](#1024-2022-02-22) prints warnings for models with missing options so you are -recommended to upgrade to v1.0.24 before upgrading to v1.1.x. - -- Also, Bun no longer adds `nullzero` option to `soft_delete` fields. - -- Removed `nopk` and `allowzero` options. - -### Bug Fixes - -- append slice values - ([4a65129](https://github.com/uptrace/bun/commit/4a651294fb0f1e73079553024810c3ead9777311)) -- check for nils when appeding driver.Value - ([7bb1640](https://github.com/uptrace/bun/commit/7bb1640a00fceca1e1075fe6544b9a4842ab2b26)) -- cleanup soft deletes for mssql - ([e72e2c5](https://github.com/uptrace/bun/commit/e72e2c5d0a85f3d26c3fa22c7284c2de1dcfda8e)) -- **dbfixture:** apply cascade option. 
Fixes [#447](https://github.com/uptrace/bun/issues/447) - ([d32d988](https://github.com/uptrace/bun/commit/d32d98840bc23e74c836f8192cb4bc9529aa9233)) -- create table WithForeignKey() and has-many relation - ([3cf5649](https://github.com/uptrace/bun/commit/3cf56491706b5652c383dbe007ff2389ad64922e)) -- do not emit m2m relations in WithForeignKeys() - ([56c8c5e](https://github.com/uptrace/bun/commit/56c8c5ed44c0d6d734c3d3161c642ce8437e2248)) -- accept dest in select queries - ([33b5b6f](https://github.com/uptrace/bun/commit/33b5b6ff660b77238a737a543ca12675c7f0c284)) - -## [1.0.25](https://github.com/uptrace/bun/compare/v1.0.23...v1.0.25) (2022-02-22) - -### Bug Fixes - -### Deprecated - -In the coming v1.1.x release, Bun will stop automatically adding `,pk,autoincrement` options on -`ID int64/int32` fields. This version (v1.0.23) only prints a warning when it encounters such -fields, but the code will continue working as before. - -To fix warnings, add missing options: - -```diff -type Model struct { -- ID int64 -+ ID int64 `bun:",pk,autoincrement"` -} -``` - -To silence warnings: - -```go -bun.SetWarnLogger(log.New(ioutil.Discard, "", log.LstdFlags)) -``` - -Bun will also print a warning on [soft delete](https://bun.uptrace.dev/guide/soft-deletes.html) -fields without a `,nullzero` option. You can fix the warning by adding missing `,nullzero` or -`,allowzero` options. - -In v1.1.x, such options as `,nopk` and `,allowzero` will not be necessary and will be removed. 
- -### Bug Fixes - -- fix missing autoincrement warning - ([3bc9c72](https://github.com/uptrace/bun/commit/3bc9c721e1c1c5104c256a0c01c4525df6ecefc2)) - -* append slice values - ([4a65129](https://github.com/uptrace/bun/commit/4a651294fb0f1e73079553024810c3ead9777311)) -* don't automatically set pk, nullzero, and autoincrement options - ([519a0df](https://github.com/uptrace/bun/commit/519a0df9707de01a418aba0d6b7482cfe4c9a532)) - -### Features - -- add CreateTableQuery.DetectForeignKeys - ([a958fcb](https://github.com/uptrace/bun/commit/a958fcbab680b0c5ad7980f369c7b73f7673db87)) - -## [1.0.22](https://github.com/uptrace/bun/compare/v1.0.21...v1.0.22) (2022-01-28) - -### Bug Fixes - -- improve scan error message - ([54048b2](https://github.com/uptrace/bun/commit/54048b296b9648fd62107ce6fa6fd7e6e2a648c7)) -- properly discover json.Marshaler on ptr field - ([3b321b0](https://github.com/uptrace/bun/commit/3b321b08601c4b8dc6bcaa24adea20875883ac14)) - -### Breaking (MySQL, MariaDB) - -- **insert:** get last insert id only with pk support auto increment - ([79e7c79](https://github.com/uptrace/bun/commit/79e7c797beea54bfc9dc1cb0141a7520ff941b4d)). Make - sure your MySQL models have `bun:",pk,autoincrement"` options if you are using autoincrements. 
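The MySQL advice above ("make sure your models have `bun:",pk,autoincrement"` options") can be checked mechanically before upgrading. A hedged sketch of such a check — this merely imitates the warning the changelog describes, it is not Bun's implementation; `missingAutoincrement` is a name invented here, and the substring match on the tag is deliberately naive:

```go
package main

import (
	"fmt"
	"reflect"
	"strings"
)

// missingAutoincrement reports ID int64/int32 fields whose bun tag lacks
// the pk and autoincrement options. It is a standalone imitation of the
// warning described in the changelog, not Bun's actual check.
func missingAutoincrement(model interface{}) []string {
	var flagged []string
	t := reflect.TypeOf(model)
	for i := 0; i < t.NumField(); i++ {
		f := t.Field(i)
		k := f.Type.Kind()
		if f.Name != "ID" || (k != reflect.Int64 && k != reflect.Int32) {
			continue
		}
		tag := f.Tag.Get("bun")
		// Naive substring check; good enough for a smoke test.
		if !strings.Contains(tag, "pk") || !strings.Contains(tag, "autoincrement") {
			flagged = append(flagged, t.Name()+"."+f.Name)
		}
	}
	return flagged
}

// Legacy relies on the old implicit behavior; Fixed declares the options.
type Legacy struct{ ID int64 }
type Fixed struct {
	ID int64 `bun:",pk,autoincrement"`
}

func main() {
	fmt.Println(missingAutoincrement(Legacy{})) // [Legacy.ID]
	fmt.Println(missingAutoincrement(Fixed{}))  // []
}
```

Running the check flags `Legacy.ID` and passes `Fixed`, mirroring the warning v1.0.25 prints at startup.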
- -### Features - -- refuse to start when version check does not pass - ([ff8d767](https://github.com/uptrace/bun/commit/ff8d76794894eeaebede840e5199720f3f5cf531)) -- support Column in ValuesQuery - ([0707679](https://github.com/uptrace/bun/commit/0707679b075cac57efa8e6fe9019b57b2da4bcc7)) - -## [1.0.21](https://github.com/uptrace/bun/compare/v1.0.20...v1.0.21) (2022-01-06) - -### Bug Fixes - -- append where to index create - ([1de6cea](https://github.com/uptrace/bun/commit/1de6ceaa8bba59b69fbe0cc6916d1b27da5586d8)) -- check if slice is nil when calling BeforeAppendModel - ([938d9da](https://github.com/uptrace/bun/commit/938d9dadb72ceeeb906064d9575278929d20cbbe)) -- **dbfixture:** directly set matching types via reflect - ([780504c](https://github.com/uptrace/bun/commit/780504cf1da687fc51a22d002ea66e2ccc41e1a3)) -- properly handle driver.Valuer and type:json - ([a17454a](https://github.com/uptrace/bun/commit/a17454ac6b95b2a2e927d0c4e4aee96494108389)) -- support scanning string into uint64 - ([73cc117](https://github.com/uptrace/bun/commit/73cc117a9f7a623ced1fdaedb4546e8e7470e4d3)) -- unique module name for opentelemetry example - ([f2054fe](https://github.com/uptrace/bun/commit/f2054fe1d11cea3b21d69dab6f6d6d7d97ba06bb)) - -### Features - -- add anonymous fields with type name - ([508375b](https://github.com/uptrace/bun/commit/508375b8f2396cb088fd4399a9259584353eb7e5)) -- add baseQuery.GetConn() - ([81a9bee](https://github.com/uptrace/bun/commit/81a9beecb74fed7ec3574a1d42acdf10a74e0b00)) -- create new queries from baseQuery - ([ae1dd61](https://github.com/uptrace/bun/commit/ae1dd611a91c2b7c79bc2bc12e9a53e857791e71)) -- support INSERT ... 
RETURNING for MariaDB >= 10.5.0 - ([b6531c0](https://github.com/uptrace/bun/commit/b6531c00ecbd4c7ec56b4131fab213f9313edc1b)) - -## [1.0.20](https://github.com/uptrace/bun/compare/v1.0.19...v1.0.20) (2021-12-19) - -### Bug Fixes - -- add Event.QueryTemplate and change Event.Query to be always formatted - ([52b1ccd](https://github.com/uptrace/bun/commit/52b1ccdf3578418aa427adef9dcf942d90ae4fdd)) -- change GetTableName to return formatted table name in case ModelTableExpr - ([95144dd](https://github.com/uptrace/bun/commit/95144dde937b4ac88b36b0bd8b01372421069b44)) -- change ScanAndCount to work with transactions - ([5b3f2c0](https://github.com/uptrace/bun/commit/5b3f2c021c424da366caffd33589e8adde821403)) -- **dbfixture:** directly call funcs bypassing template eval - ([a61974b](https://github.com/uptrace/bun/commit/a61974ba2d24361c5357fb9bda1f3eceec5a45cd)) -- don't append CASCADE by default in drop table/column queries - ([26457ea](https://github.com/uptrace/bun/commit/26457ea5cb20862d232e6e5fa4dbdeac5d444bf1)) -- **migrate:** mark migrations as applied on error so the migration can be rolled back - ([8ce33fb](https://github.com/uptrace/bun/commit/8ce33fbbac8e33077c20daf19a14c5ff2291bcae)) -- respect nullzero when appending struct fields. 
Fixes - [#339](https://github.com/uptrace/bun/issues/339) - ([ffd02f3](https://github.com/uptrace/bun/commit/ffd02f3170b3cccdd670a48d563cfb41094c05d6)) -- reuse tx for relation join ([#366](https://github.com/uptrace/bun/issues/366)) - ([60bdb1a](https://github.com/uptrace/bun/commit/60bdb1ac84c0a699429eead3b7fdfbf14fe69ac6)) - -### Features - -- add `Dialect()` to Transaction and IDB interface - ([693f1e1](https://github.com/uptrace/bun/commit/693f1e135999fc31cf83b99a2530a695b20f4e1b)) -- add model embedding via embed:prefix\_ - ([9a2cedc](https://github.com/uptrace/bun/commit/9a2cedc8b08fa8585d4bfced338bd0a40d736b1d)) -- change the default logoutput to stderr - ([4bf5773](https://github.com/uptrace/bun/commit/4bf577382f19c64457cbf0d64490401450954654)), - closes [#349](https://github.com/uptrace/bun/issues/349) - -## [1.0.19](https://github.com/uptrace/bun/compare/v1.0.18...v1.0.19) (2021-11-30) - -### Features - -- add support for column:name to specify column name - ([e37b460](https://github.com/uptrace/bun/commit/e37b4602823babc8221970e086cfed90c6ad4cf4)) - -## [1.0.18](https://github.com/uptrace/bun/compare/v1.0.17...v1.0.18) (2021-11-24) - -### Bug Fixes - -- use correct operation for UpdateQuery - ([687a004](https://github.com/uptrace/bun/commit/687a004ef7ec6fe1ef06c394965dd2c2d822fc82)) - -### Features - -- add pgdriver.Notify - ([7ee443d](https://github.com/uptrace/bun/commit/7ee443d1b869d8ddc4746850f7425d0a9ccd012b)) -- CreateTableQuery.PartitionBy and CreateTableQuery.TableSpace - ([cd3ab4d](https://github.com/uptrace/bun/commit/cd3ab4d8f3682f5a30b87c2ebc2d7e551d739078)) -- **pgdriver:** add CopyFrom and CopyTo - ([0b97703](https://github.com/uptrace/bun/commit/0b977030b5c05f509e11d13550b5f99dfd62358d)) -- support InsertQuery.Ignore on PostgreSQL - ([1aa9d14](https://github.com/uptrace/bun/commit/1aa9d149da8e46e63ff79192e394fde4d18d9b60)) - -## [1.0.17](https://github.com/uptrace/bun/compare/v1.0.16...v1.0.17) (2021-11-11) - -### Bug Fixes - -- don't 
call rollback when tx is already done - ([8246c2a](https://github.com/uptrace/bun/commit/8246c2a63e2e6eba314201c6ba87f094edf098b9)) -- **mysql:** escape backslash char in strings - ([fb32029](https://github.com/uptrace/bun/commit/fb32029ea7604d066800b16df21f239b71bf121d)) - -## [1.0.16](https://github.com/uptrace/bun/compare/v1.0.15...v1.0.16) (2021-11-07) - -### Bug Fixes - -- call query hook when tx is started, committed, or rolled back - ([30e85b5](https://github.com/uptrace/bun/commit/30e85b5366b2e51951ef17a0cf362b58f708dab1)) -- **pgdialect:** auto-enable array support if the sql type is an array - ([62c1012](https://github.com/uptrace/bun/commit/62c1012b2482e83969e5c6f5faf89e655ce78138)) - -### Features - -- support multiple tag options join:left_col1=right_col1,join:left_col2=right_col2 - ([78cd5aa](https://github.com/uptrace/bun/commit/78cd5aa60a5c7d1323bb89081db2b2b811113052)) -- **tag:** log with bad tag name - ([4e82d75](https://github.com/uptrace/bun/commit/4e82d75be2dabdba1a510df4e1fbb86092f92f4c)) - -## [1.0.15](https://github.com/uptrace/bun/compare/v1.0.14...v1.0.15) (2021-10-29) - -### Bug Fixes - -- fixed bug creating table when model has no columns - ([042c50b](https://github.com/uptrace/bun/commit/042c50bfe41caaa6e279e02c887c3a84a3acd84f)) -- init table with dialect once - ([9a1ce1e](https://github.com/uptrace/bun/commit/9a1ce1e492602742bb2f587e9ed24e50d7d07cad)) - -### Features - -- accept columns in WherePK - ([b3e7035](https://github.com/uptrace/bun/commit/b3e70356db1aa4891115a10902316090fccbc8bf)) -- support ADD COLUMN IF NOT EXISTS - ([ca7357c](https://github.com/uptrace/bun/commit/ca7357cdfe283e2f0b94eb638372e18401c486e9)) - -## [1.0.14](https://github.com/uptrace/bun/compare/v1.0.13...v1.0.14) (2021-10-24) - -### Bug Fixes - -- correct binary serialization for mysql ([#259](https://github.com/uptrace/bun/issues/259)) - ([e899f50](https://github.com/uptrace/bun/commit/e899f50b22ef6759ef8c029a6cd3f25f2bde17ef)) -- correctly escape single 
quotes in pg arrays - ([3010847](https://github.com/uptrace/bun/commit/3010847f5c2c50bce1969689a0b77fd8a6fb7e55)) -- use BLOB sql type to encode []byte in MySQL and SQLite - ([725ec88](https://github.com/uptrace/bun/commit/725ec8843824a7fc8f4058ead75ab0e62a78192a)) - -### Features - -- warn when there are args but no placeholders - ([06dde21](https://github.com/uptrace/bun/commit/06dde215c8d0bde2b2364597190729a160e536a1)) - -## [1.0.13](https://github.com/uptrace/bun/compare/v1.0.12...v1.0.13) (2021-10-17) - -### Breaking Change - -- **pgdriver:** enable TLS by default with InsecureSkipVerify=true - ([15ec635](https://github.com/uptrace/bun/commit/15ec6356a04d5cf62d2efbeb189610532dc5eb31)) - -### Features - -- add BeforeAppendModelHook - ([0b55de7](https://github.com/uptrace/bun/commit/0b55de77aaffc1ed0894ef16f45df77bca7d93c1)) -- **pgdriver:** add support for unix socket DSN - ([f398cec](https://github.com/uptrace/bun/commit/f398cec1c3873efdf61ac0b94ebe06c657f0cf91)) - -## [1.0.12](https://github.com/uptrace/bun/compare/v1.0.11...v1.0.12) (2021-10-14) - -### Bug Fixes - -- add InsertQuery.ColumnExpr to specify columns - ([60ffe29](https://github.com/uptrace/bun/commit/60ffe293b37912d95f28e69734ff51edf4b27da7)) -- **bundebug:** change WithVerbose to accept a bool flag - ([b2f8b91](https://github.com/uptrace/bun/commit/b2f8b912de1dc29f40c79066de1e9d6379db666c)) -- **pgdialect:** fix bytea[] handling - ([a5ca013](https://github.com/uptrace/bun/commit/a5ca013742c5a2e947b43d13f9c2fc0cf6a65d9c)) -- **pgdriver:** rename DriverOption to Option - ([51c1702](https://github.com/uptrace/bun/commit/51c1702431787d7369904b2624e346bf3e59c330)) -- support allowzero on the soft delete field - ([d0abec7](https://github.com/uptrace/bun/commit/d0abec71a9a546472a83bd70ed4e6a7357659a9b)) - -### Features - -- **bundebug:** allow to configure the hook using env var, for example, BUNDEBUG={0,1,2} - ([ce92852](https://github.com/uptrace/bun/commit/ce928524cab9a83395f3772ae9dd5d7732af281d)) 
-- **bunotel:** report DBStats metrics - ([b9b1575](https://github.com/uptrace/bun/commit/b9b15750f405cdbd345b776f5a56c6f742bc7361)) -- **pgdriver:** add Error.StatementTimeout - ([8a7934d](https://github.com/uptrace/bun/commit/8a7934dd788057828bb2b0983732b4394b74e960)) -- **pgdriver:** allow setting Network in config - ([b24b5d8](https://github.com/uptrace/bun/commit/b24b5d8014195a56ad7a4c634c10681038e6044d)) - -## [1.0.11](https://github.com/uptrace/bun/compare/v1.0.10...v1.0.11) (2021-10-05) - -### Bug Fixes - -- **mysqldialect:** remove duplicate AppendTime - ([8d42090](https://github.com/uptrace/bun/commit/8d42090af34a1760004482c7fc0923b114d79937)) - -## [1.0.10](https://github.com/uptrace/bun/compare/v1.0.9...v1.0.10) (2021-10-05) - -### Bug Fixes - -- add UpdateQuery.OmitZero - ([2294db6](https://github.com/uptrace/bun/commit/2294db61d228711435fff1075409a30086b37555)) -- make ExcludeColumn work with many-to-many queries - ([300e12b](https://github.com/uptrace/bun/commit/300e12b993554ff839ec4fa6bbea97e16aca1b55)) -- **mysqldialect:** append time in local timezone - ([e763cc8](https://github.com/uptrace/bun/commit/e763cc81eac4b11fff4e074ad3ff6cd970a71697)) -- **tagparser:** improve parsing options with brackets - ([0daa61e](https://github.com/uptrace/bun/commit/0daa61edc3c4d927ed260332b99ee09f4bb6b42f)) - -### Features - -- add timetz parsing - ([6e415c4](https://github.com/uptrace/bun/commit/6e415c4c5fa2c8caf4bb4aed4e5897fe5676f5a5)) - -## [1.0.9](https://github.com/uptrace/bun/compare/v1.0.8...v1.0.9) (2021-09-27) - -### Bug Fixes - -- change DBStats to use uint32 instead of uint64 to make it work on i386 - ([caca2a7](https://github.com/uptrace/bun/commit/caca2a7130288dec49fa26b49c8550140ee52f4c)) - -### Features - -- add IQuery and QueryEvent.IQuery - ([b762942](https://github.com/uptrace/bun/commit/b762942fa3b1d8686d0a559f93f2a6847b83d9c1)) -- add QueryEvent.Model - ([7688201](https://github.com/uptrace/bun/commit/7688201b485d14d3e393956f09a3200ea4d4e31d)) 
-- **bunotel:** add experimental bun.query.timing metric - ([2cdb384](https://github.com/uptrace/bun/commit/2cdb384678631ccadac0fb75f524bd5e91e96ee2)) -- **pgdriver:** add Config.ConnParams to session config params - ([408caf0](https://github.com/uptrace/bun/commit/408caf0bb579e23e26fc6149efd6851814c22517)) -- **pgdriver:** allow specifying timeout in DSN - ([7dbc71b](https://github.com/uptrace/bun/commit/7dbc71b3494caddc2e97d113f00067071b9e19da)) - -## [1.0.8](https://github.com/uptrace/bun/compare/v1.0.7...v1.0.8) (2021-09-18) - -### Bug Fixes - -- don't append soft delete where for insert queries with on conflict clause - ([27c477c](https://github.com/uptrace/bun/commit/27c477ce071d4c49c99a2531d638ed9f20e33461)) -- improve bun.NullTime to accept string - ([73ad6f5](https://github.com/uptrace/bun/commit/73ad6f5640a0a9b09f8df2bc4ab9cb510021c50c)) -- make allowzero work with auto-detected primary keys - ([82ca87c](https://github.com/uptrace/bun/commit/82ca87c7c49797d507b31fdaacf8343716d4feff)) -- support soft deletes on nil model - ([0556e3c](https://github.com/uptrace/bun/commit/0556e3c63692a7f4e48659d52b55ffd9cca0202a)) - -## [1.0.7](https://github.com/uptrace/bun/compare/v1.0.6...v1.0.7) (2021-09-15) - -### Bug Fixes - -- don't append zero time as NULL without nullzero tag - ([3b8d9cb](https://github.com/uptrace/bun/commit/3b8d9cb4e39eb17f79a618396bbbe0adbc66b07b)) -- **pgdriver:** return PostgreSQL DATE as a string - ([40be0e8](https://github.com/uptrace/bun/commit/40be0e8ea85f8932b7a410a6fc2dd3acd2d18ebc)) -- specify table alias for soft delete where - ([5fff1dc](https://github.com/uptrace/bun/commit/5fff1dc1dd74fa48623a24fa79e358a544dfac0b)) - -### Features - -- add SelectQuery.Exists helper - ([c3e59c1](https://github.com/uptrace/bun/commit/c3e59c1bc58b43c4b8e33e7d170ad33a08fbc3c7)) - -## [1.0.6](https://github.com/uptrace/bun/compare/v1.0.5...v1.0.6) (2021-09-11) - -### Bug Fixes - -- change unique tag to create a separate unique constraint - 
([8401615](https://github.com/uptrace/bun/commit/84016155a77ca77613cc054277fefadae3098757)) -- improve zero checker for ptr values - ([2b3623d](https://github.com/uptrace/bun/commit/2b3623dd665d873911fd20ca707016929921e862)) - -## v1.0.5 - Sep 09 2021 - -- chore: tweak bundebug colors -- fix: check if table is present when appending columns -- fix: copy []byte when scanning - -## v1.0.4 - Sep 08 2021 - -- Added support for MariaDB. -- Restored default `SET` for `ON CONFLICT DO UPDATE` queries. - -## v1.0.3 - Sep 06 2021 - -- Fixed bulk soft deletes. -- pgdialect: fixed scanning into an array pointer. - -## v1.0.2 - Sep 04 2021 - -- Changed to completely ignore fields marked with `bun:"-"`. If you want to be able to scan into - such columns, use `bun:",scanonly"`. -- pgdriver: fixed SASL authentication handling. - -## v1.0.1 - Sep 02 2021 - -- pgdriver: added erroneous zero writes retry. -- Improved column handling in Relation callback. - -## v1.0.0 - Sep 01 2021 - -- First stable release. - -## v0.4.1 - Aug 18 2021 - -- Fixed migrate package to properly rollback migrations. -- Added `allowzero` tag option that undoes `nullzero` option. - -## v0.4.0 - Aug 11 2021 - -- Changed `WhereGroup` function to accept `*SelectQuery`. -- Fixed query hooks for count queries. - -## v0.3.4 - Jul 19 2021 - -- Renamed `migrate.CreateGo` to `CreateGoMigration`. -- Added `migrate.WithPackageName` to customize the Go package name in generated migrations. -- Renamed `migrate.CreateSQL` to `CreateSQLMigrations` and changed `CreateSQLMigrations` to create - both up and down migration files. - -## v0.3.1 - Jul 12 2021 - -- Renamed `alias` field struct tag to `alt` so it is not confused with column alias. -- Reworked migrate package API. See - [migrate](https://github.com/uptrace/bun/tree/master/example/migrate) example for details. - -## v0.3.0 - Jul 09 2021 - -- Changed migrate package to return structured data instead of logging the progress. 
See - [migrate](https://github.com/uptrace/bun/tree/master/example/migrate) example for details. - -## v0.2.14 - Jul 01 2021 - -- Added [sqliteshim](https://pkg.go.dev/github.com/uptrace/bun/driver/sqliteshim) by - [Ivan Trubach](https://github.com/tie). -- Added support for MySQL 5.7 in addition to MySQL 8. - -## v0.2.12 - Jun 29 2021 - -- Fixed scanners for net.IP and net.IPNet. - -## v0.2.10 - Jun 29 2021 - -- Fixed pgdriver to format passed query args. - -## v0.2.9 - Jun 27 2021 - -- Added support for prepared statements in pgdriver. - -## v0.2.7 - Jun 26 2021 - -- Added `UpdateQuery.Bulk` helper to generate bulk-update queries. - - Before: - - ```go - models := []Model{ - {42, "hello"}, - {43, "world"}, - } - return db.NewUpdate(). - With("_data", db.NewValues(&models)). - Model(&models). - Table("_data"). - Set("model.str = _data.str"). - Where("model.id = _data.id") - ``` - - Now: - - ```go - db.NewUpdate(). - Model(&models). - Bulk() - ``` - -## v0.2.5 - Jun 25 2021 - -- Changed time.Time to always append zero time as `NULL`. -- Added `db.RunInTx` helper. - -## v0.2.4 - Jun 21 2021 - -- Added SSL support to pgdriver. - -## v0.2.3 - Jun 20 2021 - -- Replaced `ForceDelete(ctx)` with `ForceDelete().Exec(ctx)` for soft deletes. - -## v0.2.1 - Jun 17 2021 - -- Renamed `DBI` to `IConn`. `IConn` is a common interface for `*sql.DB`, `*sql.Conn`, and `*sql.Tx`. -- Added `IDB`. `IDB` is a common interface for `*bun.DB`, `bun.Conn`, and `bun.Tx`. - -## v0.2.0 - Jun 16 2021 - -- Changed [model hooks](https://bun.uptrace.dev/guide/hooks.html#model-hooks). See - [model-hooks](example/model-hooks) example. -- Renamed `has-one` to `belongs-to`. Renamed `belongs-to` to `has-one`. Previously Bun used - incorrect names for these relations. 
diff --git a/vendor/github.com/uptrace/bun/CONTRIBUTING.md b/vendor/github.com/uptrace/bun/CONTRIBUTING.md deleted file mode 100644 index 579b96f8..00000000 --- a/vendor/github.com/uptrace/bun/CONTRIBUTING.md +++ /dev/null @@ -1,34 +0,0 @@ -## Running tests - -To run tests, you need Docker which starts PostgreSQL and MySQL servers: - -```shell -cd internal/dbtest -./test.sh -``` - -To ease debugging, you can run tests and print all executed queries: - -```shell -BUNDEBUG=2 TZ= go test -run=TestName -``` - -## Releasing - -1. Run `release.sh` script which updates versions in go.mod files and pushes a new branch to GitHub: - -```shell -TAG=v1.0.0 ./scripts/release.sh -``` - -2. Open a pull request and wait for the build to finish. - -3. Merge the pull request and run `tag.sh` to create tags for packages: - -```shell -TAG=v1.0.0 ./scripts/tag.sh -``` - -## Documentation - -To contribute to the docs visit https://github.com/go-bun/bun-docs diff --git a/vendor/github.com/uptrace/bun/LICENSE b/vendor/github.com/uptrace/bun/LICENSE deleted file mode 100644 index 7ec81810..00000000 --- a/vendor/github.com/uptrace/bun/LICENSE +++ /dev/null @@ -1,24 +0,0 @@ -Copyright (c) 2021 Vladimir Mihailenco. All rights reserved. - -Redistribution and use in source and binary forms, with or without -modification, are permitted provided that the following conditions are -met: - - * Redistributions of source code must retain the above copyright -notice, this list of conditions and the following disclaimer. - * Redistributions in binary form must reproduce the above -copyright notice, this list of conditions and the following disclaimer -in the documentation and/or other materials provided with the -distribution. - -THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS -"AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT -LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR -A PARTICULAR PURPOSE ARE DISCLAIMED. 
IN NO EVENT SHALL THE COPYRIGHT -OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, -SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT -LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, -DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY -THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT -(INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE -OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. diff --git a/vendor/github.com/uptrace/bun/Makefile b/vendor/github.com/uptrace/bun/Makefile deleted file mode 100644 index 4961a8ab..00000000 --- a/vendor/github.com/uptrace/bun/Makefile +++ /dev/null @@ -1,30 +0,0 @@ -ALL_GO_MOD_DIRS := $(shell find . -type f -name 'go.mod' -exec dirname {} \; | sort) -EXAMPLE_GO_MOD_DIRS := $(shell find ./example/ -type f -name 'go.mod' -exec dirname {} \; | sort) - -test: - set -e; for dir in $(ALL_GO_MOD_DIRS); do \ - echo "go test in $${dir}"; \ - (cd "$${dir}" && \ - go test && \ - env GOOS=linux GOARCH=386 go test && \ - go vet); \ - done - -go_mod_tidy: - go get -u && go mod tidy -go=1.18 - set -e; for dir in $(ALL_GO_MOD_DIRS); do \ - echo "go mod tidy in $${dir}"; \ - (cd "$${dir}" && \ - go get -u ./... && \ - go mod tidy -go=1.18); \ - done - -fmt: - gofmt -w -s ./ - goimports -w -local github.com/uptrace/bun ./ - -run-examples: - set -e; for dir in $(EXAMPLE_GO_MOD_DIRS); do \ - echo "go run . 
in $${dir}"; \ - (cd "$${dir}" && go run .); \ - done diff --git a/vendor/github.com/uptrace/bun/README.md b/vendor/github.com/uptrace/bun/README.md deleted file mode 100644 index 07a01aa6..00000000 --- a/vendor/github.com/uptrace/bun/README.md +++ /dev/null @@ -1,144 +0,0 @@ -# SQL-first Golang ORM for PostgreSQL, MySQL, MSSQL, and SQLite - -[![build workflow](https://github.com/uptrace/bun/actions/workflows/build.yml/badge.svg)](https://github.com/uptrace/bun/actions) -[![PkgGoDev](https://pkg.go.dev/badge/github.com/uptrace/bun)](https://pkg.go.dev/github.com/uptrace/bun) -[![Documentation](https://img.shields.io/badge/bun-documentation-informational)](https://bun.uptrace.dev/) -[![Chat](https://discordapp.com/api/guilds/752070105847955518/widget.png)](https://discord.gg/rWtp5Aj) - -> Bun is brought to you by :star: [**uptrace/uptrace**](https://github.com/uptrace/uptrace). Uptrace -> is an open-source APM tool that supports distributed tracing, metrics, and logs. You can use it to -> monitor applications and set up automatic alerts to receive notifications via email, Slack, -> Telegram, and others. -> -> See [OpenTelemetry](example/opentelemetry) example which demonstrates how you can use Uptrace to -> monitor Bun. - -## Features - -- Works with [PostgreSQL](https://bun.uptrace.dev/guide/drivers.html#postgresql), - [MySQL](https://bun.uptrace.dev/guide/drivers.html#mysql) (including MariaDB), - [MSSQL](https://bun.uptrace.dev/guide/drivers.html#mssql), - [SQLite](https://bun.uptrace.dev/guide/drivers.html#sqlite). -- [ORM-like](/example/basic/) experience using good old SQL. Bun supports structs, map, scalars, and - slices of map/structs/scalars. -- [Bulk inserts](https://bun.uptrace.dev/guide/query-insert.html). -- [Bulk updates](https://bun.uptrace.dev/guide/query-update.html) using common table expressions. -- [Bulk deletes](https://bun.uptrace.dev/guide/query-delete.html). -- [Fixtures](https://bun.uptrace.dev/guide/fixtures.html). 
-- [Migrations](https://bun.uptrace.dev/guide/migrations.html). -- [Soft deletes](https://bun.uptrace.dev/guide/soft-deletes.html). - -### Resources - -- [**Get started**](https://bun.uptrace.dev/guide/golang-orm.html) -- [Examples](https://github.com/uptrace/bun/tree/master/example) -- [Discussions](https://github.com/uptrace/bun/discussions) -- [Chat](https://discord.gg/rWtp5Aj) -- [Reference](https://pkg.go.dev/github.com/uptrace/bun) -- [Starter kit](https://github.com/go-bun/bun-starter-kit) - -### Tutorials - -Wrote a tutorial for Bun? Create a PR to add here and on [Bun](https://bun.uptrace.dev/) site. - -### Featured projects using Bun - -- [uptrace](https://github.com/uptrace/uptrace) - Distributed tracing and metrics. -- [paralus](https://github.com/paralus/paralus) - All-in-one Kubernetes access manager. -- [inovex/scrumlr.io](https://github.com/inovex/scrumlr.io) - Webapp for collaborative online - retrospectives. -- [gotosocial](https://github.com/superseriousbusiness/gotosocial) - Golang fediverse server. -- [lorawan-stack](https://github.com/TheThingsNetwork/lorawan-stack) - The Things Stack, an Open - Source LoRaWAN Network Server. -- [anti-phishing-bot](https://github.com/Benricheson101/anti-phishing-bot) - Discord bot for - deleting Steam/Discord phishing links. -- [emerald-web3-gateway](https://github.com/oasisprotocol/emerald-web3-gateway) - Web3 Gateway for - the Oasis Emerald paratime. -- [lndhub.go](https://github.com/getAlby/lndhub.go) - accounting wrapper for the Lightning Network. -- [penguin-statistics](https://github.com/penguin-statistics/backend-next) - Penguin Statistics v3 - Backend. -- And - [hundreds more](https://github.com/uptrace/bun/network/dependents?package_id=UGFja2FnZS0yMjkxOTc4OTA4). - -## Why another database client? - -So you can elegantly write complex queries: - -```go -regionalSales := db.NewSelect(). - ColumnExpr("region"). - ColumnExpr("SUM(amount) AS total_sales"). - TableExpr("orders"). 
- GroupExpr("region") - -topRegions := db.NewSelect(). - ColumnExpr("region"). - TableExpr("regional_sales"). - Where("total_sales > (SELECT SUM(total_sales) / 10 FROM regional_sales)") - -var items []map[string]interface{} -err := db.NewSelect(). - With("regional_sales", regionalSales). - With("top_regions", topRegions). - ColumnExpr("region"). - ColumnExpr("product"). - ColumnExpr("SUM(quantity) AS product_units"). - ColumnExpr("SUM(amount) AS product_sales"). - TableExpr("orders"). - Where("region IN (SELECT region FROM top_regions)"). - GroupExpr("region"). - GroupExpr("product"). - Scan(ctx, &items) -``` - -```sql -WITH regional_sales AS ( - SELECT region, SUM(amount) AS total_sales - FROM orders - GROUP BY region -), top_regions AS ( - SELECT region - FROM regional_sales - WHERE total_sales > (SELECT SUM(total_sales)/10 FROM regional_sales) -) -SELECT region, - product, - SUM(quantity) AS product_units, - SUM(amount) AS product_sales -FROM orders -WHERE region IN (SELECT region FROM top_regions) -GROUP BY region, product -``` - -And scan results into scalars, structs, maps, slices of structs/maps/scalars: - -```go -users := make([]User, 0) -if err := db.NewSelect().Model(&users).OrderExpr("id ASC").Scan(ctx); err != nil { - panic(err) -} - -user1 := new(User) -if err := db.NewSelect().Model(user1).Where("id = ?", 1).Scan(ctx); err != nil { - panic(err) -} -``` - -See [**Getting started**](https://bun.uptrace.dev/guide/golang-orm.html) guide and check -[examples](example). - -## See also - -- [Golang HTTP router](https://github.com/uptrace/bunrouter) -- [Golang ClickHouse ORM](https://github.com/uptrace/go-clickhouse) -- [Golang msgpack](https://github.com/vmihailenco/msgpack) - -## Contributing - -See [CONTRIBUTING.md](CONTRIBUTING.md) for some hints. - -And thanks to all the people who already contributed! 
- - - - diff --git a/vendor/github.com/uptrace/bun/bun.go b/vendor/github.com/uptrace/bun/bun.go deleted file mode 100644 index 923be311..00000000 --- a/vendor/github.com/uptrace/bun/bun.go +++ /dev/null @@ -1,84 +0,0 @@ -package bun - -import ( - "context" - - "github.com/uptrace/bun/internal" - "github.com/uptrace/bun/schema" -) - -type ( - Safe = schema.Safe - Ident = schema.Ident - - NullTime = schema.NullTime - BaseModel = schema.BaseModel - Query = schema.Query - - BeforeAppendModelHook = schema.BeforeAppendModelHook - - BeforeScanRowHook = schema.BeforeScanRowHook - AfterScanRowHook = schema.AfterScanRowHook - - // DEPRECATED. Use BeforeScanRowHook instead. - BeforeScanHook = schema.BeforeScanHook - // DEPRECATED. Use AfterScanRowHook instead. - AfterScanHook = schema.AfterScanHook -) - -type BeforeSelectHook interface { - BeforeSelect(ctx context.Context, query *SelectQuery) error -} - -type AfterSelectHook interface { - AfterSelect(ctx context.Context, query *SelectQuery) error -} - -type BeforeInsertHook interface { - BeforeInsert(ctx context.Context, query *InsertQuery) error -} - -type AfterInsertHook interface { - AfterInsert(ctx context.Context, query *InsertQuery) error -} - -type BeforeUpdateHook interface { - BeforeUpdate(ctx context.Context, query *UpdateQuery) error -} - -type AfterUpdateHook interface { - AfterUpdate(ctx context.Context, query *UpdateQuery) error -} - -type BeforeDeleteHook interface { - BeforeDelete(ctx context.Context, query *DeleteQuery) error -} - -type AfterDeleteHook interface { - AfterDelete(ctx context.Context, query *DeleteQuery) error -} - -type BeforeCreateTableHook interface { - BeforeCreateTable(ctx context.Context, query *CreateTableQuery) error -} - -type AfterCreateTableHook interface { - AfterCreateTable(ctx context.Context, query *CreateTableQuery) error -} - -type BeforeDropTableHook interface { - BeforeDropTable(ctx context.Context, query *DropTableQuery) error -} - -type AfterDropTableHook interface { - 
AfterDropTable(ctx context.Context, query *DropTableQuery) error -} - -// SetLogger overwriters default Bun logger. -func SetLogger(logger internal.Logging) { - internal.Logger = logger -} - -func In(slice interface{}) schema.QueryAppender { - return schema.In(slice) -} diff --git a/vendor/github.com/uptrace/bun/commitlint.config.js b/vendor/github.com/uptrace/bun/commitlint.config.js deleted file mode 100644 index 4fedde6d..00000000 --- a/vendor/github.com/uptrace/bun/commitlint.config.js +++ /dev/null @@ -1 +0,0 @@ -module.exports = { extends: ['@commitlint/config-conventional'] } diff --git a/vendor/github.com/uptrace/bun/db.go b/vendor/github.com/uptrace/bun/db.go deleted file mode 100644 index 106dfe90..00000000 --- a/vendor/github.com/uptrace/bun/db.go +++ /dev/null @@ -1,708 +0,0 @@ -package bun - -import ( - "context" - "crypto/rand" - "database/sql" - "encoding/hex" - "fmt" - "reflect" - "strings" - "sync/atomic" - - "github.com/uptrace/bun/dialect/feature" - "github.com/uptrace/bun/internal" - "github.com/uptrace/bun/schema" -) - -const ( - discardUnknownColumns internal.Flag = 1 << iota -) - -type DBStats struct { - Queries uint32 - Errors uint32 -} - -type DBOption func(db *DB) - -func WithDiscardUnknownColumns() DBOption { - return func(db *DB) { - db.flags = db.flags.Set(discardUnknownColumns) - } -} - -type DB struct { - *sql.DB - - dialect schema.Dialect - features feature.Feature - - queryHooks []QueryHook - - fmter schema.Formatter - flags internal.Flag - - stats DBStats -} - -func NewDB(sqldb *sql.DB, dialect schema.Dialect, opts ...DBOption) *DB { - dialect.Init(sqldb) - - db := &DB{ - DB: sqldb, - dialect: dialect, - features: dialect.Features(), - fmter: schema.NewFormatter(dialect), - } - - for _, opt := range opts { - opt(db) - } - - return db -} - -func (db *DB) String() string { - var b strings.Builder - b.WriteString("DB") - return b.String() -} - -func (db *DB) DBStats() DBStats { - return DBStats{ - Queries: 
atomic.LoadUint32(&db.stats.Queries), - Errors: atomic.LoadUint32(&db.stats.Errors), - } -} - -func (db *DB) NewValues(model interface{}) *ValuesQuery { - return NewValuesQuery(db, model) -} - -func (db *DB) NewMerge() *MergeQuery { - return NewMergeQuery(db) -} - -func (db *DB) NewSelect() *SelectQuery { - return NewSelectQuery(db) -} - -func (db *DB) NewInsert() *InsertQuery { - return NewInsertQuery(db) -} - -func (db *DB) NewUpdate() *UpdateQuery { - return NewUpdateQuery(db) -} - -func (db *DB) NewDelete() *DeleteQuery { - return NewDeleteQuery(db) -} - -func (db *DB) NewRaw(query string, args ...interface{}) *RawQuery { - return NewRawQuery(db, query, args...) -} - -func (db *DB) NewCreateTable() *CreateTableQuery { - return NewCreateTableQuery(db) -} - -func (db *DB) NewDropTable() *DropTableQuery { - return NewDropTableQuery(db) -} - -func (db *DB) NewCreateIndex() *CreateIndexQuery { - return NewCreateIndexQuery(db) -} - -func (db *DB) NewDropIndex() *DropIndexQuery { - return NewDropIndexQuery(db) -} - -func (db *DB) NewTruncateTable() *TruncateTableQuery { - return NewTruncateTableQuery(db) -} - -func (db *DB) NewAddColumn() *AddColumnQuery { - return NewAddColumnQuery(db) -} - -func (db *DB) NewDropColumn() *DropColumnQuery { - return NewDropColumnQuery(db) -} - -func (db *DB) ResetModel(ctx context.Context, models ...interface{}) error { - for _, model := range models { - if _, err := db.NewDropTable().Model(model).IfExists().Cascade().Exec(ctx); err != nil { - return err - } - if _, err := db.NewCreateTable().Model(model).Exec(ctx); err != nil { - return err - } - } - return nil -} - -func (db *DB) Dialect() schema.Dialect { - return db.dialect -} - -func (db *DB) ScanRows(ctx context.Context, rows *sql.Rows, dest ...interface{}) error { - defer rows.Close() - - model, err := newModel(db, dest) - if err != nil { - return err - } - - _, err = model.ScanRows(ctx, rows) - if err != nil { - return err - } - - return rows.Err() -} - -func (db *DB) 
ScanRow(ctx context.Context, rows *sql.Rows, dest ...interface{}) error { - model, err := newModel(db, dest) - if err != nil { - return err - } - - rs, ok := model.(rowScanner) - if !ok { - return fmt.Errorf("bun: %T does not support ScanRow", model) - } - - return rs.ScanRow(ctx, rows) -} - -type queryHookIniter interface { - Init(db *DB) -} - -func (db *DB) AddQueryHook(hook QueryHook) { - if initer, ok := hook.(queryHookIniter); ok { - initer.Init(db) - } - db.queryHooks = append(db.queryHooks, hook) -} - -func (db *DB) Table(typ reflect.Type) *schema.Table { - return db.dialect.Tables().Get(typ) -} - -// RegisterModel registers models by name so they can be referenced in table relations -// and fixtures. -func (db *DB) RegisterModel(models ...interface{}) { - db.dialect.Tables().Register(models...) -} - -func (db *DB) clone() *DB { - clone := *db - - l := len(clone.queryHooks) - clone.queryHooks = clone.queryHooks[:l:l] - - return &clone -} - -func (db *DB) WithNamedArg(name string, value interface{}) *DB { - clone := db.clone() - clone.fmter = clone.fmter.WithNamedArg(name, value) - return clone -} - -func (db *DB) Formatter() schema.Formatter { - return db.fmter -} - -// UpdateFQN returns a fully qualified column name. For MySQL, it returns the column name with -// the table alias. For other RDBMS, it returns just the column name. -func (db *DB) UpdateFQN(alias, column string) Ident { - if db.HasFeature(feature.UpdateMultiTable) { - return Ident(alias + "." + column) - } - return Ident(column) -} - -// HasFeature uses feature package to report whether the underlying DBMS supports this feature. -func (db *DB) HasFeature(feat feature.Feature) bool { - return db.fmter.HasFeature(feat) -} - -//------------------------------------------------------------------------------ - -func (db *DB) Exec(query string, args ...interface{}) (sql.Result, error) { - return db.ExecContext(context.Background(), query, args...) 
-} - -func (db *DB) ExecContext( - ctx context.Context, query string, args ...interface{}, -) (sql.Result, error) { - formattedQuery := db.format(query, args) - ctx, event := db.beforeQuery(ctx, nil, query, args, formattedQuery, nil) - res, err := db.DB.ExecContext(ctx, formattedQuery) - db.afterQuery(ctx, event, res, err) - return res, err -} - -func (db *DB) Query(query string, args ...interface{}) (*sql.Rows, error) { - return db.QueryContext(context.Background(), query, args...) -} - -func (db *DB) QueryContext( - ctx context.Context, query string, args ...interface{}, -) (*sql.Rows, error) { - formattedQuery := db.format(query, args) - ctx, event := db.beforeQuery(ctx, nil, query, args, formattedQuery, nil) - rows, err := db.DB.QueryContext(ctx, formattedQuery) - db.afterQuery(ctx, event, nil, err) - return rows, err -} - -func (db *DB) QueryRow(query string, args ...interface{}) *sql.Row { - return db.QueryRowContext(context.Background(), query, args...) -} - -func (db *DB) QueryRowContext(ctx context.Context, query string, args ...interface{}) *sql.Row { - formattedQuery := db.format(query, args) - ctx, event := db.beforeQuery(ctx, nil, query, args, formattedQuery, nil) - row := db.DB.QueryRowContext(ctx, formattedQuery) - db.afterQuery(ctx, event, nil, row.Err()) - return row -} - -func (db *DB) format(query string, args []interface{}) string { - return db.fmter.FormatQuery(query, args...) 
-} - -//------------------------------------------------------------------------------ - -type Conn struct { - db *DB - *sql.Conn -} - -func (db *DB) Conn(ctx context.Context) (Conn, error) { - conn, err := db.DB.Conn(ctx) - if err != nil { - return Conn{}, err - } - return Conn{ - db: db, - Conn: conn, - }, nil -} - -func (c Conn) ExecContext( - ctx context.Context, query string, args ...interface{}, -) (sql.Result, error) { - formattedQuery := c.db.format(query, args) - ctx, event := c.db.beforeQuery(ctx, nil, query, args, formattedQuery, nil) - res, err := c.Conn.ExecContext(ctx, formattedQuery) - c.db.afterQuery(ctx, event, res, err) - return res, err -} - -func (c Conn) QueryContext( - ctx context.Context, query string, args ...interface{}, -) (*sql.Rows, error) { - formattedQuery := c.db.format(query, args) - ctx, event := c.db.beforeQuery(ctx, nil, query, args, formattedQuery, nil) - rows, err := c.Conn.QueryContext(ctx, formattedQuery) - c.db.afterQuery(ctx, event, nil, err) - return rows, err -} - -func (c Conn) QueryRowContext(ctx context.Context, query string, args ...interface{}) *sql.Row { - formattedQuery := c.db.format(query, args) - ctx, event := c.db.beforeQuery(ctx, nil, query, args, formattedQuery, nil) - row := c.Conn.QueryRowContext(ctx, formattedQuery) - c.db.afterQuery(ctx, event, nil, row.Err()) - return row -} - -func (c Conn) Dialect() schema.Dialect { - return c.db.Dialect() -} - -func (c Conn) NewValues(model interface{}) *ValuesQuery { - return NewValuesQuery(c.db, model).Conn(c) -} - -func (c Conn) NewMerge() *MergeQuery { - return NewMergeQuery(c.db).Conn(c) -} - -func (c Conn) NewSelect() *SelectQuery { - return NewSelectQuery(c.db).Conn(c) -} - -func (c Conn) NewInsert() *InsertQuery { - return NewInsertQuery(c.db).Conn(c) -} - -func (c Conn) NewUpdate() *UpdateQuery { - return NewUpdateQuery(c.db).Conn(c) -} - -func (c Conn) NewDelete() *DeleteQuery { - return NewDeleteQuery(c.db).Conn(c) -} - -func (c Conn) NewRaw(query string, 
args ...interface{}) *RawQuery { - return NewRawQuery(c.db, query, args...).Conn(c) -} - -func (c Conn) NewCreateTable() *CreateTableQuery { - return NewCreateTableQuery(c.db).Conn(c) -} - -func (c Conn) NewDropTable() *DropTableQuery { - return NewDropTableQuery(c.db).Conn(c) -} - -func (c Conn) NewCreateIndex() *CreateIndexQuery { - return NewCreateIndexQuery(c.db).Conn(c) -} - -func (c Conn) NewDropIndex() *DropIndexQuery { - return NewDropIndexQuery(c.db).Conn(c) -} - -func (c Conn) NewTruncateTable() *TruncateTableQuery { - return NewTruncateTableQuery(c.db).Conn(c) -} - -func (c Conn) NewAddColumn() *AddColumnQuery { - return NewAddColumnQuery(c.db).Conn(c) -} - -func (c Conn) NewDropColumn() *DropColumnQuery { - return NewDropColumnQuery(c.db).Conn(c) -} - -// RunInTx runs the function in a transaction. If the function returns an error, -// the transaction is rolled back. Otherwise, the transaction is committed. -func (c Conn) RunInTx( - ctx context.Context, opts *sql.TxOptions, fn func(ctx context.Context, tx Tx) error, -) error { - tx, err := c.BeginTx(ctx, opts) - if err != nil { - return err - } - - var done bool - - defer func() { - if !done { - _ = tx.Rollback() - } - }() - - if err := fn(ctx, tx); err != nil { - return err - } - - done = true - return tx.Commit() -} - -func (c Conn) BeginTx(ctx context.Context, opts *sql.TxOptions) (Tx, error) { - ctx, event := c.db.beforeQuery(ctx, nil, "BEGIN", nil, "BEGIN", nil) - tx, err := c.Conn.BeginTx(ctx, opts) - c.db.afterQuery(ctx, event, nil, err) - if err != nil { - return Tx{}, err - } - return Tx{ - ctx: ctx, - db: c.db, - Tx: tx, - }, nil -} - -//------------------------------------------------------------------------------ - -type Stmt struct { - *sql.Stmt -} - -func (db *DB) Prepare(query string) (Stmt, error) { - return db.PrepareContext(context.Background(), query) -} - -func (db *DB) PrepareContext(ctx context.Context, query string) (Stmt, error) { - stmt, err := db.DB.PrepareContext(ctx, query) - 
if err != nil { - return Stmt{}, err - } - return Stmt{Stmt: stmt}, nil -} - -//------------------------------------------------------------------------------ - -type Tx struct { - ctx context.Context - db *DB - // name is the name of a savepoint - name string - *sql.Tx -} - -// RunInTx runs the function in a transaction. If the function returns an error, -// the transaction is rolled back. Otherwise, the transaction is committed. -func (db *DB) RunInTx( - ctx context.Context, opts *sql.TxOptions, fn func(ctx context.Context, tx Tx) error, -) error { - tx, err := db.BeginTx(ctx, opts) - if err != nil { - return err - } - - var done bool - - defer func() { - if !done { - _ = tx.Rollback() - } - }() - - if err := fn(ctx, tx); err != nil { - return err - } - - done = true - return tx.Commit() -} - -func (db *DB) Begin() (Tx, error) { - return db.BeginTx(context.Background(), nil) -} - -func (db *DB) BeginTx(ctx context.Context, opts *sql.TxOptions) (Tx, error) { - ctx, event := db.beforeQuery(ctx, nil, "BEGIN", nil, "BEGIN", nil) - tx, err := db.DB.BeginTx(ctx, opts) - db.afterQuery(ctx, event, nil, err) - if err != nil { - return Tx{}, err - } - return Tx{ - ctx: ctx, - db: db, - Tx: tx, - }, nil -} - -func (tx Tx) Commit() error { - if tx.name == "" { - return tx.commitTX() - } - return tx.commitSP() -} - -func (tx Tx) commitTX() error { - ctx, event := tx.db.beforeQuery(tx.ctx, nil, "COMMIT", nil, "COMMIT", nil) - err := tx.Tx.Commit() - tx.db.afterQuery(ctx, event, nil, err) - return err -} - -func (tx Tx) commitSP() error { - if tx.Dialect().Features().Has(feature.MSSavepoint) { - return nil - } - query := "RELEASE SAVEPOINT " + tx.name - _, err := tx.ExecContext(tx.ctx, query) - return err -} - -func (tx Tx) Rollback() error { - if tx.name == "" { - return tx.rollbackTX() - } - return tx.rollbackSP() -} - -func (tx Tx) rollbackTX() error { - ctx, event := tx.db.beforeQuery(tx.ctx, nil, "ROLLBACK", nil, "ROLLBACK", nil) - err := tx.Tx.Rollback() - 
tx.db.afterQuery(ctx, event, nil, err) - return err -} - -func (tx Tx) rollbackSP() error { - query := "ROLLBACK TO SAVEPOINT " + tx.name - if tx.Dialect().Features().Has(feature.MSSavepoint) { - query = "ROLLBACK TRANSACTION " + tx.name - } - _, err := tx.ExecContext(tx.ctx, query) - return err -} - -func (tx Tx) Exec(query string, args ...interface{}) (sql.Result, error) { - return tx.ExecContext(context.TODO(), query, args...) -} - -func (tx Tx) ExecContext( - ctx context.Context, query string, args ...interface{}, -) (sql.Result, error) { - formattedQuery := tx.db.format(query, args) - ctx, event := tx.db.beforeQuery(ctx, nil, query, args, formattedQuery, nil) - res, err := tx.Tx.ExecContext(ctx, formattedQuery) - tx.db.afterQuery(ctx, event, res, err) - return res, err -} - -func (tx Tx) Query(query string, args ...interface{}) (*sql.Rows, error) { - return tx.QueryContext(context.TODO(), query, args...) -} - -func (tx Tx) QueryContext( - ctx context.Context, query string, args ...interface{}, -) (*sql.Rows, error) { - formattedQuery := tx.db.format(query, args) - ctx, event := tx.db.beforeQuery(ctx, nil, query, args, formattedQuery, nil) - rows, err := tx.Tx.QueryContext(ctx, formattedQuery) - tx.db.afterQuery(ctx, event, nil, err) - return rows, err -} - -func (tx Tx) QueryRow(query string, args ...interface{}) *sql.Row { - return tx.QueryRowContext(context.TODO(), query, args...) -} - -func (tx Tx) QueryRowContext(ctx context.Context, query string, args ...interface{}) *sql.Row { - formattedQuery := tx.db.format(query, args) - ctx, event := tx.db.beforeQuery(ctx, nil, query, args, formattedQuery, nil) - row := tx.Tx.QueryRowContext(ctx, formattedQuery) - tx.db.afterQuery(ctx, event, nil, row.Err()) - return row -} - -//------------------------------------------------------------------------------ - -func (tx Tx) Begin() (Tx, error) { - return tx.BeginTx(tx.ctx, nil) -} - -// BeginTx will save a point in the running transaction. 
-func (tx Tx) BeginTx(ctx context.Context, _ *sql.TxOptions) (Tx, error) { - // mssql savepoint names are limited to 32 characters - sp := make([]byte, 14) - _, err := rand.Read(sp) - if err != nil { - return Tx{}, err - } - - qName := "SP_" + hex.EncodeToString(sp) - query := "SAVEPOINT " + qName - if tx.Dialect().Features().Has(feature.MSSavepoint) { - query = "SAVE TRANSACTION " + qName - } - _, err = tx.ExecContext(ctx, query) - if err != nil { - return Tx{}, err - } - return Tx{ - ctx: ctx, - db: tx.db, - Tx: tx.Tx, - name: qName, - }, nil -} - -func (tx Tx) RunInTx( - ctx context.Context, _ *sql.TxOptions, fn func(ctx context.Context, tx Tx) error, -) error { - sp, err := tx.BeginTx(ctx, nil) - if err != nil { - return err - } - - var done bool - - defer func() { - if !done { - _ = sp.Rollback() - } - }() - - if err := fn(ctx, sp); err != nil { - return err - } - - done = true - return sp.Commit() -} - -func (tx Tx) Dialect() schema.Dialect { - return tx.db.Dialect() -} - -func (tx Tx) NewValues(model interface{}) *ValuesQuery { - return NewValuesQuery(tx.db, model).Conn(tx) -} - -func (tx Tx) NewMerge() *MergeQuery { - return NewMergeQuery(tx.db).Conn(tx) -} - -func (tx Tx) NewSelect() *SelectQuery { - return NewSelectQuery(tx.db).Conn(tx) -} - -func (tx Tx) NewInsert() *InsertQuery { - return NewInsertQuery(tx.db).Conn(tx) -} - -func (tx Tx) NewUpdate() *UpdateQuery { - return NewUpdateQuery(tx.db).Conn(tx) -} - -func (tx Tx) NewDelete() *DeleteQuery { - return NewDeleteQuery(tx.db).Conn(tx) -} - -func (tx Tx) NewRaw(query string, args ...interface{}) *RawQuery { - return NewRawQuery(tx.db, query, args...).Conn(tx) -} - -func (tx Tx) NewCreateTable() *CreateTableQuery { - return NewCreateTableQuery(tx.db).Conn(tx) -} - -func (tx Tx) NewDropTable() *DropTableQuery { - return NewDropTableQuery(tx.db).Conn(tx) -} - -func (tx Tx) NewCreateIndex() *CreateIndexQuery { - return NewCreateIndexQuery(tx.db).Conn(tx) -} - -func (tx Tx) NewDropIndex() *DropIndexQuery { 
- return NewDropIndexQuery(tx.db).Conn(tx) -} - -func (tx Tx) NewTruncateTable() *TruncateTableQuery { - return NewTruncateTableQuery(tx.db).Conn(tx) -} - -func (tx Tx) NewAddColumn() *AddColumnQuery { - return NewAddColumnQuery(tx.db).Conn(tx) -} - -func (tx Tx) NewDropColumn() *DropColumnQuery { - return NewDropColumnQuery(tx.db).Conn(tx) -} - -//------------------------------------------------------------------------------ - -func (db *DB) makeQueryBytes() []byte { - // TODO: make this configurable? - return make([]byte, 0, 4096) -} diff --git a/vendor/github.com/uptrace/bun/dialect/append.go b/vendor/github.com/uptrace/bun/dialect/append.go deleted file mode 100644 index 0a25ee22..00000000 --- a/vendor/github.com/uptrace/bun/dialect/append.go +++ /dev/null @@ -1,88 +0,0 @@ -package dialect - -import ( - "math" - "strconv" - - "github.com/uptrace/bun/internal" -) - -func AppendError(b []byte, err error) []byte { - b = append(b, "?!("...) - b = append(b, err.Error()...) - b = append(b, ')') - return b -} - -func AppendNull(b []byte) []byte { - return append(b, "NULL"...) -} - -func AppendBool(b []byte, v bool) []byte { - if v { - return append(b, "TRUE"...) - } - return append(b, "FALSE"...) -} - -func AppendFloat32(b []byte, v float32) []byte { - return appendFloat(b, float64(v), 32) -} - -func AppendFloat64(b []byte, v float64) []byte { - return appendFloat(b, v, 64) -} - -func appendFloat(b []byte, v float64, bitSize int) []byte { - switch { - case math.IsNaN(v): - return append(b, "'NaN'"...) - case math.IsInf(v, 1): - return append(b, "'Infinity'"...) - case math.IsInf(v, -1): - return append(b, "'-Infinity'"...) 
- default: - return strconv.AppendFloat(b, v, 'f', -1, bitSize) - } -} - -//------------------------------------------------------------------------------ - -func AppendIdent(b []byte, field string, quote byte) []byte { - return appendIdent(b, internal.Bytes(field), quote) -} - -func appendIdent(b, src []byte, quote byte) []byte { - var quoted bool -loop: - for _, c := range src { - switch c { - case '*': - if !quoted { - b = append(b, '*') - continue loop - } - case '.': - if quoted { - b = append(b, quote) - quoted = false - } - b = append(b, '.') - continue loop - } - - if !quoted { - b = append(b, quote) - quoted = true - } - if c == quote { - b = append(b, quote, quote) - } else { - b = append(b, c) - } - } - if quoted { - b = append(b, quote) - } - return b -} diff --git a/vendor/github.com/uptrace/bun/dialect/dialect.go b/vendor/github.com/uptrace/bun/dialect/dialect.go deleted file mode 100644 index 03b81fbb..00000000 --- a/vendor/github.com/uptrace/bun/dialect/dialect.go +++ /dev/null @@ -1,26 +0,0 @@ -package dialect - -type Name int - -func (n Name) String() string { - switch n { - case PG: - return "pg" - case SQLite: - return "sqlite" - case MySQL: - return "mysql" - case MSSQL: - return "mssql" - default: - return "invalid" - } -} - -const ( - Invalid Name = iota - PG - SQLite - MySQL - MSSQL -) diff --git a/vendor/github.com/uptrace/bun/dialect/feature/feature.go b/vendor/github.com/uptrace/bun/dialect/feature/feature.go deleted file mode 100644 index e311394d..00000000 --- a/vendor/github.com/uptrace/bun/dialect/feature/feature.go +++ /dev/null @@ -1,35 +0,0 @@ -package feature - -import "github.com/uptrace/bun/internal" - -type Feature = internal.Flag - -const ( - CTE Feature = 1 << iota - WithValues - Returning - InsertReturning - Output // mssql - DefaultPlaceholder - DoubleColonCast - ValuesRow - UpdateMultiTable - InsertTableAlias - UpdateTableAlias - DeleteTableAlias - AutoIncrement - Identity - TableCascade - TableIdentity - TableTruncate - 
InsertOnConflict // INSERT ... ON CONFLICT - InsertOnDuplicateKey // INSERT ... ON DUPLICATE KEY - InsertIgnore // INSERT IGNORE ... - TableNotExists - OffsetFetch - SelectExists - UpdateFromTable - MSSavepoint - GeneratedIdentity - CompositeIn // ... WHERE (A,B) IN ((N, NN), (N, NN)...) -) diff --git a/vendor/github.com/uptrace/bun/dialect/mysqldialect/LICENSE b/vendor/github.com/uptrace/bun/dialect/mysqldialect/LICENSE deleted file mode 100644 index 7ec81810..00000000 --- a/vendor/github.com/uptrace/bun/dialect/mysqldialect/LICENSE +++ /dev/null @@ -1,24 +0,0 @@ -Copyright (c) 2021 Vladimir Mihailenco. All rights reserved. - -Redistribution and use in source and binary forms, with or without -modification, are permitted provided that the following conditions are -met: - - * Redistributions of source code must retain the above copyright -notice, this list of conditions and the following disclaimer. - * Redistributions in binary form must reproduce the above -copyright notice, this list of conditions and the following disclaimer -in the documentation and/or other materials provided with the -distribution. - -THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS -"AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT -LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR -A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT -OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, -SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT -LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, -DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY -THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT -(INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE -OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. 
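An aside on the nested-transaction support deleted above: `Tx.BeginTx` emulates nesting with savepoints and derives the savepoint name from random bytes rather than a counter, so concurrent savepoints on the same `*sql.Tx` cannot collide. A minimal, self-contained sketch of that naming scheme (the `savepointName` helper is hypothetical, extracted here purely for illustration):

```go
package main

import (
	"crypto/rand"
	"encoding/hex"
	"fmt"
)

// savepointName mirrors the scheme in Tx.BeginTx above: 14 random bytes,
// hex-encoded, prefixed with "SP_". MSSQL limits savepoint names to
// 32 characters, and 3 + 2*14 = 31 stays within that limit.
func savepointName() (string, error) {
	sp := make([]byte, 14)
	if _, err := rand.Read(sp); err != nil {
		return "", err
	}
	return "SP_" + hex.EncodeToString(sp), nil
}

func main() {
	name, err := savepointName()
	if err != nil {
		panic(err)
	}
	fmt.Println(len(name)) // 31
	fmt.Println(name[:3])  // SP_
}
```

Three prefix characters plus 28 hex digits gives 31 characters, which is why the comment in `BeginTx` notes the MSSQL 32-character savepoint name limit; the resulting name is then used verbatim in `SAVEPOINT`/`SAVE TRANSACTION` and the matching `RELEASE`/`ROLLBACK TO` statements.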
diff --git a/vendor/github.com/uptrace/bun/dialect/mysqldialect/dialect.go b/vendor/github.com/uptrace/bun/dialect/mysqldialect/dialect.go deleted file mode 100644 index 9e9032e2..00000000 --- a/vendor/github.com/uptrace/bun/dialect/mysqldialect/dialect.go +++ /dev/null @@ -1,184 +0,0 @@ -package mysqldialect - -import ( - "database/sql" - "encoding/hex" - "fmt" - "log" - "strings" - "time" - "unicode/utf8" - - "golang.org/x/mod/semver" - - "github.com/uptrace/bun" - "github.com/uptrace/bun/dialect" - "github.com/uptrace/bun/dialect/feature" - "github.com/uptrace/bun/dialect/sqltype" - "github.com/uptrace/bun/schema" -) - -const datetimeType = "DATETIME" - -func init() { - if Version() != bun.Version() { - panic(fmt.Errorf("mysqldialect and Bun must have the same version: v%s != v%s", - Version(), bun.Version())) - } -} - -type Dialect struct { - schema.BaseDialect - - tables *schema.Tables - features feature.Feature -} - -func New() *Dialect { - d := new(Dialect) - d.tables = schema.NewTables(d) - d.features = feature.AutoIncrement | - feature.DefaultPlaceholder | - feature.UpdateMultiTable | - feature.ValuesRow | - feature.TableTruncate | - feature.TableNotExists | - feature.InsertIgnore | - feature.InsertOnDuplicateKey | - feature.SelectExists - return d -} - -func (d *Dialect) Init(db *sql.DB) { - var version string - if err := db.QueryRow("SELECT version()").Scan(&version); err != nil { - log.Printf("can't discover MySQL version: %s", err) - return - } - - if strings.Contains(version, "MariaDB") { - version = semver.MajorMinor("v" + cleanupVersion(version)) - if semver.Compare(version, "v10.5.0") >= 0 { - d.features |= feature.InsertReturning - } - return - } - - version = semver.MajorMinor("v" + cleanupVersion(version)) - if semver.Compare(version, "v8.0") >= 0 { - d.features |= feature.CTE | feature.WithValues | feature.DeleteTableAlias - } -} - -func cleanupVersion(s string) string { - if i := strings.IndexByte(s, '-'); i >= 0 { - return s[:i] - } - return 
s -} - -func (d *Dialect) Name() dialect.Name { - return dialect.MySQL -} - -func (d *Dialect) Features() feature.Feature { - return d.features -} - -func (d *Dialect) Tables() *schema.Tables { - return d.tables -} - -func (d *Dialect) OnTable(table *schema.Table) { - for _, field := range table.FieldMap { - field.DiscoveredSQLType = sqlType(field) - } -} - -func (d *Dialect) IdentQuote() byte { - return '`' -} - -func (*Dialect) AppendTime(b []byte, tm time.Time) []byte { - b = append(b, '\'') - b = tm.AppendFormat(b, "2006-01-02 15:04:05.999999") - b = append(b, '\'') - return b -} - -func (*Dialect) AppendString(b []byte, s string) []byte { - b = append(b, '\'') -loop: - for _, r := range s { - switch r { - case '\000': - continue loop - case '\'': - b = append(b, "''"...) - continue loop - case '\\': - b = append(b, '\\', '\\') - continue loop - } - - if r < utf8.RuneSelf { - b = append(b, byte(r)) - continue - } - - l := len(b) - if cap(b)-l < utf8.UTFMax { - b = append(b, make([]byte, utf8.UTFMax)...) - } - n := utf8.EncodeRune(b[l:l+utf8.UTFMax], r) - b = b[:l+n] - } - b = append(b, '\'') - return b -} - -func (*Dialect) AppendBytes(b []byte, bs []byte) []byte { - if bs == nil { - return dialect.AppendNull(b) - } - - b = append(b, `X'`...) - - s := len(b) - b = append(b, make([]byte, hex.EncodedLen(len(bs)))...) - hex.Encode(b[s:], bs) - - b = append(b, '\'') - - return b -} - -func (*Dialect) AppendJSON(b, jsonb []byte) []byte { - b = append(b, '\'') - - for _, c := range jsonb { - switch c { - case '\'': - b = append(b, "''"...) - case '\\': - b = append(b, `\\`...) 
- default: - b = append(b, c) - } - } - - b = append(b, '\'') - - return b -} - -func (d *Dialect) DefaultVarcharLen() int { - return 255 -} - -func sqlType(field *schema.Field) string { - if field.DiscoveredSQLType == sqltype.Timestamp { - return datetimeType - } - return field.DiscoveredSQLType -} diff --git a/vendor/github.com/uptrace/bun/dialect/mysqldialect/scan.go b/vendor/github.com/uptrace/bun/dialect/mysqldialect/scan.go deleted file mode 100644 index 7a6af7f4..00000000 --- a/vendor/github.com/uptrace/bun/dialect/mysqldialect/scan.go +++ /dev/null @@ -1,11 +0,0 @@ -package mysqldialect - -import ( - "reflect" - - "github.com/uptrace/bun/schema" -) - -func scanner(typ reflect.Type) schema.ScannerFunc { - return schema.Scanner(typ) -} diff --git a/vendor/github.com/uptrace/bun/dialect/mysqldialect/version.go b/vendor/github.com/uptrace/bun/dialect/mysqldialect/version.go deleted file mode 100644 index e8181baa..00000000 --- a/vendor/github.com/uptrace/bun/dialect/mysqldialect/version.go +++ /dev/null @@ -1,6 +0,0 @@ -package mysqldialect - -// Version is the current release version. -func Version() string { - return "1.1.12" -} diff --git a/vendor/github.com/uptrace/bun/dialect/pgdialect/LICENSE b/vendor/github.com/uptrace/bun/dialect/pgdialect/LICENSE deleted file mode 100644 index 7ec81810..00000000 --- a/vendor/github.com/uptrace/bun/dialect/pgdialect/LICENSE +++ /dev/null @@ -1,24 +0,0 @@ -Copyright (c) 2021 Vladimir Mihailenco. All rights reserved. - -Redistribution and use in source and binary forms, with or without -modification, are permitted provided that the following conditions are -met: - - * Redistributions of source code must retain the above copyright -notice, this list of conditions and the following disclaimer. - * Redistributions in binary form must reproduce the above -copyright notice, this list of conditions and the following disclaimer -in the documentation and/or other materials provided with the -distribution. 
- -THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS -"AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT -LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR -A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT -OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, -SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT -LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, -DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY -THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT -(INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE -OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. diff --git a/vendor/github.com/uptrace/bun/dialect/pgdialect/append.go b/vendor/github.com/uptrace/bun/dialect/pgdialect/append.go deleted file mode 100644 index 75798b38..00000000 --- a/vendor/github.com/uptrace/bun/dialect/pgdialect/append.go +++ /dev/null @@ -1,395 +0,0 @@ -package pgdialect - -import ( - "database/sql/driver" - "encoding/hex" - "fmt" - "reflect" - "strconv" - "time" - "unicode/utf8" - - "github.com/uptrace/bun/dialect" - "github.com/uptrace/bun/schema" -) - -var ( - driverValuerType = reflect.TypeOf((*driver.Valuer)(nil)).Elem() - - stringType = reflect.TypeOf((*string)(nil)).Elem() - sliceStringType = reflect.TypeOf([]string(nil)) - - intType = reflect.TypeOf((*int)(nil)).Elem() - sliceIntType = reflect.TypeOf([]int(nil)) - - int64Type = reflect.TypeOf((*int64)(nil)).Elem() - sliceInt64Type = reflect.TypeOf([]int64(nil)) - - float64Type = reflect.TypeOf((*float64)(nil)).Elem() - sliceFloat64Type = reflect.TypeOf([]float64(nil)) - - timeType = reflect.TypeOf((*time.Time)(nil)).Elem() - sliceTimeType = reflect.TypeOf([]time.Time(nil)) -) - -func arrayAppend(fmter schema.Formatter, b []byte, v interface{}) []byte { - switch v := v.(type) { - case int64: - return strconv.AppendInt(b, v, 10) - 
case float64: - return dialect.AppendFloat64(b, v) - case bool: - return dialect.AppendBool(b, v) - case []byte: - return arrayAppendBytes(b, v) - case string: - return arrayAppendString(b, v) - case time.Time: - return fmter.Dialect().AppendTime(b, v) - default: - err := fmt.Errorf("pgdialect: can't append %T", v) - return dialect.AppendError(b, err) - } -} - -func arrayAppendStringValue(fmter schema.Formatter, b []byte, v reflect.Value) []byte { - return arrayAppendString(b, v.String()) -} - -func arrayAppendBytesValue(fmter schema.Formatter, b []byte, v reflect.Value) []byte { - return arrayAppendBytes(b, v.Bytes()) -} - -func arrayAppendDriverValue(fmter schema.Formatter, b []byte, v reflect.Value) []byte { - iface, err := v.Interface().(driver.Valuer).Value() - if err != nil { - return dialect.AppendError(b, err) - } - return arrayAppend(fmter, b, iface) -} - -//------------------------------------------------------------------------------ - -func (d *Dialect) arrayAppender(typ reflect.Type) schema.AppenderFunc { - kind := typ.Kind() - - switch kind { - case reflect.Ptr: - if fn := d.arrayAppender(typ.Elem()); fn != nil { - return schema.PtrAppender(fn) - } - case reflect.Slice, reflect.Array: - // ok: - default: - return nil - } - - elemType := typ.Elem() - - if kind == reflect.Slice { - switch elemType { - case stringType: - return appendStringSliceValue - case intType: - return appendIntSliceValue - case int64Type: - return appendInt64SliceValue - case float64Type: - return appendFloat64SliceValue - case timeType: - return appendTimeSliceValue - } - } - - appendElem := d.arrayElemAppender(elemType) - if appendElem == nil { - panic(fmt.Errorf("pgdialect: %s is not supported", typ)) - } - - return func(fmter schema.Formatter, b []byte, v reflect.Value) []byte { - kind := v.Kind() - switch kind { - case reflect.Ptr, reflect.Slice: - if v.IsNil() { - return dialect.AppendNull(b) - } - } - - if kind == reflect.Ptr { - v = v.Elem() - } - - b = append(b, '\'') - - 
b = append(b, '{') - for i := 0; i < v.Len(); i++ { - elem := v.Index(i) - b = appendElem(fmter, b, elem) - b = append(b, ',') - } - if v.Len() > 0 { - b[len(b)-1] = '}' // Replace trailing comma. - } else { - b = append(b, '}') - } - - b = append(b, '\'') - - return b - } -} - -func (d *Dialect) arrayElemAppender(typ reflect.Type) schema.AppenderFunc { - if typ.Implements(driverValuerType) { - return arrayAppendDriverValue - } - switch typ.Kind() { - case reflect.String: - return arrayAppendStringValue - case reflect.Slice: - if typ.Elem().Kind() == reflect.Uint8 { - return arrayAppendBytesValue - } - } - return schema.Appender(d, typ) -} - -func appendStringSliceValue(fmter schema.Formatter, b []byte, v reflect.Value) []byte { - ss := v.Convert(sliceStringType).Interface().([]string) - return appendStringSlice(b, ss) -} - -func appendStringSlice(b []byte, ss []string) []byte { - if ss == nil { - return dialect.AppendNull(b) - } - - b = append(b, '\'') - - b = append(b, '{') - for _, s := range ss { - b = arrayAppendString(b, s) - b = append(b, ',') - } - if len(ss) > 0 { - b[len(b)-1] = '}' // Replace trailing comma. - } else { - b = append(b, '}') - } - - b = append(b, '\'') - - return b -} - -func appendIntSliceValue(fmter schema.Formatter, b []byte, v reflect.Value) []byte { - ints := v.Convert(sliceIntType).Interface().([]int) - return appendIntSlice(b, ints) -} - -func appendIntSlice(b []byte, ints []int) []byte { - if ints == nil { - return dialect.AppendNull(b) - } - - b = append(b, '\'') - - b = append(b, '{') - for _, n := range ints { - b = strconv.AppendInt(b, int64(n), 10) - b = append(b, ',') - } - if len(ints) > 0 { - b[len(b)-1] = '}' // Replace trailing comma. 
- } else { - b = append(b, '}') - } - - b = append(b, '\'') - - return b -} - -func appendInt64SliceValue(fmter schema.Formatter, b []byte, v reflect.Value) []byte { - ints := v.Convert(sliceInt64Type).Interface().([]int64) - return appendInt64Slice(b, ints) -} - -func appendInt64Slice(b []byte, ints []int64) []byte { - if ints == nil { - return dialect.AppendNull(b) - } - - b = append(b, '\'') - - b = append(b, '{') - for _, n := range ints { - b = strconv.AppendInt(b, n, 10) - b = append(b, ',') - } - if len(ints) > 0 { - b[len(b)-1] = '}' // Replace trailing comma. - } else { - b = append(b, '}') - } - - b = append(b, '\'') - - return b -} - -func appendFloat64SliceValue(fmter schema.Formatter, b []byte, v reflect.Value) []byte { - floats := v.Convert(sliceFloat64Type).Interface().([]float64) - return appendFloat64Slice(b, floats) -} - -func appendFloat64Slice(b []byte, floats []float64) []byte { - if floats == nil { - return dialect.AppendNull(b) - } - - b = append(b, '\'') - - b = append(b, '{') - for _, n := range floats { - b = dialect.AppendFloat64(b, n) - b = append(b, ',') - } - if len(floats) > 0 { - b[len(b)-1] = '}' // Replace trailing comma. - } else { - b = append(b, '}') - } - - b = append(b, '\'') - - return b -} - -//------------------------------------------------------------------------------ - -func arrayAppendBytes(b []byte, bs []byte) []byte { - if bs == nil { - return dialect.AppendNull(b) - } - - b = append(b, `"\\x`...) - - s := len(b) - b = append(b, make([]byte, hex.EncodedLen(len(bs)))...) - hex.Encode(b[s:], bs) - - b = append(b, '"') - - return b -} - -func arrayAppendString(b []byte, s string) []byte { - b = append(b, '"') - for _, r := range s { - switch r { - case 0: - // ignore - case '\'': - b = append(b, "''"...) 
- case '"': - b = append(b, '\\', '"') - case '\\': - b = append(b, '\\', '\\') - default: - if r < utf8.RuneSelf { - b = append(b, byte(r)) - break - } - l := len(b) - if cap(b)-l < utf8.UTFMax { - b = append(b, make([]byte, utf8.UTFMax)...) - } - n := utf8.EncodeRune(b[l:l+utf8.UTFMax], r) - b = b[:l+n] - } - } - b = append(b, '"') - return b -} - -func appendTimeSliceValue(fmter schema.Formatter, b []byte, v reflect.Value) []byte { - ts := v.Convert(sliceTimeType).Interface().([]time.Time) - return appendTimeSlice(fmter, b, ts) -} - -func appendTimeSlice(fmter schema.Formatter, b []byte, ts []time.Time) []byte { - if ts == nil { - return dialect.AppendNull(b) - } - b = append(b, '\'') - b = append(b, '{') - for _, t := range ts { - b = append(b, '"') - b = t.UTC().AppendFormat(b, "2006-01-02 15:04:05.999999-07:00") - b = append(b, '"') - b = append(b, ',') - } - if len(ts) > 0 { - b[len(b)-1] = '}' // Replace trailing comma. - } else { - b = append(b, '}') - } - b = append(b, '\'') - return b -} - -//------------------------------------------------------------------------------ - -var mapStringStringType = reflect.TypeOf(map[string]string(nil)) - -func (d *Dialect) hstoreAppender(typ reflect.Type) schema.AppenderFunc { - kind := typ.Kind() - - switch kind { - case reflect.Ptr: - if fn := d.hstoreAppender(typ.Elem()); fn != nil { - return schema.PtrAppender(fn) - } - case reflect.Map: - // ok: - default: - return nil - } - - if typ.Key() == stringType && typ.Elem() == stringType { - return appendMapStringStringValue - } - - return func(fmter schema.Formatter, b []byte, v reflect.Value) []byte { - err := fmt.Errorf("bun: Hstore(unsupported %s)", v.Type()) - return dialect.AppendError(b, err) - } -} - -func appendMapStringString(b []byte, m map[string]string) []byte { - if m == nil { - return dialect.AppendNull(b) - } - - b = append(b, '\'') - - for key, value := range m { - b = arrayAppendString(b, key) - b = append(b, '=', '>') - b = arrayAppendString(b, value) - 
b = append(b, ',') - } - if len(m) > 0 { - b = b[:len(b)-1] // Strip trailing comma. - } - - b = append(b, '\'') - - return b -} - -func appendMapStringStringValue(fmter schema.Formatter, b []byte, v reflect.Value) []byte { - m := v.Convert(mapStringStringType).Interface().(map[string]string) - return appendMapStringString(b, m) -} diff --git a/vendor/github.com/uptrace/bun/dialect/pgdialect/array.go b/vendor/github.com/uptrace/bun/dialect/pgdialect/array.go deleted file mode 100644 index 281cff73..00000000 --- a/vendor/github.com/uptrace/bun/dialect/pgdialect/array.go +++ /dev/null @@ -1,65 +0,0 @@ -package pgdialect - -import ( - "database/sql" - "fmt" - "reflect" - - "github.com/uptrace/bun/schema" -) - -type ArrayValue struct { - v reflect.Value - - append schema.AppenderFunc - scan schema.ScannerFunc -} - -// Array accepts a slice and returns a wrapper for working with PostgreSQL -// array data type. -// -// For struct fields you can use array tag: -// -// Emails []string `bun:",array"` -func Array(vi interface{}) *ArrayValue { - v := reflect.ValueOf(vi) - if !v.IsValid() { - panic(fmt.Errorf("bun: Array(nil)")) - } - - return &ArrayValue{ - v: v, - - append: pgDialect.arrayAppender(v.Type()), - scan: arrayScanner(v.Type()), - } -} - -var ( - _ schema.QueryAppender = (*ArrayValue)(nil) - _ sql.Scanner = (*ArrayValue)(nil) -) - -func (a *ArrayValue) AppendQuery(fmter schema.Formatter, b []byte) ([]byte, error) { - if a.append == nil { - panic(fmt.Errorf("bun: Array(unsupported %s)", a.v.Type())) - } - return a.append(fmter, b, a.v), nil -} - -func (a *ArrayValue) Scan(src interface{}) error { - if a.scan == nil { - return fmt.Errorf("bun: Array(unsupported %s)", a.v.Type()) - } - if a.v.Kind() != reflect.Ptr { - return fmt.Errorf("bun: Array(non-pointer %s)", a.v.Type()) - } - return a.scan(a.v, src) -} - -func (a *ArrayValue) Value() interface{} { - if a.v.IsValid() { - return a.v.Interface() - } - return nil -} diff --git 
a/vendor/github.com/uptrace/bun/dialect/pgdialect/array_parser.go b/vendor/github.com/uptrace/bun/dialect/pgdialect/array_parser.go deleted file mode 100644 index a8358337..00000000 --- a/vendor/github.com/uptrace/bun/dialect/pgdialect/array_parser.go +++ /dev/null @@ -1,133 +0,0 @@ -package pgdialect - -import ( - "bytes" - "encoding/hex" - "fmt" - "io" -) - -type arrayParser struct { - *streamParser - err error -} - -func newArrayParser(b []byte) *arrayParser { - p := &arrayParser{ - streamParser: newStreamParser(b, 1), - } - if len(b) < 2 || b[0] != '{' || b[len(b)-1] != '}' { - p.err = fmt.Errorf("bun: can't parse array: %q", b) - } - return p -} - -func (p *arrayParser) NextElem() ([]byte, error) { - if p.err != nil { - return nil, p.err - } - - c, err := p.readByte() - if err != nil { - return nil, err - } - - switch c { - case '}': - return nil, io.EOF - case '"': - b, err := p.readSubstring() - if err != nil { - return nil, err - } - - if p.peek() == ',' { - p.skipNext() - } - - return b, nil - default: - b := p.readSimple() - if bytes.Equal(b, []byte("NULL")) { - b = nil - } - - if p.peek() == ',' { - p.skipNext() - } - - return b, nil - } -} - -func (p *arrayParser) readSimple() []byte { - p.unreadByte() - - if i := bytes.IndexByte(p.b[p.i:], ','); i >= 0 { - b := p.b[p.i : p.i+i] - p.i += i - return b - } - - b := p.b[p.i : len(p.b)-1] - p.i = len(p.b) - 1 - return b -} - -func (p *arrayParser) readSubstring() ([]byte, error) { - c, err := p.readByte() - if err != nil { - return nil, err - } - - p.buf = p.buf[:0] - for { - if c == '"' { - break - } - - next, err := p.readByte() - if err != nil { - return nil, err - } - - if c == '\\' { - switch next { - case '\\', '"': - p.buf = append(p.buf, next) - - c, err = p.readByte() - if err != nil { - return nil, err - } - default: - p.buf = append(p.buf, '\\') - c = next - } - continue - } - if c == '\'' && next == '\'' { - p.buf = append(p.buf, next) - c, err = p.readByte() - if err != nil { - return nil, err 
- } - continue - } - - p.buf = append(p.buf, c) - c = next - } - - if bytes.HasPrefix(p.buf, []byte("\\x")) && len(p.buf)%2 == 0 { - data := p.buf[2:] - buf := make([]byte, hex.DecodedLen(len(data))) - n, err := hex.Decode(buf, data) - if err != nil { - return nil, err - } - return buf[:n], nil - } - - return p.buf, nil -} diff --git a/vendor/github.com/uptrace/bun/dialect/pgdialect/array_scan.go b/vendor/github.com/uptrace/bun/dialect/pgdialect/array_scan.go deleted file mode 100644 index a8ff2971..00000000 --- a/vendor/github.com/uptrace/bun/dialect/pgdialect/array_scan.go +++ /dev/null @@ -1,302 +0,0 @@ -package pgdialect - -import ( - "fmt" - "io" - "reflect" - "strconv" - - "github.com/uptrace/bun/internal" - "github.com/uptrace/bun/schema" -) - -func arrayScanner(typ reflect.Type) schema.ScannerFunc { - kind := typ.Kind() - - switch kind { - case reflect.Ptr: - if fn := arrayScanner(typ.Elem()); fn != nil { - return schema.PtrScanner(fn) - } - case reflect.Slice, reflect.Array: - // ok: - default: - return nil - } - - elemType := typ.Elem() - - if kind == reflect.Slice { - switch elemType { - case stringType: - return scanStringSliceValue - case intType: - return scanIntSliceValue - case int64Type: - return scanInt64SliceValue - case float64Type: - return scanFloat64SliceValue - } - } - - scanElem := schema.Scanner(elemType) - return func(dest reflect.Value, src interface{}) error { - dest = reflect.Indirect(dest) - if !dest.CanSet() { - return fmt.Errorf("bun: Scan(non-settable %s)", dest.Type()) - } - - kind := dest.Kind() - - if src == nil { - if kind != reflect.Slice || !dest.IsNil() { - dest.Set(reflect.Zero(dest.Type())) - } - return nil - } - - if kind == reflect.Slice { - if dest.IsNil() { - dest.Set(reflect.MakeSlice(dest.Type(), 0, 0)) - } else if dest.Len() > 0 { - dest.Set(dest.Slice(0, 0)) - } - } - - b, err := toBytes(src) - if err != nil { - return err - } - - p := newArrayParser(b) - nextValue := internal.MakeSliceNextElemFunc(dest) - for { - 
elem, err := p.NextElem() - if err != nil { - if err == io.EOF { - break - } - return err - } - - elemValue := nextValue() - if err := scanElem(elemValue, elem); err != nil { - return err - } - } - - return nil - } -} - -func scanStringSliceValue(dest reflect.Value, src interface{}) error { - dest = reflect.Indirect(dest) - if !dest.CanSet() { - return fmt.Errorf("bun: Scan(non-settable %s)", dest.Type()) - } - - slice, err := decodeStringSlice(src) - if err != nil { - return err - } - - dest.Set(reflect.ValueOf(slice)) - return nil -} - -func decodeStringSlice(src interface{}) ([]string, error) { - if src == nil { - return nil, nil - } - - b, err := toBytes(src) - if err != nil { - return nil, err - } - - slice := make([]string, 0) - - p := newArrayParser(b) - for { - elem, err := p.NextElem() - if err != nil { - if err == io.EOF { - break - } - return nil, err - } - slice = append(slice, string(elem)) - } - - return slice, nil -} - -func scanIntSliceValue(dest reflect.Value, src interface{}) error { - dest = reflect.Indirect(dest) - if !dest.CanSet() { - return fmt.Errorf("bun: Scan(non-settable %s)", dest.Type()) - } - - slice, err := decodeIntSlice(src) - if err != nil { - return err - } - - dest.Set(reflect.ValueOf(slice)) - return nil -} - -func decodeIntSlice(src interface{}) ([]int, error) { - if src == nil { - return nil, nil - } - - b, err := toBytes(src) - if err != nil { - return nil, err - } - - slice := make([]int, 0) - - p := newArrayParser(b) - for { - elem, err := p.NextElem() - if err != nil { - if err == io.EOF { - break - } - return nil, err - } - - if elem == nil { - slice = append(slice, 0) - continue - } - - n, err := strconv.Atoi(bytesToString(elem)) - if err != nil { - return nil, err - } - - slice = append(slice, n) - } - - return slice, nil -} - -func scanInt64SliceValue(dest reflect.Value, src interface{}) error { - dest = reflect.Indirect(dest) - if !dest.CanSet() { - return fmt.Errorf("bun: Scan(non-settable %s)", dest.Type()) - } - - 
slice, err := decodeInt64Slice(src) - if err != nil { - return err - } - - dest.Set(reflect.ValueOf(slice)) - return nil -} - -func decodeInt64Slice(src interface{}) ([]int64, error) { - if src == nil { - return nil, nil - } - - b, err := toBytes(src) - if err != nil { - return nil, err - } - - slice := make([]int64, 0) - - p := newArrayParser(b) - for { - elem, err := p.NextElem() - if err != nil { - if err == io.EOF { - break - } - return nil, err - } - - if elem == nil { - slice = append(slice, 0) - continue - } - - n, err := strconv.ParseInt(bytesToString(elem), 10, 64) - if err != nil { - return nil, err - } - - slice = append(slice, n) - } - - return slice, nil -} - -func scanFloat64SliceValue(dest reflect.Value, src interface{}) error { - dest = reflect.Indirect(dest) - if !dest.CanSet() { - return fmt.Errorf("bun: Scan(non-settable %s)", dest.Type()) - } - - slice, err := scanFloat64Slice(src) - if err != nil { - return err - } - - dest.Set(reflect.ValueOf(slice)) - return nil -} - -func scanFloat64Slice(src interface{}) ([]float64, error) { - if src == nil { - return nil, nil - } - - b, err := toBytes(src) - if err != nil { - return nil, err - } - - slice := make([]float64, 0) - - p := newArrayParser(b) - for { - elem, err := p.NextElem() - if err != nil { - if err == io.EOF { - break - } - return nil, err - } - - if elem == nil { - slice = append(slice, 0) - continue - } - - n, err := strconv.ParseFloat(bytesToString(elem), 64) - if err != nil { - return nil, err - } - - slice = append(slice, n) - } - - return slice, nil -} - -func toBytes(src interface{}) ([]byte, error) { - switch src := src.(type) { - case string: - return stringToBytes(src), nil - case []byte: - return src, nil - default: - return nil, fmt.Errorf("bun: got %T, wanted []byte or string", src) - } -} diff --git a/vendor/github.com/uptrace/bun/dialect/pgdialect/dialect.go b/vendor/github.com/uptrace/bun/dialect/pgdialect/dialect.go deleted file mode 100644 index 6ff85e16..00000000 ---
a/vendor/github.com/uptrace/bun/dialect/pgdialect/dialect.go +++ /dev/null @@ -1,110 +0,0 @@ -package pgdialect - -import ( - "database/sql" - "fmt" - "strconv" - "strings" - - "github.com/uptrace/bun" - "github.com/uptrace/bun/dialect" - "github.com/uptrace/bun/dialect/feature" - "github.com/uptrace/bun/dialect/sqltype" - "github.com/uptrace/bun/schema" -) - -var pgDialect = New() - -func init() { - if Version() != bun.Version() { - panic(fmt.Errorf("pgdialect and Bun must have the same version: v%s != v%s", - Version(), bun.Version())) - } -} - -type Dialect struct { - schema.BaseDialect - - tables *schema.Tables - features feature.Feature -} - -func New() *Dialect { - d := new(Dialect) - d.tables = schema.NewTables(d) - d.features = feature.CTE | - feature.WithValues | - feature.Returning | - feature.InsertReturning | - feature.DefaultPlaceholder | - feature.DoubleColonCast | - feature.InsertTableAlias | - feature.UpdateTableAlias | - feature.DeleteTableAlias | - feature.TableCascade | - feature.TableIdentity | - feature.TableTruncate | - feature.TableNotExists | - feature.InsertOnConflict | - feature.SelectExists | - feature.GeneratedIdentity | - feature.CompositeIn - return d -} - -func (d *Dialect) Init(*sql.DB) {} - -func (d *Dialect) Name() dialect.Name { - return dialect.PG -} - -func (d *Dialect) Features() feature.Feature { - return d.features -} - -func (d *Dialect) Tables() *schema.Tables { - return d.tables -} - -func (d *Dialect) OnTable(table *schema.Table) { - for _, field := range table.FieldMap { - d.onField(field) - } -} - -func (d *Dialect) onField(field *schema.Field) { - field.DiscoveredSQLType = fieldSQLType(field) - - if field.AutoIncrement && !field.Identity { - switch field.DiscoveredSQLType { - case sqltype.SmallInt: - field.CreateTableSQLType = pgTypeSmallSerial - case sqltype.Integer: - field.CreateTableSQLType = pgTypeSerial - case sqltype.BigInt: - field.CreateTableSQLType = pgTypeBigSerial - } - } - - if field.Tag.HasOption("array") 
|| strings.HasSuffix(field.UserSQLType, "[]") { - field.Append = d.arrayAppender(field.StructField.Type) - field.Scan = arrayScanner(field.StructField.Type) - } - - if field.DiscoveredSQLType == sqltype.HSTORE { - field.Append = d.hstoreAppender(field.StructField.Type) - field.Scan = hstoreScanner(field.StructField.Type) - } -} - -func (d *Dialect) IdentQuote() byte { - return '"' -} - -func (d *Dialect) AppendUint32(b []byte, n uint32) []byte { - return strconv.AppendInt(b, int64(int32(n)), 10) -} - -func (d *Dialect) AppendUint64(b []byte, n uint64) []byte { - return strconv.AppendInt(b, int64(n), 10) -} diff --git a/vendor/github.com/uptrace/bun/dialect/pgdialect/hstore.go b/vendor/github.com/uptrace/bun/dialect/pgdialect/hstore.go deleted file mode 100644 index 029f7cb6..00000000 --- a/vendor/github.com/uptrace/bun/dialect/pgdialect/hstore.go +++ /dev/null @@ -1,73 +0,0 @@ -package pgdialect - -import ( - "database/sql" - "fmt" - "reflect" - - "github.com/uptrace/bun/schema" -) - -type HStoreValue struct { - v reflect.Value - - append schema.AppenderFunc - scan schema.ScannerFunc -} - -// HStore accepts a map[string]string and returns a wrapper for working with PostgreSQL -// hstore data type. 
-// -// For struct fields you can use hstore tag: -// -// Attrs map[string]string `bun:",hstore"` -func HStore(vi interface{}) *HStoreValue { - v := reflect.ValueOf(vi) - if !v.IsValid() { - panic(fmt.Errorf("bun: HStore(nil)")) - } - - typ := v.Type() - if typ.Kind() == reflect.Ptr { - typ = typ.Elem() - } - if typ.Kind() != reflect.Map { - panic(fmt.Errorf("bun: Hstore(unsupported %s)", typ)) - } - - return &HStoreValue{ - v: v, - - append: pgDialect.hstoreAppender(v.Type()), - scan: hstoreScanner(v.Type()), - } -} - -var ( - _ schema.QueryAppender = (*HStoreValue)(nil) - _ sql.Scanner = (*HStoreValue)(nil) -) - -func (h *HStoreValue) AppendQuery(fmter schema.Formatter, b []byte) ([]byte, error) { - if h.append == nil { - panic(fmt.Errorf("bun: HStore(unsupported %s)", h.v.Type())) - } - return h.append(fmter, b, h.v), nil -} - -func (h *HStoreValue) Scan(src interface{}) error { - if h.scan == nil { - return fmt.Errorf("bun: HStore(unsupported %s)", h.v.Type()) - } - if h.v.Kind() != reflect.Ptr { - return fmt.Errorf("bun: HStore(non-pointer %s)", h.v.Type()) - } - return h.scan(h.v.Elem(), src) -} - -func (h *HStoreValue) Value() interface{} { - if h.v.IsValid() { - return h.v.Interface() - } - return nil -} diff --git a/vendor/github.com/uptrace/bun/dialect/pgdialect/hstore_parser.go b/vendor/github.com/uptrace/bun/dialect/pgdialect/hstore_parser.go deleted file mode 100644 index 7a18b50b..00000000 --- a/vendor/github.com/uptrace/bun/dialect/pgdialect/hstore_parser.go +++ /dev/null @@ -1,142 +0,0 @@ -package pgdialect - -import ( - "bytes" - "fmt" -) - -type hstoreParser struct { - *streamParser - err error -} - -func newHStoreParser(b []byte) *hstoreParser { - p := &hstoreParser{ - streamParser: newStreamParser(b, 0), - } - if len(b) < 6 || b[0] != '"' { - p.err = fmt.Errorf("bun: can't parse hstore: %q", b) - } - return p -} - -func (p *hstoreParser) NextKey() (string, error) { - if p.err != nil { - return "", p.err - } - - err := p.skipByte('"') - if err != 
nil { - return "", err - } - - key, err := p.readSubstring() - if err != nil { - return "", err - } - - const separator = "=>" - - for i := range separator { - err = p.skipByte(separator[i]) - if err != nil { - return "", err - } - } - - return string(key), nil -} - -func (p *hstoreParser) NextValue() (string, error) { - if p.err != nil { - return "", p.err - } - - c, err := p.readByte() - if err != nil { - return "", err - } - - switch c { - case '"': - value, err := p.readSubstring() - if err != nil { - return "", err - } - - if p.peek() == ',' { - p.skipNext() - } - - if p.peek() == ' ' { - p.skipNext() - } - - return string(value), nil - default: - value := p.readSimple() - if bytes.Equal(value, []byte("NULL")) { - value = nil - } - - if p.peek() == ',' { - p.skipNext() - } - - return string(value), nil - } -} - -func (p *hstoreParser) readSimple() []byte { - p.unreadByte() - - if i := bytes.IndexByte(p.b[p.i:], ','); i >= 0 { - b := p.b[p.i : p.i+i] - p.i += i - return b - } - - b := p.b[p.i:len(p.b)] - p.i = len(p.b) - return b -} - -func (p *hstoreParser) readSubstring() ([]byte, error) { - c, err := p.readByte() - if err != nil { - return nil, err - } - - p.buf = p.buf[:0] - for { - if c == '"' { - break - } - - next, err := p.readByte() - if err != nil { - return nil, err - } - - if c == '\\' { - switch next { - case '\\', '"': - p.buf = append(p.buf, next) - - c, err = p.readByte() - if err != nil { - return nil, err - } - default: - p.buf = append(p.buf, '\\') - c = next - } - continue - } - - p.buf = append(p.buf, c) - c = next - } - - return p.buf, nil -} diff --git a/vendor/github.com/uptrace/bun/dialect/pgdialect/hstore_scan.go b/vendor/github.com/uptrace/bun/dialect/pgdialect/hstore_scan.go deleted file mode 100644 index b10b06b8..00000000 --- a/vendor/github.com/uptrace/bun/dialect/pgdialect/hstore_scan.go +++ /dev/null @@ -1,82 +0,0 @@ -package pgdialect - -import ( - "fmt" - "io" - "reflect" - - "github.com/uptrace/bun/schema" -) - -func 
hstoreScanner(typ reflect.Type) schema.ScannerFunc { - kind := typ.Kind() - - switch kind { - case reflect.Ptr: - if fn := hstoreScanner(typ.Elem()); fn != nil { - return schema.PtrScanner(fn) - } - case reflect.Map: - // ok: - default: - return nil - } - - if typ.Key() == stringType && typ.Elem() == stringType { - return scanMapStringStringValue - } - return func(dest reflect.Value, src interface{}) error { - return fmt.Errorf("bun: Hstore(unsupported %s)", dest.Type()) - } -} - -func scanMapStringStringValue(dest reflect.Value, src interface{}) error { - dest = reflect.Indirect(dest) - if !dest.CanSet() { - return fmt.Errorf("bun: Scan(non-settable %s)", dest.Type()) - } - - m, err := decodeMapStringString(src) - if err != nil { - return err - } - - dest.Set(reflect.ValueOf(m)) - return nil -} - -func decodeMapStringString(src interface{}) (map[string]string, error) { - if src == nil { - return nil, nil - } - - b, err := toBytes(src) - if err != nil { - return nil, err - } - - m := make(map[string]string) - - p := newHStoreParser(b) - for { - key, err := p.NextKey() - if err != nil { - if err == io.EOF { - break - } - return nil, err - } - - value, err := p.NextValue() - if err != nil { - if err == io.EOF { - break - } - return nil, err - } - - m[key] = value - } - - return m, nil -} diff --git a/vendor/github.com/uptrace/bun/dialect/pgdialect/safe.go b/vendor/github.com/uptrace/bun/dialect/pgdialect/safe.go deleted file mode 100644 index dff30b9c..00000000 --- a/vendor/github.com/uptrace/bun/dialect/pgdialect/safe.go +++ /dev/null @@ -1,11 +0,0 @@ -// +build appengine - -package pgdialect - -func bytesToString(b []byte) string { - return string(b) -} - -func stringToBytes(s string) []byte { - return []byte(s) -} diff --git a/vendor/github.com/uptrace/bun/dialect/pgdialect/scan.go b/vendor/github.com/uptrace/bun/dialect/pgdialect/scan.go deleted file mode 100644 index e06bb8bc..00000000 --- a/vendor/github.com/uptrace/bun/dialect/pgdialect/scan.go +++ /dev/null 
@@ -1,11 +0,0 @@ -package pgdialect - -import ( - "reflect" - - "github.com/uptrace/bun/schema" -) - -func scanner(typ reflect.Type) schema.ScannerFunc { - return schema.Scanner(typ) -} diff --git a/vendor/github.com/uptrace/bun/dialect/pgdialect/sqltype.go b/vendor/github.com/uptrace/bun/dialect/pgdialect/sqltype.go deleted file mode 100644 index dadea5c1..00000000 --- a/vendor/github.com/uptrace/bun/dialect/pgdialect/sqltype.go +++ /dev/null @@ -1,109 +0,0 @@ -package pgdialect - -import ( - "encoding/json" - "net" - "reflect" - - "github.com/uptrace/bun/dialect/sqltype" - "github.com/uptrace/bun/schema" -) - -const ( - // Date / Time - pgTypeTimestampTz = "TIMESTAMPTZ" // Timestamp with a time zone - pgTypeDate = "DATE" // Date - pgTypeTime = "TIME" // Time without a time zone - pgTypeTimeTz = "TIME WITH TIME ZONE" // Time with a time zone - pgTypeInterval = "INTERVAL" // Time Interval - - // Network Addresses - pgTypeInet = "INET" // IPv4 or IPv6 hosts and networks - pgTypeCidr = "CIDR" // IPv4 or IPv6 networks - pgTypeMacaddr = "MACADDR" // MAC addresses - - // Serial Types - pgTypeSmallSerial = "SMALLSERIAL" // 2 byte autoincrementing integer - pgTypeSerial = "SERIAL" // 4 byte autoincrementing integer - pgTypeBigSerial = "BIGSERIAL" // 8 byte autoincrementing integer - - // Character Types - pgTypeChar = "CHAR" // fixed length string (blank padded) - pgTypeText = "TEXT" // variable length string without limit - - // JSON Types - pgTypeJSON = "JSON" // text representation of json data - pgTypeJSONB = "JSONB" // binary representation of json data - - // Binary Data Types - pgTypeBytea = "BYTEA" // binary string -) - -var ( - ipType = reflect.TypeOf((*net.IP)(nil)).Elem() - ipNetType = reflect.TypeOf((*net.IPNet)(nil)).Elem() - jsonRawMessageType = reflect.TypeOf((*json.RawMessage)(nil)).Elem() -) - -func (d *Dialect) DefaultVarcharLen() int { - return 0 -} - -func fieldSQLType(field *schema.Field) string { - if field.UserSQLType != "" { - return 
field.UserSQLType - } - - if v, ok := field.Tag.Option("composite"); ok { - return v - } - if field.Tag.HasOption("hstore") { - return sqltype.HSTORE - } - - if field.Tag.HasOption("array") { - switch field.IndirectType.Kind() { - case reflect.Slice, reflect.Array: - sqlType := sqlType(field.IndirectType.Elem()) - return sqlType + "[]" - } - } - - if field.DiscoveredSQLType == sqltype.Blob { - return pgTypeBytea - } - - return sqlType(field.IndirectType) -} - -func sqlType(typ reflect.Type) string { - switch typ { - case ipType: - return pgTypeInet - case ipNetType: - return pgTypeCidr - case jsonRawMessageType: - return pgTypeJSONB - } - - sqlType := schema.DiscoverSQLType(typ) - switch sqlType { - case sqltype.Timestamp: - sqlType = pgTypeTimestampTz - } - - switch typ.Kind() { - case reflect.Map, reflect.Struct: - if sqlType == sqltype.VarChar { - return pgTypeJSONB - } - return sqlType - case reflect.Array, reflect.Slice: - if typ.Elem().Kind() == reflect.Uint8 { - return pgTypeBytea - } - return pgTypeJSONB - } - - return sqlType -} diff --git a/vendor/github.com/uptrace/bun/dialect/pgdialect/stream_parser.go b/vendor/github.com/uptrace/bun/dialect/pgdialect/stream_parser.go deleted file mode 100644 index 7b9a15f6..00000000 --- a/vendor/github.com/uptrace/bun/dialect/pgdialect/stream_parser.go +++ /dev/null @@ -1,60 +0,0 @@ -package pgdialect - -import ( - "fmt" - "io" -) - -type streamParser struct { - b []byte - i int - - buf []byte -} - -func newStreamParser(b []byte, start int) *streamParser { - return &streamParser{ - b: b, - i: start, - } -} - -func (p *streamParser) valid() bool { - return p.i < len(p.b) -} - -func (p *streamParser) skipByte(skip byte) error { - c, err := p.readByte() - if err != nil { - return err - } - if c == skip { - return nil - } - p.unreadByte() - return fmt.Errorf("got %q, wanted %q", c, skip) -} - -func (p *streamParser) readByte() (byte, error) { - if p.valid() { - c := p.b[p.i] - p.i++ - return c, nil - } - return 0, io.EOF 
-} - -func (p *streamParser) unreadByte() { - p.i-- -} - -func (p *streamParser) peek() byte { - if p.valid() { - return p.b[p.i] - } - return 0 -} - -func (p *streamParser) skipNext() { - p.i++ -} diff --git a/vendor/github.com/uptrace/bun/dialect/pgdialect/unsafe.go b/vendor/github.com/uptrace/bun/dialect/pgdialect/unsafe.go deleted file mode 100644 index 2a02a20b..00000000 --- a/vendor/github.com/uptrace/bun/dialect/pgdialect/unsafe.go +++ /dev/null @@ -1,18 +0,0 @@ -// +build !appengine - -package pgdialect - -import "unsafe" - -func bytesToString(b []byte) string { - return *(*string)(unsafe.Pointer(&b)) -} - -func stringToBytes(s string) []byte { - return *(*[]byte)(unsafe.Pointer( - &struct { - string - Cap int - }{s, len(s)}, - )) -} diff --git a/vendor/github.com/uptrace/bun/dialect/pgdialect/version.go b/vendor/github.com/uptrace/bun/dialect/pgdialect/version.go deleted file mode 100644 index c3402a72..00000000 --- a/vendor/github.com/uptrace/bun/dialect/pgdialect/version.go +++ /dev/null @@ -1,6 +0,0 @@ -package pgdialect - -// Version is the current release version. 
-func Version() string { - return "1.1.12" -} diff --git a/vendor/github.com/uptrace/bun/dialect/sqltype/sqltype.go b/vendor/github.com/uptrace/bun/dialect/sqltype/sqltype.go deleted file mode 100644 index 1031fd35..00000000 --- a/vendor/github.com/uptrace/bun/dialect/sqltype/sqltype.go +++ /dev/null @@ -1,16 +0,0 @@ -package sqltype - -const ( - Boolean = "BOOLEAN" - SmallInt = "SMALLINT" - Integer = "INTEGER" - BigInt = "BIGINT" - Real = "REAL" - DoublePrecision = "DOUBLE PRECISION" - VarChar = "VARCHAR" - Blob = "BLOB" - Timestamp = "TIMESTAMP" - JSON = "JSON" - JSONB = "JSONB" - HSTORE = "HSTORE" -) diff --git a/vendor/github.com/uptrace/bun/driver/pgdriver/LICENSE b/vendor/github.com/uptrace/bun/driver/pgdriver/LICENSE deleted file mode 100644 index 7ec81810..00000000 --- a/vendor/github.com/uptrace/bun/driver/pgdriver/LICENSE +++ /dev/null @@ -1,24 +0,0 @@ -Copyright (c) 2021 Vladimir Mihailenco. All rights reserved. - -Redistribution and use in source and binary forms, with or without -modification, are permitted provided that the following conditions are -met: - - * Redistributions of source code must retain the above copyright -notice, this list of conditions and the following disclaimer. - * Redistributions in binary form must reproduce the above -copyright notice, this list of conditions and the following disclaimer -in the documentation and/or other materials provided with the -distribution. - -THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS -"AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT -LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR -A PARTICULAR PURPOSE ARE DISCLAIMED. 
IN NO EVENT SHALL THE COPYRIGHT -OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, -SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT -LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, -DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY -THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT -(INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE -OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. diff --git a/vendor/github.com/uptrace/bun/driver/pgdriver/README.md b/vendor/github.com/uptrace/bun/driver/pgdriver/README.md deleted file mode 100644 index a6974ee4..00000000 --- a/vendor/github.com/uptrace/bun/driver/pgdriver/README.md +++ /dev/null @@ -1,38 +0,0 @@ -# pgdriver - -[![PkgGoDev](https://pkg.go.dev/badge/github.com/uptrace/bun/driver/pgdriver)](https://pkg.go.dev/github.com/uptrace/bun/driver/pgdriver) - -pgdriver is a database/sql driver for PostgreSQL based on [go-pg](https://github.com/go-pg/pg) code. - -You can install it with: - -```shell -go get github.com/uptrace/bun/driver/pgdriver -``` - -And then create a `sql.DB` using it: - -```go -import _ "github.com/uptrace/bun/driver/pgdriver" - -dsn := "postgres://postgres:@localhost:5432/test" -db, err := sql.Open("pg", dsn) -``` - -Alternatively: - -```go -dsn := "postgres://postgres:@localhost:5432/test" -db := sql.OpenDB(pgdriver.NewConnector(pgdriver.WithDSN(dsn))) -``` - -[Benchmark](https://github.com/go-bun/bun-benchmark): - -``` -BenchmarkInsert/pg-12 7254 148380 ns/op 900 B/op 13 allocs/op -BenchmarkInsert/pgx-12 6494 166391 ns/op 2076 B/op 26 allocs/op -BenchmarkSelect/pg-12 9100 132952 ns/op 1417 B/op 18 allocs/op -BenchmarkSelect/pgx-12 8199 154920 ns/op 3679 B/op 60 allocs/op -``` - -See [documentation](https://bun.uptrace.dev/postgres/) for more details. 
diff --git a/vendor/github.com/uptrace/bun/driver/pgdriver/column.go b/vendor/github.com/uptrace/bun/driver/pgdriver/column.go deleted file mode 100644 index 710d707a..00000000 --- a/vendor/github.com/uptrace/bun/driver/pgdriver/column.go +++ /dev/null @@ -1,193 +0,0 @@ -package pgdriver - -import ( - "encoding/hex" - "fmt" - "io" - "strconv" - "strings" - "time" -) - -const ( - pgBool = 16 - - pgInt2 = 21 - pgInt4 = 23 - pgInt8 = 20 - - pgFloat4 = 700 - pgFloat8 = 701 - - pgText = 25 - pgVarchar = 1043 - pgBytea = 17 - - pgDate = 1082 - pgTimestamp = 1114 - pgTimestamptz = 1184 -) - -func readColumnValue(rd *reader, dataType int32, dataLen int) (interface{}, error) { - if dataLen == -1 { - return nil, nil - } - - switch dataType { - case pgBool: - return readBoolCol(rd, dataLen) - case pgInt2: - return readIntCol(rd, dataLen, 16) - case pgInt4: - return readIntCol(rd, dataLen, 32) - case pgInt8: - return readIntCol(rd, dataLen, 64) - case pgFloat4: - return readFloatCol(rd, dataLen, 32) - case pgFloat8: - return readFloatCol(rd, dataLen, 64) - case pgTimestamp: - return readTimeCol(rd, dataLen) - case pgTimestamptz: - return readTimeCol(rd, dataLen) - case pgDate: - // Return a string and let the scanner to convert string to time.Time if necessary. 
- return readStringCol(rd, dataLen) - case pgText, pgVarchar: - return readStringCol(rd, dataLen) - case pgBytea: - return readBytesCol(rd, dataLen) - } - - b := make([]byte, dataLen) - if _, err := io.ReadFull(rd, b); err != nil { - return nil, err - } - return b, nil -} - -func readBoolCol(rd *reader, n int) (interface{}, error) { - tmp, err := rd.ReadTemp(n) - if err != nil { - return nil, err - } - return len(tmp) == 1 && (tmp[0] == 't' || tmp[0] == '1'), nil -} - -func readIntCol(rd *reader, n int, bitSize int) (interface{}, error) { - if n <= 0 { - return 0, nil - } - - tmp, err := rd.ReadTemp(n) - if err != nil { - return 0, err - } - - return strconv.ParseInt(bytesToString(tmp), 10, bitSize) -} - -func readFloatCol(rd *reader, n int, bitSize int) (interface{}, error) { - if n <= 0 { - return 0, nil - } - - tmp, err := rd.ReadTemp(n) - if err != nil { - return 0, err - } - - return strconv.ParseFloat(bytesToString(tmp), bitSize) -} - -func readStringCol(rd *reader, n int) (interface{}, error) { - if n <= 0 { - return "", nil - } - - b := make([]byte, n) - - if _, err := io.ReadFull(rd, b); err != nil { - return nil, err - } - - return bytesToString(b), nil -} - -func readBytesCol(rd *reader, n int) (interface{}, error) { - if n <= 0 { - return []byte{}, nil - } - - tmp, err := rd.ReadTemp(n) - if err != nil { - return nil, err - } - - if len(tmp) < 2 || tmp[0] != '\\' || tmp[1] != 'x' { - return nil, fmt.Errorf("pgdriver: can't parse bytea: %q", tmp) - } - tmp = tmp[2:] // Cut off "\x". 
- - b := make([]byte, hex.DecodedLen(len(tmp))) - if _, err := hex.Decode(b, tmp); err != nil { - return nil, err - } - return b, nil -} - -func readTimeCol(rd *reader, n int) (interface{}, error) { - if n <= 0 { - return time.Time{}, nil - } - - tmp, err := rd.ReadTemp(n) - if err != nil { - return time.Time{}, err - } - - tm, err := ParseTime(bytesToString(tmp)) - if err != nil { - return time.Time{}, err - } - return tm, nil -} - -const ( - dateFormat = "2006-01-02" - timeFormat = "15:04:05.999999999" - timestampFormat = "2006-01-02 15:04:05.999999999" - timestamptzFormat = "2006-01-02 15:04:05.999999999-07:00:00" - timestamptzFormat2 = "2006-01-02 15:04:05.999999999-07:00" - timestamptzFormat3 = "2006-01-02 15:04:05.999999999-07" -) - -func ParseTime(s string) (time.Time, error) { - switch l := len(s); { - case l < len("15:04:05"): - return time.Time{}, fmt.Errorf("pgdriver: can't parse time=%q", s) - case l <= len(timeFormat): - if s[2] == ':' { - return time.ParseInLocation(timeFormat, s, time.UTC) - } - return time.ParseInLocation(dateFormat, s, time.UTC) - default: - if s[10] == 'T' { - return time.Parse(time.RFC3339Nano, s) - } - if c := s[l-9]; c == '+' || c == '-' { - return time.Parse(timestamptzFormat, s) - } - if c := s[l-6]; c == '+' || c == '-' { - return time.Parse(timestamptzFormat2, s) - } - if c := s[l-3]; c == '+' || c == '-' { - if strings.HasSuffix(s, "+00") { - s = s[:len(s)-3] - return time.ParseInLocation(timestampFormat, s, time.UTC) - } - return time.Parse(timestamptzFormat3, s) - } - return time.ParseInLocation(timestampFormat, s, time.UTC) - } -} diff --git a/vendor/github.com/uptrace/bun/driver/pgdriver/config.go b/vendor/github.com/uptrace/bun/driver/pgdriver/config.go deleted file mode 100644 index 27f37804..00000000 --- a/vendor/github.com/uptrace/bun/driver/pgdriver/config.go +++ /dev/null @@ -1,419 +0,0 @@ -package pgdriver - -import ( - "context" - "crypto/tls" - "crypto/x509" - "errors" - "fmt" - "io/ioutil" - "net" - "net/url" 
- "os" - "strconv" - "strings" - "time" -) - -type Config struct { - // Network type, either tcp or unix. - // Default is tcp. - Network string - // TCP host:port or Unix socket depending on Network. - Addr string - // Dial timeout for establishing new connections. - // Default is 5 seconds. - DialTimeout time.Duration - // Dialer creates new network connection and has priority over - // Network and Addr options. - Dialer func(ctx context.Context, network, addr string) (net.Conn, error) - - // TLS config for secure connections. - TLSConfig *tls.Config - - User string - Password string - Database string - AppName string - // PostgreSQL session parameters updated with `SET` command when a connection is created. - ConnParams map[string]interface{} - - // Timeout for socket reads. If reached, commands fail with a timeout instead of blocking. - ReadTimeout time.Duration - // Timeout for socket writes. If reached, commands fail with a timeout instead of blocking. - WriteTimeout time.Duration - - // ResetSessionFunc is called prior to executing a query on a connection that has been used before. - ResetSessionFunc func(context.Context, *Conn) error -} - -func newDefaultConfig() *Config { - host := env("PGHOST", "localhost") - port := env("PGPORT", "5432") - - cfg := &Config{ - Network: "tcp", - Addr: net.JoinHostPort(host, port), - DialTimeout: 5 * time.Second, - TLSConfig: &tls.Config{InsecureSkipVerify: true}, - - User: env("PGUSER", "postgres"), - Database: env("PGDATABASE", "postgres"), - - ReadTimeout: 10 * time.Second, - WriteTimeout: 5 * time.Second, - } - - cfg.Dialer = func(ctx context.Context, network, addr string) (net.Conn, error) { - netDialer := &net.Dialer{ - Timeout: cfg.DialTimeout, - KeepAlive: 5 * time.Minute, - } - return netDialer.DialContext(ctx, network, addr) - } - - return cfg -} - -type Option func(cfg *Config) - -// Deprecated. Use Option instead. 
-type DriverOption = Option - -func WithNetwork(network string) Option { - if network == "" { - panic("network is empty") - } - return func(cfg *Config) { - cfg.Network = network - } -} - -func WithAddr(addr string) Option { - if addr == "" { - panic("addr is empty") - } - return func(cfg *Config) { - cfg.Addr = addr - } -} - -func WithTLSConfig(tlsConfig *tls.Config) Option { - return func(cfg *Config) { - cfg.TLSConfig = tlsConfig - } -} - -func WithInsecure(on bool) Option { - return func(cfg *Config) { - if on { - cfg.TLSConfig = nil - } else { - cfg.TLSConfig = &tls.Config{InsecureSkipVerify: true} - } - } -} - -func WithUser(user string) Option { - if user == "" { - panic("user is empty") - } - return func(cfg *Config) { - cfg.User = user - } -} - -func WithPassword(password string) Option { - return func(cfg *Config) { - cfg.Password = password - } -} - -func WithDatabase(database string) Option { - if database == "" { - panic("database is empty") - } - return func(cfg *Config) { - cfg.Database = database - } -} - -func WithApplicationName(appName string) Option { - return func(cfg *Config) { - cfg.AppName = appName - } -} - -func WithConnParams(params map[string]interface{}) Option { - return func(cfg *Config) { - cfg.ConnParams = params - } -} - -func WithTimeout(timeout time.Duration) Option { - return func(cfg *Config) { - cfg.DialTimeout = timeout - cfg.ReadTimeout = timeout - cfg.WriteTimeout = timeout - } -} - -func WithDialTimeout(dialTimeout time.Duration) Option { - return func(cfg *Config) { - cfg.DialTimeout = dialTimeout - } -} - -func WithReadTimeout(readTimeout time.Duration) Option { - return func(cfg *Config) { - cfg.ReadTimeout = readTimeout - } -} - -func WithWriteTimeout(writeTimeout time.Duration) Option { - return func(cfg *Config) { - cfg.WriteTimeout = writeTimeout - } -} - -// WithResetSessionFunc configures a function that is called prior to executing -// a query on a connection that has been used before. 
-// If the func returns driver.ErrBadConn, the connection is discarded. -func WithResetSessionFunc(fn func(context.Context, *Conn) error) Option { - return func(cfg *Config) { - cfg.ResetSessionFunc = fn - } -} - -func WithDSN(dsn string) Option { - return func(cfg *Config) { - opts, err := parseDSN(dsn) - if err != nil { - panic(err) - } - for _, opt := range opts { - opt(cfg) - } - } -} - -func env(key, defValue string) string { - if s := os.Getenv(key); s != "" { - return s - } - return defValue -} - -//------------------------------------------------------------------------------ - -func parseDSN(dsn string) ([]Option, error) { - u, err := url.Parse(dsn) - if err != nil { - return nil, err - } - - q := queryOptions{q: u.Query()} - var opts []Option - - switch u.Scheme { - case "postgres", "postgresql": - if u.Host != "" { - addr := u.Host - if !strings.Contains(addr, ":") { - addr += ":5432" - } - opts = append(opts, WithAddr(addr)) - } - - if len(u.Path) > 1 { - opts = append(opts, WithDatabase(u.Path[1:])) - } - - if host := q.string("host"); host != "" { - opts = append(opts, WithAddr(host)) - if host[0] == '/' { - opts = append(opts, WithNetwork("unix")) - } - } - case "unix": - if len(u.Path) == 0 { - return nil, fmt.Errorf("unix socket DSN requires a path: %s", dsn) - } - - opts = append(opts, WithNetwork("unix")) - if u.Host != "" { - opts = append(opts, WithDatabase(u.Host)) - } - opts = append(opts, WithAddr(u.Path)) - default: - return nil, errors.New("pgdriver: invalid scheme: " + u.Scheme) - } - - if u.User != nil { - opts = append(opts, WithUser(u.User.Username())) - if password, ok := u.User.Password(); ok { - opts = append(opts, WithPassword(password)) - } - } - - if appName := q.string("application_name"); appName != "" { - opts = append(opts, WithApplicationName(appName)) - } - - if sslMode, sslRootCert := q.string("sslmode"), q.string("sslrootcert"); sslMode != "" || sslRootCert != "" { - tlsConfig := &tls.Config{} - switch sslMode { - case 
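`parseDSN` above turns a `postgres://` URL into a list of `Option` values. A stdlib-only sketch of what it extracts from a typical DSN (the helper name `dsnParts` is made up for illustration; the real code also handles `unix` schemes, socket hosts, and timeout parameters):

```go
package main

import (
	"fmt"
	"net/url"
	"strings"
)

// dsnParts illustrates the pieces parseDSN pulls out of a "postgres://" DSN:
// the address (defaulting the port to 5432, as the driver does), the database
// from the URL path, the user, and query parameters such as sslmode.
func dsnParts(dsn string) (addr, database, user, sslmode string, err error) {
	u, err := url.Parse(dsn)
	if err != nil {
		return "", "", "", "", err
	}
	addr = u.Host
	if !strings.Contains(addr, ":") {
		addr += ":5432" // same default port parseDSN appends
	}
	if len(u.Path) > 1 {
		database = u.Path[1:] // strip the leading '/'
	}
	if u.User != nil {
		user = u.User.Username()
	}
	sslmode = u.Query().Get("sslmode")
	return addr, database, user, sslmode, nil
}

func main() {
	addr, db, user, ssl, _ := dsnParts("postgres://app:secret@db.example.com/orders?sslmode=disable")
	fmt.Println(addr, db, user, ssl)
	// db.example.com:5432 orders app disable
}
```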
"disable": - tlsConfig = nil - case "allow", "prefer", "": - tlsConfig.InsecureSkipVerify = true - case "require": - if sslRootCert == "" { - tlsConfig.InsecureSkipVerify = true - break - } - // For backwards compatibility reasons, in the presence of `sslrootcert`, - // `sslmode` = `require` must act as if `sslmode` = `verify-ca`. See the note at - // https://www.postgresql.org/docs/current/libpq-ssl.html#LIBQ-SSL-CERTIFICATES . - fallthrough - case "verify-ca": - // The default certificate verification will also verify the host name - // which is not the behavior of `verify-ca`. As such, we need to manually - // check the certificate chain. - // At the time of writing, tls.Config has no option for this behavior - // (verify chain, but skip server name). - // See https://github.com/golang/go/issues/21971 . - tlsConfig.InsecureSkipVerify = true - tlsConfig.VerifyPeerCertificate = func(rawCerts [][]byte, verifiedChains [][]*x509.Certificate) error { - certs := make([]*x509.Certificate, 0, len(rawCerts)) - for _, rawCert := range rawCerts { - cert, err := x509.ParseCertificate(rawCert) - if err != nil { - return fmt.Errorf("pgdriver: failed to parse certificate: %w", err) - } - certs = append(certs, cert) - } - intermediates := x509.NewCertPool() - for _, cert := range certs[1:] { - intermediates.AddCert(cert) - } - _, err := certs[0].Verify(x509.VerifyOptions{ - Roots: tlsConfig.RootCAs, - Intermediates: intermediates, - }) - return err - } - case "verify-full": - tlsConfig.ServerName = u.Host - if host, _, err := net.SplitHostPort(u.Host); err == nil { - tlsConfig.ServerName = host - } - default: - return nil, fmt.Errorf("pgdriver: sslmode '%s' is not supported", sslMode) - } - if tlsConfig != nil && sslRootCert != "" { - rawCA, err := ioutil.ReadFile(sslRootCert) - if err != nil { - return nil, fmt.Errorf("pgdriver: failed to read root CA: %w", err) - } - certPool := x509.NewCertPool() - if !certPool.AppendCertsFromPEM(rawCA) { - return nil, fmt.Errorf("pgdriver: 
failed to append root CA") - } - tlsConfig.RootCAs = certPool - } - opts = append(opts, WithTLSConfig(tlsConfig)) - } - - if d := q.duration("timeout"); d != 0 { - opts = append(opts, WithTimeout(d)) - } - if d := q.duration("dial_timeout"); d != 0 { - opts = append(opts, WithDialTimeout(d)) - } - if d := q.duration("connect_timeout"); d != 0 { - opts = append(opts, WithDialTimeout(d)) - } - if d := q.duration("read_timeout"); d != 0 { - opts = append(opts, WithReadTimeout(d)) - } - if d := q.duration("write_timeout"); d != 0 { - opts = append(opts, WithWriteTimeout(d)) - } - - rem, err := q.remaining() - if err != nil { - return nil, q.err - } - - if len(rem) > 0 { - params := make(map[string]interface{}, len(rem)) - for k, v := range rem { - params[k] = v - } - opts = append(opts, WithConnParams(params)) - } - - return opts, nil -} - -// verify is a method to make sure if the config is legitimate -// in the case it detects any errors, it returns with a non-nil error -// it can be extended to check other parameters -func (c *Config) verify() error { - if c.User == "" { - return errors.New("pgdriver: User option is empty (to configure, use WithUser).") - } - return nil -} - -type queryOptions struct { - q url.Values - err error -} - -func (o *queryOptions) string(name string) string { - vs := o.q[name] - if len(vs) == 0 { - return "" - } - delete(o.q, name) // enable detection of unknown parameters - return vs[len(vs)-1] -} - -func (o *queryOptions) duration(name string) time.Duration { - s := o.string(name) - if s == "" { - return 0 - } - // try plain number first - if i, err := strconv.Atoi(s); err == nil { - if i <= 0 { - // disable timeouts - return -1 - } - return time.Duration(i) * time.Second - } - dur, err := time.ParseDuration(s) - if err == nil { - return dur - } - if o.err == nil { - o.err = fmt.Errorf("pgdriver: invalid %s duration: %w", name, err) - } - return 0 -} - -func (o *queryOptions) remaining() (map[string]string, error) { - if o.err != nil { - 
return nil, o.err - } - if len(o.q) == 0 { - return nil, nil - } - m := make(map[string]string, len(o.q)) - for k, ss := range o.q { - m[k] = ss[len(ss)-1] - } - return m, nil -} diff --git a/vendor/github.com/uptrace/bun/driver/pgdriver/copy.go b/vendor/github.com/uptrace/bun/driver/pgdriver/copy.go deleted file mode 100644 index aba28965..00000000 --- a/vendor/github.com/uptrace/bun/driver/pgdriver/copy.go +++ /dev/null @@ -1,249 +0,0 @@ -package pgdriver - -import ( - "bufio" - "context" - "database/sql" - "fmt" - "io" - - "github.com/uptrace/bun" -) - -// CopyFrom copies data from the reader to the query destination. -func CopyFrom( - ctx context.Context, conn bun.Conn, r io.Reader, query string, args ...interface{}, -) (res sql.Result, err error) { - query, err = formatQueryArgs(query, args) - if err != nil { - return nil, err - } - - if err := conn.Raw(func(driverConn interface{}) error { - cn := driverConn.(*Conn) - - if err := writeQuery(ctx, cn, query); err != nil { - return err - } - if err := readCopyIn(ctx, cn); err != nil { - return err - } - if err := writeCopyData(ctx, cn, r); err != nil { - return err - } - if err := writeCopyDone(ctx, cn); err != nil { - return err - } - - res, err = readQuery(ctx, cn) - return err - }); err != nil { - return nil, err - } - - return res, nil -} - -func readCopyIn(ctx context.Context, cn *Conn) error { - rd := cn.reader(ctx, -1) - var firstErr error - for { - c, msgLen, err := readMessageType(rd) - if err != nil { - return err - } - - switch c { - case errorResponseMsg: - e, err := readError(rd) - if err != nil { - return err - } - if firstErr == nil { - firstErr = e - } - case readyForQueryMsg: - if err := rd.Discard(msgLen); err != nil { - return err - } - return firstErr - case copyInResponseMsg: - if err := rd.Discard(msgLen); err != nil { - return err - } - return firstErr - case noticeResponseMsg, parameterStatusMsg: - if err := rd.Discard(msgLen); err != nil { - return err - } - default: - return 
fmt.Errorf("pgdriver: readCopyIn: unexpected message %q", c) - } - } -} - -func writeCopyData(ctx context.Context, cn *Conn, r io.Reader) error { - wb := getWriteBuffer() - defer putWriteBuffer(wb) - - for { - wb.StartMessage(copyDataMsg) - if _, err := wb.ReadFrom(r); err != nil { - if err == io.EOF { - break - } - return err - } - wb.FinishMessage() - - if err := cn.write(ctx, wb); err != nil { - return err - } - } - - return nil -} - -func writeCopyDone(ctx context.Context, cn *Conn) error { - wb := getWriteBuffer() - defer putWriteBuffer(wb) - - wb.StartMessage(copyDoneMsg) - wb.FinishMessage() - - return cn.write(ctx, wb) -} - -//------------------------------------------------------------------------------ - -// CopyTo copies data from the query source to the writer. -func CopyTo( - ctx context.Context, conn bun.Conn, w io.Writer, query string, args ...interface{}, -) (res sql.Result, err error) { - query, err = formatQueryArgs(query, args) - if err != nil { - return nil, err - } - - if err := conn.Raw(func(driverConn interface{}) error { - cn := driverConn.(*Conn) - - if err := writeQuery(ctx, cn, query); err != nil { - return err - } - if err := readCopyOut(ctx, cn); err != nil { - return err - } - - res, err = readCopyData(ctx, cn, w) - return err - }); err != nil { - return nil, err - } - - return res, nil -} - -func readCopyOut(ctx context.Context, cn *Conn) error { - rd := cn.reader(ctx, -1) - var firstErr error - for { - c, msgLen, err := readMessageType(rd) - if err != nil { - return err - } - - switch c { - case errorResponseMsg: - e, err := readError(rd) - if err != nil { - return err - } - if firstErr == nil { - firstErr = e - } - case readyForQueryMsg: - if err := rd.Discard(msgLen); err != nil { - return err - } - return firstErr - case copyOutResponseMsg: - if err := rd.Discard(msgLen); err != nil { - return err - } - return nil - case noticeResponseMsg, parameterStatusMsg: - if err := rd.Discard(msgLen); err != nil { - return err - } - default: 
- return fmt.Errorf("pgdriver: readCopyOut: unexpected message %q", c) - } - } -} - -func readCopyData(ctx context.Context, cn *Conn, w io.Writer) (res sql.Result, err error) { - rd := cn.reader(ctx, -1) - var firstErr error - for { - c, msgLen, err := readMessageType(rd) - if err != nil { - return nil, err - } - - switch c { - case errorResponseMsg: - e, err := readError(rd) - if err != nil { - return nil, err - } - if firstErr == nil { - firstErr = e - } - case copyDataMsg: - for msgLen > 0 { - b, err := rd.ReadTemp(msgLen) - if err != nil && err != bufio.ErrBufferFull { - return nil, err - } - - if _, err := w.Write(b); err != nil { - if firstErr == nil { - firstErr = err - } - break - } - - msgLen -= len(b) - } - case copyDoneMsg: - if err := rd.Discard(msgLen); err != nil { - return nil, err - } - case commandCompleteMsg: - tmp, err := rd.ReadTemp(msgLen) - if err != nil { - firstErr = err - break - } - - r, err := parseResult(tmp) - if err != nil { - firstErr = err - } else { - res = r - } - case readyForQueryMsg: - if err := rd.Discard(msgLen); err != nil { - return nil, err - } - return res, firstErr - case noticeResponseMsg, parameterStatusMsg: - if err := rd.Discard(msgLen); err != nil { - return nil, err - } - default: - return nil, fmt.Errorf("pgdriver: readCopyData: unexpected message %q", c) - } - } -} diff --git a/vendor/github.com/uptrace/bun/driver/pgdriver/driver.go b/vendor/github.com/uptrace/bun/driver/pgdriver/driver.go deleted file mode 100644 index 256de4a4..00000000 --- a/vendor/github.com/uptrace/bun/driver/pgdriver/driver.go +++ /dev/null @@ -1,600 +0,0 @@ -package pgdriver - -import ( - "bytes" - "context" - "database/sql" - "database/sql/driver" - "errors" - "fmt" - "io" - "log" - "net" - "os" - "strconv" - "sync/atomic" - "time" -) - -func init() { - sql.Register("pg", NewDriver()) -} - -type logging interface { - Printf(ctx context.Context, format string, v ...interface{}) -} - -type logger struct { - log *log.Logger -} - -func (l 
*logger) Printf(ctx context.Context, format string, v ...interface{}) { - _ = l.log.Output(2, fmt.Sprintf(format, v...)) -} - -var Logger logging = &logger{ - log: log.New(os.Stderr, "pgdriver: ", log.LstdFlags|log.Lshortfile), -} - -//------------------------------------------------------------------------------ - -type Driver struct { - connector *Connector -} - -var _ driver.DriverContext = (*Driver)(nil) - -func NewDriver() Driver { - return Driver{} -} - -func (d Driver) OpenConnector(name string) (driver.Connector, error) { - opts, err := parseDSN(name) - if err != nil { - return nil, err - } - return NewConnector(opts...), nil -} - -func (d Driver) Open(name string) (driver.Conn, error) { - connector, err := d.OpenConnector(name) - if err != nil { - return nil, err - } - return connector.Connect(context.TODO()) -} - -//------------------------------------------------------------------------------ - -type Connector struct { - cfg *Config -} - -func NewConnector(opts ...Option) *Connector { - c := &Connector{cfg: newDefaultConfig()} - for _, opt := range opts { - opt(c.cfg) - } - return c -} - -var _ driver.Connector = (*Connector)(nil) - -func (c *Connector) Connect(ctx context.Context) (driver.Conn, error) { - if err := c.cfg.verify(); err != nil { - return nil, err - } - return newConn(ctx, c.cfg) -} - -func (c *Connector) Driver() driver.Driver { - return Driver{connector: c} -} - -func (c *Connector) Config() *Config { - return c.cfg -} - -//------------------------------------------------------------------------------ - -type Conn struct { - cfg *Config - - netConn net.Conn - rd *reader - - processID int32 - secretKey int32 - - stmtCount int - - closed int32 -} - -func newConn(ctx context.Context, cfg *Config) (*Conn, error) { - netConn, err := cfg.Dialer(ctx, cfg.Network, cfg.Addr) - if err != nil { - return nil, err - } - - cn := &Conn{ - cfg: cfg, - netConn: netConn, - rd: newReader(netConn), - } - - if cfg.TLSConfig != nil { - if err := 
enableSSL(ctx, cn, cfg.TLSConfig); err != nil { - return nil, err - } - } - - if err := startup(ctx, cn); err != nil { - return nil, err - } - - for k, v := range cfg.ConnParams { - if v != nil { - _, err = cn.ExecContext(ctx, fmt.Sprintf("SET %s TO $1", k), []driver.NamedValue{ - {Value: v}, - }) - } else { - _, err = cn.ExecContext(ctx, fmt.Sprintf("SET %s TO DEFAULT", k), nil) - } - if err != nil { - return nil, err - } - } - - return cn, nil -} - -func (cn *Conn) reader(ctx context.Context, timeout time.Duration) *reader { - cn.setReadDeadline(ctx, timeout) - return cn.rd -} - -func (cn *Conn) write(ctx context.Context, wb *writeBuffer) error { - cn.setWriteDeadline(ctx, -1) - - n, err := cn.netConn.Write(wb.Bytes) - wb.Reset() - - if err != nil { - if n == 0 { - Logger.Printf(ctx, "pgdriver: Conn.Write failed (zero-length): %s", err) - return driver.ErrBadConn - } - return err - } - return nil -} - -var _ driver.Conn = (*Conn)(nil) - -func (cn *Conn) Prepare(query string) (driver.Stmt, error) { - if cn.isClosed() { - return nil, driver.ErrBadConn - } - - ctx := context.TODO() - - name := fmt.Sprintf("pgdriver-%d", cn.stmtCount) - cn.stmtCount++ - - if err := writeParseDescribeSync(ctx, cn, name, query); err != nil { - return nil, err - } - - rowDesc, err := readParseDescribeSync(ctx, cn) - if err != nil { - return nil, err - } - - return newStmt(cn, name, rowDesc), nil -} - -func (cn *Conn) Close() error { - if !atomic.CompareAndSwapInt32(&cn.closed, 0, 1) { - return nil - } - return cn.netConn.Close() -} - -func (cn *Conn) isClosed() bool { - return atomic.LoadInt32(&cn.closed) == 1 -} - -func (cn *Conn) Begin() (driver.Tx, error) { - return cn.BeginTx(context.Background(), driver.TxOptions{}) -} - -var _ driver.ConnBeginTx = (*Conn)(nil) - -func (cn *Conn) BeginTx(ctx context.Context, opts driver.TxOptions) (driver.Tx, error) { - // No need to check if the conn is closed. ExecContext below handles that. 
- - if sql.IsolationLevel(opts.Isolation) != sql.LevelDefault { - return nil, errors.New("pgdriver: custom IsolationLevel is not supported") - } - if opts.ReadOnly { - return nil, errors.New("pgdriver: ReadOnly transactions are not supported") - } - - if _, err := cn.ExecContext(ctx, "BEGIN", nil); err != nil { - return nil, err - } - return tx{cn: cn}, nil -} - -var _ driver.ExecerContext = (*Conn)(nil) - -func (cn *Conn) ExecContext( - ctx context.Context, query string, args []driver.NamedValue, -) (driver.Result, error) { - if cn.isClosed() { - return nil, driver.ErrBadConn - } - res, err := cn.exec(ctx, query, args) - if err != nil { - return nil, cn.checkBadConn(err) - } - return res, nil -} - -func (cn *Conn) exec( - ctx context.Context, query string, args []driver.NamedValue, -) (driver.Result, error) { - query, err := formatQuery(query, args) - if err != nil { - return nil, err - } - if err := writeQuery(ctx, cn, query); err != nil { - return nil, err - } - return readQuery(ctx, cn) -} - -var _ driver.QueryerContext = (*Conn)(nil) - -func (cn *Conn) QueryContext( - ctx context.Context, query string, args []driver.NamedValue, -) (driver.Rows, error) { - if cn.isClosed() { - return nil, driver.ErrBadConn - } - rows, err := cn.query(ctx, query, args) - if err != nil { - return nil, cn.checkBadConn(err) - } - return rows, nil -} - -func (cn *Conn) query( - ctx context.Context, query string, args []driver.NamedValue, -) (driver.Rows, error) { - query, err := formatQuery(query, args) - if err != nil { - return nil, err - } - if err := writeQuery(ctx, cn, query); err != nil { - return nil, err - } - return readQueryData(ctx, cn) -} - -var _ driver.Pinger = (*Conn)(nil) - -func (cn *Conn) Ping(ctx context.Context) error { - _, err := cn.ExecContext(ctx, "SELECT 1", nil) - return err -} - -func (cn *Conn) setReadDeadline(ctx context.Context, timeout time.Duration) { - if timeout == -1 { - timeout = cn.cfg.ReadTimeout - } - _ = 
cn.netConn.SetReadDeadline(cn.deadline(ctx, timeout)) -} - -func (cn *Conn) setWriteDeadline(ctx context.Context, timeout time.Duration) { - if timeout == -1 { - timeout = cn.cfg.WriteTimeout - } - _ = cn.netConn.SetWriteDeadline(cn.deadline(ctx, timeout)) -} - -func (cn *Conn) deadline(ctx context.Context, timeout time.Duration) time.Time { - deadline, ok := ctx.Deadline() - if !ok { - if timeout == 0 { - return time.Time{} - } - return time.Now().Add(timeout) - } - - if timeout == 0 { - return deadline - } - if tm := time.Now().Add(timeout); tm.Before(deadline) { - return tm - } - return deadline -} - -var _ driver.Validator = (*Conn)(nil) - -func (cn *Conn) IsValid() bool { - return !cn.isClosed() -} - -var _ driver.SessionResetter = (*Conn)(nil) - -func (cn *Conn) ResetSession(ctx context.Context) error { - if cn.isClosed() { - return driver.ErrBadConn - } - if cn.cfg.ResetSessionFunc != nil { - return cn.cfg.ResetSessionFunc(ctx, cn) - } - return nil -} - -func (cn *Conn) checkBadConn(err error) error { - if isBadConn(err, false) { - // Close and return driver.ErrBadConn next time the conn is used. - _ = cn.Close() - } - // Always return the original error. 
- return err -} - -func (cn *Conn) Conn() net.Conn { return cn.netConn } - -//------------------------------------------------------------------------------ - -type rows struct { - cn *Conn - rowDesc *rowDescription - reusable bool - closed bool -} - -var _ driver.Rows = (*rows)(nil) - -func newRows(cn *Conn, rowDesc *rowDescription, reusable bool) *rows { - return &rows{ - cn: cn, - rowDesc: rowDesc, - reusable: reusable, - } -} - -func (r *rows) Columns() []string { - if r.closed || r.rowDesc == nil { - return nil - } - return r.rowDesc.names -} - -func (r *rows) Close() error { - if r.closed { - return nil - } - defer r.close() - - for { - switch err := r.Next(nil); err { - case nil, io.EOF: - return nil - default: // unexpected error - _ = r.cn.Close() - return err - } - } -} - -func (r *rows) close() { - r.closed = true - - if r.rowDesc != nil { - if r.reusable { - rowDescPool.Put(r.rowDesc) - } - r.rowDesc = nil - } -} - -func (r *rows) Next(dest []driver.Value) error { - if r.closed { - return io.EOF - } - - eof, err := r.next(dest) - if err == io.EOF { - return io.ErrUnexpectedEOF - } else if err != nil { - return err - } - if eof { - return io.EOF - } - return nil -} - -func (r *rows) next(dest []driver.Value) (eof bool, _ error) { - rd := r.cn.reader(context.TODO(), -1) - var firstErr error - for { - c, msgLen, err := readMessageType(rd) - if err != nil { - return false, err - } - - switch c { - case dataRowMsg: - return false, r.readDataRow(rd, dest) - case commandCompleteMsg: - if err := rd.Discard(msgLen); err != nil { - return false, err - } - case readyForQueryMsg: - r.close() - - if err := rd.Discard(msgLen); err != nil { - return false, err - } - - if firstErr != nil { - return false, firstErr - } - return true, nil - case parameterStatusMsg, noticeResponseMsg: - if err := rd.Discard(msgLen); err != nil { - return false, err - } - case errorResponseMsg: - e, err := readError(rd) - if err != nil { - return false, err - } - if firstErr == nil { - 
firstErr = e - } - default: - return false, fmt.Errorf("pgdriver: Next: unexpected message %q", c) - } - } -} - -func (r *rows) readDataRow(rd *reader, dest []driver.Value) error { - numCol, err := readInt16(rd) - if err != nil { - return err - } - - if len(dest) != int(numCol) { - return fmt.Errorf("pgdriver: query returned %d columns, but Scan dest has %d items", - numCol, len(dest)) - } - - for colIdx := int16(0); colIdx < numCol; colIdx++ { - dataLen, err := readInt32(rd) - if err != nil { - return err - } - - value, err := readColumnValue(rd, r.rowDesc.types[colIdx], int(dataLen)) - if err != nil { - return err - } - - if dest != nil { - dest[colIdx] = value - } - } - - return nil -} - -//------------------------------------------------------------------------------ - -func parseResult(b []byte) (driver.RowsAffected, error) { - i := bytes.LastIndexByte(b, ' ') - if i == -1 { - return 0, nil - } - - b = b[i+1 : len(b)-1] - affected, err := strconv.ParseUint(bytesToString(b), 10, 64) - if err != nil { - return 0, nil - } - - return driver.RowsAffected(affected), nil -} - -//------------------------------------------------------------------------------ - -type tx struct { - cn *Conn -} - -var _ driver.Tx = (*tx)(nil) - -func (tx tx) Commit() error { - _, err := tx.cn.ExecContext(context.Background(), "COMMIT", nil) - return err -} - -func (tx tx) Rollback() error { - _, err := tx.cn.ExecContext(context.Background(), "ROLLBACK", nil) - return err -} - -//------------------------------------------------------------------------------ - -type stmt struct { - cn *Conn - name string - rowDesc *rowDescription -} - -var ( - _ driver.Stmt = (*stmt)(nil) - _ driver.StmtExecContext = (*stmt)(nil) - _ driver.StmtQueryContext = (*stmt)(nil) -) - -func newStmt(cn *Conn, name string, rowDesc *rowDescription) *stmt { - return &stmt{ - cn: cn, - name: name, - rowDesc: rowDesc, - } -} - -func (stmt *stmt) Close() error { - if stmt.rowDesc != nil { - rowDescPool.Put(stmt.rowDesc) - 
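`parseResult` above reads the affected-row count out of a PostgreSQL CommandComplete tag, which looks like `"INSERT 0 5\x00"`: the count is the last space-separated token, and the final byte is a NUL terminator that must be trimmed. A sketch (hypothetical name `rowsAffected`; the real function returns `driver.RowsAffected`):

```go
package main

import (
	"bytes"
	"fmt"
	"strconv"
)

// rowsAffected extracts the affected-row count from a NUL-terminated
// CommandComplete tag. Tags without a space, like "BEGIN\x00", carry no
// count and yield 0 — matching parseResult's lenient behavior.
func rowsAffected(tag []byte) uint64 {
	i := bytes.LastIndexByte(tag, ' ')
	if i == -1 {
		return 0
	}
	n, err := strconv.ParseUint(string(tag[i+1:len(tag)-1]), 10, 64)
	if err != nil {
		return 0
	}
	return n
}

func main() {
	fmt.Println(rowsAffected([]byte("INSERT 0 5\x00"))) // 5
	fmt.Println(rowsAffected([]byte("BEGIN\x00")))      // 0
}
```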
stmt.rowDesc = nil - } - - ctx := context.TODO() - if err := writeCloseStmt(ctx, stmt.cn, stmt.name); err != nil { - return err - } - if err := readCloseStmtComplete(ctx, stmt.cn); err != nil { - return err - } - return nil -} - -func (stmt *stmt) NumInput() int { - if stmt.rowDesc == nil { - return -1 - } - return int(stmt.rowDesc.numInput) -} - -func (stmt *stmt) Exec(args []driver.Value) (driver.Result, error) { - panic("not implemented") -} - -func (stmt *stmt) ExecContext(ctx context.Context, args []driver.NamedValue) (driver.Result, error) { - if err := writeBindExecute(ctx, stmt.cn, stmt.name, args); err != nil { - return nil, err - } - return readExtQuery(ctx, stmt.cn) -} - -func (stmt *stmt) Query(args []driver.Value) (driver.Rows, error) { - panic("not implemented") -} - -func (stmt *stmt) QueryContext(ctx context.Context, args []driver.NamedValue) (driver.Rows, error) { - if err := writeBindExecute(ctx, stmt.cn, stmt.name, args); err != nil { - return nil, err - } - return readExtQueryData(ctx, stmt.cn, stmt.rowDesc) -} diff --git a/vendor/github.com/uptrace/bun/driver/pgdriver/error.go b/vendor/github.com/uptrace/bun/driver/pgdriver/error.go deleted file mode 100644 index 5f3fa1f2..00000000 --- a/vendor/github.com/uptrace/bun/driver/pgdriver/error.go +++ /dev/null @@ -1,75 +0,0 @@ -package pgdriver - -import ( - "database/sql/driver" - "fmt" - "net" -) - -// Error represents an error returned by PostgreSQL server -// using PostgreSQL ErrorResponse protocol. -// -// https://www.postgresql.org/docs/current/static/protocol-message-formats.html -type Error struct { - m map[byte]string -} - -// Field returns a string value associated with an error field. -// -// https://www.postgresql.org/docs/current/static/protocol-error-fields.html -func (err Error) Field(k byte) string { - return err.m[k] -} - -// IntegrityViolation reports whether the error is a part of -// Integrity Constraint Violation class of errors. 
-// -// https://www.postgresql.org/docs/current/static/errcodes-appendix.html -func (err Error) IntegrityViolation() bool { - switch err.Field('C') { - case "23000", "23001", "23502", "23503", "23505", "23514", "23P01": - return true - default: - return false - } -} - -// StatementTimeout reports whether the error is a statement timeout error. -func (err Error) StatementTimeout() bool { - return err.Field('C') == "57014" -} - -func (err Error) Error() string { - return fmt.Sprintf("%s: %s (SQLSTATE=%s)", - err.Field('S'), err.Field('M'), err.Field('C')) -} - -func isBadConn(err error, allowTimeout bool) bool { - switch err { - case nil: - return false - case driver.ErrBadConn: - return true - } - - if err, ok := err.(Error); ok { - switch err.Field('V') { - case "FATAL", "PANIC": - return true - } - switch err.Field('C') { - case "25P02", // current transaction is aborted - "57014": // canceling statement due to user request - return true - } - return false - } - - if allowTimeout { - if err, ok := err.(net.Error); ok && err.Timeout() { - return !err.Temporary() - } - } - - return true -} diff --git a/vendor/github.com/uptrace/bun/driver/pgdriver/format.go b/vendor/github.com/uptrace/bun/driver/pgdriver/format.go deleted file mode 100644 index 13157ade..00000000 --- a/vendor/github.com/uptrace/bun/driver/pgdriver/format.go +++ /dev/null @@ -1,199 +0,0 @@ -package pgdriver - -import ( - "database/sql/driver" - "encoding/hex" - "fmt" - "math" - "strconv" - "time" - "unicode/utf8" -) - -func formatQueryArgs(query string, args []interface{}) (string, error) { - namedArgs := make([]driver.NamedValue, len(args)) - for i, arg := range args { - namedArgs[i] = driver.NamedValue{Value: arg} - } - return formatQuery(query, namedArgs) -} - -func formatQuery(query string, args []driver.NamedValue) (string, error) { - if len(args) == 0 { - return query, nil - } - - dst := make([]byte, 0, 2*len(query)) - - p := newParser(query) - for p.Valid() { - switch c := p.Next(); c { - case 
'$': - if i, ok := p.Number(); ok { - if i < 1 { - return "", fmt.Errorf("pgdriver: got $%d, but the minimal arg index is 1", i) - } - if i > len(args) { - return "", fmt.Errorf("pgdriver: got %d args, wanted %d", len(args), i) - } - - var err error - dst, err = appendArg(dst, args[i-1].Value) - if err != nil { - return "", err - } - } else { - dst = append(dst, '$') - } - case '\'': - if b, ok := p.QuotedString(); ok { - dst = append(dst, b...) - } else { - dst = append(dst, '\'') - } - default: - dst = append(dst, c) - } - } - - return bytesToString(dst), nil -} - -func appendArg(b []byte, v interface{}) ([]byte, error) { - switch v := v.(type) { - case nil: - return append(b, "NULL"...), nil - case int64: - return strconv.AppendInt(b, v, 10), nil - case float64: - switch { - case math.IsNaN(v): - return append(b, "'NaN'"...), nil - case math.IsInf(v, 1): - return append(b, "'Infinity'"...), nil - case math.IsInf(v, -1): - return append(b, "'-Infinity'"...), nil - default: - return strconv.AppendFloat(b, v, 'f', -1, 64), nil - } - case bool: - if v { - return append(b, "TRUE"...), nil - } - return append(b, "FALSE"...), nil - case []byte: - if v == nil { - return append(b, "NULL"...), nil - } - - b = append(b, `'\x`...) - - s := len(b) - b = append(b, make([]byte, hex.EncodedLen(len(v)))...) - hex.Encode(b[s:], v) - - b = append(b, "'"...) - - return b, nil - case string: - b = append(b, '\'') - for _, r := range v { - if r == '\000' { - continue - } - - if r == '\'' { - b = append(b, '\'', '\'') - continue - } - - if r < utf8.RuneSelf { - b = append(b, byte(r)) - continue - } - l := len(b) - if cap(b)-l < utf8.UTFMax { - b = append(b, make([]byte, utf8.UTFMax)...) 
- } - n := utf8.EncodeRune(b[l:l+utf8.UTFMax], r) - b = b[:l+n] - } - b = append(b, '\'') - return b, nil - case time.Time: - if v.IsZero() { - return append(b, "NULL"...), nil - } - return v.UTC().AppendFormat(b, "'2006-01-02 15:04:05.999999-07:00'"), nil - default: - return nil, fmt.Errorf("pgdriver: unexpected arg: %T", v) - } -} - -type parser struct { - b []byte - i int -} - -func newParser(s string) *parser { - return &parser{ - b: stringToBytes(s), - } -} - -func (p *parser) Valid() bool { - return p.i < len(p.b) -} - -func (p *parser) Next() byte { - c := p.b[p.i] - p.i++ - return c -} - -func (p *parser) Number() (int, bool) { - start := p.i - end := len(p.b) - - for i := p.i; i < len(p.b); i++ { - c := p.b[i] - if !isNum(c) { - end = i - break - } - } - - p.i = end - b := p.b[start:end] - - n, err := strconv.Atoi(bytesToString(b)) - if err != nil { - return 0, false - } - - return n, true -} - -func (p *parser) QuotedString() ([]byte, bool) { - start := p.i - 1 - end := len(p.b) - - var c byte - for i := p.i; i < len(p.b); i++ { - next := p.b[i] - if c == '\'' && next != '\'' { - end = i - break - } - c = next - } - - p.i = end - b := p.b[start:end] - - return b, true -} - -func isNum(c byte) bool { - return c >= '0' && c <= '9' -} diff --git a/vendor/github.com/uptrace/bun/driver/pgdriver/listener.go b/vendor/github.com/uptrace/bun/driver/pgdriver/listener.go deleted file mode 100644 index 2a783e55..00000000 --- a/vendor/github.com/uptrace/bun/driver/pgdriver/listener.go +++ /dev/null @@ -1,380 +0,0 @@ -package pgdriver - -import ( - "context" - "errors" - "strconv" - "sync" - "time" - - "github.com/uptrace/bun" -) - -const pingChannel = "bun:ping" - -var ( - errListenerClosed = errors.New("bun: listener is closed") - errPingTimeout = errors.New("bun: ping timeout") -) - -// Notify sends a notification on the channel using `NOTIFY` command. 
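The string branch of `appendArg` above quotes values for interpolation: wrap in single quotes, double any embedded single quote, and drop NUL characters (which PostgreSQL text values cannot contain). A sketch of just that escaping rule (hypothetical name `quotePGString`; the original appends to a shared byte buffer instead of building a new string):

```go
package main

import (
	"fmt"
	"strings"
)

// quotePGString escapes a string the way appendArg's string case does:
// surrounding single quotes, '' for embedded quotes, NULs stripped.
func quotePGString(s string) string {
	var b strings.Builder
	b.WriteByte('\'')
	for _, r := range s {
		switch r {
		case '\000':
			continue // strip NULs
		case '\'':
			b.WriteString("''") // escape by doubling
		default:
			b.WriteRune(r)
		}
	}
	b.WriteByte('\'')
	return b.String()
}

func main() {
	fmt.Println(quotePGString("O'Reilly")) // 'O''Reilly'
}
```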
-func Notify(ctx context.Context, db *bun.DB, channel, payload string) error { - _, err := db.ExecContext(ctx, "NOTIFY ?, ?", bun.Ident(channel), payload) - return err -} - -type Listener struct { - db *bun.DB - driver *Connector - - channels []string - - mu sync.Mutex - cn *Conn - closed bool - exit chan struct{} -} - -func NewListener(db *bun.DB) *Listener { - return &Listener{ - db: db, - driver: db.Driver().(Driver).connector, - exit: make(chan struct{}), - } -} - -// Close closes the listener, releasing any open resources. -func (ln *Listener) Close() error { - return ln.withLock(func() error { - if ln.closed { - return errListenerClosed - } - - ln.closed = true - close(ln.exit) - - return ln.closeConn(errListenerClosed) - }) -} - -func (ln *Listener) withLock(fn func() error) error { - ln.mu.Lock() - defer ln.mu.Unlock() - return fn() -} - -func (ln *Listener) conn(ctx context.Context) (*Conn, error) { - if ln.closed { - return nil, errListenerClosed - } - if ln.cn != nil { - return ln.cn, nil - } - - cn, err := ln._conn(ctx) - if err != nil { - return nil, err - } - - ln.cn = cn - return cn, nil -} - -func (ln *Listener) _conn(ctx context.Context) (*Conn, error) { - driverConn, err := ln.driver.Connect(ctx) - if err != nil { - return nil, err - } - cn := driverConn.(*Conn) - - if len(ln.channels) > 0 { - err := ln.listen(ctx, cn, ln.channels...) 
- if err != nil { - _ = cn.Close() - return nil, err - } - } - - return cn, nil -} - -func (ln *Listener) checkConn(ctx context.Context, cn *Conn, err error, allowTimeout bool) { - _ = ln.withLock(func() error { - if ln.closed || ln.cn != cn { - return nil - } - if isBadConn(err, allowTimeout) { - ln.reconnect(ctx, err) - } - return nil - }) -} - -func (ln *Listener) reconnect(ctx context.Context, reason error) { - if ln.cn != nil { - Logger.Printf(ctx, "bun: discarding bad listener connection: %s", reason) - _ = ln.closeConn(reason) - } - _, _ = ln.conn(ctx) -} - -func (ln *Listener) closeConn(reason error) error { - if ln.cn == nil { - return nil - } - err := ln.cn.Close() - ln.cn = nil - return err -} - -// Listen starts listening for notifications on channels. -func (ln *Listener) Listen(ctx context.Context, channels ...string) error { - var cn *Conn - - if err := ln.withLock(func() error { - ln.channels = appendIfNotExists(ln.channels, channels...) - - var err error - cn, err = ln.conn(ctx) - return err - }); err != nil { - return err - } - - if err := ln.listen(ctx, cn, channels...); err != nil { - ln.checkConn(ctx, cn, err, false) - return err - } - return nil -} - -func (ln *Listener) listen(ctx context.Context, cn *Conn, channels ...string) error { - for _, channel := range channels { - if err := writeQuery(ctx, cn, "LISTEN "+strconv.Quote(channel)); err != nil { - return err - } - } - return nil -} - -// Unlisten stops listening for notifications on channels. -func (ln *Listener) Unlisten(ctx context.Context, channels ...string) error { - var cn *Conn - - if err := ln.withLock(func() error { - ln.channels = removeIfExists(ln.channels, channels...) 
- - var err error - cn, err = ln.conn(ctx) - return err - }); err != nil { - return err - } - - if err := ln.unlisten(ctx, cn, channels...); err != nil { - ln.checkConn(ctx, cn, err, false) - return err - } - return nil -} - -func (ln *Listener) unlisten(ctx context.Context, cn *Conn, channels ...string) error { - for _, channel := range channels { - if err := writeQuery(ctx, cn, "UNLISTEN "+strconv.Quote(channel)); err != nil { - return err - } - } - return nil -} - -// Receive indefinitely waits for a notification. This is low-level API -// and in most cases Channel should be used instead. -func (ln *Listener) Receive(ctx context.Context) (channel string, payload string, err error) { - return ln.ReceiveTimeout(ctx, 0) -} - -// ReceiveTimeout waits for a notification until timeout is reached. -// This is low-level API and in most cases Channel should be used instead. -func (ln *Listener) ReceiveTimeout( - ctx context.Context, timeout time.Duration, -) (channel, payload string, err error) { - var cn *Conn - - if err := ln.withLock(func() error { - var err error - cn, err = ln.conn(ctx) - return err - }); err != nil { - return "", "", err - } - - rd := cn.reader(ctx, timeout) - channel, payload, err = readNotification(ctx, rd) - if err != nil { - ln.checkConn(ctx, cn, err, timeout > 0) - return "", "", err - } - - return channel, payload, nil -} - -// Channel returns a channel for concurrently receiving notifications. -// It periodically sends Ping notification to test connection health. -// -// The channel is closed with Listener. Receive* APIs can not be used -// after channel is created. -func (ln *Listener) Channel(opts ...ChannelOption) <-chan Notification { - return newChannel(ln, opts).ch -} - -//------------------------------------------------------------------------------ - -// Notification received with LISTEN command. 
-type Notification struct { - Channel string - Payload string -} - -type ChannelOption func(c *channel) - -func WithChannelSize(size int) ChannelOption { - return func(c *channel) { - c.size = size - } -} - -type channel struct { - ctx context.Context - ln *Listener - - size int - pingTimeout time.Duration - - ch chan Notification - pingCh chan struct{} -} - -func newChannel(ln *Listener, opts []ChannelOption) *channel { - c := &channel{ - ctx: context.TODO(), - ln: ln, - - size: 1000, - pingTimeout: 5 * time.Second, - } - - for _, opt := range opts { - opt(c) - } - - c.ch = make(chan Notification, c.size) - c.pingCh = make(chan struct{}, 1) - _ = c.ln.Listen(c.ctx, pingChannel) - go c.startReceive() - go c.startPing() - - return c -} - -func (c *channel) startReceive() { - var errCount int - for { - channel, payload, err := c.ln.Receive(c.ctx) - if err != nil { - if err == errListenerClosed { - close(c.ch) - return - } - - if errCount > 0 { - time.Sleep(500 * time.Millisecond) - } - errCount++ - - continue - } - - errCount = 0 - - // Any notification is as good as a ping. 
- select { - case c.pingCh <- struct{}{}: - default: - } - - switch channel { - case pingChannel: - // ignore - default: - select { - case c.ch <- Notification{channel, payload}: - default: - Logger.Printf(c.ctx, "pgdriver: Listener buffer is full (message is dropped)") - } - } - } -} - -func (c *channel) startPing() { - timer := time.NewTimer(time.Minute) - timer.Stop() - - healthy := true - for { - timer.Reset(c.pingTimeout) - select { - case <-c.pingCh: - healthy = true - if !timer.Stop() { - <-timer.C - } - case <-timer.C: - pingErr := c.ping(c.ctx) - if healthy { - healthy = false - } else { - if pingErr == nil { - pingErr = errPingTimeout - } - _ = c.ln.withLock(func() error { - c.ln.reconnect(c.ctx, pingErr) - return nil - }) - } - case <-c.ln.exit: - return - } - } -} - -func (c *channel) ping(ctx context.Context) error { - _, err := c.ln.db.ExecContext(ctx, "NOTIFY "+strconv.Quote(pingChannel)) - return err -} - -func appendIfNotExists(ss []string, es ...string) []string { -loop: - for _, e := range es { - for _, s := range ss { - if s == e { - continue loop - } - } - ss = append(ss, e) - } - return ss -} - -func removeIfExists(ss []string, es ...string) []string { - for _, e := range es { - for i, s := range ss { - if s == e { - last := len(ss) - 1 - ss[i] = ss[last] - ss = ss[:last] - break - } - } - } - return ss -} diff --git a/vendor/github.com/uptrace/bun/driver/pgdriver/proto.go b/vendor/github.com/uptrace/bun/driver/pgdriver/proto.go deleted file mode 100644 index ab4be022..00000000 --- a/vendor/github.com/uptrace/bun/driver/pgdriver/proto.go +++ /dev/null @@ -1,1100 +0,0 @@ -package pgdriver - -import ( - "bufio" - "context" - "crypto/md5" - "crypto/tls" - "database/sql" - "database/sql/driver" - "encoding/binary" - "encoding/hex" - "errors" - "fmt" - "io" - "math" - "strconv" - "sync" - "time" - "unicode/utf8" - - "mellium.im/sasl" -) - -// https://www.postgresql.org/docs/current/protocol-message-formats.html -//nolint:deadcode,varcheck,unused 
-const ( - commandCompleteMsg = 'C' - errorResponseMsg = 'E' - noticeResponseMsg = 'N' - parameterStatusMsg = 'S' - authenticationOKMsg = 'R' - backendKeyDataMsg = 'K' - noDataMsg = 'n' - passwordMessageMsg = 'p' - terminateMsg = 'X' - - saslInitialResponseMsg = 'p' - authenticationSASLContinueMsg = 'R' - saslResponseMsg = 'p' - authenticationSASLFinalMsg = 'R' - - authenticationOK = 0 - authenticationCleartextPassword = 3 - authenticationMD5Password = 5 - authenticationSASL = 10 - - notificationResponseMsg = 'A' - - describeMsg = 'D' - parameterDescriptionMsg = 't' - - queryMsg = 'Q' - readyForQueryMsg = 'Z' - emptyQueryResponseMsg = 'I' - rowDescriptionMsg = 'T' - dataRowMsg = 'D' - - parseMsg = 'P' - parseCompleteMsg = '1' - - bindMsg = 'B' - bindCompleteMsg = '2' - - executeMsg = 'E' - - syncMsg = 'S' - flushMsg = 'H' - - closeMsg = 'C' - closeCompleteMsg = '3' - - copyInResponseMsg = 'G' - copyOutResponseMsg = 'H' - copyDataMsg = 'd' - copyDoneMsg = 'c' -) - -var errEmptyQuery = errors.New("pgdriver: query is empty") - -type reader struct { - *bufio.Reader - buf []byte -} - -func newReader(r io.Reader) *reader { - return &reader{ - Reader: bufio.NewReader(r), - buf: make([]byte, 128), - } -} - -func (r *reader) ReadTemp(n int) ([]byte, error) { - if n <= len(r.buf) { - b := r.buf[:n] - _, err := io.ReadFull(r.Reader, b) - return b, err - } - - b := make([]byte, n) - _, err := io.ReadFull(r.Reader, b) - return b, err -} - -func (r *reader) Discard(n int) error { - _, err := r.ReadTemp(n) - return err -} - -func enableSSL(ctx context.Context, cn *Conn, tlsConf *tls.Config) error { - if err := writeSSLMsg(ctx, cn); err != nil { - return err - } - - rd := cn.reader(ctx, -1) - - c, err := rd.ReadByte() - if err != nil { - return err - } - if c != 'S' { - return errors.New("pgdriver: SSL is not enabled on the server") - } - - tlsCN := tls.Client(cn.netConn, tlsConf) - if err := tlsCN.HandshakeContext(ctx); err != nil { - return fmt.Errorf("pgdriver: TLS handshake 
failed: %w", err) - } - cn.netConn = tlsCN - rd.Reset(cn.netConn) - - return nil -} - -func writeSSLMsg(ctx context.Context, cn *Conn) error { - wb := getWriteBuffer() - defer putWriteBuffer(wb) - - wb.StartMessage(0) - wb.WriteInt32(80877103) - wb.FinishMessage() - - return cn.write(ctx, wb) -} - -//------------------------------------------------------------------------------ - -func startup(ctx context.Context, cn *Conn) error { - if err := writeStartup(ctx, cn); err != nil { - return err - } - - rd := cn.reader(ctx, -1) - - for { - c, msgLen, err := readMessageType(rd) - if err != nil { - return err - } - - switch c { - case backendKeyDataMsg: - processID, err := readInt32(rd) - if err != nil { - return err - } - secretKey, err := readInt32(rd) - if err != nil { - return err - } - cn.processID = processID - cn.secretKey = secretKey - case authenticationOKMsg: - if err := auth(ctx, cn, rd); err != nil { - return err - } - case readyForQueryMsg: - return rd.Discard(msgLen) - case parameterStatusMsg, noticeResponseMsg: - if err := rd.Discard(msgLen); err != nil { - return err - } - case errorResponseMsg: - e, err := readError(rd) - if err != nil { - return err - } - return e - default: - return fmt.Errorf("pgdriver: unexpected startup message: %q", c) - } - } -} - -func writeStartup(ctx context.Context, cn *Conn) error { - wb := getWriteBuffer() - defer putWriteBuffer(wb) - - wb.StartMessage(0) - wb.WriteInt32(196608) - wb.WriteString("user") - wb.WriteString(cn.cfg.User) - wb.WriteString("database") - wb.WriteString(cn.cfg.Database) - if cn.cfg.AppName != "" { - wb.WriteString("application_name") - wb.WriteString(cn.cfg.AppName) - } - wb.WriteString("") - wb.FinishMessage() - - return cn.write(ctx, wb) -} - -//------------------------------------------------------------------------------ - -func auth(ctx context.Context, cn *Conn, rd *reader) error { - num, err := readInt32(rd) - if err != nil { - return err - } - - switch num { - case authenticationOK: - return 
nil - case authenticationCleartextPassword: - return authCleartext(ctx, cn, rd) - case authenticationMD5Password: - return authMD5(ctx, cn, rd) - case authenticationSASL: - if err := authSASL(ctx, cn, rd); err != nil { - return fmt.Errorf("pgdriver: SASL: %w", err) - } - return nil - default: - return fmt.Errorf("pgdriver: unknown authentication message: %q", num) - } -} - -func authCleartext(ctx context.Context, cn *Conn, rd *reader) error { - if err := writePassword(ctx, cn, cn.cfg.Password); err != nil { - return err - } - return readAuthOK(cn, rd) -} - -func readAuthOK(cn *Conn, rd *reader) error { - c, _, err := readMessageType(rd) - if err != nil { - return err - } - - switch c { - case authenticationOKMsg: - num, err := readInt32(rd) - if err != nil { - return err - } - if num != 0 { - return fmt.Errorf("pgdriver: unexpected authentication code: %q", num) - } - return nil - case errorResponseMsg: - e, err := readError(rd) - if err != nil { - return err - } - return e - default: - return fmt.Errorf("pgdriver: unknown password message: %q", c) - } -} - -//------------------------------------------------------------------------------ - -func authMD5(ctx context.Context, cn *Conn, rd *reader) error { - b, err := rd.ReadTemp(4) - if err != nil { - return err - } - - secret := "md5" + md5s(md5s(cn.cfg.Password+cn.cfg.User)+string(b)) - if err := writePassword(ctx, cn, secret); err != nil { - return err - } - - return readAuthOK(cn, rd) -} - -func writePassword(ctx context.Context, cn *Conn, password string) error { - wb := getWriteBuffer() - defer putWriteBuffer(wb) - - wb.StartMessage(passwordMessageMsg) - wb.WriteString(password) - wb.FinishMessage() - - return cn.write(ctx, wb) -} - -func md5s(s string) string { - h := md5.Sum([]byte(s)) - return hex.EncodeToString(h[:]) -} - -//------------------------------------------------------------------------------ - -func authSASL(ctx context.Context, cn *Conn, rd *reader) error { - var saslMech sasl.Mechanism - -loop: 
- for { - s, err := readString(rd) - if err != nil { - return err - } - - switch s { - case "": - break loop - case sasl.ScramSha256.Name: - saslMech = sasl.ScramSha256 - case sasl.ScramSha256Plus.Name: - // ignore - default: - return fmt.Errorf("got %q, wanted %q", s, sasl.ScramSha256.Name) - } - } - - creds := sasl.Credentials(func() (Username, Password, Identity []byte) { - return []byte(cn.cfg.User), []byte(cn.cfg.Password), nil - }) - client := sasl.NewClient(saslMech, creds) - - _, resp, err := client.Step(nil) - if err != nil { - return fmt.Errorf("client.Step 1 failed: %w", err) - } - - if err := saslWriteInitialResponse(ctx, cn, saslMech, resp); err != nil { - return err - } - - c, msgLen, err := readMessageType(rd) - if err != nil { - return err - } - - switch c { - case authenticationSASLContinueMsg: - c11, err := readInt32(rd) - if err != nil { - return err - } - if c11 != 11 { - return fmt.Errorf("got %q, wanted %q", c, 11) - } - - b, err := rd.ReadTemp(msgLen - 4) - if err != nil { - return err - } - - _, resp, err = client.Step(b) - if err != nil { - return fmt.Errorf("client.Step 2 failed: %w", err) - } - - if err := saslWriteResponse(ctx, cn, resp); err != nil { - return err - } - - resp, err = saslReadAuthFinal(cn, rd) - if err != nil { - return err - } - - if _, _, err := client.Step(resp); err != nil { - return fmt.Errorf("client.Step 3 failed: %w", err) - } - - if client.State() != sasl.ValidServerResponse { - return fmt.Errorf("got state=%q, wanted %q", client.State(), sasl.ValidServerResponse) - } - - return nil - case errorResponseMsg: - e, err := readError(rd) - if err != nil { - return err - } - return e - default: - return fmt.Errorf("got %q, wanted %q", c, authenticationSASLContinueMsg) - } -} - -func saslWriteInitialResponse( - ctx context.Context, cn *Conn, saslMech sasl.Mechanism, resp []byte, -) error { - wb := getWriteBuffer() - defer putWriteBuffer(wb) - - wb.StartMessage(saslInitialResponseMsg) - wb.WriteString(saslMech.Name) - 
wb.WriteInt32(int32(len(resp))) - if _, err := wb.Write(resp); err != nil { - return err - } - wb.FinishMessage() - - return cn.write(ctx, wb) -} - -func saslWriteResponse(ctx context.Context, cn *Conn, resp []byte) error { - wb := getWriteBuffer() - defer putWriteBuffer(wb) - - wb.StartMessage(saslResponseMsg) - if _, err := wb.Write(resp); err != nil { - return err - } - wb.FinishMessage() - - return cn.write(ctx, wb) -} - -func saslReadAuthFinal(cn *Conn, rd *reader) ([]byte, error) { - c, msgLen, err := readMessageType(rd) - if err != nil { - return nil, err - } - - switch c { - case authenticationSASLFinalMsg: - c12, err := readInt32(rd) - if err != nil { - return nil, err - } - if c12 != 12 { - return nil, fmt.Errorf("got %q, wanted %q", c, 12) - } - - resp := make([]byte, msgLen-4) - if _, err := io.ReadFull(rd, resp); err != nil { - return nil, err - } - - if err := readAuthOK(cn, rd); err != nil { - return nil, err - } - - return resp, nil - case errorResponseMsg: - e, err := readError(rd) - if err != nil { - return nil, err - } - return nil, e - default: - return nil, fmt.Errorf("got %q, wanted %q", c, authenticationSASLFinalMsg) - } -} - -//------------------------------------------------------------------------------ - -func writeQuery(ctx context.Context, cn *Conn, query string) error { - wb := getWriteBuffer() - defer putWriteBuffer(wb) - - wb.StartMessage(queryMsg) - wb.WriteString(query) - wb.FinishMessage() - - return cn.write(ctx, wb) -} - -func readQuery(ctx context.Context, cn *Conn) (sql.Result, error) { - rd := cn.reader(ctx, -1) - - var res driver.Result - var firstErr error - for { - c, msgLen, err := readMessageType(rd) - if err != nil { - return nil, err - } - - switch c { - case errorResponseMsg: - e, err := readError(rd) - if err != nil { - return nil, err - } - if firstErr == nil { - firstErr = e - } - case emptyQueryResponseMsg: - if firstErr == nil { - firstErr = errEmptyQuery - } - case commandCompleteMsg: - tmp, err := 
rd.ReadTemp(msgLen) - if err != nil { - firstErr = err - break - } - - r, err := parseResult(tmp) - if err != nil { - firstErr = err - } else { - res = r - } - case describeMsg, - rowDescriptionMsg, - noticeResponseMsg, - parameterStatusMsg: - if err := rd.Discard(msgLen); err != nil { - return nil, err - } - case readyForQueryMsg: - if err := rd.Discard(msgLen); err != nil { - return nil, err - } - return res, firstErr - default: - return nil, fmt.Errorf("pgdriver: Exec: unexpected message %q", c) - } - } -} - -func readQueryData(ctx context.Context, cn *Conn) (*rows, error) { - rd := cn.reader(ctx, -1) - var firstErr error - for { - c, msgLen, err := readMessageType(rd) - if err != nil { - return nil, err - } - - switch c { - case rowDescriptionMsg: - rowDesc, err := readRowDescription(rd) - if err != nil { - return nil, err - } - return newRows(cn, rowDesc, true), nil - case commandCompleteMsg: - if err := rd.Discard(msgLen); err != nil { - return nil, err - } - case readyForQueryMsg: - if err := rd.Discard(msgLen); err != nil { - return nil, err - } - if firstErr != nil { - return nil, firstErr - } - return &rows{closed: true}, nil - case errorResponseMsg: - e, err := readError(rd) - if err != nil { - return nil, err - } - if firstErr == nil { - firstErr = e - } - case emptyQueryResponseMsg: - if firstErr == nil { - firstErr = errEmptyQuery - } - case noticeResponseMsg, parameterStatusMsg: - if err := rd.Discard(msgLen); err != nil { - return nil, err - } - default: - return nil, fmt.Errorf("pgdriver: newRows: unexpected message %q", c) - } - } -} - -//------------------------------------------------------------------------------ - -var rowDescPool sync.Pool - -type rowDescription struct { - buf []byte - names []string - types []int32 - numInput int16 -} - -func newRowDescription(numCol int) *rowDescription { - if numCol < 16 { - numCol = 16 - } - return &rowDescription{ - buf: make([]byte, 0, 16*numCol), - names: make([]string, 0, numCol), - types: 
make([]int32, 0, numCol), - numInput: -1, - } -} - -func (d *rowDescription) reset(numCol int) { - d.buf = make([]byte, 0, 16*numCol) - d.names = d.names[:0] - d.types = d.types[:0] - d.numInput = -1 -} - -func (d *rowDescription) addName(name []byte) { - if len(d.buf)+len(name) > cap(d.buf) { - d.buf = make([]byte, 0, cap(d.buf)) - } - - i := len(d.buf) - d.buf = append(d.buf, name...) - d.names = append(d.names, bytesToString(d.buf[i:])) -} - -func (d *rowDescription) addType(dataType int32) { - d.types = append(d.types, dataType) -} - -func readRowDescription(rd *reader) (*rowDescription, error) { - numCol, err := readInt16(rd) - if err != nil { - return nil, err - } - - rowDesc, ok := rowDescPool.Get().(*rowDescription) - if !ok { - rowDesc = newRowDescription(int(numCol)) - } else { - rowDesc.reset(int(numCol)) - } - - for i := 0; i < int(numCol); i++ { - name, err := rd.ReadSlice(0) - if err != nil { - return nil, err - } - rowDesc.addName(name[:len(name)-1]) - - if _, err := rd.ReadTemp(6); err != nil { - return nil, err - } - - dataType, err := readInt32(rd) - if err != nil { - return nil, err - } - rowDesc.addType(dataType) - - if _, err := rd.ReadTemp(8); err != nil { - return nil, err - } - } - - return rowDesc, nil -} - -//------------------------------------------------------------------------------ - -func readNotification(ctx context.Context, rd *reader) (channel, payload string, err error) { - for { - c, msgLen, err := readMessageType(rd) - if err != nil { - return "", "", err - } - - switch c { - case commandCompleteMsg, readyForQueryMsg, noticeResponseMsg: - if err := rd.Discard(msgLen); err != nil { - return "", "", err - } - case errorResponseMsg: - e, err := readError(rd) - if err != nil { - return "", "", err - } - return "", "", e - case notificationResponseMsg: - if err := rd.Discard(4); err != nil { - return "", "", err - } - channel, err = readString(rd) - if err != nil { - return "", "", err - } - payload, err = readString(rd) - if err != 
nil { - return "", "", err - } - return channel, payload, nil - default: - return "", "", fmt.Errorf("pgdriver: readNotification: unexpected message %q", c) - } - } -} - -//------------------------------------------------------------------------------ - -func writeParseDescribeSync(ctx context.Context, cn *Conn, name, query string) error { - wb := getWriteBuffer() - defer putWriteBuffer(wb) - - wb.StartMessage(parseMsg) - wb.WriteString(name) - wb.WriteString(query) - wb.WriteInt16(0) - wb.FinishMessage() - - wb.StartMessage(describeMsg) - wb.WriteByte('S') - wb.WriteString(name) - wb.FinishMessage() - - wb.StartMessage(syncMsg) - wb.FinishMessage() - - return cn.write(ctx, wb) -} - -func readParseDescribeSync(ctx context.Context, cn *Conn) (*rowDescription, error) { - rd := cn.reader(ctx, -1) - var numParam int16 - var rowDesc *rowDescription - var firstErr error - for { - c, msgLen, err := readMessageType(rd) - if err != nil { - return nil, err - } - - switch c { - case parseCompleteMsg: - if err := rd.Discard(msgLen); err != nil { - return nil, err - } - case rowDescriptionMsg: // response to DESCRIBE message. - rowDesc, err = readRowDescription(rd) - if err != nil { - return nil, err - } - rowDesc.numInput = numParam - case parameterDescriptionMsg: // response to DESCRIBE message. - numParam, err = readInt16(rd) - if err != nil { - return nil, err - } - - for i := 0; i < int(numParam); i++ { - if _, err := readInt32(rd); err != nil { - return nil, err - } - } - case noDataMsg: // response to DESCRIBE message. 
- if err := rd.Discard(msgLen); err != nil { - return nil, err - } - case readyForQueryMsg: - if err := rd.Discard(msgLen); err != nil { - return nil, err - } - if firstErr != nil { - return nil, firstErr - } - return rowDesc, err - case errorResponseMsg: - e, err := readError(rd) - if err != nil { - return nil, err - } - if firstErr == nil { - firstErr = e - } - case noticeResponseMsg, parameterStatusMsg: - if err := rd.Discard(msgLen); err != nil { - return nil, err - } - default: - return nil, fmt.Errorf("pgdriver: readParseDescribeSync: unexpected message %q", c) - } - } -} - -func writeBindExecute(ctx context.Context, cn *Conn, name string, args []driver.NamedValue) error { - wb := getWriteBuffer() - defer putWriteBuffer(wb) - - wb.StartMessage(bindMsg) - wb.WriteString("") - wb.WriteString(name) - wb.WriteInt16(0) - wb.WriteInt16(int16(len(args))) - for i := range args { - wb.StartParam() - bytes, err := appendStmtArg(wb.Bytes, args[i].Value) - if err != nil { - return err - } - if bytes != nil { - wb.Bytes = bytes - wb.FinishParam() - } else { - wb.FinishNullParam() - } - } - wb.WriteInt16(0) - wb.FinishMessage() - - wb.StartMessage(executeMsg) - wb.WriteString("") - wb.WriteInt32(0) - wb.FinishMessage() - - wb.StartMessage(syncMsg) - wb.FinishMessage() - - return cn.write(ctx, wb) -} - -func readExtQuery(ctx context.Context, cn *Conn) (driver.Result, error) { - rd := cn.reader(ctx, -1) - var res driver.Result - var firstErr error - for { - c, msgLen, err := readMessageType(rd) - if err != nil { - return nil, err - } - - switch c { - case bindCompleteMsg, dataRowMsg: - if err := rd.Discard(msgLen); err != nil { - return nil, err - } - case commandCompleteMsg: // response to EXECUTE message. - tmp, err := rd.ReadTemp(msgLen) - if err != nil { - return nil, err - } - - r, err := parseResult(tmp) - if err != nil { - if firstErr == nil { - firstErr = err - } - } else { - res = r - } - case readyForQueryMsg: // Response to SYNC message. 
- if err := rd.Discard(msgLen); err != nil { - return nil, err - } - if firstErr != nil { - return nil, firstErr - } - return res, nil - case errorResponseMsg: - e, err := readError(rd) - if err != nil { - return nil, err - } - if firstErr == nil { - firstErr = e - } - case emptyQueryResponseMsg: - if firstErr == nil { - firstErr = errEmptyQuery - } - case noticeResponseMsg, parameterStatusMsg: - if err := rd.Discard(msgLen); err != nil { - return nil, err - } - default: - return nil, fmt.Errorf("pgdriver: readExtQuery: unexpected message %q", c) - } - } -} - -func readExtQueryData(ctx context.Context, cn *Conn, rowDesc *rowDescription) (*rows, error) { - rd := cn.reader(ctx, -1) - var firstErr error - for { - c, msgLen, err := readMessageType(rd) - if err != nil { - return nil, err - } - - switch c { - case bindCompleteMsg: - if err := rd.Discard(msgLen); err != nil { - return nil, err - } - return newRows(cn, rowDesc, false), nil - case commandCompleteMsg: // response to EXECUTE message. - if err := rd.Discard(msgLen); err != nil { - return nil, err - } - case readyForQueryMsg: // Response to SYNC message. 
- if err := rd.Discard(msgLen); err != nil { - return nil, err - } - if firstErr != nil { - return nil, firstErr - } - return &rows{closed: true}, nil - case errorResponseMsg: - e, err := readError(rd) - if err != nil { - return nil, err - } - if firstErr == nil { - firstErr = e - } - case emptyQueryResponseMsg: - if firstErr == nil { - firstErr = errEmptyQuery - } - case noticeResponseMsg, parameterStatusMsg: - if err := rd.Discard(msgLen); err != nil { - return nil, err - } - default: - return nil, fmt.Errorf("pgdriver: readExtQueryData: unexpected message %q", c) - } - } -} - -func writeCloseStmt(ctx context.Context, cn *Conn, name string) error { - wb := getWriteBuffer() - defer putWriteBuffer(wb) - - wb.StartMessage(closeMsg) - wb.WriteByte('S') //nolint - wb.WriteString(name) - wb.FinishMessage() - - wb.StartMessage(flushMsg) - wb.FinishMessage() - - return cn.write(ctx, wb) -} - -func readCloseStmtComplete(ctx context.Context, cn *Conn) error { - rd := cn.reader(ctx, -1) - for { - c, msgLen, err := readMessageType(rd) - if err != nil { - return err - } - - switch c { - case closeCompleteMsg: - return rd.Discard(msgLen) - case errorResponseMsg: - e, err := readError(rd) - if err != nil { - return err - } - return e - case noticeResponseMsg, parameterStatusMsg: - if err := rd.Discard(msgLen); err != nil { - return err - } - default: - return fmt.Errorf("pgdriver: readCloseCompleteMsg: unexpected message %q", c) - } - } -} - -//------------------------------------------------------------------------------ - -func readMessageType(rd *reader) (byte, int, error) { - c, err := rd.ReadByte() - if err != nil { - return 0, 0, err - } - l, err := readInt32(rd) - if err != nil { - return 0, 0, err - } - return c, int(l) - 4, nil -} - -func readInt16(rd *reader) (int16, error) { - b, err := rd.ReadTemp(2) - if err != nil { - return 0, err - } - return int16(binary.BigEndian.Uint16(b)), nil -} - -func readInt32(rd *reader) (int32, error) { - b, err := rd.ReadTemp(4) - if 
err != nil { - return 0, err - } - return int32(binary.BigEndian.Uint32(b)), nil -} - -func readString(rd *reader) (string, error) { - b, err := rd.ReadSlice(0) - if err != nil { - return "", err - } - return string(b[:len(b)-1]), nil -} - -func readError(rd *reader) (error, error) { - m := make(map[byte]string) - for { - c, err := rd.ReadByte() - if err != nil { - return nil, err - } - if c == 0 { - break - } - s, err := readString(rd) - if err != nil { - return nil, err - } - m[c] = s - } - switch err := (Error{m: m}); err.Field('V') { - case "FATAL", "PANIC": - // Return this as an error and stop processing. - return nil, err - default: - // Return this as an error message and continue processing. - return err, nil - } -} - -//------------------------------------------------------------------------------ - -func appendStmtArg(b []byte, v driver.Value) ([]byte, error) { - switch v := v.(type) { - case nil: - return nil, nil - case int64: - return strconv.AppendInt(b, v, 10), nil - case float64: - switch { - case math.IsNaN(v): - return append(b, "NaN"...), nil - case math.IsInf(v, 1): - return append(b, "Infinity"...), nil - case math.IsInf(v, -1): - return append(b, "-Infinity"...), nil - default: - return strconv.AppendFloat(b, v, 'f', -1, 64), nil - } - case bool: - if v { - return append(b, "TRUE"...), nil - } - return append(b, "FALSE"...), nil - case []byte: - if v == nil { - return nil, nil - } - - b = append(b, `\x`...) - - s := len(b) - b = append(b, make([]byte, hex.EncodedLen(len(v)))...) - hex.Encode(b[s:], v) - - return b, nil - case string: - for _, r := range v { - if r == 0 { - continue - } - if r < utf8.RuneSelf { - b = append(b, byte(r)) - continue - } - l := len(b) - if cap(b)-l < utf8.UTFMax { - b = append(b, make([]byte, utf8.UTFMax)...) 
- } - n := utf8.EncodeRune(b[l:l+utf8.UTFMax], r) - b = b[:l+n] - } - return b, nil - case time.Time: - if v.IsZero() { - return nil, nil - } - return v.UTC().AppendFormat(b, "2006-01-02 15:04:05.999999-07:00"), nil - default: - return nil, fmt.Errorf("pgdriver: unexpected arg: %T", v) - } -} diff --git a/vendor/github.com/uptrace/bun/driver/pgdriver/safe.go b/vendor/github.com/uptrace/bun/driver/pgdriver/safe.go deleted file mode 100644 index fab151a7..00000000 --- a/vendor/github.com/uptrace/bun/driver/pgdriver/safe.go +++ /dev/null @@ -1,11 +0,0 @@ -// +build appengine - -package internal - -func bytesToString(b []byte) string { - return string(b) -} - -func stringToBytes(s string) []byte { - return []byte(s) -} diff --git a/vendor/github.com/uptrace/bun/driver/pgdriver/unsafe.go b/vendor/github.com/uptrace/bun/driver/pgdriver/unsafe.go deleted file mode 100644 index 6ba86810..00000000 --- a/vendor/github.com/uptrace/bun/driver/pgdriver/unsafe.go +++ /dev/null @@ -1,19 +0,0 @@ -// +build !appengine - -package pgdriver - -import "unsafe" - -func bytesToString(b []byte) string { - return *(*string)(unsafe.Pointer(&b)) -} - -//nolint:deadcode,unused -func stringToBytes(s string) []byte { - return *(*[]byte)(unsafe.Pointer( - &struct { - string - Cap int - }{s, len(s)}, - )) -} diff --git a/vendor/github.com/uptrace/bun/driver/pgdriver/write_buffer.go b/vendor/github.com/uptrace/bun/driver/pgdriver/write_buffer.go deleted file mode 100644 index cb683563..00000000 --- a/vendor/github.com/uptrace/bun/driver/pgdriver/write_buffer.go +++ /dev/null @@ -1,112 +0,0 @@ -package pgdriver - -import ( - "encoding/binary" - "io" - "sync" -) - -var wbPool = sync.Pool{ - New: func() interface{} { - return newWriteBuffer() - }, -} - -func getWriteBuffer() *writeBuffer { - wb := wbPool.Get().(*writeBuffer) - return wb -} - -func putWriteBuffer(wb *writeBuffer) { - wb.Reset() - wbPool.Put(wb) -} - -type writeBuffer struct { - Bytes []byte - - msgStart int - paramStart int -} - -func 
newWriteBuffer() *writeBuffer { - return &writeBuffer{ - Bytes: make([]byte, 0, 1024), - } -} - -func (b *writeBuffer) Reset() { - b.Bytes = b.Bytes[:0] -} - -func (b *writeBuffer) StartMessage(c byte) { - if c == 0 { - b.msgStart = len(b.Bytes) - b.Bytes = append(b.Bytes, 0, 0, 0, 0) - } else { - b.msgStart = len(b.Bytes) + 1 - b.Bytes = append(b.Bytes, c, 0, 0, 0, 0) - } -} - -func (b *writeBuffer) FinishMessage() { - binary.BigEndian.PutUint32( - b.Bytes[b.msgStart:], uint32(len(b.Bytes)-b.msgStart)) -} - -func (b *writeBuffer) Query() []byte { - return b.Bytes[b.msgStart+4 : len(b.Bytes)-1] -} - -func (b *writeBuffer) StartParam() { - b.paramStart = len(b.Bytes) - b.Bytes = append(b.Bytes, 0, 0, 0, 0) -} - -func (b *writeBuffer) FinishParam() { - binary.BigEndian.PutUint32( - b.Bytes[b.paramStart:], uint32(len(b.Bytes)-b.paramStart-4)) -} - -var nullParamLength = int32(-1) - -func (b *writeBuffer) FinishNullParam() { - binary.BigEndian.PutUint32( - b.Bytes[b.paramStart:], uint32(nullParamLength)) -} - -func (b *writeBuffer) Write(data []byte) (int, error) { - b.Bytes = append(b.Bytes, data...) - return len(data), nil -} - -func (b *writeBuffer) WriteInt16(num int16) { - b.Bytes = append(b.Bytes, 0, 0) - binary.BigEndian.PutUint16(b.Bytes[len(b.Bytes)-2:], uint16(num)) -} - -func (b *writeBuffer) WriteInt32(num int32) { - b.Bytes = append(b.Bytes, 0, 0, 0, 0) - binary.BigEndian.PutUint32(b.Bytes[len(b.Bytes)-4:], uint32(num)) -} - -func (b *writeBuffer) WriteString(s string) { - b.Bytes = append(b.Bytes, s...) - b.Bytes = append(b.Bytes, 0) -} - -func (b *writeBuffer) WriteBytes(data []byte) { - b.Bytes = append(b.Bytes, data...) 
- b.Bytes = append(b.Bytes, 0) -} - -func (b *writeBuffer) WriteByte(c byte) error { - b.Bytes = append(b.Bytes, c) - return nil -} - -func (b *writeBuffer) ReadFrom(r io.Reader) (int64, error) { - n, err := r.Read(b.Bytes[len(b.Bytes):cap(b.Bytes)]) - b.Bytes = b.Bytes[:len(b.Bytes)+n] - return int64(n), err -} diff --git a/vendor/github.com/uptrace/bun/extra/bunjson/json.go b/vendor/github.com/uptrace/bun/extra/bunjson/json.go deleted file mode 100644 index eff9d3f0..00000000 --- a/vendor/github.com/uptrace/bun/extra/bunjson/json.go +++ /dev/null @@ -1,26 +0,0 @@ -package bunjson - -import ( - "encoding/json" - "io" -) - -var _ Provider = (*StdProvider)(nil) - -type StdProvider struct{} - -func (StdProvider) Marshal(v interface{}) ([]byte, error) { - return json.Marshal(v) -} - -func (StdProvider) Unmarshal(data []byte, v interface{}) error { - return json.Unmarshal(data, v) -} - -func (StdProvider) NewEncoder(w io.Writer) Encoder { - return json.NewEncoder(w) -} - -func (StdProvider) NewDecoder(r io.Reader) Decoder { - return json.NewDecoder(r) -} diff --git a/vendor/github.com/uptrace/bun/extra/bunjson/provider.go b/vendor/github.com/uptrace/bun/extra/bunjson/provider.go deleted file mode 100644 index 7f810e12..00000000 --- a/vendor/github.com/uptrace/bun/extra/bunjson/provider.go +++ /dev/null @@ -1,43 +0,0 @@ -package bunjson - -import ( - "io" -) - -var provider Provider = StdProvider{} - -func SetProvider(p Provider) { - provider = p -} - -type Provider interface { - Marshal(v interface{}) ([]byte, error) - Unmarshal(data []byte, v interface{}) error - NewEncoder(w io.Writer) Encoder - NewDecoder(r io.Reader) Decoder -} - -type Decoder interface { - Decode(v interface{}) error - UseNumber() -} - -type Encoder interface { - Encode(v interface{}) error -} - -func Marshal(v interface{}) ([]byte, error) { - return provider.Marshal(v) -} - -func Unmarshal(data []byte, v interface{}) error { - return provider.Unmarshal(data, v) -} - -func NewEncoder(w io.Writer) 
Encoder { - return provider.NewEncoder(w) -} - -func NewDecoder(r io.Reader) Decoder { - return provider.NewDecoder(r) -} diff --git a/vendor/github.com/uptrace/bun/hook.go b/vendor/github.com/uptrace/bun/hook.go deleted file mode 100644 index 016f06a1..00000000 --- a/vendor/github.com/uptrace/bun/hook.go +++ /dev/null @@ -1,116 +0,0 @@ -package bun - -import ( - "context" - "database/sql" - "strings" - "sync/atomic" - "time" - "unicode" - - "github.com/uptrace/bun/schema" -) - -type QueryEvent struct { - DB *DB - - QueryAppender schema.QueryAppender // DEPRECATED: use IQuery instead - IQuery Query - Query string - QueryTemplate string - QueryArgs []interface{} - Model Model - - StartTime time.Time - Result sql.Result - Err error - - Stash map[interface{}]interface{} -} - -func (e *QueryEvent) Operation() string { - if e.IQuery != nil { - return e.IQuery.Operation() - } - return queryOperation(e.Query) -} - -func queryOperation(query string) string { - queryOp := strings.TrimLeftFunc(query, unicode.IsSpace) - - if idx := strings.IndexByte(queryOp, ' '); idx > 0 { - queryOp = queryOp[:idx] - } - if len(queryOp) > 16 { - queryOp = queryOp[:16] - } - return queryOp -} - -type QueryHook interface { - BeforeQuery(context.Context, *QueryEvent) context.Context - AfterQuery(context.Context, *QueryEvent) -} - -func (db *DB) beforeQuery( - ctx context.Context, - iquery Query, - queryTemplate string, - queryArgs []interface{}, - query string, - model Model, -) (context.Context, *QueryEvent) { - atomic.AddUint32(&db.stats.Queries, 1) - - if len(db.queryHooks) == 0 { - return ctx, nil - } - - event := &QueryEvent{ - DB: db, - - Model: model, - QueryAppender: iquery, - IQuery: iquery, - Query: query, - QueryTemplate: queryTemplate, - QueryArgs: queryArgs, - - StartTime: time.Now(), - } - - for _, hook := range db.queryHooks { - ctx = hook.BeforeQuery(ctx, event) - } - - return ctx, event -} - -func (db *DB) afterQuery( - ctx context.Context, - event *QueryEvent, - res 
sql.Result, - err error, -) { - switch err { - case nil, sql.ErrNoRows: - // nothing - default: - atomic.AddUint32(&db.stats.Errors, 1) - } - - if event == nil { - return - } - - event.Result = res - event.Err = err - - db.afterQueryFromIndex(ctx, event, len(db.queryHooks)-1) -} - -func (db *DB) afterQueryFromIndex(ctx context.Context, event *QueryEvent, hookIndex int) { - for ; hookIndex >= 0; hookIndex-- { - db.queryHooks[hookIndex].AfterQuery(ctx, event) - } -} diff --git a/vendor/github.com/uptrace/bun/internal/flag.go b/vendor/github.com/uptrace/bun/internal/flag.go deleted file mode 100644 index 22d2db29..00000000 --- a/vendor/github.com/uptrace/bun/internal/flag.go +++ /dev/null @@ -1,16 +0,0 @@ -package internal - -type Flag uint64 - -func (flag Flag) Has(other Flag) bool { - return flag&other != 0 -} - -func (flag Flag) Set(other Flag) Flag { - return flag | other -} - -func (flag Flag) Remove(other Flag) Flag { - flag &= ^other - return flag -} diff --git a/vendor/github.com/uptrace/bun/internal/hex.go b/vendor/github.com/uptrace/bun/internal/hex.go deleted file mode 100644 index 6fae2bb7..00000000 --- a/vendor/github.com/uptrace/bun/internal/hex.go +++ /dev/null @@ -1,43 +0,0 @@ -package internal - -import ( - fasthex "github.com/tmthrgd/go-hex" -) - -type HexEncoder struct { - b []byte - written bool -} - -func NewHexEncoder(b []byte) *HexEncoder { - return &HexEncoder{ - b: b, - } -} - -func (enc *HexEncoder) Bytes() []byte { - return enc.b -} - -func (enc *HexEncoder) Write(b []byte) (int, error) { - if !enc.written { - enc.b = append(enc.b, '\'') - enc.b = append(enc.b, `\x`...) - enc.written = true - } - - i := len(enc.b) - enc.b = append(enc.b, make([]byte, fasthex.EncodedLen(len(b)))...) - fasthex.Encode(enc.b[i:], b) - - return len(b), nil -} - -func (enc *HexEncoder) Close() error { - if enc.written { - enc.b = append(enc.b, '\'') - } else { - enc.b = append(enc.b, "NULL"...) 
- } - return nil -} diff --git a/vendor/github.com/uptrace/bun/internal/logger.go b/vendor/github.com/uptrace/bun/internal/logger.go deleted file mode 100644 index 2e22a089..00000000 --- a/vendor/github.com/uptrace/bun/internal/logger.go +++ /dev/null @@ -1,27 +0,0 @@ -package internal - -import ( - "fmt" - "log" - "os" -) - -var Warn = log.New(os.Stderr, "WARN: bun: ", log.LstdFlags) - -var Deprecated = log.New(os.Stderr, "DEPRECATED: bun: ", log.LstdFlags) - -type Logging interface { - Printf(format string, v ...interface{}) -} - -type logger struct { - log *log.Logger -} - -func (l *logger) Printf(format string, v ...interface{}) { - _ = l.log.Output(2, fmt.Sprintf(format, v...)) -} - -var Logger Logging = &logger{ - log: log.New(os.Stderr, "bun: ", log.LstdFlags|log.Lshortfile), -} diff --git a/vendor/github.com/uptrace/bun/internal/map_key.go b/vendor/github.com/uptrace/bun/internal/map_key.go deleted file mode 100644 index bb5fcca8..00000000 --- a/vendor/github.com/uptrace/bun/internal/map_key.go +++ /dev/null @@ -1,67 +0,0 @@ -package internal - -import "reflect" - -var ifaceType = reflect.TypeOf((*interface{})(nil)).Elem() - -type MapKey struct { - iface interface{} -} - -func NewMapKey(is []interface{}) MapKey { - return MapKey{ - iface: newMapKey(is), - } -} - -func newMapKey(is []interface{}) interface{} { - switch len(is) { - case 1: - ptr := new([1]interface{}) - copy((*ptr)[:], is) - return *ptr - case 2: - ptr := new([2]interface{}) - copy((*ptr)[:], is) - return *ptr - case 3: - ptr := new([3]interface{}) - copy((*ptr)[:], is) - return *ptr - case 4: - ptr := new([4]interface{}) - copy((*ptr)[:], is) - return *ptr - case 5: - ptr := new([5]interface{}) - copy((*ptr)[:], is) - return *ptr - case 6: - ptr := new([6]interface{}) - copy((*ptr)[:], is) - return *ptr - case 7: - ptr := new([7]interface{}) - copy((*ptr)[:], is) - return *ptr - case 8: - ptr := new([8]interface{}) - copy((*ptr)[:], is) - return *ptr - case 9: - ptr := new([9]interface{}) - 
copy((*ptr)[:], is) - return *ptr - case 10: - ptr := new([10]interface{}) - copy((*ptr)[:], is) - return *ptr - default: - } - - at := reflect.New(reflect.ArrayOf(len(is), ifaceType)).Elem() - for i, v := range is { - *(at.Index(i).Addr().Interface().(*interface{})) = v - } - return at.Interface() -} diff --git a/vendor/github.com/uptrace/bun/internal/parser/parser.go b/vendor/github.com/uptrace/bun/internal/parser/parser.go deleted file mode 100644 index cdfc0be1..00000000 --- a/vendor/github.com/uptrace/bun/internal/parser/parser.go +++ /dev/null @@ -1,141 +0,0 @@ -package parser - -import ( - "bytes" - "strconv" - - "github.com/uptrace/bun/internal" -) - -type Parser struct { - b []byte - i int -} - -func New(b []byte) *Parser { - return &Parser{ - b: b, - } -} - -func NewString(s string) *Parser { - return New(internal.Bytes(s)) -} - -func (p *Parser) Valid() bool { - return p.i < len(p.b) -} - -func (p *Parser) Bytes() []byte { - return p.b[p.i:] -} - -func (p *Parser) Read() byte { - if p.Valid() { - c := p.b[p.i] - p.Advance() - return c - } - return 0 -} - -func (p *Parser) Peek() byte { - if p.Valid() { - return p.b[p.i] - } - return 0 -} - -func (p *Parser) Advance() { - p.i++ -} - -func (p *Parser) Skip(skip byte) bool { - if p.Peek() == skip { - p.Advance() - return true - } - return false -} - -func (p *Parser) SkipBytes(skip []byte) bool { - if len(skip) > len(p.b[p.i:]) { - return false - } - if !bytes.Equal(p.b[p.i:p.i+len(skip)], skip) { - return false - } - p.i += len(skip) - return true -} - -func (p *Parser) ReadSep(sep byte) ([]byte, bool) { - ind := bytes.IndexByte(p.b[p.i:], sep) - if ind == -1 { - b := p.b[p.i:] - p.i = len(p.b) - return b, false - } - - b := p.b[p.i : p.i+ind] - p.i += ind + 1 - return b, true -} - -func (p *Parser) ReadIdentifier() (string, bool) { - if p.i < len(p.b) && p.b[p.i] == '(' { - s := p.i + 1 - if ind := bytes.IndexByte(p.b[s:], ')'); ind != -1 { - b := p.b[s : s+ind] - p.i = s + ind + 1 - return 
internal.String(b), false - } - } - - ind := len(p.b) - p.i - var alpha bool - for i, c := range p.b[p.i:] { - if isNum(c) { - continue - } - if isAlpha(c) || (i > 0 && alpha && c == '_') { - alpha = true - continue - } - ind = i - break - } - if ind == 0 { - return "", false - } - b := p.b[p.i : p.i+ind] - p.i += ind - return internal.String(b), !alpha -} - -func (p *Parser) ReadNumber() int { - ind := len(p.b) - p.i - for i, c := range p.b[p.i:] { - if !isNum(c) { - ind = i - break - } - } - if ind == 0 { - return 0 - } - n, err := strconv.Atoi(string(p.b[p.i : p.i+ind])) - if err != nil { - panic(err) - } - p.i += ind - return n -} - -func isNum(c byte) bool { - return c >= '0' && c <= '9' -} - -func isAlpha(c byte) bool { - return (c >= 'a' && c <= 'z') || (c >= 'A' && c <= 'Z') -} diff --git a/vendor/github.com/uptrace/bun/internal/safe.go b/vendor/github.com/uptrace/bun/internal/safe.go deleted file mode 100644 index 862ff0eb..00000000 --- a/vendor/github.com/uptrace/bun/internal/safe.go +++ /dev/null @@ -1,11 +0,0 @@ -// +build appengine - -package internal - -func String(b []byte) string { - return string(b) -} - -func Bytes(s string) []byte { - return []byte(s) -} diff --git a/vendor/github.com/uptrace/bun/internal/tagparser/parser.go b/vendor/github.com/uptrace/bun/internal/tagparser/parser.go deleted file mode 100644 index a3905853..00000000 --- a/vendor/github.com/uptrace/bun/internal/tagparser/parser.go +++ /dev/null @@ -1,184 +0,0 @@ -package tagparser - -import ( - "strings" -) - -type Tag struct { - Name string - Options map[string][]string -} - -func (t Tag) IsZero() bool { - return t.Name == "" && t.Options == nil -} - -func (t Tag) HasOption(name string) bool { - _, ok := t.Options[name] - return ok -} - -func (t Tag) Option(name string) (string, bool) { - if vs, ok := t.Options[name]; ok { - return vs[len(vs)-1], true - } - return "", false -} - -func Parse(s string) Tag { - if s == "" { - return Tag{} - } - p := parser{ - s: s, - } - p.parse() 
- return p.tag -} - -type parser struct { - s string - i int - - tag Tag - seenName bool // for empty names -} - -func (p *parser) setName(name string) { - if p.seenName { - p.addOption(name, "") - } else { - p.seenName = true - p.tag.Name = name - } -} - -func (p *parser) addOption(key, value string) { - p.seenName = true - if key == "" { - return - } - if p.tag.Options == nil { - p.tag.Options = make(map[string][]string) - } - if vs, ok := p.tag.Options[key]; ok { - p.tag.Options[key] = append(vs, value) - } else { - p.tag.Options[key] = []string{value} - } -} - -func (p *parser) parse() { - for p.valid() { - p.parseKeyValue() - if p.peek() == ',' { - p.i++ - } - } -} - -func (p *parser) parseKeyValue() { - start := p.i - - for p.valid() { - switch c := p.read(); c { - case ',': - key := p.s[start : p.i-1] - p.setName(key) - return - case ':': - key := p.s[start : p.i-1] - value := p.parseValue() - p.addOption(key, value) - return - case '"': - key := p.parseQuotedValue() - p.setName(key) - return - } - } - - key := p.s[start:p.i] - p.setName(key) -} - -func (p *parser) parseValue() string { - start := p.i - - for p.valid() { - switch c := p.read(); c { - case '"': - return p.parseQuotedValue() - case ',': - return p.s[start : p.i-1] - case '(': - p.skipPairs('(', ')') - } - } - - if p.i == start { - return "" - } - return p.s[start:p.i] -} - -func (p *parser) parseQuotedValue() string { - if i := strings.IndexByte(p.s[p.i:], '"'); i >= 0 && p.s[p.i+i-1] != '\\' { - s := p.s[p.i : p.i+i] - p.i += i + 1 - return s - } - - b := make([]byte, 0, 16) - - for p.valid() { - switch c := p.read(); c { - case '\\': - b = append(b, p.read()) - case '"': - return string(b) - default: - b = append(b, c) - } - } - - return "" -} - -func (p *parser) skipPairs(start, end byte) { - var lvl int - for p.valid() { - switch c := p.read(); c { - case '"': - _ = p.parseQuotedValue() - case start: - lvl++ - case end: - if lvl == 0 { - return - } - lvl-- - } - } -} - -func (p *parser) 
valid() bool { - return p.i < len(p.s) -} - -func (p *parser) read() byte { - if !p.valid() { - return 0 - } - c := p.s[p.i] - p.i++ - return c -} - -func (p *parser) peek() byte { - if !p.valid() { - return 0 - } - c := p.s[p.i] - return c -} diff --git a/vendor/github.com/uptrace/bun/internal/time.go b/vendor/github.com/uptrace/bun/internal/time.go deleted file mode 100644 index 2cb69b46..00000000 --- a/vendor/github.com/uptrace/bun/internal/time.go +++ /dev/null @@ -1,61 +0,0 @@ -package internal - -import ( - "fmt" - "time" -) - -const ( - dateFormat = "2006-01-02" - timeFormat = "15:04:05.999999999" - timetzFormat1 = "15:04:05.999999999-07:00:00" - timetzFormat2 = "15:04:05.999999999-07:00" - timetzFormat3 = "15:04:05.999999999-07" - timestampFormat = "2006-01-02 15:04:05.999999999" - timestamptzFormat1 = "2006-01-02 15:04:05.999999999-07:00:00" - timestamptzFormat2 = "2006-01-02 15:04:05.999999999-07:00" - timestamptzFormat3 = "2006-01-02 15:04:05.999999999-07" -) - -func ParseTime(s string) (time.Time, error) { - l := len(s) - - if l >= len("2006-01-02 15:04:05") { - switch s[10] { - case ' ': - if c := s[l-6]; c == '+' || c == '-' { - return time.Parse(timestamptzFormat2, s) - } - if c := s[l-3]; c == '+' || c == '-' { - return time.Parse(timestamptzFormat3, s) - } - if c := s[l-9]; c == '+' || c == '-' { - return time.Parse(timestamptzFormat1, s) - } - return time.ParseInLocation(timestampFormat, s, time.UTC) - case 'T': - return time.Parse(time.RFC3339Nano, s) - } - } - - if l >= len("15:04:05-07") { - if c := s[l-6]; c == '+' || c == '-' { - return time.Parse(timetzFormat2, s) - } - if c := s[l-3]; c == '+' || c == '-' { - return time.Parse(timetzFormat3, s) - } - if c := s[l-9]; c == '+' || c == '-' { - return time.Parse(timetzFormat1, s) - } - } - - if l < len("15:04:05") { - return time.Time{}, fmt.Errorf("bun: can't parse time=%q", s) - } - - if s[2] == ':' { - return time.ParseInLocation(timeFormat, s, time.UTC) - } - return 
time.ParseInLocation(dateFormat, s, time.UTC) -} diff --git a/vendor/github.com/uptrace/bun/internal/underscore.go b/vendor/github.com/uptrace/bun/internal/underscore.go deleted file mode 100644 index 9de52fb7..00000000 --- a/vendor/github.com/uptrace/bun/internal/underscore.go +++ /dev/null @@ -1,67 +0,0 @@ -package internal - -func IsUpper(c byte) bool { - return c >= 'A' && c <= 'Z' -} - -func IsLower(c byte) bool { - return c >= 'a' && c <= 'z' -} - -func ToUpper(c byte) byte { - return c - 32 -} - -func ToLower(c byte) byte { - return c + 32 -} - -// Underscore converts "CamelCasedString" to "camel_cased_string". -func Underscore(s string) string { - r := make([]byte, 0, len(s)+5) - for i := 0; i < len(s); i++ { - c := s[i] - if IsUpper(c) { - if i > 0 && i+1 < len(s) && (IsLower(s[i-1]) || IsLower(s[i+1])) { - r = append(r, '_', ToLower(c)) - } else { - r = append(r, ToLower(c)) - } - } else { - r = append(r, c) - } - } - return string(r) -} - -func CamelCased(s string) string { - r := make([]byte, 0, len(s)) - upperNext := true - for i := 0; i < len(s); i++ { - c := s[i] - if c == '_' { - upperNext = true - continue - } - if upperNext { - if IsLower(c) { - c = ToUpper(c) - } - upperNext = false - } - r = append(r, c) - } - return string(r) -} - -func ToExported(s string) string { - if len(s) == 0 { - return s - } - if c := s[0]; IsLower(c) { - b := []byte(s) - b[0] = ToUpper(c) - return string(b) - } - return s -} diff --git a/vendor/github.com/uptrace/bun/internal/unsafe.go b/vendor/github.com/uptrace/bun/internal/unsafe.go deleted file mode 100644 index 4bc79701..00000000 --- a/vendor/github.com/uptrace/bun/internal/unsafe.go +++ /dev/null @@ -1,20 +0,0 @@ -// +build !appengine - -package internal - -import "unsafe" - -// String converts byte slice to string. -func String(b []byte) string { - return *(*string)(unsafe.Pointer(&b)) -} - -// Bytes converts string to byte slice. 
-func Bytes(s string) []byte { - return *(*[]byte)(unsafe.Pointer( - &struct { - string - Cap int - }{s, len(s)}, - )) -} diff --git a/vendor/github.com/uptrace/bun/internal/util.go b/vendor/github.com/uptrace/bun/internal/util.go deleted file mode 100644 index 64130972..00000000 --- a/vendor/github.com/uptrace/bun/internal/util.go +++ /dev/null @@ -1,57 +0,0 @@ -package internal - -import ( - "reflect" -) - -func MakeSliceNextElemFunc(v reflect.Value) func() reflect.Value { - if v.Kind() == reflect.Array { - var pos int - return func() reflect.Value { - v := v.Index(pos) - pos++ - return v - } - } - - elemType := v.Type().Elem() - - if elemType.Kind() == reflect.Ptr { - elemType = elemType.Elem() - return func() reflect.Value { - if v.Len() < v.Cap() { - v.Set(v.Slice(0, v.Len()+1)) - elem := v.Index(v.Len() - 1) - if elem.IsNil() { - elem.Set(reflect.New(elemType)) - } - return elem - } - - elem := reflect.New(elemType) - v.Set(reflect.Append(v, elem)) - return elem - } - } - - zero := reflect.Zero(elemType) - return func() reflect.Value { - if v.Len() < v.Cap() { - v.Set(v.Slice(0, v.Len()+1)) - return v.Index(v.Len() - 1) - } - - v.Set(reflect.Append(v, zero)) - return v.Index(v.Len() - 1) - } -} - -func Unwrap(err error) error { - u, ok := err.(interface { - Unwrap() error - }) - if !ok { - return nil - } - return u.Unwrap() -} diff --git a/vendor/github.com/uptrace/bun/model.go b/vendor/github.com/uptrace/bun/model.go deleted file mode 100644 index 6ad4d8ef..00000000 --- a/vendor/github.com/uptrace/bun/model.go +++ /dev/null @@ -1,207 +0,0 @@ -package bun - -import ( - "context" - "database/sql" - "errors" - "fmt" - "reflect" - "time" - - "github.com/uptrace/bun/schema" -) - -var errNilModel = errors.New("bun: Model(nil)") - -var ( - timeType = reflect.TypeOf((*time.Time)(nil)).Elem() - bytesType = reflect.TypeOf((*[]byte)(nil)).Elem() -) - -type Model = schema.Model - -type rowScanner interface { - ScanRow(ctx context.Context, rows *sql.Rows) error -} - 
-type TableModel interface { - Model - - schema.BeforeAppendModelHook - schema.BeforeScanRowHook - schema.AfterScanRowHook - ScanColumn(column string, src interface{}) error - - Table() *schema.Table - Relation() *schema.Relation - - join(string) *relationJoin - getJoin(string) *relationJoin - getJoins() []relationJoin - addJoin(relationJoin) *relationJoin - - rootValue() reflect.Value - parentIndex() []int - mount(reflect.Value) - - updateSoftDeleteField(time.Time) error -} - -func newModel(db *DB, dest []interface{}) (Model, error) { - if len(dest) == 1 { - return _newModel(db, dest[0], true) - } - - values := make([]reflect.Value, len(dest)) - - for i, el := range dest { - v := reflect.ValueOf(el) - if v.Kind() != reflect.Ptr { - return nil, fmt.Errorf("bun: Scan(non-pointer %T)", dest) - } - - v = v.Elem() - if v.Kind() != reflect.Slice { - return newScanModel(db, dest), nil - } - - values[i] = v - } - - return newSliceModel(db, dest, values), nil -} - -func newSingleModel(db *DB, dest interface{}) (Model, error) { - return _newModel(db, dest, false) -} - -func _newModel(db *DB, dest interface{}, scan bool) (Model, error) { - switch dest := dest.(type) { - case nil: - return nil, errNilModel - case Model: - return dest, nil - case sql.Scanner: - if !scan { - return nil, fmt.Errorf("bun: Model(unsupported %T)", dest) - } - return newScanModel(db, []interface{}{dest}), nil - } - - v := reflect.ValueOf(dest) - if !v.IsValid() { - return nil, errNilModel - } - if v.Kind() != reflect.Ptr { - return nil, fmt.Errorf("bun: Model(non-pointer %T)", dest) - } - - if v.IsNil() { - typ := v.Type().Elem() - if typ.Kind() == reflect.Struct { - return newStructTableModel(db, dest, db.Table(typ)), nil - } - return nil, fmt.Errorf("bun: Model(nil %T)", dest) - } - - v = v.Elem() - typ := v.Type() - - switch typ { - case timeType, bytesType: - return newScanModel(db, []interface{}{dest}), nil - } - - switch v.Kind() { - case reflect.Map: - if err := validMap(typ); err != nil { - 
return nil, err - } - mapPtr := v.Addr().Interface().(*map[string]interface{}) - return newMapModel(db, mapPtr), nil - case reflect.Struct: - return newStructTableModelValue(db, dest, v), nil - case reflect.Slice: - switch elemType := sliceElemType(v); elemType.Kind() { - case reflect.Struct: - if elemType != timeType { - return newSliceTableModel(db, dest, v, elemType), nil - } - case reflect.Map: - if err := validMap(elemType); err != nil { - return nil, err - } - slicePtr := v.Addr().Interface().(*[]map[string]interface{}) - return newMapSliceModel(db, slicePtr), nil - } - return newSliceModel(db, []interface{}{dest}, []reflect.Value{v}), nil - } - - if scan { - return newScanModel(db, []interface{}{dest}), nil - } - - return nil, fmt.Errorf("bun: Model(unsupported %T)", dest) -} - -func newTableModelIndex( - db *DB, - table *schema.Table, - root reflect.Value, - index []int, - rel *schema.Relation, -) (TableModel, error) { - typ := typeByIndex(table.Type, index) - - if typ.Kind() == reflect.Struct { - return &structTableModel{ - db: db, - table: table.Dialect().Tables().Get(typ), - rel: rel, - - root: root, - index: index, - }, nil - } - - if typ.Kind() == reflect.Slice { - structType := indirectType(typ.Elem()) - if structType.Kind() == reflect.Struct { - m := sliceTableModel{ - structTableModel: structTableModel{ - db: db, - table: table.Dialect().Tables().Get(structType), - rel: rel, - - root: root, - index: index, - }, - } - m.init(typ) - return &m, nil - } - } - - return nil, fmt.Errorf("bun: NewModel(%s)", typ) -} - -func validMap(typ reflect.Type) error { - if typ.Key().Kind() != reflect.String || typ.Elem().Kind() != reflect.Interface { - return fmt.Errorf("bun: Model(unsupported %s) (expected *map[string]interface{})", - typ) - } - return nil -} - -//------------------------------------------------------------------------------ - -func isSingleRowModel(m Model) bool { - switch m.(type) { - case *mapModel, - *structTableModel, - *scanModel: - return 
true - default: - return false - } -} diff --git a/vendor/github.com/uptrace/bun/model_map.go b/vendor/github.com/uptrace/bun/model_map.go deleted file mode 100644 index 814d636e..00000000 --- a/vendor/github.com/uptrace/bun/model_map.go +++ /dev/null @@ -1,183 +0,0 @@ -package bun - -import ( - "context" - "database/sql" - "reflect" - "sort" - - "github.com/uptrace/bun/schema" -) - -type mapModel struct { - db *DB - - dest *map[string]interface{} - m map[string]interface{} - - rows *sql.Rows - columns []string - _columnTypes []*sql.ColumnType - scanIndex int -} - -var _ Model = (*mapModel)(nil) - -func newMapModel(db *DB, dest *map[string]interface{}) *mapModel { - m := &mapModel{ - db: db, - dest: dest, - } - if dest != nil { - m.m = *dest - } - return m -} - -func (m *mapModel) Value() interface{} { - return m.dest -} - -func (m *mapModel) ScanRows(ctx context.Context, rows *sql.Rows) (int, error) { - if !rows.Next() { - return 0, rows.Err() - } - - columns, err := rows.Columns() - if err != nil { - return 0, err - } - - m.rows = rows - m.columns = columns - dest := makeDest(m, len(columns)) - - if m.m == nil { - m.m = make(map[string]interface{}, len(m.columns)) - } - - m.scanIndex = 0 - if err := rows.Scan(dest...); err != nil { - return 0, err - } - - *m.dest = m.m - - return 1, nil -} - -func (m *mapModel) Scan(src interface{}) error { - if _, ok := src.([]byte); !ok { - return m.scanRaw(src) - } - - columnTypes, err := m.columnTypes() - if err != nil { - return err - } - - scanType := columnTypes[m.scanIndex].ScanType() - switch scanType.Kind() { - case reflect.Interface: - return m.scanRaw(src) - case reflect.Slice: - if scanType.Elem().Kind() == reflect.Uint8 { - return m.scanRaw(src) - } - } - - dest := reflect.New(scanType).Elem() - if err := schema.Scanner(scanType)(dest, src); err != nil { - return err - } - - return m.scanRaw(dest.Interface()) -} - -func (m *mapModel) columnTypes() ([]*sql.ColumnType, error) { - if m._columnTypes == nil { - 
columnTypes, err := m.rows.ColumnTypes() - if err != nil { - return nil, err - } - m._columnTypes = columnTypes - } - return m._columnTypes, nil -} - -func (m *mapModel) scanRaw(src interface{}) error { - columnName := m.columns[m.scanIndex] - m.scanIndex++ - m.m[columnName] = src - return nil -} - -func (m *mapModel) appendColumnsValues(fmter schema.Formatter, b []byte) []byte { - keys := make([]string, 0, len(m.m)) - - for k := range m.m { - keys = append(keys, k) - } - sort.Strings(keys) - - b = append(b, " ("...) - - for i, k := range keys { - if i > 0 { - b = append(b, ", "...) - } - b = fmter.AppendIdent(b, k) - } - - b = append(b, ") VALUES ("...) - - isTemplate := fmter.IsNop() - for i, k := range keys { - if i > 0 { - b = append(b, ", "...) - } - if isTemplate { - b = append(b, '?') - } else { - b = schema.Append(fmter, b, m.m[k]) - } - } - - b = append(b, ")"...) - - return b -} - -func (m *mapModel) appendSet(fmter schema.Formatter, b []byte) []byte { - keys := make([]string, 0, len(m.m)) - - for k := range m.m { - keys = append(keys, k) - } - sort.Strings(keys) - - isTemplate := fmter.IsNop() - for i, k := range keys { - if i > 0 { - b = append(b, ", "...) - } - - b = fmter.AppendIdent(b, k) - b = append(b, " = "...) 
- if isTemplate { - b = append(b, '?') - } else { - b = schema.Append(fmter, b, m.m[k]) - } - } - - return b -} - -func makeDest(v interface{}, n int) []interface{} { - dest := make([]interface{}, n) - for i := range dest { - dest[i] = v - } - return dest -} diff --git a/vendor/github.com/uptrace/bun/model_map_slice.go b/vendor/github.com/uptrace/bun/model_map_slice.go deleted file mode 100644 index 1e96c898..00000000 --- a/vendor/github.com/uptrace/bun/model_map_slice.go +++ /dev/null @@ -1,162 +0,0 @@ -package bun - -import ( - "context" - "database/sql" - "errors" - "sort" - - "github.com/uptrace/bun/dialect/feature" - "github.com/uptrace/bun/schema" -) - -type mapSliceModel struct { - mapModel - dest *[]map[string]interface{} - - keys []string -} - -var _ Model = (*mapSliceModel)(nil) - -func newMapSliceModel(db *DB, dest *[]map[string]interface{}) *mapSliceModel { - return &mapSliceModel{ - mapModel: mapModel{ - db: db, - }, - dest: dest, - } -} - -func (m *mapSliceModel) Value() interface{} { - return m.dest -} - -func (m *mapSliceModel) SetCap(cap int) { - if cap > 100 { - cap = 100 - } - if slice := *m.dest; len(slice) < cap { - *m.dest = make([]map[string]interface{}, 0, cap) - } -} - -func (m *mapSliceModel) ScanRows(ctx context.Context, rows *sql.Rows) (int, error) { - columns, err := rows.Columns() - if err != nil { - return 0, err - } - - m.rows = rows - m.columns = columns - dest := makeDest(m, len(columns)) - - slice := *m.dest - if len(slice) > 0 { - slice = slice[:0] - } - - var n int - - for rows.Next() { - m.m = make(map[string]interface{}, len(m.columns)) - - m.scanIndex = 0 - if err := rows.Scan(dest...); err != nil { - return 0, err - } - - slice = append(slice, m.m) - n++ - } - if err := rows.Err(); err != nil { - return 0, err - } - - *m.dest = slice - return n, nil -} - -func (m *mapSliceModel) appendColumns(fmter schema.Formatter, b []byte) (_ []byte, err error) { - if err := m.initKeys(); err != nil { - return nil, err - } - - for i, k := 
range m.keys { - if i > 0 { - b = append(b, ", "...) - } - b = fmter.AppendIdent(b, k) - } - - return b, nil -} - -func (m *mapSliceModel) appendValues(fmter schema.Formatter, b []byte) (_ []byte, err error) { - if err := m.initKeys(); err != nil { - return nil, err - } - slice := *m.dest - - b = append(b, "VALUES "...) - if m.db.features.Has(feature.ValuesRow) { - b = append(b, "ROW("...) - } else { - b = append(b, '(') - } - - if fmter.IsNop() { - for i := range m.keys { - if i > 0 { - b = append(b, ", "...) - } - b = append(b, '?') - } - return b, nil - } - - for i, el := range slice { - if i > 0 { - b = append(b, "), "...) - if m.db.features.Has(feature.ValuesRow) { - b = append(b, "ROW("...) - } else { - b = append(b, '(') - } - } - - for j, key := range m.keys { - if j > 0 { - b = append(b, ", "...) - } - b = schema.Append(fmter, b, el[key]) - } - } - - b = append(b, ')') - - return b, nil -} - -func (m *mapSliceModel) initKeys() error { - if m.keys != nil { - return nil - } - - slice := *m.dest - if len(slice) == 0 { - return errors.New("bun: map slice is empty") - } - - first := slice[0] - keys := make([]string, 0, len(first)) - - for k := range first { - keys = append(keys, k) - } - - sort.Strings(keys) - m.keys = keys - - return nil -} diff --git a/vendor/github.com/uptrace/bun/model_scan.go b/vendor/github.com/uptrace/bun/model_scan.go deleted file mode 100644 index 48149c4b..00000000 --- a/vendor/github.com/uptrace/bun/model_scan.go +++ /dev/null @@ -1,56 +0,0 @@ -package bun - -import ( - "context" - "database/sql" - "reflect" - - "github.com/uptrace/bun/schema" -) - -type scanModel struct { - db *DB - - dest []interface{} - scanIndex int -} - -var _ Model = (*scanModel)(nil) - -func newScanModel(db *DB, dest []interface{}) *scanModel { - return &scanModel{ - db: db, - dest: dest, - } -} - -func (m *scanModel) Value() interface{} { - return m.dest -} - -func (m *scanModel) ScanRows(ctx context.Context, rows *sql.Rows) (int, error) { - if !rows.Next() { 
- return 0, rows.Err() - } - - dest := makeDest(m, len(m.dest)) - - m.scanIndex = 0 - if err := rows.Scan(dest...); err != nil { - return 0, err - } - - return 1, nil -} - -func (m *scanModel) ScanRow(ctx context.Context, rows *sql.Rows) error { - return rows.Scan(m.dest...) -} - -func (m *scanModel) Scan(src interface{}) error { - dest := reflect.ValueOf(m.dest[m.scanIndex]) - m.scanIndex++ - - scanner := schema.Scanner(dest.Type()) - return scanner(dest, src) -} diff --git a/vendor/github.com/uptrace/bun/model_slice.go b/vendor/github.com/uptrace/bun/model_slice.go deleted file mode 100644 index bc29db41..00000000 --- a/vendor/github.com/uptrace/bun/model_slice.go +++ /dev/null @@ -1,82 +0,0 @@ -package bun - -import ( - "context" - "database/sql" - "reflect" - - "github.com/uptrace/bun/internal" - "github.com/uptrace/bun/schema" -) - -type sliceInfo struct { - nextElem func() reflect.Value - scan schema.ScannerFunc -} - -type sliceModel struct { - dest []interface{} - values []reflect.Value - scanIndex int - info []sliceInfo -} - -var _ Model = (*sliceModel)(nil) - -func newSliceModel(db *DB, dest []interface{}, values []reflect.Value) *sliceModel { - return &sliceModel{ - dest: dest, - values: values, - } -} - -func (m *sliceModel) Value() interface{} { - return m.dest -} - -func (m *sliceModel) ScanRows(ctx context.Context, rows *sql.Rows) (int, error) { - columns, err := rows.Columns() - if err != nil { - return 0, err - } - - m.info = make([]sliceInfo, len(m.values)) - for i, v := range m.values { - if v.IsValid() && v.Len() > 0 { - v.Set(v.Slice(0, 0)) - } - - m.info[i] = sliceInfo{ - nextElem: internal.MakeSliceNextElemFunc(v), - scan: schema.Scanner(v.Type().Elem()), - } - } - - if len(columns) == 0 { - return 0, nil - } - dest := makeDest(m, len(columns)) - - var n int - - for rows.Next() { - m.scanIndex = 0 - if err := rows.Scan(dest...); err != nil { - return 0, err - } - n++ - } - if err := rows.Err(); err != nil { - return 0, err - } - - return n, 
nil -} - -func (m *sliceModel) Scan(src interface{}) error { - info := m.info[m.scanIndex] - m.scanIndex++ - - dest := info.nextElem() - return info.scan(dest, src) -} diff --git a/vendor/github.com/uptrace/bun/model_table_has_many.go b/vendor/github.com/uptrace/bun/model_table_has_many.go deleted file mode 100644 index 4db3ec12..00000000 --- a/vendor/github.com/uptrace/bun/model_table_has_many.go +++ /dev/null @@ -1,149 +0,0 @@ -package bun - -import ( - "context" - "database/sql" - "fmt" - "reflect" - - "github.com/uptrace/bun/internal" - "github.com/uptrace/bun/schema" -) - -type hasManyModel struct { - *sliceTableModel - baseTable *schema.Table - rel *schema.Relation - - baseValues map[internal.MapKey][]reflect.Value - structKey []interface{} -} - -var _ TableModel = (*hasManyModel)(nil) - -func newHasManyModel(j *relationJoin) *hasManyModel { - baseTable := j.BaseModel.Table() - joinModel := j.JoinModel.(*sliceTableModel) - baseValues := baseValues(joinModel, j.Relation.BaseFields) - if len(baseValues) == 0 { - return nil - } - m := hasManyModel{ - sliceTableModel: joinModel, - baseTable: baseTable, - rel: j.Relation, - - baseValues: baseValues, - } - if !m.sliceOfPtr { - m.strct = reflect.New(m.table.Type).Elem() - } - return &m -} - -func (m *hasManyModel) ScanRows(ctx context.Context, rows *sql.Rows) (int, error) { - columns, err := rows.Columns() - if err != nil { - return 0, err - } - - m.columns = columns - dest := makeDest(m, len(columns)) - - var n int - - for rows.Next() { - if m.sliceOfPtr { - m.strct = reflect.New(m.table.Type).Elem() - } else { - m.strct.Set(m.table.ZeroValue) - } - m.structInited = false - - m.scanIndex = 0 - m.structKey = m.structKey[:0] - if err := rows.Scan(dest...); err != nil { - return 0, err - } - - if err := m.parkStruct(); err != nil { - return 0, err - } - - n++ - } - if err := rows.Err(); err != nil { - return 0, err - } - - return n, nil -} - -func (m *hasManyModel) Scan(src interface{}) error { - column := 
m.columns[m.scanIndex] - m.scanIndex++ - - field, err := m.table.Field(column) - if err != nil { - return err - } - - if err := field.ScanValue(m.strct, src); err != nil { - return err - } - - for _, f := range m.rel.JoinFields { - if f.Name == field.Name { - m.structKey = append(m.structKey, field.Value(m.strct).Interface()) - break - } - } - - return nil -} - -func (m *hasManyModel) parkStruct() error { - baseValues, ok := m.baseValues[internal.NewMapKey(m.structKey)] - if !ok { - return fmt.Errorf( - "bun: has-many relation=%s does not have base %s with id=%q (check join conditions)", - m.rel.Field.GoName, m.baseTable, m.structKey) - } - - for i, v := range baseValues { - if !m.sliceOfPtr { - v.Set(reflect.Append(v, m.strct)) - continue - } - - if i == 0 { - v.Set(reflect.Append(v, m.strct.Addr())) - continue - } - - clone := reflect.New(m.strct.Type()).Elem() - clone.Set(m.strct) - v.Set(reflect.Append(v, clone.Addr())) - } - - return nil -} - -func baseValues(model TableModel, fields []*schema.Field) map[internal.MapKey][]reflect.Value { - fieldIndex := model.Relation().Field.Index - m := make(map[internal.MapKey][]reflect.Value) - key := make([]interface{}, 0, len(fields)) - walk(model.rootValue(), model.parentIndex(), func(v reflect.Value) { - key = modelKey(key[:0], v, fields) - mapKey := internal.NewMapKey(key) - m[mapKey] = append(m[mapKey], v.FieldByIndex(fieldIndex)) - }) - return m -} - -func modelKey(key []interface{}, strct reflect.Value, fields []*schema.Field) []interface{} { - for _, f := range fields { - key = append(key, f.Value(strct).Interface()) - } - return key -} diff --git a/vendor/github.com/uptrace/bun/model_table_m2m.go b/vendor/github.com/uptrace/bun/model_table_m2m.go deleted file mode 100644 index 88d8a126..00000000 --- a/vendor/github.com/uptrace/bun/model_table_m2m.go +++ /dev/null @@ -1,138 +0,0 @@ -package bun - -import ( - "context" - "database/sql" - "fmt" - "reflect" - - "github.com/uptrace/bun/internal" - 
"github.com/uptrace/bun/schema" -) - -type m2mModel struct { - *sliceTableModel - baseTable *schema.Table - rel *schema.Relation - - baseValues map[internal.MapKey][]reflect.Value - structKey []interface{} -} - -var _ TableModel = (*m2mModel)(nil) - -func newM2MModel(j *relationJoin) *m2mModel { - baseTable := j.BaseModel.Table() - joinModel := j.JoinModel.(*sliceTableModel) - baseValues := baseValues(joinModel, baseTable.PKs) - if len(baseValues) == 0 { - return nil - } - m := &m2mModel{ - sliceTableModel: joinModel, - baseTable: baseTable, - rel: j.Relation, - - baseValues: baseValues, - } - if !m.sliceOfPtr { - m.strct = reflect.New(m.table.Type).Elem() - } - return m -} - -func (m *m2mModel) ScanRows(ctx context.Context, rows *sql.Rows) (int, error) { - columns, err := rows.Columns() - if err != nil { - return 0, err - } - - m.columns = columns - dest := makeDest(m, len(columns)) - - var n int - - for rows.Next() { - if m.sliceOfPtr { - m.strct = reflect.New(m.table.Type).Elem() - } else { - m.strct.Set(m.table.ZeroValue) - } - m.structInited = false - - m.scanIndex = 0 - m.structKey = m.structKey[:0] - if err := rows.Scan(dest...); err != nil { - return 0, err - } - - if err := m.parkStruct(); err != nil { - return 0, err - } - - n++ - } - if err := rows.Err(); err != nil { - return 0, err - } - - return n, nil -} - -func (m *m2mModel) Scan(src interface{}) error { - column := m.columns[m.scanIndex] - m.scanIndex++ - - field, ok := m.table.FieldMap[column] - if !ok { - return m.scanM2MColumn(column, src) - } - - if err := field.ScanValue(m.strct, src); err != nil { - return err - } - - for _, fk := range m.rel.M2MBaseFields { - if fk.Name == field.Name { - m.structKey = append(m.structKey, field.Value(m.strct).Interface()) - break - } - } - - return nil -} - -func (m *m2mModel) scanM2MColumn(column string, src interface{}) error { - for _, field := range m.rel.M2MBaseFields { - if field.Name == column { - dest := reflect.New(field.IndirectType).Elem() - if err 
:= field.Scan(dest, src); err != nil { - return err - } - m.structKey = append(m.structKey, dest.Interface()) - break - } - } - - _, err := m.scanColumn(column, src) - return err -} - -func (m *m2mModel) parkStruct() error { - baseValues, ok := m.baseValues[internal.NewMapKey(m.structKey)] - if !ok { - return fmt.Errorf( - "bun: m2m relation=%s does not have base %s with key=%q (check join conditions)", - m.rel.Field.GoName, m.baseTable, m.structKey) - } - - for _, v := range baseValues { - if m.sliceOfPtr { - v.Set(reflect.Append(v, m.strct.Addr())) - } else { - v.Set(reflect.Append(v, m.strct)) - } - } - - return nil -} diff --git a/vendor/github.com/uptrace/bun/model_table_slice.go b/vendor/github.com/uptrace/bun/model_table_slice.go deleted file mode 100644 index 67b42146..00000000 --- a/vendor/github.com/uptrace/bun/model_table_slice.go +++ /dev/null @@ -1,126 +0,0 @@ -package bun - -import ( - "context" - "database/sql" - "reflect" - "time" - - "github.com/uptrace/bun/internal" - "github.com/uptrace/bun/schema" -) - -type sliceTableModel struct { - structTableModel - - slice reflect.Value - sliceLen int - sliceOfPtr bool - nextElem func() reflect.Value -} - -var _ TableModel = (*sliceTableModel)(nil) - -func newSliceTableModel( - db *DB, dest interface{}, slice reflect.Value, elemType reflect.Type, -) *sliceTableModel { - m := &sliceTableModel{ - structTableModel: structTableModel{ - db: db, - table: db.Table(elemType), - dest: dest, - root: slice, - }, - - slice: slice, - sliceLen: slice.Len(), - nextElem: internal.MakeSliceNextElemFunc(slice), - } - m.init(slice.Type()) - return m -} - -func (m *sliceTableModel) init(sliceType reflect.Type) { - switch sliceType.Elem().Kind() { - case reflect.Ptr, reflect.Interface: - m.sliceOfPtr = true - } -} - -func (m *sliceTableModel) join(name string) *relationJoin { - return m._join(m.slice, name) -} - -func (m *sliceTableModel) ScanRows(ctx context.Context, rows *sql.Rows) (int, error) { - columns, err := 
rows.Columns() - if err != nil { - return 0, err - } - - m.columns = columns - dest := makeDest(m, len(columns)) - - if m.slice.IsValid() && m.slice.Len() > 0 { - m.slice.Set(m.slice.Slice(0, 0)) - } - - var n int - - for rows.Next() { - m.strct = m.nextElem() - if m.sliceOfPtr { - m.strct = m.strct.Elem() - } - m.structInited = false - - if err := m.scanRow(ctx, rows, dest); err != nil { - return 0, err - } - - n++ - } - if err := rows.Err(); err != nil { - return 0, err - } - - return n, nil -} - -var _ schema.BeforeAppendModelHook = (*sliceTableModel)(nil) - -func (m *sliceTableModel) BeforeAppendModel(ctx context.Context, query Query) error { - if !m.table.HasBeforeAppendModelHook() || !m.slice.IsValid() { - return nil - } - - sliceLen := m.slice.Len() - for i := 0; i < sliceLen; i++ { - strct := m.slice.Index(i) - if !m.sliceOfPtr { - strct = strct.Addr() - } - err := strct.Interface().(schema.BeforeAppendModelHook).BeforeAppendModel(ctx, query) - if err != nil { - return err - } - } - return nil -} - -// Inherit these hooks from structTableModel. 
-var ( - _ schema.BeforeScanRowHook = (*sliceTableModel)(nil) - _ schema.AfterScanRowHook = (*sliceTableModel)(nil) -) - -func (m *sliceTableModel) updateSoftDeleteField(tm time.Time) error { - sliceLen := m.slice.Len() - for i := 0; i < sliceLen; i++ { - strct := indirect(m.slice.Index(i)) - fv := m.table.SoftDeleteField.Value(strct) - if err := m.table.UpdateSoftDeleteField(fv, tm); err != nil { - return err - } - } - return nil -} diff --git a/vendor/github.com/uptrace/bun/model_table_struct.go b/vendor/github.com/uptrace/bun/model_table_struct.go deleted file mode 100644 index fadc9284..00000000 --- a/vendor/github.com/uptrace/bun/model_table_struct.go +++ /dev/null @@ -1,373 +0,0 @@ -package bun - -import ( - "context" - "database/sql" - "fmt" - "reflect" - "strings" - "time" - - "github.com/uptrace/bun/schema" -) - -type structTableModel struct { - db *DB - table *schema.Table - - rel *schema.Relation - joins []relationJoin - - dest interface{} - root reflect.Value - index []int - - strct reflect.Value - structInited bool - structInitErr error - - columns []string - scanIndex int -} - -var _ TableModel = (*structTableModel)(nil) - -func newStructTableModel(db *DB, dest interface{}, table *schema.Table) *structTableModel { - return &structTableModel{ - db: db, - table: table, - dest: dest, - } -} - -func newStructTableModelValue(db *DB, dest interface{}, v reflect.Value) *structTableModel { - return &structTableModel{ - db: db, - table: db.Table(v.Type()), - dest: dest, - root: v, - strct: v, - } -} - -func (m *structTableModel) Value() interface{} { - return m.dest -} - -func (m *structTableModel) Table() *schema.Table { - return m.table -} - -func (m *structTableModel) Relation() *schema.Relation { - return m.rel -} - -func (m *structTableModel) initStruct() error { - if m.structInited { - return m.structInitErr - } - m.structInited = true - - switch m.strct.Kind() { - case reflect.Invalid: - m.structInitErr = errNilModel - return m.structInitErr - case 
reflect.Interface: - m.strct = m.strct.Elem() - } - - if m.strct.Kind() == reflect.Ptr { - if m.strct.IsNil() { - m.strct.Set(reflect.New(m.strct.Type().Elem())) - m.strct = m.strct.Elem() - } else { - m.strct = m.strct.Elem() - } - } - - m.mountJoins() - - return nil -} - -func (m *structTableModel) mountJoins() { - for i := range m.joins { - j := &m.joins[i] - switch j.Relation.Type { - case schema.HasOneRelation, schema.BelongsToRelation: - j.JoinModel.mount(m.strct) - } - } -} - -var _ schema.BeforeAppendModelHook = (*structTableModel)(nil) - -func (m *structTableModel) BeforeAppendModel(ctx context.Context, query Query) error { - if !m.table.HasBeforeAppendModelHook() || !m.strct.IsValid() { - return nil - } - return m.strct.Addr().Interface().(schema.BeforeAppendModelHook).BeforeAppendModel(ctx, query) -} - -var _ schema.BeforeScanRowHook = (*structTableModel)(nil) - -func (m *structTableModel) BeforeScanRow(ctx context.Context) error { - if m.table.HasBeforeScanRowHook() { - return m.strct.Addr().Interface().(schema.BeforeScanRowHook).BeforeScanRow(ctx) - } - if m.table.HasBeforeScanHook() { - return m.strct.Addr().Interface().(schema.BeforeScanHook).BeforeScan(ctx) - } - return nil -} - -var _ schema.AfterScanRowHook = (*structTableModel)(nil) - -func (m *structTableModel) AfterScanRow(ctx context.Context) error { - if !m.structInited { - return nil - } - - if m.table.HasAfterScanRowHook() { - firstErr := m.strct.Addr().Interface().(schema.AfterScanRowHook).AfterScanRow(ctx) - - for _, j := range m.joins { - switch j.Relation.Type { - case schema.HasOneRelation, schema.BelongsToRelation: - if err := j.JoinModel.AfterScanRow(ctx); err != nil && firstErr == nil { - firstErr = err - } - } - } - - return firstErr - } - - if m.table.HasAfterScanHook() { - firstErr := m.strct.Addr().Interface().(schema.AfterScanHook).AfterScan(ctx) - - for _, j := range m.joins { - switch j.Relation.Type { - case schema.HasOneRelation, schema.BelongsToRelation: - if err := 
j.JoinModel.AfterScanRow(ctx); err != nil && firstErr == nil { - firstErr = err - } - } - } - - return firstErr - } - - return nil -} - -func (m *structTableModel) getJoin(name string) *relationJoin { - for i := range m.joins { - j := &m.joins[i] - if j.Relation.Field.Name == name || j.Relation.Field.GoName == name { - return j - } - } - return nil -} - -func (m *structTableModel) getJoins() []relationJoin { - return m.joins -} - -func (m *structTableModel) addJoin(j relationJoin) *relationJoin { - m.joins = append(m.joins, j) - return &m.joins[len(m.joins)-1] -} - -func (m *structTableModel) join(name string) *relationJoin { - return m._join(m.strct, name) -} - -func (m *structTableModel) _join(bind reflect.Value, name string) *relationJoin { - path := strings.Split(name, ".") - index := make([]int, 0, len(path)) - - currJoin := relationJoin{ - BaseModel: m, - JoinModel: m, - } - var lastJoin *relationJoin - - for _, name := range path { - relation, ok := currJoin.JoinModel.Table().Relations[name] - if !ok { - return nil - } - - currJoin.Relation = relation - index = append(index, relation.Field.Index...) 
- - if j := currJoin.JoinModel.getJoin(name); j != nil { - currJoin.BaseModel = j.BaseModel - currJoin.JoinModel = j.JoinModel - - lastJoin = j - } else { - model, err := newTableModelIndex(m.db, m.table, bind, index, relation) - if err != nil { - return nil - } - - currJoin.Parent = lastJoin - currJoin.BaseModel = currJoin.JoinModel - currJoin.JoinModel = model - - lastJoin = currJoin.BaseModel.addJoin(currJoin) - } - } - - return lastJoin -} - -func (m *structTableModel) rootValue() reflect.Value { - return m.root -} - -func (m *structTableModel) parentIndex() []int { - return m.index[:len(m.index)-len(m.rel.Field.Index)] -} - -func (m *structTableModel) mount(host reflect.Value) { - m.strct = host.FieldByIndex(m.rel.Field.Index) - m.structInited = false -} - -func (m *structTableModel) updateSoftDeleteField(tm time.Time) error { - if !m.strct.IsValid() { - return nil - } - fv := m.table.SoftDeleteField.Value(m.strct) - return m.table.UpdateSoftDeleteField(fv, tm) -} - -func (m *structTableModel) ScanRows(ctx context.Context, rows *sql.Rows) (int, error) { - if !rows.Next() { - return 0, rows.Err() - } - - var n int - - if err := m.ScanRow(ctx, rows); err != nil { - return 0, err - } - n++ - - // And discard the rest. This is especially important for SQLite3, which can return - a row like it was inserted successfully and then return an actual error for the next row. - See issues/100. 
- for rows.Next() { - n++ - } - if err := rows.Err(); err != nil { - return 0, err - } - - return n, nil -} - -func (m *structTableModel) ScanRow(ctx context.Context, rows *sql.Rows) error { - columns, err := rows.Columns() - if err != nil { - return err - } - - m.columns = columns - dest := makeDest(m, len(columns)) - - return m.scanRow(ctx, rows, dest) -} - -func (m *structTableModel) scanRow(ctx context.Context, rows *sql.Rows, dest []interface{}) error { - if err := m.BeforeScanRow(ctx); err != nil { - return err - } - - m.scanIndex = 0 - if err := rows.Scan(dest...); err != nil { - return err - } - - if err := m.AfterScanRow(ctx); err != nil { - return err - } - - return nil -} - -func (m *structTableModel) Scan(src interface{}) error { - column := m.columns[m.scanIndex] - m.scanIndex++ - - return m.ScanColumn(unquote(column), src) -} - -func (m *structTableModel) ScanColumn(column string, src interface{}) error { - if ok, err := m.scanColumn(column, src); ok { - return err - } - if column == "" || column[0] == '_' || m.db.flags.Has(discardUnknownColumns) { - return nil - } - return fmt.Errorf("bun: %s does not have column %q", m.table.TypeName, column) -} - -func (m *structTableModel) scanColumn(column string, src interface{}) (bool, error) { - if src != nil { - if err := m.initStruct(); err != nil { - return true, err - } - } - - if field, ok := m.table.FieldMap[column]; ok { - if src == nil && m.isNil() { - return true, nil - } - return true, field.ScanValue(m.strct, src) - } - - if joinName, column := splitColumn(column); joinName != "" { - if join := m.getJoin(joinName); join != nil { - return true, join.JoinModel.ScanColumn(column, src) - } - - if m.table.ModelName == joinName { - return true, m.ScanColumn(column, src) - } - } - - return false, nil -} - -func (m *structTableModel) isNil() bool { - return m.strct.Kind() == reflect.Ptr && m.strct.IsNil() -} - -func (m *structTableModel) AppendNamedArg( - fmter schema.Formatter, b []byte, name string, -) 
([]byte, bool) { - return m.table.AppendNamedArg(fmter, b, name, m.strct) -} - -// sqlite3 sometimes does not unquote columns. -func unquote(s string) string { - if s == "" { - return s - } - if s[0] == '"' && s[len(s)-1] == '"' { - return s[1 : len(s)-1] - } - return s -} - -func splitColumn(s string) (string, string) { - if i := strings.Index(s, "__"); i >= 0 { - return s[:i], s[i+2:] - } - return "", s -} diff --git a/vendor/github.com/uptrace/bun/package.json b/vendor/github.com/uptrace/bun/package.json deleted file mode 100644 index 6d62f597..00000000 --- a/vendor/github.com/uptrace/bun/package.json +++ /dev/null @@ -1,8 +0,0 @@ -{ - "name": "gobun", - "version": "1.1.12", - "main": "index.js", - "repository": "git@github.com:uptrace/bun.git", - "author": "Vladimir Mihailenco ", - "license": "BSD-2-clause" -} diff --git a/vendor/github.com/uptrace/bun/query_base.go b/vendor/github.com/uptrace/bun/query_base.go deleted file mode 100644 index 9df70d1f..00000000 --- a/vendor/github.com/uptrace/bun/query_base.go +++ /dev/null @@ -1,1343 +0,0 @@ -package bun - -import ( - "context" - "database/sql" - "database/sql/driver" - "errors" - "fmt" - "time" - - "github.com/uptrace/bun/dialect/feature" - "github.com/uptrace/bun/internal" - "github.com/uptrace/bun/schema" -) - -const ( - forceDeleteFlag internal.Flag = 1 << iota - deletedFlag - allWithDeletedFlag -) - -type withQuery struct { - name string - query schema.QueryAppender - recursive bool -} - -// IConn is a common interface for *sql.DB, *sql.Conn, and *sql.Tx. 
-type IConn interface { - QueryContext(ctx context.Context, query string, args ...interface{}) (*sql.Rows, error) - ExecContext(ctx context.Context, query string, args ...interface{}) (sql.Result, error) - QueryRowContext(ctx context.Context, query string, args ...interface{}) *sql.Row -} - -var ( - _ IConn = (*sql.DB)(nil) - _ IConn = (*sql.Conn)(nil) - _ IConn = (*sql.Tx)(nil) - _ IConn = (*DB)(nil) - _ IConn = (*Conn)(nil) - _ IConn = (*Tx)(nil) -) - -// IDB is a common interface for *bun.DB, bun.Conn, and bun.Tx. -type IDB interface { - IConn - Dialect() schema.Dialect - - NewValues(model interface{}) *ValuesQuery - NewSelect() *SelectQuery - NewInsert() *InsertQuery - NewUpdate() *UpdateQuery - NewDelete() *DeleteQuery - NewRaw(query string, args ...interface{}) *RawQuery - NewCreateTable() *CreateTableQuery - NewDropTable() *DropTableQuery - NewCreateIndex() *CreateIndexQuery - NewDropIndex() *DropIndexQuery - NewTruncateTable() *TruncateTableQuery - NewAddColumn() *AddColumnQuery - NewDropColumn() *DropColumnQuery - - BeginTx(ctx context.Context, opts *sql.TxOptions) (Tx, error) - RunInTx(ctx context.Context, opts *sql.TxOptions, f func(ctx context.Context, tx Tx) error) error -} - -var ( - _ IDB = (*DB)(nil) - _ IDB = (*Conn)(nil) - _ IDB = (*Tx)(nil) -) - -// QueryBuilder is used for common query methods -type QueryBuilder interface { - Query - Where(query string, args ...interface{}) QueryBuilder - WhereGroup(sep string, fn func(QueryBuilder) QueryBuilder) QueryBuilder - WhereOr(query string, args ...interface{}) QueryBuilder - WhereDeleted() QueryBuilder - WhereAllWithDeleted() QueryBuilder - WherePK(cols ...string) QueryBuilder - Unwrap() interface{} -} - -var ( - _ QueryBuilder = (*selectQueryBuilder)(nil) - _ QueryBuilder = (*updateQueryBuilder)(nil) - _ QueryBuilder = (*deleteQueryBuilder)(nil) -) - -type baseQuery struct { - db *DB - conn IConn - - model Model - err error - - tableModel TableModel - table *schema.Table - - with []withQuery - 
modelTableName schema.QueryWithArgs - tables []schema.QueryWithArgs - columns []schema.QueryWithArgs - - flags internal.Flag -} - -func (q *baseQuery) DB() *DB { - return q.db -} - -func (q *baseQuery) GetConn() IConn { - return q.conn -} - -func (q *baseQuery) GetModel() Model { - return q.model -} - -func (q *baseQuery) GetTableName() string { - if q.table != nil { - return q.table.Name - } - - for _, wq := range q.with { - if v, ok := wq.query.(Query); ok { - if model := v.GetModel(); model != nil { - return v.GetTableName() - } - } - } - - if q.modelTableName.Query != "" { - return q.modelTableName.Query - } - - if len(q.tables) > 0 { - b, _ := q.tables[0].AppendQuery(q.db.fmter, nil) - if len(b) < 64 { - return string(b) - } - } - - return "" -} - -func (q *baseQuery) setConn(db IConn) { - // Unwrap Bun wrappers to not call query hooks twice. - switch db := db.(type) { - case *DB: - q.conn = db.DB - case Conn: - q.conn = db.Conn - case Tx: - q.conn = db.Tx - default: - q.conn = db - } -} - -func (q *baseQuery) setModel(modeli interface{}) { - model, err := newSingleModel(q.db, modeli) - if err != nil { - q.setErr(err) - return - } - - q.model = model - if tm, ok := model.(TableModel); ok { - q.tableModel = tm - q.table = tm.Table() - } -} - -func (q *baseQuery) setErr(err error) { - if q.err == nil { - q.err = err - } -} - -func (q *baseQuery) getModel(dest []interface{}) (Model, error) { - if len(dest) == 0 { - if q.model != nil { - return q.model, nil - } - return nil, errNilModel - } - return newModel(q.db, dest) -} - -func (q *baseQuery) beforeAppendModel(ctx context.Context, query Query) error { - if q.tableModel != nil { - return q.tableModel.BeforeAppendModel(ctx, query) - } - return nil -} - -func (q *baseQuery) hasFeature(feature feature.Feature) bool { - return q.db.features.Has(feature) -} - -//------------------------------------------------------------------------------ - -func (q *baseQuery) checkSoftDelete() error { - if q.table == nil { - 
return errors.New("bun: can't use soft deletes without a table") - } - if q.table.SoftDeleteField == nil { - return fmt.Errorf("%s does not have a soft delete field", q.table) - } - if q.tableModel == nil { - return errors.New("bun: can't use soft deletes without a table model") - } - return nil -} - -// Deleted adds `WHERE deleted_at IS NOT NULL` clause for soft deleted models. -func (q *baseQuery) whereDeleted() { - if err := q.checkSoftDelete(); err != nil { - q.setErr(err) - return - } - q.flags = q.flags.Set(deletedFlag) - q.flags = q.flags.Remove(allWithDeletedFlag) -} - -// AllWithDeleted changes query to return all rows including soft deleted ones. -func (q *baseQuery) whereAllWithDeleted() { - if err := q.checkSoftDelete(); err != nil { - q.setErr(err) - return - } - q.flags = q.flags.Set(allWithDeletedFlag).Remove(deletedFlag) -} - -func (q *baseQuery) isSoftDelete() bool { - if q.table != nil { - return q.table.SoftDeleteField != nil && - !q.flags.Has(allWithDeletedFlag) && - (!q.flags.Has(forceDeleteFlag) || q.flags.Has(deletedFlag)) - } - return false -} - -//------------------------------------------------------------------------------ - -func (q *baseQuery) addWith(name string, query schema.QueryAppender, recursive bool) { - q.with = append(q.with, withQuery{ - name: name, - query: query, - recursive: recursive, - }) -} - -func (q *baseQuery) appendWith(fmter schema.Formatter, b []byte) (_ []byte, err error) { - if len(q.with) == 0 { - return b, nil - } - - b = append(b, "WITH "...) - for i, with := range q.with { - if i > 0 { - b = append(b, ", "...) - } - - if with.recursive { - b = append(b, "RECURSIVE "...) 
- } - - b, err = q.appendCTE(fmter, b, with) - if err != nil { - return nil, err - } - } - b = append(b, ' ') - return b, nil -} - -func (q *baseQuery) appendCTE( - fmter schema.Formatter, b []byte, cte withQuery, -) (_ []byte, err error) { - if !fmter.Dialect().Features().Has(feature.WithValues) { - if values, ok := cte.query.(*ValuesQuery); ok { - return q.appendSelectFromValues(fmter, b, cte, values) - } - } - - b = fmter.AppendIdent(b, cte.name) - - if q, ok := cte.query.(schema.ColumnsAppender); ok { - b = append(b, " ("...) - b, err = q.AppendColumns(fmter, b) - if err != nil { - return nil, err - } - b = append(b, ")"...) - } - - b = append(b, " AS ("...) - - b, err = cte.query.AppendQuery(fmter, b) - if err != nil { - return nil, err - } - - b = append(b, ")"...) - return b, nil -} - -func (q *baseQuery) appendSelectFromValues( - fmter schema.Formatter, b []byte, cte withQuery, values *ValuesQuery, -) (_ []byte, err error) { - b = fmter.AppendIdent(b, cte.name) - b = append(b, " AS (SELECT * FROM ("...) - - b, err = cte.query.AppendQuery(fmter, b) - if err != nil { - return nil, err - } - - b = append(b, ") AS t"...) - if q, ok := cte.query.(schema.ColumnsAppender); ok { - b = append(b, " ("...) - b, err = q.AppendColumns(fmter, b) - if err != nil { - return nil, err - } - b = append(b, ")"...) - } - b = append(b, ")"...) 
- - return b, nil -} - -//------------------------------------------------------------------------------ - -func (q *baseQuery) addTable(table schema.QueryWithArgs) { - q.tables = append(q.tables, table) -} - -func (q *baseQuery) addColumn(column schema.QueryWithArgs) { - q.columns = append(q.columns, column) -} - -func (q *baseQuery) excludeColumn(columns []string) { - if q.table == nil { - q.setErr(errNilModel) - return - } - - if q.columns == nil { - for _, f := range q.table.Fields { - q.columns = append(q.columns, schema.UnsafeIdent(f.Name)) - } - } - - if len(columns) == 1 && columns[0] == "*" { - q.columns = make([]schema.QueryWithArgs, 0) - return - } - - for _, column := range columns { - if !q._excludeColumn(column) { - q.setErr(fmt.Errorf("bun: can't find column=%q", column)) - return - } - } -} - -func (q *baseQuery) _excludeColumn(column string) bool { - for i, col := range q.columns { - if col.Args == nil && col.Query == column { - q.columns = append(q.columns[:i], q.columns[i+1:]...) 
- return true - } - } - return false -} - -//------------------------------------------------------------------------------ - -func (q *baseQuery) modelHasTableName() bool { - if !q.modelTableName.IsZero() { - return q.modelTableName.Query != "" - } - return q.table != nil -} - -func (q *baseQuery) hasTables() bool { - return q.modelHasTableName() || len(q.tables) > 0 -} - -func (q *baseQuery) appendTables( - fmter schema.Formatter, b []byte, -) (_ []byte, err error) { - return q._appendTables(fmter, b, false) -} - -func (q *baseQuery) appendTablesWithAlias( - fmter schema.Formatter, b []byte, -) (_ []byte, err error) { - return q._appendTables(fmter, b, true) -} - -func (q *baseQuery) _appendTables( - fmter schema.Formatter, b []byte, withAlias bool, -) (_ []byte, err error) { - startLen := len(b) - - if q.modelHasTableName() { - if !q.modelTableName.IsZero() { - b, err = q.modelTableName.AppendQuery(fmter, b) - if err != nil { - return nil, err - } - } else { - b = fmter.AppendQuery(b, string(q.table.SQLNameForSelects)) - if withAlias && q.table.SQLAlias != q.table.SQLNameForSelects { - b = append(b, " AS "...) - b = append(b, q.table.SQLAlias...) - } - } - } - - for _, table := range q.tables { - if len(b) > startLen { - b = append(b, ", "...) - } - b, err = table.AppendQuery(fmter, b) - if err != nil { - return nil, err - } - } - - return b, nil -} - -func (q *baseQuery) appendFirstTable(fmter schema.Formatter, b []byte) ([]byte, error) { - return q._appendFirstTable(fmter, b, false) -} - -func (q *baseQuery) appendFirstTableWithAlias( - fmter schema.Formatter, b []byte, -) ([]byte, error) { - return q._appendFirstTable(fmter, b, true) -} - -func (q *baseQuery) _appendFirstTable( - fmter schema.Formatter, b []byte, withAlias bool, -) ([]byte, error) { - if !q.modelTableName.IsZero() { - return q.modelTableName.AppendQuery(fmter, b) - } - - if q.table != nil { - b = fmter.AppendQuery(b, string(q.table.SQLName)) - if withAlias { - b = append(b, " AS "...) 
- b = append(b, q.table.SQLAlias...) - } - return b, nil - } - - if len(q.tables) > 0 { - return q.tables[0].AppendQuery(fmter, b) - } - - return nil, errors.New("bun: query does not have a table") -} - -func (q *baseQuery) hasMultiTables() bool { - if q.modelHasTableName() { - return len(q.tables) >= 1 - } - return len(q.tables) >= 2 -} - -func (q *baseQuery) appendOtherTables(fmter schema.Formatter, b []byte) (_ []byte, err error) { - tables := q.tables - if !q.modelHasTableName() { - tables = tables[1:] - } - for i, table := range tables { - if i > 0 { - b = append(b, ", "...) - } - b, err = table.AppendQuery(fmter, b) - if err != nil { - return nil, err - } - } - return b, nil -} - -//------------------------------------------------------------------------------ - -func (q *baseQuery) appendColumns(fmter schema.Formatter, b []byte) (_ []byte, err error) { - for i, f := range q.columns { - if i > 0 { - b = append(b, ", "...) - } - b, err = f.AppendQuery(fmter, b) - if err != nil { - return nil, err - } - } - return b, nil -} - -func (q *baseQuery) getFields() ([]*schema.Field, error) { - if len(q.columns) == 0 { - if q.table == nil { - return nil, errNilModel - } - return q.table.Fields, nil - } - return q._getFields(false) -} - -func (q *baseQuery) getDataFields() ([]*schema.Field, error) { - if len(q.columns) == 0 { - if q.table == nil { - return nil, errNilModel - } - return q.table.DataFields, nil - } - return q._getFields(true) -} - -func (q *baseQuery) _getFields(omitPK bool) ([]*schema.Field, error) { - fields := make([]*schema.Field, 0, len(q.columns)) - for _, col := range q.columns { - if col.Args != nil { - continue - } - - field, err := q.table.Field(col.Query) - if err != nil { - return nil, err - } - - if omitPK && field.IsPK { - continue - } - - fields = append(fields, field) - } - return fields, nil -} - -func (q *baseQuery) scan( - ctx context.Context, - iquery Query, - query string, - model Model, - hasDest bool, -) (sql.Result, error) { - ctx, 
event := q.db.beforeQuery(ctx, iquery, query, nil, query, q.model) - - rows, err := q.conn.QueryContext(ctx, query) - if err != nil { - q.db.afterQuery(ctx, event, nil, err) - return nil, err - } - defer rows.Close() - - numRow, err := model.ScanRows(ctx, rows) - if err != nil { - q.db.afterQuery(ctx, event, nil, err) - return nil, err - } - - if numRow == 0 && hasDest && isSingleRowModel(model) { - err = sql.ErrNoRows - } - - res := driver.RowsAffected(numRow) - q.db.afterQuery(ctx, event, res, err) - - return res, err -} - -func (q *baseQuery) exec( - ctx context.Context, - iquery Query, - query string, -) (sql.Result, error) { - ctx, event := q.db.beforeQuery(ctx, iquery, query, nil, query, q.model) - res, err := q.conn.ExecContext(ctx, query) - q.db.afterQuery(ctx, event, nil, err) - return res, err -} - -//------------------------------------------------------------------------------ - -func (q *baseQuery) AppendNamedArg(fmter schema.Formatter, b []byte, name string) ([]byte, bool) { - if q.table == nil { - return b, false - } - - if m, ok := q.tableModel.(*structTableModel); ok { - if b, ok := m.AppendNamedArg(fmter, b, name); ok { - return b, ok - } - } - - switch name { - case "TableName": - b = fmter.AppendQuery(b, string(q.table.SQLName)) - return b, true - case "TableAlias": - b = fmter.AppendQuery(b, string(q.table.SQLAlias)) - return b, true - case "PKs": - b = appendColumns(b, "", q.table.PKs) - return b, true - case "TablePKs": - b = appendColumns(b, q.table.SQLAlias, q.table.PKs) - return b, true - case "Columns": - b = appendColumns(b, "", q.table.Fields) - return b, true - case "TableColumns": - b = appendColumns(b, q.table.SQLAlias, q.table.Fields) - return b, true - } - - return b, false -} - -//------------------------------------------------------------------------------ - -func (q *baseQuery) Dialect() schema.Dialect { - return q.db.Dialect() -} - -func (q *baseQuery) NewValues(model interface{}) *ValuesQuery { - return NewValuesQuery(q.db, 
model).Conn(q.conn) -} - -func (q *baseQuery) NewSelect() *SelectQuery { - return NewSelectQuery(q.db).Conn(q.conn) -} - -func (q *baseQuery) NewInsert() *InsertQuery { - return NewInsertQuery(q.db).Conn(q.conn) -} - -func (q *baseQuery) NewUpdate() *UpdateQuery { - return NewUpdateQuery(q.db).Conn(q.conn) -} - -func (q *baseQuery) NewDelete() *DeleteQuery { - return NewDeleteQuery(q.db).Conn(q.conn) -} - -func (q *baseQuery) NewRaw(query string, args ...interface{}) *RawQuery { - return NewRawQuery(q.db, query, args...).Conn(q.conn) -} - -func (q *baseQuery) NewCreateTable() *CreateTableQuery { - return NewCreateTableQuery(q.db).Conn(q.conn) -} - -func (q *baseQuery) NewDropTable() *DropTableQuery { - return NewDropTableQuery(q.db).Conn(q.conn) -} - -func (q *baseQuery) NewCreateIndex() *CreateIndexQuery { - return NewCreateIndexQuery(q.db).Conn(q.conn) -} - -func (q *baseQuery) NewDropIndex() *DropIndexQuery { - return NewDropIndexQuery(q.db).Conn(q.conn) -} - -func (q *baseQuery) NewTruncateTable() *TruncateTableQuery { - return NewTruncateTableQuery(q.db).Conn(q.conn) -} - -func (q *baseQuery) NewAddColumn() *AddColumnQuery { - return NewAddColumnQuery(q.db).Conn(q.conn) -} - -func (q *baseQuery) NewDropColumn() *DropColumnQuery { - return NewDropColumnQuery(q.db).Conn(q.conn) -} - -//------------------------------------------------------------------------------ - -func appendColumns(b []byte, table schema.Safe, fields []*schema.Field) []byte { - for i, f := range fields { - if i > 0 { - b = append(b, ", "...) - } - - if len(table) > 0 { - b = append(b, table...) - b = append(b, '.') - } - b = append(b, f.SQLName...) 
- } - return b -} - -func formatterWithModel(fmter schema.Formatter, model schema.NamedArgAppender) schema.Formatter { - if fmter.IsNop() { - return fmter - } - return fmter.WithArg(model) -} - -//------------------------------------------------------------------------------ - -type whereBaseQuery struct { - baseQuery - - where []schema.QueryWithSep - whereFields []*schema.Field -} - -func (q *whereBaseQuery) addWhere(where schema.QueryWithSep) { - q.where = append(q.where, where) -} - -func (q *whereBaseQuery) addWhereGroup(sep string, where []schema.QueryWithSep) { - if len(where) == 0 { - return - } - - q.addWhere(schema.SafeQueryWithSep("", nil, sep)) - q.addWhere(schema.SafeQueryWithSep("", nil, "(")) - - where[0].Sep = "" - q.where = append(q.where, where...) - - q.addWhere(schema.SafeQueryWithSep("", nil, ")")) -} - -func (q *whereBaseQuery) addWhereCols(cols []string) { - if q.table == nil { - err := fmt.Errorf("bun: got %T, but WherePK requires a struct or slice-based model", q.model) - q.setErr(err) - return - } - if q.whereFields != nil { - err := errors.New("bun: WherePK can only be called once") - q.setErr(err) - return - } - - if cols == nil { - if err := q.table.CheckPKs(); err != nil { - q.setErr(err) - return - } - q.whereFields = q.table.PKs - return - } - - q.whereFields = make([]*schema.Field, len(cols)) - for i, col := range cols { - field, err := q.table.Field(col) - if err != nil { - q.setErr(err) - return - } - q.whereFields[i] = field - } -} - -func (q *whereBaseQuery) mustAppendWhere( - fmter schema.Formatter, b []byte, withAlias bool, -) ([]byte, error) { - if len(q.where) == 0 && q.whereFields == nil && !q.flags.Has(deletedFlag) { - err := errors.New("bun: Update and Delete queries require at least one Where") - return nil, err - } - return q.appendWhere(fmter, b, withAlias) -} - -func (q *whereBaseQuery) appendWhere( - fmter schema.Formatter, b []byte, withAlias bool, -) (_ []byte, err error) { - if len(q.where) == 0 && q.whereFields == 
nil && !q.isSoftDelete() { - return b, nil - } - - b = append(b, " WHERE "...) - startLen := len(b) - - if len(q.where) > 0 { - b, err = appendWhere(fmter, b, q.where) - if err != nil { - return nil, err - } - } - - if q.isSoftDelete() { - if len(b) > startLen { - b = append(b, " AND "...) - } - - if withAlias { - b = append(b, q.tableModel.Table().SQLAlias...) - } else { - b = append(b, q.tableModel.Table().SQLName...) - } - b = append(b, '.') - - field := q.tableModel.Table().SoftDeleteField - b = append(b, field.SQLName...) - - if field.IsPtr || field.NullZero { - if q.flags.Has(deletedFlag) { - b = append(b, " IS NOT NULL"...) - } else { - b = append(b, " IS NULL"...) - } - } else { - if q.flags.Has(deletedFlag) { - b = append(b, " != "...) - } else { - b = append(b, " = "...) - } - b = fmter.Dialect().AppendTime(b, time.Time{}) - } - } - - if q.whereFields != nil { - if len(b) > startLen { - b = append(b, " AND "...) - } - b, err = q.appendWhereFields(fmter, b, q.whereFields, withAlias) - if err != nil { - return nil, err - } - } - - return b, nil -} - -func appendWhere( - fmter schema.Formatter, b []byte, where []schema.QueryWithSep, -) (_ []byte, err error) { - for i, where := range where { - if i > 0 { - b = append(b, where.Sep...) 
- } - - if where.Query == "" { - continue - } - - b = append(b, '(') - b, err = where.AppendQuery(fmter, b) - if err != nil { - return nil, err - } - b = append(b, ')') - } - return b, nil -} - -func (q *whereBaseQuery) appendWhereFields( - fmter schema.Formatter, b []byte, fields []*schema.Field, withAlias bool, -) (_ []byte, err error) { - if q.table == nil { - err := fmt.Errorf("bun: got %T, but WherePK requires struct or slice-based model", q.model) - return nil, err - } - - switch model := q.tableModel.(type) { - case *structTableModel: - return q.appendWhereStructFields(fmter, b, model, fields, withAlias) - case *sliceTableModel: - return q.appendWhereSliceFields(fmter, b, model, fields, withAlias) - default: - return nil, fmt.Errorf("bun: WhereColumn does not support %T", q.tableModel) - } -} - -func (q *whereBaseQuery) appendWhereStructFields( - fmter schema.Formatter, - b []byte, - model *structTableModel, - fields []*schema.Field, - withAlias bool, -) (_ []byte, err error) { - if !model.strct.IsValid() { - return nil, errNilModel - } - - isTemplate := fmter.IsNop() - b = append(b, '(') - for i, f := range fields { - if i > 0 { - b = append(b, " AND "...) - } - if withAlias { - b = append(b, q.table.SQLAlias...) - b = append(b, '.') - } - b = append(b, f.SQLName...) - b = append(b, " = "...) - if isTemplate { - b = append(b, '?') - } else { - b = f.AppendValue(fmter, b, model.strct) - } - } - b = append(b, ')') - return b, nil -} - -func (q *whereBaseQuery) appendWhereSliceFields( - fmter schema.Formatter, - b []byte, - model *sliceTableModel, - fields []*schema.Field, - withAlias bool, -) (_ []byte, err error) { - if len(fields) > 1 { - b = append(b, '(') - } - if withAlias { - b = appendColumns(b, q.table.SQLAlias, fields) - } else { - b = appendColumns(b, "", fields) - } - if len(fields) > 1 { - b = append(b, ')') - } - - b = append(b, " IN ("...) 
- - isTemplate := fmter.IsNop() - slice := model.slice - sliceLen := slice.Len() - for i := 0; i < sliceLen; i++ { - if i > 0 { - if isTemplate { - break - } - b = append(b, ", "...) - } - - el := indirect(slice.Index(i)) - - if len(fields) > 1 { - b = append(b, '(') - } - for i, f := range fields { - if i > 0 { - b = append(b, ", "...) - } - if isTemplate { - b = append(b, '?') - } else { - b = f.AppendValue(fmter, b, el) - } - } - if len(fields) > 1 { - b = append(b, ')') - } - } - - b = append(b, ')') - - return b, nil -} - -//------------------------------------------------------------------------------ - -type returningQuery struct { - returning []schema.QueryWithArgs - returningFields []*schema.Field -} - -func (q *returningQuery) addReturning(ret schema.QueryWithArgs) { - q.returning = append(q.returning, ret) -} - -func (q *returningQuery) addReturningField(field *schema.Field) { - if len(q.returning) > 0 { - return - } - for _, f := range q.returningFields { - if f == field { - return - } - } - q.returningFields = append(q.returningFields, field) -} - -func (q *returningQuery) appendReturning( - fmter schema.Formatter, b []byte, -) (_ []byte, err error) { - return q._appendReturning(fmter, b, "") -} - -func (q *returningQuery) appendOutput( - fmter schema.Formatter, b []byte, -) (_ []byte, err error) { - return q._appendReturning(fmter, b, "INSERTED") -} - -func (q *returningQuery) _appendReturning( - fmter schema.Formatter, b []byte, table string, -) (_ []byte, err error) { - for i, f := range q.returning { - if i > 0 { - b = append(b, ", "...) 
- } - b, err = f.AppendQuery(fmter, b) - if err != nil { - return nil, err - } - } - - if len(q.returning) > 0 { - return b, nil - } - - b = appendColumns(b, schema.Safe(table), q.returningFields) - return b, nil -} - -func (q *returningQuery) hasReturning() bool { - if len(q.returning) == 1 { - if ret := q.returning[0]; len(ret.Args) == 0 { - switch ret.Query { - case "", "null", "NULL": - return false - } - } - } - return len(q.returning) > 0 || len(q.returningFields) > 0 -} - -//------------------------------------------------------------------------------ - -type columnValue struct { - column string - value schema.QueryWithArgs -} - -type customValueQuery struct { - modelValues map[string]schema.QueryWithArgs - extraValues []columnValue -} - -func (q *customValueQuery) addValue( - table *schema.Table, column string, value string, args []interface{}, -) { - if _, ok := table.FieldMap[column]; ok { - if q.modelValues == nil { - q.modelValues = make(map[string]schema.QueryWithArgs) - } - q.modelValues[column] = schema.SafeQuery(value, args) - } else { - q.extraValues = append(q.extraValues, columnValue{ - column: column, - value: schema.SafeQuery(value, args), - }) - } -} - -//------------------------------------------------------------------------------ - -type setQuery struct { - set []schema.QueryWithArgs -} - -func (q *setQuery) addSet(set schema.QueryWithArgs) { - q.set = append(q.set, set) -} - -func (q setQuery) appendSet(fmter schema.Formatter, b []byte) (_ []byte, err error) { - for i, f := range q.set { - if i > 0 { - b = append(b, ", "...) 
- } - b, err = f.AppendQuery(fmter, b) - if err != nil { - return nil, err - } - } - return b, nil -} - -//------------------------------------------------------------------------------ - -type cascadeQuery struct { - cascade bool - restrict bool -} - -func (q cascadeQuery) appendCascade(fmter schema.Formatter, b []byte) []byte { - if !fmter.HasFeature(feature.TableCascade) { - return b - } - if q.cascade { - b = append(b, " CASCADE"...) - } - if q.restrict { - b = append(b, " RESTRICT"...) - } - return b -} - -//------------------------------------------------------------------------------ - -type idxHintsQuery struct { - use *indexHints - ignore *indexHints - force *indexHints -} - -type indexHints struct { - names []schema.QueryWithArgs - forJoin []schema.QueryWithArgs - forOrderBy []schema.QueryWithArgs - forGroupBy []schema.QueryWithArgs -} - -func (ih *idxHintsQuery) lazyUse() *indexHints { - if ih.use == nil { - ih.use = new(indexHints) - } - return ih.use -} - -func (ih *idxHintsQuery) lazyIgnore() *indexHints { - if ih.ignore == nil { - ih.ignore = new(indexHints) - } - return ih.ignore -} - -func (ih *idxHintsQuery) lazyForce() *indexHints { - if ih.force == nil { - ih.force = new(indexHints) - } - return ih.force -} - -func (ih *idxHintsQuery) appendIndexes(hints []schema.QueryWithArgs, indexes ...string) []schema.QueryWithArgs { - for _, idx := range indexes { - hints = append(hints, schema.UnsafeIdent(idx)) - } - return hints -} - -func (ih *idxHintsQuery) addUseIndex(indexes ...string) { - if len(indexes) == 0 { - return - } - ih.lazyUse().names = ih.appendIndexes(ih.use.names, indexes...) -} - -func (ih *idxHintsQuery) addUseIndexForJoin(indexes ...string) { - if len(indexes) == 0 { - return - } - ih.lazyUse().forJoin = ih.appendIndexes(ih.use.forJoin, indexes...) -} - -func (ih *idxHintsQuery) addUseIndexForOrderBy(indexes ...string) { - if len(indexes) == 0 { - return - } - ih.lazyUse().forOrderBy = ih.appendIndexes(ih.use.forOrderBy, indexes...) 
-} - -func (ih *idxHintsQuery) addUseIndexForGroupBy(indexes ...string) { - if len(indexes) == 0 { - return - } - ih.lazyUse().forGroupBy = ih.appendIndexes(ih.use.forGroupBy, indexes...) -} - -func (ih *idxHintsQuery) addIgnoreIndex(indexes ...string) { - if len(indexes) == 0 { - return - } - ih.lazyIgnore().names = ih.appendIndexes(ih.ignore.names, indexes...) -} - -func (ih *idxHintsQuery) addIgnoreIndexForJoin(indexes ...string) { - if len(indexes) == 0 { - return - } - ih.lazyIgnore().forJoin = ih.appendIndexes(ih.ignore.forJoin, indexes...) -} - -func (ih *idxHintsQuery) addIgnoreIndexForOrderBy(indexes ...string) { - if len(indexes) == 0 { - return - } - ih.lazyIgnore().forOrderBy = ih.appendIndexes(ih.ignore.forOrderBy, indexes...) -} - -func (ih *idxHintsQuery) addIgnoreIndexForGroupBy(indexes ...string) { - if len(indexes) == 0 { - return - } - ih.lazyIgnore().forGroupBy = ih.appendIndexes(ih.ignore.forGroupBy, indexes...) -} - -func (ih *idxHintsQuery) addForceIndex(indexes ...string) { - if len(indexes) == 0 { - return - } - ih.lazyForce().names = ih.appendIndexes(ih.force.names, indexes...) -} - -func (ih *idxHintsQuery) addForceIndexForJoin(indexes ...string) { - if len(indexes) == 0 { - return - } - ih.lazyForce().forJoin = ih.appendIndexes(ih.force.forJoin, indexes...) -} - -func (ih *idxHintsQuery) addForceIndexForOrderBy(indexes ...string) { - if len(indexes) == 0 { - return - } - ih.lazyForce().forOrderBy = ih.appendIndexes(ih.force.forOrderBy, indexes...) -} - -func (ih *idxHintsQuery) addForceIndexForGroupBy(indexes ...string) { - if len(indexes) == 0 { - return - } - ih.lazyForce().forGroupBy = ih.appendIndexes(ih.force.forGroupBy, indexes...) 
-} - -func (ih *idxHintsQuery) appendIndexHints( - fmter schema.Formatter, b []byte, -) ([]byte, error) { - type IdxHint struct { - Name string - Values []schema.QueryWithArgs - } - - var hints []IdxHint - if ih.use != nil { - hints = append(hints, []IdxHint{ - { - Name: "USE INDEX", - Values: ih.use.names, - }, - { - Name: "USE INDEX FOR JOIN", - Values: ih.use.forJoin, - }, - { - Name: "USE INDEX FOR ORDER BY", - Values: ih.use.forOrderBy, - }, - { - Name: "USE INDEX FOR GROUP BY", - Values: ih.use.forGroupBy, - }, - }...) - } - - if ih.ignore != nil { - hints = append(hints, []IdxHint{ - { - Name: "IGNORE INDEX", - Values: ih.ignore.names, - }, - { - Name: "IGNORE INDEX FOR JOIN", - Values: ih.ignore.forJoin, - }, - { - Name: "IGNORE INDEX FOR ORDER BY", - Values: ih.ignore.forOrderBy, - }, - { - Name: "IGNORE INDEX FOR GROUP BY", - Values: ih.ignore.forGroupBy, - }, - }...) - } - - if ih.force != nil { - hints = append(hints, []IdxHint{ - { - Name: "FORCE INDEX", - Values: ih.force.names, - }, - { - Name: "FORCE INDEX FOR JOIN", - Values: ih.force.forJoin, - }, - { - Name: "FORCE INDEX FOR ORDER BY", - Values: ih.force.forOrderBy, - }, - { - Name: "FORCE INDEX FOR GROUP BY", - Values: ih.force.forGroupBy, - }, - }...) - } - - var err error - for _, h := range hints { - b, err = ih.bufIndexHint(h.Name, h.Values, fmter, b) - if err != nil { - return nil, err - } - } - return b, nil -} - -func (ih *idxHintsQuery) bufIndexHint( - name string, - hints []schema.QueryWithArgs, - fmter schema.Formatter, b []byte, -) ([]byte, error) { - var err error - if len(hints) == 0 { - return b, nil - } - b = append(b, fmt.Sprintf(" %s (", name)...) - for i, f := range hints { - if i > 0 { - b = append(b, ", "...) - } - b, err = f.AppendQuery(fmter, b) - if err != nil { - return nil, err - } - } - b = append(b, ")"...) 
- return b, nil -} diff --git a/vendor/github.com/uptrace/bun/query_column_add.go b/vendor/github.com/uptrace/bun/query_column_add.go deleted file mode 100644 index 32a21338..00000000 --- a/vendor/github.com/uptrace/bun/query_column_add.go +++ /dev/null @@ -1,128 +0,0 @@ -package bun - -import ( - "context" - "database/sql" - "fmt" - - "github.com/uptrace/bun/internal" - "github.com/uptrace/bun/schema" -) - -type AddColumnQuery struct { - baseQuery - - ifNotExists bool -} - -var _ Query = (*AddColumnQuery)(nil) - -func NewAddColumnQuery(db *DB) *AddColumnQuery { - q := &AddColumnQuery{ - baseQuery: baseQuery{ - db: db, - conn: db.DB, - }, - } - return q -} - -func (q *AddColumnQuery) Conn(db IConn) *AddColumnQuery { - q.setConn(db) - return q -} - -func (q *AddColumnQuery) Model(model interface{}) *AddColumnQuery { - q.setModel(model) - return q -} - -func (q *AddColumnQuery) Err(err error) *AddColumnQuery { - q.setErr(err) - return q -} - -func (q *AddColumnQuery) Apply(fn func(*AddColumnQuery) *AddColumnQuery) *AddColumnQuery { - if fn != nil { - return fn(q) - } - return q -} - -//------------------------------------------------------------------------------ - -func (q *AddColumnQuery) Table(tables ...string) *AddColumnQuery { - for _, table := range tables { - q.addTable(schema.UnsafeIdent(table)) - } - return q -} - -func (q *AddColumnQuery) TableExpr(query string, args ...interface{}) *AddColumnQuery { - q.addTable(schema.SafeQuery(query, args)) - return q -} - -func (q *AddColumnQuery) ModelTableExpr(query string, args ...interface{}) *AddColumnQuery { - q.modelTableName = schema.SafeQuery(query, args) - return q -} - -//------------------------------------------------------------------------------ - -func (q *AddColumnQuery) ColumnExpr(query string, args ...interface{}) *AddColumnQuery { - q.addColumn(schema.SafeQuery(query, args)) - return q -} - -func (q *AddColumnQuery) IfNotExists() *AddColumnQuery { - q.ifNotExists = true - return q -} - 
-//------------------------------------------------------------------------------ - -func (q *AddColumnQuery) Operation() string { - return "ADD COLUMN" -} - -func (q *AddColumnQuery) AppendQuery(fmter schema.Formatter, b []byte) (_ []byte, err error) { - if q.err != nil { - return nil, q.err - } - if len(q.columns) != 1 { - return nil, fmt.Errorf("bun: AddColumnQuery requires exactly one column") - } - - b = append(b, "ALTER TABLE "...) - - b, err = q.appendFirstTable(fmter, b) - if err != nil { - return nil, err - } - - b = append(b, " ADD "...) - - if q.ifNotExists { - b = append(b, "IF NOT EXISTS "...) - } - - b, err = q.columns[0].AppendQuery(fmter, b) - if err != nil { - return nil, err - } - - return b, nil -} - -//------------------------------------------------------------------------------ - -func (q *AddColumnQuery) Exec(ctx context.Context, dest ...interface{}) (sql.Result, error) { - queryBytes, err := q.AppendQuery(q.db.fmter, q.db.makeQueryBytes()) - if err != nil { - return nil, err - } - - query := internal.String(queryBytes) - return q.exec(ctx, q, query) -} diff --git a/vendor/github.com/uptrace/bun/query_column_drop.go b/vendor/github.com/uptrace/bun/query_column_drop.go deleted file mode 100644 index 1439ed9b..00000000 --- a/vendor/github.com/uptrace/bun/query_column_drop.go +++ /dev/null @@ -1,130 +0,0 @@ -package bun - -import ( - "context" - "database/sql" - "fmt" - - "github.com/uptrace/bun/internal" - "github.com/uptrace/bun/schema" -) - -type DropColumnQuery struct { - baseQuery -} - -var _ Query = (*DropColumnQuery)(nil) - -func NewDropColumnQuery(db *DB) *DropColumnQuery { - q := &DropColumnQuery{ - baseQuery: baseQuery{ - db: db, - conn: db.DB, - }, - } - return q -} - -func (q *DropColumnQuery) Conn(db IConn) *DropColumnQuery { - q.setConn(db) - return q -} - -func (q *DropColumnQuery) Model(model interface{}) *DropColumnQuery { - q.setModel(model) - return q -} - -func (q *DropColumnQuery) Err(err error) *DropColumnQuery { - 
q.setErr(err) - return q -} - -func (q *DropColumnQuery) Apply(fn func(*DropColumnQuery) *DropColumnQuery) *DropColumnQuery { - if fn != nil { - return fn(q) - } - return q -} - -//------------------------------------------------------------------------------ - -func (q *DropColumnQuery) Table(tables ...string) *DropColumnQuery { - for _, table := range tables { - q.addTable(schema.UnsafeIdent(table)) - } - return q -} - -func (q *DropColumnQuery) TableExpr(query string, args ...interface{}) *DropColumnQuery { - q.addTable(schema.SafeQuery(query, args)) - return q -} - -func (q *DropColumnQuery) ModelTableExpr(query string, args ...interface{}) *DropColumnQuery { - q.modelTableName = schema.SafeQuery(query, args) - return q -} - -//------------------------------------------------------------------------------ - -func (q *DropColumnQuery) Column(columns ...string) *DropColumnQuery { - for _, column := range columns { - q.addColumn(schema.UnsafeIdent(column)) - } - return q -} - -func (q *DropColumnQuery) ColumnExpr(query string, args ...interface{}) *DropColumnQuery { - q.addColumn(schema.SafeQuery(query, args)) - return q -} - -//------------------------------------------------------------------------------ - -func (q *DropColumnQuery) Operation() string { - return "DROP COLUMN" -} - -func (q *DropColumnQuery) AppendQuery(fmter schema.Formatter, b []byte) (_ []byte, err error) { - if q.err != nil { - return nil, q.err - } - if len(q.columns) != 1 { - return nil, fmt.Errorf("bun: DropColumnQuery requires exactly one column") - } - - b = append(b, "ALTER TABLE "...) - - b, err = q.appendFirstTable(fmter, b) - if err != nil { - return nil, err - } - - b = append(b, " DROP COLUMN "...) 
- - b, err = q.columns[0].AppendQuery(fmter, b) - if err != nil { - return nil, err - } - - return b, nil -} - -//------------------------------------------------------------------------------ - -func (q *DropColumnQuery) Exec(ctx context.Context, dest ...interface{}) (sql.Result, error) { - queryBytes, err := q.AppendQuery(q.db.fmter, q.db.makeQueryBytes()) - if err != nil { - return nil, err - } - - query := internal.String(queryBytes) - - res, err := q.exec(ctx, q, query) - if err != nil { - return nil, err - } - - return res, nil -} diff --git a/vendor/github.com/uptrace/bun/query_delete.go b/vendor/github.com/uptrace/bun/query_delete.go deleted file mode 100644 index 49a750cc..00000000 --- a/vendor/github.com/uptrace/bun/query_delete.go +++ /dev/null @@ -1,381 +0,0 @@ -package bun - -import ( - "context" - "database/sql" - "time" - - "github.com/uptrace/bun/dialect/feature" - "github.com/uptrace/bun/internal" - "github.com/uptrace/bun/schema" -) - -type DeleteQuery struct { - whereBaseQuery - returningQuery -} - -var _ Query = (*DeleteQuery)(nil) - -func NewDeleteQuery(db *DB) *DeleteQuery { - q := &DeleteQuery{ - whereBaseQuery: whereBaseQuery{ - baseQuery: baseQuery{ - db: db, - conn: db.DB, - }, - }, - } - return q -} - -func (q *DeleteQuery) Conn(db IConn) *DeleteQuery { - q.setConn(db) - return q -} - -func (q *DeleteQuery) Model(model interface{}) *DeleteQuery { - q.setModel(model) - return q -} - -func (q *DeleteQuery) Err(err error) *DeleteQuery { - q.setErr(err) - return q -} - -// Apply calls the fn passing the DeleteQuery as an argument. 
-func (q *DeleteQuery) Apply(fn func(*DeleteQuery) *DeleteQuery) *DeleteQuery { - if fn != nil { - return fn(q) - } - return q -} - -func (q *DeleteQuery) With(name string, query schema.QueryAppender) *DeleteQuery { - q.addWith(name, query, false) - return q -} - -func (q *DeleteQuery) WithRecursive(name string, query schema.QueryAppender) *DeleteQuery { - q.addWith(name, query, true) - return q -} - -func (q *DeleteQuery) Table(tables ...string) *DeleteQuery { - for _, table := range tables { - q.addTable(schema.UnsafeIdent(table)) - } - return q -} - -func (q *DeleteQuery) TableExpr(query string, args ...interface{}) *DeleteQuery { - q.addTable(schema.SafeQuery(query, args)) - return q -} - -func (q *DeleteQuery) ModelTableExpr(query string, args ...interface{}) *DeleteQuery { - q.modelTableName = schema.SafeQuery(query, args) - return q -} - -//------------------------------------------------------------------------------ - -func (q *DeleteQuery) WherePK(cols ...string) *DeleteQuery { - q.addWhereCols(cols) - return q -} - -func (q *DeleteQuery) Where(query string, args ...interface{}) *DeleteQuery { - q.addWhere(schema.SafeQueryWithSep(query, args, " AND ")) - return q -} - -func (q *DeleteQuery) WhereOr(query string, args ...interface{}) *DeleteQuery { - q.addWhere(schema.SafeQueryWithSep(query, args, " OR ")) - return q -} - -func (q *DeleteQuery) WhereGroup(sep string, fn func(*DeleteQuery) *DeleteQuery) *DeleteQuery { - saved := q.where - q.where = nil - - q = fn(q) - - where := q.where - q.where = saved - - q.addWhereGroup(sep, where) - - return q -} - -func (q *DeleteQuery) WhereDeleted() *DeleteQuery { - q.whereDeleted() - return q -} - -func (q *DeleteQuery) WhereAllWithDeleted() *DeleteQuery { - q.whereAllWithDeleted() - return q -} - -func (q *DeleteQuery) ForceDelete() *DeleteQuery { - q.flags = q.flags.Set(forceDeleteFlag) - return q -} - -//------------------------------------------------------------------------------ - -// Returning adds a 
RETURNING clause to the query. -// -// To suppress the auto-generated RETURNING clause, use `Returning("NULL")`. -func (q *DeleteQuery) Returning(query string, args ...interface{}) *DeleteQuery { - q.addReturning(schema.SafeQuery(query, args)) - return q -} - -//------------------------------------------------------------------------------ - -func (q *DeleteQuery) Operation() string { - return "DELETE" -} - -func (q *DeleteQuery) AppendQuery(fmter schema.Formatter, b []byte) (_ []byte, err error) { - if q.err != nil { - return nil, q.err - } - - fmter = formatterWithModel(fmter, q) - - if q.isSoftDelete() { - now := time.Now() - - if err := q.tableModel.updateSoftDeleteField(now); err != nil { - return nil, err - } - - upd := &UpdateQuery{ - whereBaseQuery: q.whereBaseQuery, - returningQuery: q.returningQuery, - } - upd.Set(q.softDeleteSet(fmter, now)) - - return upd.AppendQuery(fmter, b) - } - - withAlias := q.db.features.Has(feature.DeleteTableAlias) - - b, err = q.appendWith(fmter, b) - if err != nil { - return nil, err - } - - b = append(b, "DELETE FROM "...) - - if withAlias { - b, err = q.appendFirstTableWithAlias(fmter, b) - } else { - b, err = q.appendFirstTable(fmter, b) - } - if err != nil { - return nil, err - } - - if q.hasMultiTables() { - b = append(b, " USING "...) - b, err = q.appendOtherTables(fmter, b) - if err != nil { - return nil, err - } - } - - if q.hasFeature(feature.Output) && q.hasReturning() { - b = append(b, " OUTPUT "...) - b, err = q.appendOutput(fmter, b) - if err != nil { - return nil, err - } - } - - b, err = q.mustAppendWhere(fmter, b, withAlias) - if err != nil { - return nil, err - } - - if q.hasFeature(feature.Returning) && q.hasReturning() { - b = append(b, " RETURNING "...) 
- b, err = q.appendReturning(fmter, b) - if err != nil { - return nil, err - } - } - - return b, nil -} - -func (q *DeleteQuery) isSoftDelete() bool { - return q.tableModel != nil && q.table.SoftDeleteField != nil && !q.flags.Has(forceDeleteFlag) -} - -func (q *DeleteQuery) softDeleteSet(fmter schema.Formatter, tm time.Time) string { - b := make([]byte, 0, 32) - if fmter.HasFeature(feature.UpdateMultiTable) { - b = append(b, q.table.SQLAlias...) - b = append(b, '.') - } - b = append(b, q.table.SoftDeleteField.SQLName...) - b = append(b, " = "...) - b = schema.Append(fmter, b, tm) - return internal.String(b) -} - -//------------------------------------------------------------------------------ - -func (q *DeleteQuery) Scan(ctx context.Context, dest ...interface{}) error { - _, err := q.scanOrExec(ctx, dest, true) - return err -} - -func (q *DeleteQuery) Exec(ctx context.Context, dest ...interface{}) (sql.Result, error) { - return q.scanOrExec(ctx, dest, len(dest) > 0) -} - -func (q *DeleteQuery) scanOrExec( - ctx context.Context, dest []interface{}, hasDest bool, -) (sql.Result, error) { - if q.err != nil { - return nil, q.err - } - - if q.table != nil { - if err := q.beforeDeleteHook(ctx); err != nil { - return nil, err - } - } - - // Run append model hooks before generating the query. - if err := q.beforeAppendModel(ctx, q); err != nil { - return nil, err - } - - // Generate the query before checking hasReturning. 
- queryBytes, err := q.AppendQuery(q.db.fmter, q.db.makeQueryBytes()) - if err != nil { - return nil, err - } - - useScan := hasDest || (q.hasReturning() && q.hasFeature(feature.Returning|feature.Output)) - var model Model - - if useScan { - var err error - model, err = q.getModel(dest) - if err != nil { - return nil, err - } - } - - query := internal.String(queryBytes) - - var res sql.Result - - if useScan { - res, err = q.scan(ctx, q, query, model, hasDest) - if err != nil { - return nil, err - } - } else { - res, err = q.exec(ctx, q, query) - if err != nil { - return nil, err - } - } - - if q.table != nil { - if err := q.afterDeleteHook(ctx); err != nil { - return nil, err - } - } - - return res, nil -} - -func (q *DeleteQuery) beforeDeleteHook(ctx context.Context) error { - if hook, ok := q.table.ZeroIface.(BeforeDeleteHook); ok { - if err := hook.BeforeDelete(ctx, q); err != nil { - return err - } - } - return nil -} - -func (q *DeleteQuery) afterDeleteHook(ctx context.Context) error { - if hook, ok := q.table.ZeroIface.(AfterDeleteHook); ok { - if err := hook.AfterDelete(ctx, q); err != nil { - return err - } - } - return nil -} - -func (q *DeleteQuery) String() string { - buf, err := q.AppendQuery(q.db.Formatter(), nil) - if err != nil { - panic(err) - } - - return string(buf) -} - -//------------------------------------------------------------------------------ - -func (q *DeleteQuery) QueryBuilder() QueryBuilder { - return &deleteQueryBuilder{q} -} - -func (q *DeleteQuery) ApplyQueryBuilder(fn func(QueryBuilder) QueryBuilder) *DeleteQuery { - return fn(q.QueryBuilder()).Unwrap().(*DeleteQuery) -} - -type deleteQueryBuilder struct { - *DeleteQuery -} - -func (q *deleteQueryBuilder) WhereGroup( - sep string, fn func(QueryBuilder) QueryBuilder, -) QueryBuilder { - q.DeleteQuery = q.DeleteQuery.WhereGroup(sep, func(qs *DeleteQuery) *DeleteQuery { - return fn(q).(*deleteQueryBuilder).DeleteQuery - }) - return q -} - -func (q *deleteQueryBuilder) Where(query 
string, args ...interface{}) QueryBuilder { - q.DeleteQuery.Where(query, args...) - return q -} - -func (q *deleteQueryBuilder) WhereOr(query string, args ...interface{}) QueryBuilder { - q.DeleteQuery.WhereOr(query, args...) - return q -} - -func (q *deleteQueryBuilder) WhereDeleted() QueryBuilder { - q.DeleteQuery.WhereDeleted() - return q -} - -func (q *deleteQueryBuilder) WhereAllWithDeleted() QueryBuilder { - q.DeleteQuery.WhereAllWithDeleted() - return q -} - -func (q *deleteQueryBuilder) WherePK(cols ...string) QueryBuilder { - q.DeleteQuery.WherePK(cols...) - return q -} - -func (q *deleteQueryBuilder) Unwrap() interface{} { - return q.DeleteQuery -} diff --git a/vendor/github.com/uptrace/bun/query_index_create.go b/vendor/github.com/uptrace/bun/query_index_create.go deleted file mode 100644 index 11824cfa..00000000 --- a/vendor/github.com/uptrace/bun/query_index_create.go +++ /dev/null @@ -1,254 +0,0 @@ -package bun - -import ( - "context" - "database/sql" - - "github.com/uptrace/bun/internal" - "github.com/uptrace/bun/schema" -) - -type CreateIndexQuery struct { - whereBaseQuery - - unique bool - fulltext bool - spatial bool - concurrently bool - ifNotExists bool - - index schema.QueryWithArgs - using schema.QueryWithArgs - include []schema.QueryWithArgs -} - -var _ Query = (*CreateIndexQuery)(nil) - -func NewCreateIndexQuery(db *DB) *CreateIndexQuery { - q := &CreateIndexQuery{ - whereBaseQuery: whereBaseQuery{ - baseQuery: baseQuery{ - db: db, - conn: db.DB, - }, - }, - } - return q -} - -func (q *CreateIndexQuery) Conn(db IConn) *CreateIndexQuery { - q.setConn(db) - return q -} - -func (q *CreateIndexQuery) Model(model interface{}) *CreateIndexQuery { - q.setModel(model) - return q -} - -func (q *CreateIndexQuery) Err(err error) *CreateIndexQuery { - q.setErr(err) - return q -} - -func (q *CreateIndexQuery) Unique() *CreateIndexQuery { - q.unique = true - return q -} - -func (q *CreateIndexQuery) Concurrently() *CreateIndexQuery { - q.concurrently = 
true - return q -} - -func (q *CreateIndexQuery) IfNotExists() *CreateIndexQuery { - q.ifNotExists = true - return q -} - -//------------------------------------------------------------------------------ - -func (q *CreateIndexQuery) Index(query string) *CreateIndexQuery { - q.index = schema.UnsafeIdent(query) - return q -} - -func (q *CreateIndexQuery) IndexExpr(query string, args ...interface{}) *CreateIndexQuery { - q.index = schema.SafeQuery(query, args) - return q -} - -//------------------------------------------------------------------------------ - -func (q *CreateIndexQuery) Table(tables ...string) *CreateIndexQuery { - for _, table := range tables { - q.addTable(schema.UnsafeIdent(table)) - } - return q -} - -func (q *CreateIndexQuery) TableExpr(query string, args ...interface{}) *CreateIndexQuery { - q.addTable(schema.SafeQuery(query, args)) - return q -} - -func (q *CreateIndexQuery) ModelTableExpr(query string, args ...interface{}) *CreateIndexQuery { - q.modelTableName = schema.SafeQuery(query, args) - return q -} - -func (q *CreateIndexQuery) Using(query string, args ...interface{}) *CreateIndexQuery { - q.using = schema.SafeQuery(query, args) - return q -} - -//------------------------------------------------------------------------------ - -func (q *CreateIndexQuery) Column(columns ...string) *CreateIndexQuery { - for _, column := range columns { - q.addColumn(schema.UnsafeIdent(column)) - } - return q -} - -func (q *CreateIndexQuery) ColumnExpr(query string, args ...interface{}) *CreateIndexQuery { - q.addColumn(schema.SafeQuery(query, args)) - return q -} - -func (q *CreateIndexQuery) ExcludeColumn(columns ...string) *CreateIndexQuery { - q.excludeColumn(columns) - return q -} - -//------------------------------------------------------------------------------ - -func (q *CreateIndexQuery) Include(columns ...string) *CreateIndexQuery { - for _, column := range columns { - q.include = append(q.include, schema.UnsafeIdent(column)) - } - return q -} 
- -func (q *CreateIndexQuery) IncludeExpr(query string, args ...interface{}) *CreateIndexQuery { - q.include = append(q.include, schema.SafeQuery(query, args)) - return q -} - -//------------------------------------------------------------------------------ - -func (q *CreateIndexQuery) Where(query string, args ...interface{}) *CreateIndexQuery { - q.addWhere(schema.SafeQueryWithSep(query, args, " AND ")) - return q -} - -func (q *CreateIndexQuery) WhereOr(query string, args ...interface{}) *CreateIndexQuery { - q.addWhere(schema.SafeQueryWithSep(query, args, " OR ")) - return q -} - -//------------------------------------------------------------------------------ - -func (q *CreateIndexQuery) Operation() string { - return "CREATE INDEX" -} - -func (q *CreateIndexQuery) AppendQuery(fmter schema.Formatter, b []byte) (_ []byte, err error) { - if q.err != nil { - return nil, q.err - } - - b = append(b, "CREATE "...) - - if q.unique { - b = append(b, "UNIQUE "...) - } - if q.fulltext { - b = append(b, "FULLTEXT "...) - } - if q.spatial { - b = append(b, "SPATIAL "...) - } - - b = append(b, "INDEX "...) - - if q.concurrently { - b = append(b, "CONCURRENTLY "...) - } - if q.ifNotExists { - b = append(b, "IF NOT EXISTS "...) - } - - b, err = q.index.AppendQuery(fmter, b) - if err != nil { - return nil, err - } - - b = append(b, " ON "...) - b, err = q.appendFirstTable(fmter, b) - if err != nil { - return nil, err - } - - if !q.using.IsZero() { - b = append(b, " USING "...) - b, err = q.using.AppendQuery(fmter, b) - if err != nil { - return nil, err - } - } - - b = append(b, " ("...) - for i, col := range q.columns { - if i > 0 { - b = append(b, ", "...) - } - b, err = col.AppendQuery(fmter, b) - if err != nil { - return nil, err - } - } - b = append(b, ')') - - if len(q.include) > 0 { - b = append(b, " INCLUDE ("...) - for i, col := range q.include { - if i > 0 { - b = append(b, ", "...) 
- } - b, err = col.AppendQuery(fmter, b) - if err != nil { - return nil, err - } - } - b = append(b, ')') - } - - if len(q.where) > 0 { - b = append(b, " WHERE "...) - b, err = appendWhere(fmter, b, q.where) - if err != nil { - return nil, err - } - } - - return b, nil -} - -//------------------------------------------------------------------------------ - -func (q *CreateIndexQuery) Exec(ctx context.Context, dest ...interface{}) (sql.Result, error) { - queryBytes, err := q.AppendQuery(q.db.fmter, q.db.makeQueryBytes()) - if err != nil { - return nil, err - } - - query := internal.String(queryBytes) - - res, err := q.exec(ctx, q, query) - if err != nil { - return nil, err - } - - return res, nil -} diff --git a/vendor/github.com/uptrace/bun/query_index_drop.go b/vendor/github.com/uptrace/bun/query_index_drop.go deleted file mode 100644 index ae28e795..00000000 --- a/vendor/github.com/uptrace/bun/query_index_drop.go +++ /dev/null @@ -1,121 +0,0 @@ -package bun - -import ( - "context" - "database/sql" - - "github.com/uptrace/bun/internal" - "github.com/uptrace/bun/schema" -) - -type DropIndexQuery struct { - baseQuery - cascadeQuery - - concurrently bool - ifExists bool - - index schema.QueryWithArgs -} - -var _ Query = (*DropIndexQuery)(nil) - -func NewDropIndexQuery(db *DB) *DropIndexQuery { - q := &DropIndexQuery{ - baseQuery: baseQuery{ - db: db, - conn: db.DB, - }, - } - return q -} - -func (q *DropIndexQuery) Conn(db IConn) *DropIndexQuery { - q.setConn(db) - return q -} - -func (q *DropIndexQuery) Model(model interface{}) *DropIndexQuery { - q.setModel(model) - return q -} - -func (q *DropIndexQuery) Err(err error) *DropIndexQuery { - q.setErr(err) - return q -} - -//------------------------------------------------------------------------------ - -func (q *DropIndexQuery) Concurrently() *DropIndexQuery { - q.concurrently = true - return q -} - -func (q *DropIndexQuery) IfExists() *DropIndexQuery { - q.ifExists = true - return q -} - -func (q *DropIndexQuery) 
Cascade() *DropIndexQuery { - q.cascade = true - return q -} - -func (q *DropIndexQuery) Restrict() *DropIndexQuery { - q.restrict = true - return q -} - -func (q *DropIndexQuery) Index(query string, args ...interface{}) *DropIndexQuery { - q.index = schema.SafeQuery(query, args) - return q -} - -//------------------------------------------------------------------------------ - -func (q *DropIndexQuery) Operation() string { - return "DROP INDEX" -} - -func (q *DropIndexQuery) AppendQuery(fmter schema.Formatter, b []byte) (_ []byte, err error) { - if q.err != nil { - return nil, q.err - } - - b = append(b, "DROP INDEX "...) - - if q.concurrently { - b = append(b, "CONCURRENTLY "...) - } - if q.ifExists { - b = append(b, "IF EXISTS "...) - } - - b, err = q.index.AppendQuery(fmter, b) - if err != nil { - return nil, err - } - - b = q.appendCascade(fmter, b) - - return b, nil -} - -//------------------------------------------------------------------------------ - -func (q *DropIndexQuery) Exec(ctx context.Context, dest ...interface{}) (sql.Result, error) { - queryBytes, err := q.AppendQuery(q.db.fmter, q.db.makeQueryBytes()) - if err != nil { - return nil, err - } - - query := internal.String(queryBytes) - - res, err := q.exec(ctx, q, query) - if err != nil { - return nil, err - } - - return res, nil -} diff --git a/vendor/github.com/uptrace/bun/query_insert.go b/vendor/github.com/uptrace/bun/query_insert.go deleted file mode 100644 index 7cf05375..00000000 --- a/vendor/github.com/uptrace/bun/query_insert.go +++ /dev/null @@ -1,684 +0,0 @@ -package bun - -import ( - "context" - "database/sql" - "fmt" - "reflect" - "strings" - - "github.com/uptrace/bun/dialect/feature" - "github.com/uptrace/bun/internal" - "github.com/uptrace/bun/schema" -) - -type InsertQuery struct { - whereBaseQuery - returningQuery - customValueQuery - - on schema.QueryWithArgs - setQuery - - ignore bool - replace bool -} - -var _ Query = (*InsertQuery)(nil) - -func NewInsertQuery(db *DB) 
*InsertQuery { - q := &InsertQuery{ - whereBaseQuery: whereBaseQuery{ - baseQuery: baseQuery{ - db: db, - conn: db.DB, - }, - }, - } - return q -} - -func (q *InsertQuery) Conn(db IConn) *InsertQuery { - q.setConn(db) - return q -} - -func (q *InsertQuery) Model(model interface{}) *InsertQuery { - q.setModel(model) - return q -} - -func (q *InsertQuery) Err(err error) *InsertQuery { - q.setErr(err) - return q -} - -// Apply calls the fn passing the SelectQuery as an argument. -func (q *InsertQuery) Apply(fn func(*InsertQuery) *InsertQuery) *InsertQuery { - if fn != nil { - return fn(q) - } - return q -} - -func (q *InsertQuery) With(name string, query schema.QueryAppender) *InsertQuery { - q.addWith(name, query, false) - return q -} - -func (q *InsertQuery) WithRecursive(name string, query schema.QueryAppender) *InsertQuery { - q.addWith(name, query, true) - return q -} - -//------------------------------------------------------------------------------ - -func (q *InsertQuery) Table(tables ...string) *InsertQuery { - for _, table := range tables { - q.addTable(schema.UnsafeIdent(table)) - } - return q -} - -func (q *InsertQuery) TableExpr(query string, args ...interface{}) *InsertQuery { - q.addTable(schema.SafeQuery(query, args)) - return q -} - -func (q *InsertQuery) ModelTableExpr(query string, args ...interface{}) *InsertQuery { - q.modelTableName = schema.SafeQuery(query, args) - return q -} - -//------------------------------------------------------------------------------ - -func (q *InsertQuery) Column(columns ...string) *InsertQuery { - for _, column := range columns { - q.addColumn(schema.UnsafeIdent(column)) - } - return q -} - -func (q *InsertQuery) ColumnExpr(query string, args ...interface{}) *InsertQuery { - q.addColumn(schema.SafeQuery(query, args)) - return q -} - -func (q *InsertQuery) ExcludeColumn(columns ...string) *InsertQuery { - q.excludeColumn(columns) - return q -} - -// Value overwrites model value for the column. 
-func (q *InsertQuery) Value(column string, expr string, args ...interface{}) *InsertQuery { - if q.table == nil { - q.err = errNilModel - return q - } - q.addValue(q.table, column, expr, args) - return q -} - -func (q *InsertQuery) Where(query string, args ...interface{}) *InsertQuery { - q.addWhere(schema.SafeQueryWithSep(query, args, " AND ")) - return q -} - -func (q *InsertQuery) WhereOr(query string, args ...interface{}) *InsertQuery { - q.addWhere(schema.SafeQueryWithSep(query, args, " OR ")) - return q -} - -//------------------------------------------------------------------------------ - -// Returning adds a RETURNING clause to the query. -// -// To suppress the auto-generated RETURNING clause, use `Returning("")`. -func (q *InsertQuery) Returning(query string, args ...interface{}) *InsertQuery { - q.addReturning(schema.SafeQuery(query, args)) - return q -} - -//------------------------------------------------------------------------------ - -// Ignore generates different queries depending on the DBMS: -// - On MySQL, it generates `INSERT IGNORE INTO`. -// - On PostgreSQL, it generates `ON CONFLICT DO NOTHING`. -func (q *InsertQuery) Ignore() *InsertQuery { - if q.db.fmter.HasFeature(feature.InsertOnConflict) { - return q.On("CONFLICT DO NOTHING") - } - if q.db.fmter.HasFeature(feature.InsertIgnore) { - q.ignore = true - } - return q -} - -// Replaces generates a `REPLACE INTO` query (MySQL and MariaDB). -func (q *InsertQuery) Replace() *InsertQuery { - q.replace = true - return q -} - -//------------------------------------------------------------------------------ - -func (q *InsertQuery) Operation() string { - return "INSERT" -} - -func (q *InsertQuery) AppendQuery(fmter schema.Formatter, b []byte) (_ []byte, err error) { - if q.err != nil { - return nil, q.err - } - - fmter = formatterWithModel(fmter, q) - - b, err = q.appendWith(fmter, b) - if err != nil { - return nil, err - } - - if q.replace { - b = append(b, "REPLACE "...) 
- } else { - b = append(b, "INSERT "...) - if q.ignore { - b = append(b, "IGNORE "...) - } - } - b = append(b, "INTO "...) - - if q.db.features.Has(feature.InsertTableAlias) && !q.on.IsZero() { - b, err = q.appendFirstTableWithAlias(fmter, b) - } else { - b, err = q.appendFirstTable(fmter, b) - } - if err != nil { - return nil, err - } - - b, err = q.appendColumnsValues(fmter, b, false) - if err != nil { - return nil, err - } - - b, err = q.appendOn(fmter, b) - if err != nil { - return nil, err - } - - if q.hasFeature(feature.InsertReturning) && q.hasReturning() { - b = append(b, " RETURNING "...) - b, err = q.appendReturning(fmter, b) - if err != nil { - return nil, err - } - } - - return b, nil -} - -func (q *InsertQuery) appendColumnsValues( - fmter schema.Formatter, b []byte, skipOutput bool, -) (_ []byte, err error) { - if q.hasMultiTables() { - if q.columns != nil { - b = append(b, " ("...) - b, err = q.appendColumns(fmter, b) - if err != nil { - return nil, err - } - b = append(b, ")"...) - } - - if q.hasFeature(feature.Output) && q.hasReturning() { - b = append(b, " OUTPUT "...) - b, err = q.appendOutput(fmter, b) - if err != nil { - return nil, err - } - } - - b = append(b, " SELECT "...) - - if q.columns != nil { - b, err = q.appendColumns(fmter, b) - if err != nil { - return nil, err - } - } else { - b = append(b, "*"...) - } - - b = append(b, " FROM "...) - b, err = q.appendOtherTables(fmter, b) - if err != nil { - return nil, err - } - - return b, nil - } - - if m, ok := q.model.(*mapModel); ok { - return m.appendColumnsValues(fmter, b), nil - } - if _, ok := q.model.(*mapSliceModel); ok { - return nil, fmt.Errorf("Insert(*[]map[string]interface{}) is not supported") - } - - if q.model == nil { - return nil, errNilModel - } - - // Build fields to populate RETURNING clause. - fields, err := q.getFields() - if err != nil { - return nil, err - } - - b = append(b, " ("...) - b = q.appendFields(fmter, b, fields) - b = append(b, ")"...) 
- - if q.hasFeature(feature.Output) && q.hasReturning() && !skipOutput { - b = append(b, " OUTPUT "...) - b, err = q.appendOutput(fmter, b) - if err != nil { - return nil, err - } - } - - b = append(b, " VALUES ("...) - - switch model := q.tableModel.(type) { - case *structTableModel: - b, err = q.appendStructValues(fmter, b, fields, model.strct) - if err != nil { - return nil, err - } - case *sliceTableModel: - b, err = q.appendSliceValues(fmter, b, fields, model.slice) - if err != nil { - return nil, err - } - default: - return nil, fmt.Errorf("bun: Insert does not support %T", q.tableModel) - } - - b = append(b, ')') - - return b, nil -} - -func (q *InsertQuery) appendStructValues( - fmter schema.Formatter, b []byte, fields []*schema.Field, strct reflect.Value, -) (_ []byte, err error) { - isTemplate := fmter.IsNop() - for i, f := range fields { - if i > 0 { - b = append(b, ", "...) - } - - app, ok := q.modelValues[f.Name] - if ok { - b, err = app.AppendQuery(fmter, b) - if err != nil { - return nil, err - } - q.addReturningField(f) - continue - } - - switch { - case isTemplate: - b = append(b, '?') - case (f.IsPtr && f.HasNilValue(strct)) || (f.NullZero && f.HasZeroValue(strct)): - if q.db.features.Has(feature.DefaultPlaceholder) { - b = append(b, "DEFAULT"...) - } else if f.SQLDefault != "" { - b = append(b, f.SQLDefault...) - } else { - b = append(b, "NULL"...) - } - q.addReturningField(f) - default: - b = f.AppendValue(fmter, b, strct) - } - } - - for i, v := range q.extraValues { - if i > 0 || len(fields) > 0 { - b = append(b, ", "...) 
- } - - b, err = v.value.AppendQuery(fmter, b) - if err != nil { - return nil, err - } - } - - return b, nil -} - -func (q *InsertQuery) appendSliceValues( - fmter schema.Formatter, b []byte, fields []*schema.Field, slice reflect.Value, -) (_ []byte, err error) { - if fmter.IsNop() { - return q.appendStructValues(fmter, b, fields, reflect.Value{}) - } - - sliceLen := slice.Len() - for i := 0; i < sliceLen; i++ { - if i > 0 { - b = append(b, "), ("...) - } - el := indirect(slice.Index(i)) - b, err = q.appendStructValues(fmter, b, fields, el) - if err != nil { - return nil, err - } - } - - return b, nil -} - -func (q *InsertQuery) getFields() ([]*schema.Field, error) { - hasIdentity := q.db.features.Has(feature.Identity) - - if len(q.columns) > 0 || q.db.features.Has(feature.DefaultPlaceholder) && !hasIdentity { - return q.baseQuery.getFields() - } - - var strct reflect.Value - - switch model := q.tableModel.(type) { - case *structTableModel: - strct = model.strct - case *sliceTableModel: - if model.sliceLen == 0 { - return nil, fmt.Errorf("bun: Insert(empty %T)", model.slice.Type()) - } - strct = indirect(model.slice.Index(0)) - default: - return nil, errNilModel - } - - fields := make([]*schema.Field, 0, len(q.table.Fields)) - - for _, f := range q.table.Fields { - if hasIdentity && f.AutoIncrement { - q.addReturningField(f) - continue - } - if f.NotNull && f.SQLDefault == "" { - if (f.IsPtr && f.HasNilValue(strct)) || (f.NullZero && f.HasZeroValue(strct)) { - q.addReturningField(f) - continue - } - } - fields = append(fields, f) - } - - return fields, nil -} - -func (q *InsertQuery) appendFields( - fmter schema.Formatter, b []byte, fields []*schema.Field, -) []byte { - b = appendColumns(b, "", fields) - for i, v := range q.extraValues { - if i > 0 || len(fields) > 0 { - b = append(b, ", "...) 
- } - b = fmter.AppendIdent(b, v.column) - } - return b -} - -//------------------------------------------------------------------------------ - -func (q *InsertQuery) On(s string, args ...interface{}) *InsertQuery { - q.on = schema.SafeQuery(s, args) - return q -} - -func (q *InsertQuery) Set(query string, args ...interface{}) *InsertQuery { - q.addSet(schema.SafeQuery(query, args)) - return q -} - -func (q *InsertQuery) appendOn(fmter schema.Formatter, b []byte) (_ []byte, err error) { - if q.on.IsZero() { - return b, nil - } - - b = append(b, " ON "...) - b, err = q.on.AppendQuery(fmter, b) - if err != nil { - return nil, err - } - - if len(q.set) > 0 { - if fmter.HasFeature(feature.InsertOnDuplicateKey) { - b = append(b, ' ') - } else { - b = append(b, " SET "...) - } - - b, err = q.appendSet(fmter, b) - if err != nil { - return nil, err - } - } else if q.onConflictDoUpdate() { - fields, err := q.getDataFields() - if err != nil { - return nil, err - } - - if len(fields) == 0 { - fields = q.tableModel.Table().DataFields - } - - b = q.appendSetExcluded(b, fields) - } else if q.onDuplicateKeyUpdate() { - fields, err := q.getDataFields() - if err != nil { - return nil, err - } - - if len(fields) == 0 { - fields = q.tableModel.Table().DataFields - } - - b = q.appendSetValues(b, fields) - } - - if len(q.where) > 0 { - b = append(b, " WHERE "...) - - b, err = appendWhere(fmter, b, q.where) - if err != nil { - return nil, err - } - } - - return b, nil -} - -func (q *InsertQuery) onConflictDoUpdate() bool { - return strings.HasSuffix(strings.ToUpper(q.on.Query), " DO UPDATE") -} - -func (q *InsertQuery) onDuplicateKeyUpdate() bool { - return strings.ToUpper(q.on.Query) == "DUPLICATE KEY UPDATE" -} - -func (q *InsertQuery) appendSetExcluded(b []byte, fields []*schema.Field) []byte { - b = append(b, " SET "...) - for i, f := range fields { - if i > 0 { - b = append(b, ", "...) - } - b = append(b, f.SQLName...) - b = append(b, " = EXCLUDED."...) 
- b = append(b, f.SQLName...) - } - return b -} - -func (q *InsertQuery) appendSetValues(b []byte, fields []*schema.Field) []byte { - b = append(b, " "...) - for i, f := range fields { - if i > 0 { - b = append(b, ", "...) - } - b = append(b, f.SQLName...) - b = append(b, " = VALUES("...) - b = append(b, f.SQLName...) - b = append(b, ")"...) - } - return b -} - -//------------------------------------------------------------------------------ - -func (q *InsertQuery) Scan(ctx context.Context, dest ...interface{}) error { - _, err := q.scanOrExec(ctx, dest, true) - return err -} - -func (q *InsertQuery) Exec(ctx context.Context, dest ...interface{}) (sql.Result, error) { - return q.scanOrExec(ctx, dest, len(dest) > 0) -} - -func (q *InsertQuery) scanOrExec( - ctx context.Context, dest []interface{}, hasDest bool, -) (sql.Result, error) { - if q.err != nil { - return nil, q.err - } - - if q.table != nil { - if err := q.beforeInsertHook(ctx); err != nil { - return nil, err - } - } - - // Run append model hooks before generating the query. - if err := q.beforeAppendModel(ctx, q); err != nil { - return nil, err - } - - // Generate the query before checking hasReturning. 
- queryBytes, err := q.AppendQuery(q.db.fmter, q.db.makeQueryBytes()) - if err != nil { - return nil, err - } - - useScan := hasDest || (q.hasReturning() && q.hasFeature(feature.InsertReturning|feature.Output)) - var model Model - - if useScan { - var err error - model, err = q.getModel(dest) - if err != nil { - return nil, err - } - } - - query := internal.String(queryBytes) - var res sql.Result - - if useScan { - res, err = q.scan(ctx, q, query, model, hasDest) - if err != nil { - return nil, err - } - } else { - res, err = q.exec(ctx, q, query) - if err != nil { - return nil, err - } - - if err := q.tryLastInsertID(res, dest); err != nil { - return nil, err - } - } - - if q.table != nil { - if err := q.afterInsertHook(ctx); err != nil { - return nil, err - } - } - - return res, nil -} - -func (q *InsertQuery) beforeInsertHook(ctx context.Context) error { - if hook, ok := q.table.ZeroIface.(BeforeInsertHook); ok { - if err := hook.BeforeInsert(ctx, q); err != nil { - return err - } - } - return nil -} - -func (q *InsertQuery) afterInsertHook(ctx context.Context) error { - if hook, ok := q.table.ZeroIface.(AfterInsertHook); ok { - if err := hook.AfterInsert(ctx, q); err != nil { - return err - } - } - return nil -} - -func (q *InsertQuery) tryLastInsertID(res sql.Result, dest []interface{}) error { - if q.db.features.Has(feature.Returning) || - q.db.features.Has(feature.Output) || - q.table == nil || - len(q.table.PKs) != 1 || - !q.table.PKs[0].AutoIncrement { - return nil - } - - id, err := res.LastInsertId() - if err != nil { - return err - } - if id == 0 { - return nil - } - - model, err := q.getModel(dest) - if err != nil { - return err - } - - pk := q.table.PKs[0] - switch model := model.(type) { - case *structTableModel: - if err := pk.ScanValue(model.strct, id); err != nil { - return err - } - case *sliceTableModel: - sliceLen := model.slice.Len() - for i := 0; i < sliceLen; i++ { - strct := indirect(model.slice.Index(i)) - if err := pk.ScanValue(strct, 
id); err != nil { - return err - } - id++ - } - } - - return nil -} - -func (q *InsertQuery) String() string { - buf, err := q.AppendQuery(q.db.Formatter(), nil) - if err != nil { - panic(err) - } - - return string(buf) -} diff --git a/vendor/github.com/uptrace/bun/query_merge.go b/vendor/github.com/uptrace/bun/query_merge.go deleted file mode 100644 index 706dc20a..00000000 --- a/vendor/github.com/uptrace/bun/query_merge.go +++ /dev/null @@ -1,322 +0,0 @@ -package bun - -import ( - "context" - "database/sql" - "errors" - - "github.com/uptrace/bun/dialect" - "github.com/uptrace/bun/dialect/feature" - "github.com/uptrace/bun/internal" - "github.com/uptrace/bun/schema" -) - -type MergeQuery struct { - baseQuery - returningQuery - - using schema.QueryWithArgs - on schema.QueryWithArgs - when []schema.QueryAppender -} - -var _ Query = (*MergeQuery)(nil) - -func NewMergeQuery(db *DB) *MergeQuery { - q := &MergeQuery{ - baseQuery: baseQuery{ - db: db, - conn: db.DB, - }, - } - if !(q.db.dialect.Name() == dialect.MSSQL || q.db.dialect.Name() == dialect.PG) { - q.err = errors.New("bun: merge not supported for current dialect") - } - return q -} - -func (q *MergeQuery) Conn(db IConn) *MergeQuery { - q.setConn(db) - return q -} - -func (q *MergeQuery) Model(model interface{}) *MergeQuery { - q.setModel(model) - return q -} - -func (q *MergeQuery) Err(err error) *MergeQuery { - q.setErr(err) - return q -} - -// Apply calls the fn passing the MergeQuery as an argument. 
-func (q *MergeQuery) Apply(fn func(*MergeQuery) *MergeQuery) *MergeQuery { - if fn != nil { - return fn(q) - } - return q -} - -func (q *MergeQuery) With(name string, query schema.QueryAppender) *MergeQuery { - q.addWith(name, query, false) - return q -} - -func (q *MergeQuery) WithRecursive(name string, query schema.QueryAppender) *MergeQuery { - q.addWith(name, query, true) - return q -} - -//------------------------------------------------------------------------------ - -func (q *MergeQuery) Table(tables ...string) *MergeQuery { - for _, table := range tables { - q.addTable(schema.UnsafeIdent(table)) - } - return q -} - -func (q *MergeQuery) TableExpr(query string, args ...interface{}) *MergeQuery { - q.addTable(schema.SafeQuery(query, args)) - return q -} - -func (q *MergeQuery) ModelTableExpr(query string, args ...interface{}) *MergeQuery { - q.modelTableName = schema.SafeQuery(query, args) - return q -} - -//------------------------------------------------------------------------------ - -// Returning adds a RETURNING clause to the query. -// -// To suppress the auto-generated RETURNING clause, use `Returning("NULL")`. -// Only for mssql output, postgres not supported returning in merge query -func (q *MergeQuery) Returning(query string, args ...interface{}) *MergeQuery { - q.addReturning(schema.SafeQuery(query, args)) - return q -} - -//------------------------------------------------------------------------------ - -func (q *MergeQuery) Using(s string, args ...interface{}) *MergeQuery { - q.using = schema.SafeQuery(s, args) - return q -} - -func (q *MergeQuery) On(s string, args ...interface{}) *MergeQuery { - q.on = schema.SafeQuery(s, args) - return q -} - -// WhenInsert for when insert clause. 
-func (q *MergeQuery) WhenInsert(expr string, fn func(q *InsertQuery) *InsertQuery) *MergeQuery { - sq := NewInsertQuery(q.db) - // apply the model as default into sub query, since appendColumnsValues required - if q.model != nil { - sq = sq.Model(q.model) - } - sq = sq.Apply(fn) - q.when = append(q.when, &whenInsert{expr: expr, query: sq}) - return q -} - -// WhenUpdate for when update clause. -func (q *MergeQuery) WhenUpdate(expr string, fn func(q *UpdateQuery) *UpdateQuery) *MergeQuery { - sq := NewUpdateQuery(q.db) - // apply the model as default into sub query - if q.model != nil { - sq = sq.Model(q.model) - } - sq = sq.Apply(fn) - q.when = append(q.when, &whenUpdate{expr: expr, query: sq}) - return q -} - -// WhenDelete for when delete clause. -func (q *MergeQuery) WhenDelete(expr string) *MergeQuery { - q.when = append(q.when, &whenDelete{expr: expr}) - return q -} - -// When for raw expression clause. -func (q *MergeQuery) When(expr string, args ...interface{}) *MergeQuery { - q.when = append(q.when, schema.SafeQuery(expr, args)) - return q -} - -//------------------------------------------------------------------------------ - -func (q *MergeQuery) Operation() string { - return "MERGE" -} - -func (q *MergeQuery) AppendQuery(fmter schema.Formatter, b []byte) (_ []byte, err error) { - if q.err != nil { - return nil, q.err - } - - fmter = formatterWithModel(fmter, q) - - b, err = q.appendWith(fmter, b) - if err != nil { - return nil, err - } - - b = append(b, "MERGE "...) - if q.db.dialect.Name() == dialect.PG { - b = append(b, "INTO "...) - } - - b, err = q.appendFirstTableWithAlias(fmter, b) - if err != nil { - return nil, err - } - - b = append(b, " USING "...) - b, err = q.using.AppendQuery(fmter, b) - if err != nil { - return nil, err - } - - b = append(b, " ON "...) - b, err = q.on.AppendQuery(fmter, b) - if err != nil { - return nil, err - } - - for _, w := range q.when { - b = append(b, " WHEN "...) 
- b, err = w.AppendQuery(fmter, b) - if err != nil { - return nil, err - } - } - - if q.hasFeature(feature.Output) && q.hasReturning() { - b = append(b, " OUTPUT "...) - b, err = q.appendOutput(fmter, b) - if err != nil { - return nil, err - } - } - - // A MERGE statement must be terminated by a semi-colon (;). - b = append(b, ";"...) - - return b, nil -} - -//------------------------------------------------------------------------------ - -func (q *MergeQuery) Scan(ctx context.Context, dest ...interface{}) error { - _, err := q.scanOrExec(ctx, dest, true) - return err -} - -func (q *MergeQuery) Exec(ctx context.Context, dest ...interface{}) (sql.Result, error) { - return q.scanOrExec(ctx, dest, len(dest) > 0) -} - -func (q *MergeQuery) scanOrExec( - ctx context.Context, dest []interface{}, hasDest bool, -) (sql.Result, error) { - if q.err != nil { - return nil, q.err - } - - // Run append model hooks before generating the query. - if err := q.beforeAppendModel(ctx, q); err != nil { - return nil, err - } - - // Generate the query before checking hasReturning. 
- queryBytes, err := q.AppendQuery(q.db.fmter, q.db.makeQueryBytes()) - if err != nil { - return nil, err - } - - useScan := hasDest || (q.hasReturning() && q.hasFeature(feature.InsertReturning|feature.Output)) - var model Model - - if useScan { - var err error - model, err = q.getModel(dest) - if err != nil { - return nil, err - } - } - - query := internal.String(queryBytes) - var res sql.Result - - if useScan { - res, err = q.scan(ctx, q, query, model, true) - if err != nil { - return nil, err - } - } else { - res, err = q.exec(ctx, q, query) - if err != nil { - return nil, err - } - } - - return res, nil -} - -func (q *MergeQuery) String() string { - buf, err := q.AppendQuery(q.db.Formatter(), nil) - if err != nil { - panic(err) - } - - return string(buf) -} - -//------------------------------------------------------------------------------ - -type whenInsert struct { - expr string - query *InsertQuery -} - -func (w *whenInsert) AppendQuery(fmter schema.Formatter, b []byte) (_ []byte, err error) { - b = append(b, w.expr...) - if w.query != nil { - b = append(b, " THEN INSERT"...) - b, err = w.query.appendColumnsValues(fmter, b, true) - if err != nil { - return nil, err - } - } - return b, nil -} - -type whenUpdate struct { - expr string - query *UpdateQuery -} - -func (w *whenUpdate) AppendQuery(fmter schema.Formatter, b []byte) (_ []byte, err error) { - b = append(b, w.expr...) - if w.query != nil { - b = append(b, " THEN UPDATE SET "...) - b, err = w.query.appendSet(fmter, b) - if err != nil { - return nil, err - } - } - return b, nil -} - -type whenDelete struct { - expr string -} - -func (w *whenDelete) AppendQuery(fmter schema.Formatter, b []byte) (_ []byte, err error) { - b = append(b, w.expr...) - b = append(b, " THEN DELETE"...) 
- return b, nil -} diff --git a/vendor/github.com/uptrace/bun/query_raw.go b/vendor/github.com/uptrace/bun/query_raw.go deleted file mode 100644 index 7afa4d53..00000000 --- a/vendor/github.com/uptrace/bun/query_raw.go +++ /dev/null @@ -1,70 +0,0 @@ -package bun - -import ( - "context" - - "github.com/uptrace/bun/schema" -) - -type RawQuery struct { - baseQuery - - query string - args []interface{} -} - -// Deprecated: Use NewRaw instead. When add it to IDB, it conflicts with the sql.Conn#Raw -func (db *DB) Raw(query string, args ...interface{}) *RawQuery { - return &RawQuery{ - baseQuery: baseQuery{ - db: db, - conn: db.DB, - }, - query: query, - args: args, - } -} - -func NewRawQuery(db *DB, query string, args ...interface{}) *RawQuery { - return &RawQuery{ - baseQuery: baseQuery{ - db: db, - conn: db.DB, - }, - query: query, - args: args, - } -} - -func (q *RawQuery) Conn(db IConn) *RawQuery { - q.setConn(db) - return q -} - -func (q *RawQuery) Err(err error) *RawQuery { - q.setErr(err) - return q -} - -func (q *RawQuery) Scan(ctx context.Context, dest ...interface{}) error { - if q.err != nil { - return q.err - } - - model, err := q.getModel(dest) - if err != nil { - return err - } - - query := q.db.format(q.query, q.args) - _, err = q.scan(ctx, q, query, model, true) - return err -} - -func (q *RawQuery) AppendQuery(fmter schema.Formatter, b []byte) ([]byte, error) { - return fmter.AppendQuery(b, q.query, q.args...), nil -} - -func (q *RawQuery) Operation() string { - return "SELECT" -} diff --git a/vendor/github.com/uptrace/bun/query_select.go b/vendor/github.com/uptrace/bun/query_select.go deleted file mode 100644 index a24a9f6f..00000000 --- a/vendor/github.com/uptrace/bun/query_select.go +++ /dev/null @@ -1,1210 +0,0 @@ -package bun - -import ( - "bytes" - "context" - "database/sql" - "errors" - "fmt" - "strconv" - "strings" - "sync" - - "github.com/uptrace/bun/dialect" - - "github.com/uptrace/bun/dialect/feature" - "github.com/uptrace/bun/internal" - 
"github.com/uptrace/bun/schema" -) - -type union struct { - expr string - query *SelectQuery -} - -type SelectQuery struct { - whereBaseQuery - idxHintsQuery - - distinctOn []schema.QueryWithArgs - joins []joinQuery - group []schema.QueryWithArgs - having []schema.QueryWithArgs - order []schema.QueryWithArgs - limit int32 - offset int32 - selFor schema.QueryWithArgs - - union []union -} - -var _ Query = (*SelectQuery)(nil) - -func NewSelectQuery(db *DB) *SelectQuery { - return &SelectQuery{ - whereBaseQuery: whereBaseQuery{ - baseQuery: baseQuery{ - db: db, - conn: db.DB, - }, - }, - } -} - -func (q *SelectQuery) Conn(db IConn) *SelectQuery { - q.setConn(db) - return q -} - -func (q *SelectQuery) Model(model interface{}) *SelectQuery { - q.setModel(model) - return q -} - -func (q *SelectQuery) Err(err error) *SelectQuery { - q.setErr(err) - return q -} - -// Apply calls the fn passing the SelectQuery as an argument. -func (q *SelectQuery) Apply(fn func(*SelectQuery) *SelectQuery) *SelectQuery { - if fn != nil { - return fn(q) - } - return q -} - -func (q *SelectQuery) With(name string, query schema.QueryAppender) *SelectQuery { - q.addWith(name, query, false) - return q -} - -func (q *SelectQuery) WithRecursive(name string, query schema.QueryAppender) *SelectQuery { - q.addWith(name, query, true) - return q -} - -func (q *SelectQuery) Distinct() *SelectQuery { - q.distinctOn = make([]schema.QueryWithArgs, 0) - return q -} - -func (q *SelectQuery) DistinctOn(query string, args ...interface{}) *SelectQuery { - q.distinctOn = append(q.distinctOn, schema.SafeQuery(query, args)) - return q -} - -//------------------------------------------------------------------------------ - -func (q *SelectQuery) Table(tables ...string) *SelectQuery { - for _, table := range tables { - q.addTable(schema.UnsafeIdent(table)) - } - return q -} - -func (q *SelectQuery) TableExpr(query string, args ...interface{}) *SelectQuery { - q.addTable(schema.SafeQuery(query, args)) - return q -} - 
-func (q *SelectQuery) ModelTableExpr(query string, args ...interface{}) *SelectQuery { - q.modelTableName = schema.SafeQuery(query, args) - return q -} - -//------------------------------------------------------------------------------ - -func (q *SelectQuery) Column(columns ...string) *SelectQuery { - for _, column := range columns { - q.addColumn(schema.UnsafeIdent(column)) - } - return q -} - -func (q *SelectQuery) ColumnExpr(query string, args ...interface{}) *SelectQuery { - q.addColumn(schema.SafeQuery(query, args)) - return q -} - -func (q *SelectQuery) ExcludeColumn(columns ...string) *SelectQuery { - q.excludeColumn(columns) - return q -} - -//------------------------------------------------------------------------------ - -func (q *SelectQuery) WherePK(cols ...string) *SelectQuery { - q.addWhereCols(cols) - return q -} - -func (q *SelectQuery) Where(query string, args ...interface{}) *SelectQuery { - q.addWhere(schema.SafeQueryWithSep(query, args, " AND ")) - return q -} - -func (q *SelectQuery) WhereOr(query string, args ...interface{}) *SelectQuery { - q.addWhere(schema.SafeQueryWithSep(query, args, " OR ")) - return q -} - -func (q *SelectQuery) WhereGroup(sep string, fn func(*SelectQuery) *SelectQuery) *SelectQuery { - saved := q.where - q.where = nil - - q = fn(q) - - where := q.where - q.where = saved - - q.addWhereGroup(sep, where) - - return q -} - -func (q *SelectQuery) WhereDeleted() *SelectQuery { - q.whereDeleted() - return q -} - -func (q *SelectQuery) WhereAllWithDeleted() *SelectQuery { - q.whereAllWithDeleted() - return q -} - -//------------------------------------------------------------------------------ - -func (q *SelectQuery) UseIndex(indexes ...string) *SelectQuery { - if q.db.dialect.Name() == dialect.MySQL { - q.addUseIndex(indexes...) - } - return q -} - -func (q *SelectQuery) UseIndexForJoin(indexes ...string) *SelectQuery { - if q.db.dialect.Name() == dialect.MySQL { - q.addUseIndexForJoin(indexes...) 
- } - return q -} - -func (q *SelectQuery) UseIndexForOrderBy(indexes ...string) *SelectQuery { - if q.db.dialect.Name() == dialect.MySQL { - q.addUseIndexForOrderBy(indexes...) - } - return q -} - -func (q *SelectQuery) UseIndexForGroupBy(indexes ...string) *SelectQuery { - if q.db.dialect.Name() == dialect.MySQL { - q.addUseIndexForGroupBy(indexes...) - } - return q -} - -func (q *SelectQuery) IgnoreIndex(indexes ...string) *SelectQuery { - if q.db.dialect.Name() == dialect.MySQL { - q.addIgnoreIndex(indexes...) - } - return q -} - -func (q *SelectQuery) IgnoreIndexForJoin(indexes ...string) *SelectQuery { - if q.db.dialect.Name() == dialect.MySQL { - q.addIgnoreIndexForJoin(indexes...) - } - return q -} - -func (q *SelectQuery) IgnoreIndexForOrderBy(indexes ...string) *SelectQuery { - if q.db.dialect.Name() == dialect.MySQL { - q.addIgnoreIndexForOrderBy(indexes...) - } - return q -} - -func (q *SelectQuery) IgnoreIndexForGroupBy(indexes ...string) *SelectQuery { - if q.db.dialect.Name() == dialect.MySQL { - q.addIgnoreIndexForGroupBy(indexes...) - } - return q -} - -func (q *SelectQuery) ForceIndex(indexes ...string) *SelectQuery { - if q.db.dialect.Name() == dialect.MySQL { - q.addForceIndex(indexes...) - } - return q -} - -func (q *SelectQuery) ForceIndexForJoin(indexes ...string) *SelectQuery { - if q.db.dialect.Name() == dialect.MySQL { - q.addForceIndexForJoin(indexes...) - } - return q -} - -func (q *SelectQuery) ForceIndexForOrderBy(indexes ...string) *SelectQuery { - if q.db.dialect.Name() == dialect.MySQL { - q.addForceIndexForOrderBy(indexes...) - } - return q -} - -func (q *SelectQuery) ForceIndexForGroupBy(indexes ...string) *SelectQuery { - if q.db.dialect.Name() == dialect.MySQL { - q.addForceIndexForGroupBy(indexes...) 
- } - return q -} - -//------------------------------------------------------------------------------ - -func (q *SelectQuery) Group(columns ...string) *SelectQuery { - for _, column := range columns { - q.group = append(q.group, schema.UnsafeIdent(column)) - } - return q -} - -func (q *SelectQuery) GroupExpr(group string, args ...interface{}) *SelectQuery { - q.group = append(q.group, schema.SafeQuery(group, args)) - return q -} - -func (q *SelectQuery) Having(having string, args ...interface{}) *SelectQuery { - q.having = append(q.having, schema.SafeQuery(having, args)) - return q -} - -func (q *SelectQuery) Order(orders ...string) *SelectQuery { - for _, order := range orders { - if order == "" { - continue - } - - index := strings.IndexByte(order, ' ') - if index == -1 { - q.order = append(q.order, schema.UnsafeIdent(order)) - continue - } - - field := order[:index] - sort := order[index+1:] - - switch strings.ToUpper(sort) { - case "ASC", "DESC", "ASC NULLS FIRST", "DESC NULLS FIRST", - "ASC NULLS LAST", "DESC NULLS LAST": - q.order = append(q.order, schema.SafeQuery("? 
?", []interface{}{ - Ident(field), - Safe(sort), - })) - default: - q.order = append(q.order, schema.UnsafeIdent(order)) - } - } - return q -} - -func (q *SelectQuery) OrderExpr(query string, args ...interface{}) *SelectQuery { - q.order = append(q.order, schema.SafeQuery(query, args)) - return q -} - -func (q *SelectQuery) Limit(n int) *SelectQuery { - q.limit = int32(n) - return q -} - -func (q *SelectQuery) Offset(n int) *SelectQuery { - q.offset = int32(n) - return q -} - -func (q *SelectQuery) For(s string, args ...interface{}) *SelectQuery { - q.selFor = schema.SafeQuery(s, args) - return q -} - -//------------------------------------------------------------------------------ - -func (q *SelectQuery) Union(other *SelectQuery) *SelectQuery { - return q.addUnion(" UNION ", other) -} - -func (q *SelectQuery) UnionAll(other *SelectQuery) *SelectQuery { - return q.addUnion(" UNION ALL ", other) -} - -func (q *SelectQuery) Intersect(other *SelectQuery) *SelectQuery { - return q.addUnion(" INTERSECT ", other) -} - -func (q *SelectQuery) IntersectAll(other *SelectQuery) *SelectQuery { - return q.addUnion(" INTERSECT ALL ", other) -} - -func (q *SelectQuery) Except(other *SelectQuery) *SelectQuery { - return q.addUnion(" EXCEPT ", other) -} - -func (q *SelectQuery) ExceptAll(other *SelectQuery) *SelectQuery { - return q.addUnion(" EXCEPT ALL ", other) -} - -func (q *SelectQuery) addUnion(expr string, other *SelectQuery) *SelectQuery { - q.union = append(q.union, union{ - expr: expr, - query: other, - }) - return q -} - -//------------------------------------------------------------------------------ - -func (q *SelectQuery) Join(join string, args ...interface{}) *SelectQuery { - q.joins = append(q.joins, joinQuery{ - join: schema.SafeQuery(join, args), - }) - return q -} - -func (q *SelectQuery) JoinOn(cond string, args ...interface{}) *SelectQuery { - return q.joinOn(cond, args, " AND ") -} - -func (q *SelectQuery) JoinOnOr(cond string, args ...interface{}) 
*SelectQuery { - return q.joinOn(cond, args, " OR ") -} - -func (q *SelectQuery) joinOn(cond string, args []interface{}, sep string) *SelectQuery { - if len(q.joins) == 0 { - q.err = errors.New("bun: query has no joins") - return q - } - j := &q.joins[len(q.joins)-1] - j.on = append(j.on, schema.SafeQueryWithSep(cond, args, sep)) - return q -} - -//------------------------------------------------------------------------------ - -// Relation adds a relation to the query. -func (q *SelectQuery) Relation(name string, apply ...func(*SelectQuery) *SelectQuery) *SelectQuery { - if len(apply) > 1 { - panic("only one apply function is supported") - } - - if q.tableModel == nil { - q.setErr(errNilModel) - return q - } - - join := q.tableModel.join(name) - if join == nil { - q.setErr(fmt.Errorf("%s does not have relation=%q", q.table, name)) - return q - } - - var apply1, apply2 func(*SelectQuery) *SelectQuery - - if len(join.Relation.Condition) > 0 { - apply1 = func(q *SelectQuery) *SelectQuery { - for _, opt := range join.Relation.Condition { - q.addWhere(schema.SafeQueryWithSep(opt, nil, " AND ")) - } - - return q - } - } - - if len(apply) == 1 { - apply2 = apply[0] - } - - join.apply = func(q *SelectQuery) *SelectQuery { - if apply1 != nil { - q = apply1(q) - } - if apply2 != nil { - q = apply2(q) - } - - return q - } - - return q -} - -func (q *SelectQuery) forEachInlineRelJoin(fn func(*relationJoin) error) error { - if q.tableModel == nil { - return nil - } - return q._forEachInlineRelJoin(fn, q.tableModel.getJoins()) -} - -func (q *SelectQuery) _forEachInlineRelJoin(fn func(*relationJoin) error, joins []relationJoin) error { - for i := range joins { - j := &joins[i] - switch j.Relation.Type { - case schema.HasOneRelation, schema.BelongsToRelation: - if err := fn(j); err != nil { - return err - } - if err := q._forEachInlineRelJoin(fn, j.JoinModel.getJoins()); err != nil { - return err - } - } - } - return nil -} - -func (q *SelectQuery) selectJoins(ctx 
context.Context, joins []relationJoin) error { - for i := range joins { - j := &joins[i] - - var err error - - switch j.Relation.Type { - case schema.HasOneRelation, schema.BelongsToRelation: - err = q.selectJoins(ctx, j.JoinModel.getJoins()) - case schema.HasManyRelation: - err = j.selectMany(ctx, q.db.NewSelect().Conn(q.conn)) - case schema.ManyToManyRelation: - err = j.selectM2M(ctx, q.db.NewSelect().Conn(q.conn)) - default: - panic("not reached") - } - - if err != nil { - return err - } - } - return nil -} - -//------------------------------------------------------------------------------ - -func (q *SelectQuery) Operation() string { - return "SELECT" -} - -func (q *SelectQuery) AppendQuery(fmter schema.Formatter, b []byte) (_ []byte, err error) { - return q.appendQuery(fmter, b, false) -} - -func (q *SelectQuery) appendQuery( - fmter schema.Formatter, b []byte, count bool, -) (_ []byte, err error) { - if q.err != nil { - return nil, q.err - } - - fmter = formatterWithModel(fmter, q) - - cteCount := count && (len(q.group) > 0 || q.distinctOn != nil) - if cteCount { - b = append(b, "WITH _count_wrapper AS ("...) - } - - if len(q.union) > 0 { - b = append(b, '(') - } - - b, err = q.appendWith(fmter, b) - if err != nil { - return nil, err - } - - b = append(b, "SELECT "...) - - if len(q.distinctOn) > 0 { - b = append(b, "DISTINCT ON ("...) - for i, app := range q.distinctOn { - if i > 0 { - b = append(b, ", "...) - } - b, err = app.AppendQuery(fmter, b) - if err != nil { - return nil, err - } - } - b = append(b, ") "...) - } else if q.distinctOn != nil { - b = append(b, "DISTINCT "...) - } - - if count && !cteCount { - b = append(b, "count(*)"...) 
- } else { - b, err = q.appendColumns(fmter, b) - if err != nil { - return nil, err - } - } - - if q.hasTables() { - b, err = q.appendTables(fmter, b) - if err != nil { - return nil, err - } - } - - if err := q.forEachInlineRelJoin(func(j *relationJoin) error { - b = append(b, ' ') - b, err = j.appendHasOneJoin(fmter, b, q) - return err - }); err != nil { - return nil, err - } - - for _, j := range q.joins { - b, err = j.AppendQuery(fmter, b) - if err != nil { - return nil, err - } - } - - b, err = q.appendIndexHints(fmter, b) - if err != nil { - return nil, err - } - - b, err = q.appendWhere(fmter, b, true) - if err != nil { - return nil, err - } - - if len(q.group) > 0 { - b = append(b, " GROUP BY "...) - for i, f := range q.group { - if i > 0 { - b = append(b, ", "...) - } - b, err = f.AppendQuery(fmter, b) - if err != nil { - return nil, err - } - } - } - - if len(q.having) > 0 { - b = append(b, " HAVING "...) - for i, f := range q.having { - if i > 0 { - b = append(b, " AND "...) - } - b = append(b, '(') - b, err = f.AppendQuery(fmter, b) - if err != nil { - return nil, err - } - b = append(b, ')') - } - } - - if !count { - b, err = q.appendOrder(fmter, b) - if err != nil { - return nil, err - } - - if fmter.Dialect().Features().Has(feature.OffsetFetch) { - if q.limit > 0 && q.offset > 0 { - b = append(b, " OFFSET "...) - b = strconv.AppendInt(b, int64(q.offset), 10) - b = append(b, " ROWS"...) - - b = append(b, " FETCH NEXT "...) - b = strconv.AppendInt(b, int64(q.limit), 10) - b = append(b, " ROWS ONLY"...) - } else if q.limit > 0 { - b = append(b, " OFFSET 0 ROWS"...) - - b = append(b, " FETCH NEXT "...) - b = strconv.AppendInt(b, int64(q.limit), 10) - b = append(b, " ROWS ONLY"...) - } else if q.offset > 0 { - b = append(b, " OFFSET "...) - b = strconv.AppendInt(b, int64(q.offset), 10) - b = append(b, " ROWS"...) - } - } else { - if q.limit > 0 { - b = append(b, " LIMIT "...) 
- b = strconv.AppendInt(b, int64(q.limit), 10) - } - if q.offset > 0 { - b = append(b, " OFFSET "...) - b = strconv.AppendInt(b, int64(q.offset), 10) - } - } - - if !q.selFor.IsZero() { - b = append(b, " FOR "...) - b, err = q.selFor.AppendQuery(fmter, b) - if err != nil { - return nil, err - } - } - } - - if len(q.union) > 0 { - b = append(b, ')') - - for _, u := range q.union { - b = append(b, u.expr...) - b = append(b, '(') - b, err = u.query.AppendQuery(fmter, b) - if err != nil { - return nil, err - } - b = append(b, ')') - } - } - - if cteCount { - b = append(b, ") SELECT count(*) FROM _count_wrapper"...) - } - - return b, nil -} - -func (q *SelectQuery) appendColumns(fmter schema.Formatter, b []byte) (_ []byte, err error) { - start := len(b) - - switch { - case q.columns != nil: - for i, col := range q.columns { - if i > 0 { - b = append(b, ", "...) - } - - if col.Args == nil && q.table != nil { - if field, ok := q.table.FieldMap[col.Query]; ok { - b = append(b, q.table.SQLAlias...) - b = append(b, '.') - b = append(b, field.SQLName...) - continue - } - } - - b, err = col.AppendQuery(fmter, b) - if err != nil { - return nil, err - } - } - case q.table != nil: - if len(q.table.Fields) > 10 && fmter.IsNop() { - b = append(b, q.table.SQLAlias...) - b = append(b, '.') - b = fmter.Dialect().AppendString(b, fmt.Sprintf("%d columns", len(q.table.Fields))) - } else { - b = appendColumns(b, q.table.SQLAlias, q.table.Fields) - } - default: - b = append(b, '*') - } - - if err := q.forEachInlineRelJoin(func(join *relationJoin) error { - if len(b) != start { - b = append(b, ", "...) 
- start = len(b) - } - - b, err = q.appendInlineRelColumns(fmter, b, join) - if err != nil { - return err - } - - return nil - }); err != nil { - return nil, err - } - - b = bytes.TrimSuffix(b, []byte(", ")) - - return b, nil -} - -func (q *SelectQuery) appendInlineRelColumns( - fmter schema.Formatter, b []byte, join *relationJoin, -) (_ []byte, err error) { - join.applyTo(q) - - if join.columns != nil { - table := join.JoinModel.Table() - for i, col := range join.columns { - if i > 0 { - b = append(b, ", "...) - } - - if col.Args == nil { - if field, ok := table.FieldMap[col.Query]; ok { - b = join.appendAlias(fmter, b) - b = append(b, '.') - b = append(b, field.SQLName...) - b = append(b, " AS "...) - b = join.appendAliasColumn(fmter, b, field.Name) - continue - } - } - - b, err = col.AppendQuery(fmter, b) - if err != nil { - return nil, err - } - } - return b, nil - } - - for i, field := range join.JoinModel.Table().Fields { - if i > 0 { - b = append(b, ", "...) - } - b = join.appendAlias(fmter, b) - b = append(b, '.') - b = append(b, field.SQLName...) - b = append(b, " AS "...) - b = join.appendAliasColumn(fmter, b, field.Name) - } - return b, nil -} - -func (q *SelectQuery) appendTables(fmter schema.Formatter, b []byte) (_ []byte, err error) { - b = append(b, " FROM "...) - return q.appendTablesWithAlias(fmter, b) -} - -func (q *SelectQuery) appendOrder(fmter schema.Formatter, b []byte) (_ []byte, err error) { - if len(q.order) > 0 { - b = append(b, " ORDER BY "...) - - for i, f := range q.order { - if i > 0 { - b = append(b, ", "...) 
- } - b, err = f.AppendQuery(fmter, b) - if err != nil { - return nil, err - } - } - - return b, nil - } - return b, nil -} - -//------------------------------------------------------------------------------ - -func (q *SelectQuery) Rows(ctx context.Context) (*sql.Rows, error) { - if q.err != nil { - return nil, q.err - } - - if err := q.beforeAppendModel(ctx, q); err != nil { - return nil, err - } - - queryBytes, err := q.AppendQuery(q.db.fmter, q.db.makeQueryBytes()) - if err != nil { - return nil, err - } - - query := internal.String(queryBytes) - return q.conn.QueryContext(ctx, query) -} - -func (q *SelectQuery) Exec(ctx context.Context, dest ...interface{}) (res sql.Result, err error) { - if q.err != nil { - return nil, q.err - } - if err := q.beforeAppendModel(ctx, q); err != nil { - return nil, err - } - - queryBytes, err := q.AppendQuery(q.db.fmter, q.db.makeQueryBytes()) - if err != nil { - return nil, err - } - - query := internal.String(queryBytes) - - if len(dest) > 0 { - model, err := q.getModel(dest) - if err != nil { - return nil, err - } - - res, err = q.scan(ctx, q, query, model, true) - if err != nil { - return nil, err - } - } else { - res, err = q.exec(ctx, q, query) - if err != nil { - return nil, err - } - } - - return res, nil -} - -func (q *SelectQuery) Scan(ctx context.Context, dest ...interface{}) error { - if q.err != nil { - return q.err - } - - model, err := q.getModel(dest) - if err != nil { - return err - } - - if q.table != nil { - if err := q.beforeSelectHook(ctx); err != nil { - return err - } - } - - if err := q.beforeAppendModel(ctx, q); err != nil { - return err - } - - queryBytes, err := q.AppendQuery(q.db.fmter, q.db.makeQueryBytes()) - if err != nil { - return err - } - - query := internal.String(queryBytes) - - res, err := q.scan(ctx, q, query, model, true) - if err != nil { - return err - } - - if n, _ := res.RowsAffected(); n > 0 { - if tableModel, ok := model.(TableModel); ok { - if err := q.selectJoins(ctx, 
tableModel.getJoins()); err != nil { - return err - } - } - } - - if q.table != nil { - if err := q.afterSelectHook(ctx); err != nil { - return err - } - } - - return nil -} - -func (q *SelectQuery) beforeSelectHook(ctx context.Context) error { - if hook, ok := q.table.ZeroIface.(BeforeSelectHook); ok { - if err := hook.BeforeSelect(ctx, q); err != nil { - return err - } - } - return nil -} - -func (q *SelectQuery) afterSelectHook(ctx context.Context) error { - if hook, ok := q.table.ZeroIface.(AfterSelectHook); ok { - if err := hook.AfterSelect(ctx, q); err != nil { - return err - } - } - return nil -} - -func (q *SelectQuery) Count(ctx context.Context) (int, error) { - if q.err != nil { - return 0, q.err - } - - qq := countQuery{q} - - queryBytes, err := qq.AppendQuery(q.db.fmter, nil) - if err != nil { - return 0, err - } - - query := internal.String(queryBytes) - ctx, event := q.db.beforeQuery(ctx, qq, query, nil, query, q.model) - - var num int - err = q.conn.QueryRowContext(ctx, query).Scan(&num) - - q.db.afterQuery(ctx, event, nil, err) - - return num, err -} - -func (q *SelectQuery) ScanAndCount(ctx context.Context, dest ...interface{}) (int, error) { - if _, ok := q.conn.(*DB); ok { - return q.scanAndCountConc(ctx, dest...) - } - return q.scanAndCountSeq(ctx, dest...) 
-} - -func (q *SelectQuery) scanAndCountConc(ctx context.Context, dest ...interface{}) (int, error) { - var count int - var wg sync.WaitGroup - var mu sync.Mutex - var firstErr error - - if q.limit >= 0 { - wg.Add(1) - go func() { - defer wg.Done() - - if err := q.Scan(ctx, dest...); err != nil { - mu.Lock() - if firstErr == nil { - firstErr = err - } - mu.Unlock() - } - }() - } - - wg.Add(1) - go func() { - defer wg.Done() - - var err error - count, err = q.Count(ctx) - if err != nil { - mu.Lock() - if firstErr == nil { - firstErr = err - } - mu.Unlock() - } - }() - - wg.Wait() - return count, firstErr -} - -func (q *SelectQuery) scanAndCountSeq(ctx context.Context, dest ...interface{}) (int, error) { - var firstErr error - - if q.limit >= 0 { - firstErr = q.Scan(ctx, dest...) - } - - count, err := q.Count(ctx) - if err != nil && firstErr == nil { - firstErr = err - } - - return count, firstErr -} - -func (q *SelectQuery) Exists(ctx context.Context) (bool, error) { - if q.err != nil { - return false, q.err - } - - if q.hasFeature(feature.SelectExists) { - return q.selectExists(ctx) - } - return q.whereExists(ctx) -} - -func (q *SelectQuery) selectExists(ctx context.Context) (bool, error) { - qq := selectExistsQuery{q} - - queryBytes, err := qq.AppendQuery(q.db.fmter, nil) - if err != nil { - return false, err - } - - query := internal.String(queryBytes) - ctx, event := q.db.beforeQuery(ctx, qq, query, nil, query, q.model) - - var exists bool - err = q.conn.QueryRowContext(ctx, query).Scan(&exists) - - q.db.afterQuery(ctx, event, nil, err) - - return exists, err -} - -func (q *SelectQuery) whereExists(ctx context.Context) (bool, error) { - qq := whereExistsQuery{q} - - queryBytes, err := qq.AppendQuery(q.db.fmter, nil) - if err != nil { - return false, err - } - - query := internal.String(queryBytes) - res, err := q.exec(ctx, qq, query) - if err != nil { - return false, err - } - - n, err := res.RowsAffected() - if err != nil { - return false, err - } - - return n 
== 1, nil -} - -func (q *SelectQuery) String() string { - buf, err := q.AppendQuery(q.db.Formatter(), nil) - if err != nil { - panic(err) - } - - return string(buf) -} - -//------------------------------------------------------------------------------ - -func (q *SelectQuery) QueryBuilder() QueryBuilder { - return &selectQueryBuilder{q} -} - -func (q *SelectQuery) ApplyQueryBuilder(fn func(QueryBuilder) QueryBuilder) *SelectQuery { - return fn(q.QueryBuilder()).Unwrap().(*SelectQuery) -} - -type selectQueryBuilder struct { - *SelectQuery -} - -func (q *selectQueryBuilder) WhereGroup( - sep string, fn func(QueryBuilder) QueryBuilder, -) QueryBuilder { - q.SelectQuery = q.SelectQuery.WhereGroup(sep, func(qs *SelectQuery) *SelectQuery { - return fn(q).(*selectQueryBuilder).SelectQuery - }) - return q -} - -func (q *selectQueryBuilder) Where(query string, args ...interface{}) QueryBuilder { - q.SelectQuery.Where(query, args...) - return q -} - -func (q *selectQueryBuilder) WhereOr(query string, args ...interface{}) QueryBuilder { - q.SelectQuery.WhereOr(query, args...) - return q -} - -func (q *selectQueryBuilder) WhereDeleted() QueryBuilder { - q.SelectQuery.WhereDeleted() - return q -} - -func (q *selectQueryBuilder) WhereAllWithDeleted() QueryBuilder { - q.SelectQuery.WhereAllWithDeleted() - return q -} - -func (q *selectQueryBuilder) WherePK(cols ...string) QueryBuilder { - q.SelectQuery.WherePK(cols...) - return q -} - -func (q *selectQueryBuilder) Unwrap() interface{} { - return q.SelectQuery -} - -//------------------------------------------------------------------------------ - -type joinQuery struct { - join schema.QueryWithArgs - on []schema.QueryWithSep -} - -func (j *joinQuery) AppendQuery(fmter schema.Formatter, b []byte) (_ []byte, err error) { - b = append(b, ' ') - - b, err = j.join.AppendQuery(fmter, b) - if err != nil { - return nil, err - } - - if len(j.on) > 0 { - b = append(b, " ON "...) 
- for i, on := range j.on { - if i > 0 { - b = append(b, on.Sep...) - } - - b = append(b, '(') - b, err = on.AppendQuery(fmter, b) - if err != nil { - return nil, err - } - b = append(b, ')') - } - } - - return b, nil -} - -//------------------------------------------------------------------------------ - -type countQuery struct { - *SelectQuery -} - -func (q countQuery) AppendQuery(fmter schema.Formatter, b []byte) (_ []byte, err error) { - if q.err != nil { - return nil, q.err - } - return q.appendQuery(fmter, b, true) -} - -//------------------------------------------------------------------------------ - -type selectExistsQuery struct { - *SelectQuery -} - -func (q selectExistsQuery) AppendQuery(fmter schema.Formatter, b []byte) (_ []byte, err error) { - if q.err != nil { - return nil, q.err - } - - b = append(b, "SELECT EXISTS ("...) - - b, err = q.appendQuery(fmter, b, false) - if err != nil { - return nil, err - } - - b = append(b, ")"...) - - return b, nil -} - -//------------------------------------------------------------------------------ - -type whereExistsQuery struct { - *SelectQuery -} - -func (q whereExistsQuery) AppendQuery(fmter schema.Formatter, b []byte) (_ []byte, err error) { - if q.err != nil { - return nil, q.err - } - - b = append(b, "SELECT 1 WHERE EXISTS ("...) - - b, err = q.appendQuery(fmter, b, false) - if err != nil { - return nil, err - } - - b = append(b, ")"...) 
- - return b, nil -} diff --git a/vendor/github.com/uptrace/bun/query_table_create.go b/vendor/github.com/uptrace/bun/query_table_create.go deleted file mode 100644 index 518dbfd1..00000000 --- a/vendor/github.com/uptrace/bun/query_table_create.go +++ /dev/null @@ -1,366 +0,0 @@ -package bun - -import ( - "context" - "database/sql" - "fmt" - "sort" - "strconv" - "strings" - - "github.com/uptrace/bun/dialect/feature" - "github.com/uptrace/bun/dialect/sqltype" - "github.com/uptrace/bun/internal" - "github.com/uptrace/bun/schema" -) - -type CreateTableQuery struct { - baseQuery - - temp bool - ifNotExists bool - - // varchar changes the default length for VARCHAR columns. - // Because some dialects require that length is always specified for VARCHAR type, - // we will use the exact user-defined type if length is set explicitly, as in `bun:",type:varchar(5)"`, - // but assume the new default length when it's omitted, e.g. `bun:",type:varchar"`. - varchar int - - fks []schema.QueryWithArgs - partitionBy schema.QueryWithArgs - tablespace schema.QueryWithArgs -} - -var _ Query = (*CreateTableQuery)(nil) - -func NewCreateTableQuery(db *DB) *CreateTableQuery { - q := &CreateTableQuery{ - baseQuery: baseQuery{ - db: db, - conn: db.DB, - }, - varchar: db.Dialect().DefaultVarcharLen(), - } - return q -} - -func (q *CreateTableQuery) Conn(db IConn) *CreateTableQuery { - q.setConn(db) - return q -} - -func (q *CreateTableQuery) Model(model interface{}) *CreateTableQuery { - q.setModel(model) - return q -} - -func (q *CreateTableQuery) Err(err error) *CreateTableQuery { - q.setErr(err) - return q -} - -// ------------------------------------------------------------------------------ - -func (q *CreateTableQuery) Table(tables ...string) *CreateTableQuery { - for _, table := range tables { - q.addTable(schema.UnsafeIdent(table)) - } - return q -} - -func (q *CreateTableQuery) TableExpr(query string, args ...interface{}) *CreateTableQuery { - q.addTable(schema.SafeQuery(query, 
args)) - return q -} - -func (q *CreateTableQuery) ModelTableExpr(query string, args ...interface{}) *CreateTableQuery { - q.modelTableName = schema.SafeQuery(query, args) - return q -} - -func (q *CreateTableQuery) ColumnExpr(query string, args ...interface{}) *CreateTableQuery { - q.addColumn(schema.SafeQuery(query, args)) - return q -} - -// ------------------------------------------------------------------------------ - -func (q *CreateTableQuery) Temp() *CreateTableQuery { - q.temp = true - return q -} - -func (q *CreateTableQuery) IfNotExists() *CreateTableQuery { - q.ifNotExists = true - return q -} - -// Varchar sets default length for VARCHAR columns. -func (q *CreateTableQuery) Varchar(n int) *CreateTableQuery { - if n <= 0 { - q.setErr(fmt.Errorf("bun: illegal VARCHAR length: %d", n)) - return q - } - q.varchar = n - return q -} - -func (q *CreateTableQuery) ForeignKey(query string, args ...interface{}) *CreateTableQuery { - q.fks = append(q.fks, schema.SafeQuery(query, args)) - return q -} - -func (q *CreateTableQuery) PartitionBy(query string, args ...interface{}) *CreateTableQuery { - q.partitionBy = schema.SafeQuery(query, args) - return q -} - -func (q *CreateTableQuery) TableSpace(tablespace string) *CreateTableQuery { - q.tablespace = schema.UnsafeIdent(tablespace) - return q -} - -func (q *CreateTableQuery) WithForeignKeys() *CreateTableQuery { - for _, relation := range q.tableModel.Table().Relations { - if relation.Type == schema.ManyToManyRelation || - relation.Type == schema.HasManyRelation { - continue - } - - q = q.ForeignKey("(?) REFERENCES ? (?) ? 
?", - Safe(appendColumns(nil, "", relation.BaseFields)), - relation.JoinTable.SQLName, - Safe(appendColumns(nil, "", relation.JoinFields)), - Safe(relation.OnUpdate), - Safe(relation.OnDelete), - ) - } - return q -} - -// ------------------------------------------------------------------------------ - -func (q *CreateTableQuery) Operation() string { - return "CREATE TABLE" -} - -func (q *CreateTableQuery) AppendQuery(fmter schema.Formatter, b []byte) (_ []byte, err error) { - if q.err != nil { - return nil, q.err - } - if q.table == nil { - return nil, errNilModel - } - - b = append(b, "CREATE "...) - if q.temp { - b = append(b, "TEMP "...) - } - b = append(b, "TABLE "...) - if q.ifNotExists && fmter.Dialect().Features().Has(feature.TableNotExists) { - b = append(b, "IF NOT EXISTS "...) - } - b, err = q.appendFirstTable(fmter, b) - if err != nil { - return nil, err - } - - b = append(b, " ("...) - - for i, field := range q.table.Fields { - if i > 0 { - b = append(b, ", "...) - } - - b = append(b, field.SQLName...) - b = append(b, " "...) - b = q.appendSQLType(b, field) - if field.NotNull { - b = append(b, " NOT NULL"...) - } - if field.AutoIncrement { - switch { - case fmter.Dialect().Features().Has(feature.AutoIncrement): - b = append(b, " AUTO_INCREMENT"...) - case fmter.Dialect().Features().Has(feature.Identity): - b = append(b, " IDENTITY"...) - } - } - if field.Identity { - if fmter.Dialect().Features().Has(feature.GeneratedIdentity) { - b = append(b, " GENERATED BY DEFAULT AS IDENTITY"...) - } - } - if field.SQLDefault != "" { - b = append(b, " DEFAULT "...) - b = append(b, field.SQLDefault...) - } - } - - for i, col := range q.columns { - // Only pre-pend the comma if we are on subsequent iterations, or if there were fields/columns appended before - // this. This way if we are only appending custom column expressions we will not produce a syntax error with a - // leading comma. - if i > 0 || len(q.table.Fields) > 0 { - b = append(b, ", "...) 
- } - b, err = col.AppendQuery(fmter, b) - if err != nil { - return nil, err - } - } - - b = q.appendPKConstraint(b, q.table.PKs) - b = q.appendUniqueConstraints(fmter, b) - b, err = q.appendFKConstraints(fmter, b) - if err != nil { - return nil, err - } - - b = append(b, ")"...) - - if !q.partitionBy.IsZero() { - b = append(b, " PARTITION BY "...) - b, err = q.partitionBy.AppendQuery(fmter, b) - if err != nil { - return nil, err - } - } - - if !q.tablespace.IsZero() { - b = append(b, " TABLESPACE "...) - b, err = q.tablespace.AppendQuery(fmter, b) - if err != nil { - return nil, err - } - } - - return b, nil -} - -func (q *CreateTableQuery) appendSQLType(b []byte, field *schema.Field) []byte { - // Most of the time these two will match, but for the cases where DiscoveredSQLType is dialect-specific, - // e.g. pgdialect would change sqltype.SmallInt to pgTypeSmallSerial for columns that have `bun:",autoincrement"` - if !strings.EqualFold(field.CreateTableSQLType, field.DiscoveredSQLType) { - return append(b, field.CreateTableSQLType...) - } - - // For all common SQL types except VARCHAR, both UserDefinedSQLType and DiscoveredSQLType specify the correct type, - // and we needn't modify it. For VARCHAR columns, we will stop to check if a valid length has been set in .Varchar(int). - if !strings.EqualFold(field.CreateTableSQLType, sqltype.VarChar) || q.varchar <= 0 { - return append(b, field.CreateTableSQLType...) - } - - b = append(b, sqltype.VarChar...) - b = append(b, "("...) - b = strconv.AppendInt(b, int64(q.varchar), 10) - b = append(b, ")"...) 
- return b -} - -func (q *CreateTableQuery) appendUniqueConstraints(fmter schema.Formatter, b []byte) []byte { - unique := q.table.Unique - - keys := make([]string, 0, len(unique)) - for key := range unique { - keys = append(keys, key) - } - sort.Strings(keys) - - for _, key := range keys { - if key == "" { - for _, field := range unique[key] { - b = q.appendUniqueConstraint(fmter, b, key, field) - } - continue - } - b = q.appendUniqueConstraint(fmter, b, key, unique[key]...) - } - - return b -} - -func (q *CreateTableQuery) appendUniqueConstraint( - fmter schema.Formatter, b []byte, name string, fields ...*schema.Field, -) []byte { - if name != "" { - b = append(b, ", CONSTRAINT "...) - b = fmter.AppendIdent(b, name) - } else { - b = append(b, ","...) - } - b = append(b, " UNIQUE ("...) - b = appendColumns(b, "", fields) - b = append(b, ")"...) - return b -} - -func (q *CreateTableQuery) appendFKConstraints( - fmter schema.Formatter, b []byte, -) (_ []byte, err error) { - for _, fk := range q.fks { - b = append(b, ", FOREIGN KEY "...) - b, err = fk.AppendQuery(fmter, b) - if err != nil { - return nil, err - } - } - return b, nil -} - -func (q *CreateTableQuery) appendPKConstraint(b []byte, pks []*schema.Field) []byte { - if len(pks) == 0 { - return b - } - - b = append(b, ", PRIMARY KEY ("...) - b = appendColumns(b, "", pks) - b = append(b, ")"...) 
- return b -} - -// ------------------------------------------------------------------------------ - -func (q *CreateTableQuery) Exec(ctx context.Context, dest ...interface{}) (sql.Result, error) { - if err := q.beforeCreateTableHook(ctx); err != nil { - return nil, err - } - - queryBytes, err := q.AppendQuery(q.db.fmter, q.db.makeQueryBytes()) - if err != nil { - return nil, err - } - - query := internal.String(queryBytes) - - res, err := q.exec(ctx, q, query) - if err != nil { - return nil, err - } - - if q.table != nil { - if err := q.afterCreateTableHook(ctx); err != nil { - return nil, err - } - } - - return res, nil -} - -func (q *CreateTableQuery) beforeCreateTableHook(ctx context.Context) error { - if hook, ok := q.table.ZeroIface.(BeforeCreateTableHook); ok { - if err := hook.BeforeCreateTable(ctx, q); err != nil { - return err - } - } - return nil -} - -func (q *CreateTableQuery) afterCreateTableHook(ctx context.Context) error { - if hook, ok := q.table.ZeroIface.(AfterCreateTableHook); ok { - if err := hook.AfterCreateTable(ctx, q); err != nil { - return err - } - } - return nil -} diff --git a/vendor/github.com/uptrace/bun/query_table_drop.go b/vendor/github.com/uptrace/bun/query_table_drop.go deleted file mode 100644 index e4447a8d..00000000 --- a/vendor/github.com/uptrace/bun/query_table_drop.go +++ /dev/null @@ -1,153 +0,0 @@ -package bun - -import ( - "context" - "database/sql" - - "github.com/uptrace/bun/internal" - "github.com/uptrace/bun/schema" -) - -type DropTableQuery struct { - baseQuery - cascadeQuery - - ifExists bool -} - -var _ Query = (*DropTableQuery)(nil) - -func NewDropTableQuery(db *DB) *DropTableQuery { - q := &DropTableQuery{ - baseQuery: baseQuery{ - db: db, - conn: db.DB, - }, - } - return q -} - -func (q *DropTableQuery) Conn(db IConn) *DropTableQuery { - q.setConn(db) - return q -} - -func (q *DropTableQuery) Model(model interface{}) *DropTableQuery { - q.setModel(model) - return q -} - -func (q *DropTableQuery) Err(err error) 
*DropTableQuery { - q.setErr(err) - return q -} - -//------------------------------------------------------------------------------ - -func (q *DropTableQuery) Table(tables ...string) *DropTableQuery { - for _, table := range tables { - q.addTable(schema.UnsafeIdent(table)) - } - return q -} - -func (q *DropTableQuery) TableExpr(query string, args ...interface{}) *DropTableQuery { - q.addTable(schema.SafeQuery(query, args)) - return q -} - -func (q *DropTableQuery) ModelTableExpr(query string, args ...interface{}) *DropTableQuery { - q.modelTableName = schema.SafeQuery(query, args) - return q -} - -//------------------------------------------------------------------------------ - -func (q *DropTableQuery) IfExists() *DropTableQuery { - q.ifExists = true - return q -} - -func (q *DropTableQuery) Cascade() *DropTableQuery { - q.cascade = true - return q -} - -func (q *DropTableQuery) Restrict() *DropTableQuery { - q.restrict = true - return q -} - -//------------------------------------------------------------------------------ - -func (q *DropTableQuery) Operation() string { - return "DROP TABLE" -} - -func (q *DropTableQuery) AppendQuery(fmter schema.Formatter, b []byte) (_ []byte, err error) { - if q.err != nil { - return nil, q.err - } - - b = append(b, "DROP TABLE "...) - if q.ifExists { - b = append(b, "IF EXISTS "...) 
- } - - b, err = q.appendTables(fmter, b) - if err != nil { - return nil, err - } - - b = q.appendCascade(fmter, b) - - return b, nil -} - -//------------------------------------------------------------------------------ - -func (q *DropTableQuery) Exec(ctx context.Context, dest ...interface{}) (sql.Result, error) { - if q.table != nil { - if err := q.beforeDropTableHook(ctx); err != nil { - return nil, err - } - } - - queryBytes, err := q.AppendQuery(q.db.fmter, q.db.makeQueryBytes()) - if err != nil { - return nil, err - } - - query := internal.String(queryBytes) - - res, err := q.exec(ctx, q, query) - if err != nil { - return nil, err - } - - if q.table != nil { - if err := q.afterDropTableHook(ctx); err != nil { - return nil, err - } - } - - return res, nil -} - -func (q *DropTableQuery) beforeDropTableHook(ctx context.Context) error { - if hook, ok := q.table.ZeroIface.(BeforeDropTableHook); ok { - if err := hook.BeforeDropTable(ctx, q); err != nil { - return err - } - } - return nil -} - -func (q *DropTableQuery) afterDropTableHook(ctx context.Context) error { - if hook, ok := q.table.ZeroIface.(AfterDropTableHook); ok { - if err := hook.AfterDropTable(ctx, q); err != nil { - return err - } - } - return nil -} diff --git a/vendor/github.com/uptrace/bun/query_table_truncate.go b/vendor/github.com/uptrace/bun/query_table_truncate.go deleted file mode 100644 index a704b7b1..00000000 --- a/vendor/github.com/uptrace/bun/query_table_truncate.go +++ /dev/null @@ -1,137 +0,0 @@ -package bun - -import ( - "context" - "database/sql" - - "github.com/uptrace/bun/dialect/feature" - "github.com/uptrace/bun/internal" - "github.com/uptrace/bun/schema" -) - -type TruncateTableQuery struct { - baseQuery - cascadeQuery - - continueIdentity bool -} - -var _ Query = (*TruncateTableQuery)(nil) - -func NewTruncateTableQuery(db *DB) *TruncateTableQuery { - q := &TruncateTableQuery{ - baseQuery: baseQuery{ - db: db, - conn: db.DB, - }, - } - return q -} - -func (q 
*TruncateTableQuery) Conn(db IConn) *TruncateTableQuery { - q.setConn(db) - return q -} - -func (q *TruncateTableQuery) Model(model interface{}) *TruncateTableQuery { - q.setModel(model) - return q -} - -func (q *TruncateTableQuery) Err(err error) *TruncateTableQuery { - q.setErr(err) - return q -} - -//------------------------------------------------------------------------------ - -func (q *TruncateTableQuery) Table(tables ...string) *TruncateTableQuery { - for _, table := range tables { - q.addTable(schema.UnsafeIdent(table)) - } - return q -} - -func (q *TruncateTableQuery) TableExpr(query string, args ...interface{}) *TruncateTableQuery { - q.addTable(schema.SafeQuery(query, args)) - return q -} - -//------------------------------------------------------------------------------ - -func (q *TruncateTableQuery) ContinueIdentity() *TruncateTableQuery { - q.continueIdentity = true - return q -} - -func (q *TruncateTableQuery) Cascade() *TruncateTableQuery { - q.cascade = true - return q -} - -func (q *TruncateTableQuery) Restrict() *TruncateTableQuery { - q.restrict = true - return q -} - -//------------------------------------------------------------------------------ - -func (q *TruncateTableQuery) Operation() string { - return "TRUNCATE TABLE" -} - -func (q *TruncateTableQuery) AppendQuery( - fmter schema.Formatter, b []byte, -) (_ []byte, err error) { - if q.err != nil { - return nil, q.err - } - - if !fmter.HasFeature(feature.TableTruncate) { - b = append(b, "DELETE FROM "...) - - b, err = q.appendTables(fmter, b) - if err != nil { - return nil, err - } - - return b, nil - } - - b = append(b, "TRUNCATE TABLE "...) - - b, err = q.appendTables(fmter, b) - if err != nil { - return nil, err - } - - if q.db.features.Has(feature.TableIdentity) { - if q.continueIdentity { - b = append(b, " CONTINUE IDENTITY"...) - } else { - b = append(b, " RESTART IDENTITY"...) 
- } - } - - b = q.appendCascade(fmter, b) - - return b, nil -} - -//------------------------------------------------------------------------------ - -func (q *TruncateTableQuery) Exec(ctx context.Context, dest ...interface{}) (sql.Result, error) { - queryBytes, err := q.AppendQuery(q.db.fmter, q.db.makeQueryBytes()) - if err != nil { - return nil, err - } - - query := internal.String(queryBytes) - - res, err := q.exec(ctx, q, query) - if err != nil { - return nil, err - } - - return res, nil -} diff --git a/vendor/github.com/uptrace/bun/query_update.go b/vendor/github.com/uptrace/bun/query_update.go deleted file mode 100644 index 708bcfbc..00000000 --- a/vendor/github.com/uptrace/bun/query_update.go +++ /dev/null @@ -1,623 +0,0 @@ -package bun - -import ( - "context" - "database/sql" - "errors" - "fmt" - - "github.com/uptrace/bun/dialect" - - "github.com/uptrace/bun/dialect/feature" - "github.com/uptrace/bun/internal" - "github.com/uptrace/bun/schema" -) - -type UpdateQuery struct { - whereBaseQuery - returningQuery - customValueQuery - setQuery - idxHintsQuery - - omitZero bool -} - -var _ Query = (*UpdateQuery)(nil) - -func NewUpdateQuery(db *DB) *UpdateQuery { - q := &UpdateQuery{ - whereBaseQuery: whereBaseQuery{ - baseQuery: baseQuery{ - db: db, - conn: db.DB, - }, - }, - } - return q -} - -func (q *UpdateQuery) Conn(db IConn) *UpdateQuery { - q.setConn(db) - return q -} - -func (q *UpdateQuery) Model(model interface{}) *UpdateQuery { - q.setModel(model) - return q -} - -func (q *UpdateQuery) Err(err error) *UpdateQuery { - q.setErr(err) - return q -} - -// Apply calls the fn passing the UpdateQuery as an argument.
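`TruncateTableQuery.AppendQuery` above falls back to a plain `DELETE FROM` when the dialect lacks `feature.TableTruncate`. The bitmask feature check behind that branch can be sketched in isolation; the `Feature` type and constant names below are illustrative, not bun's actual definitions:

```go
package main

import "fmt"

// Feature is a bitmask of dialect capabilities, mirroring the style of
// bun's feature flags (names here are illustrative).
type Feature uint

const (
	TableTruncate Feature = 1 << iota
	TableIdentity
)

// Has reports whether all bits of other are set.
func (f Feature) Has(other Feature) bool { return f&other == other }

// appendTruncate emits DELETE FROM for dialects that cannot TRUNCATE,
// just as the query builder above does.
func appendTruncate(b []byte, features Feature, table string) []byte {
	if !features.Has(TableTruncate) {
		b = append(b, "DELETE FROM "...)
		return append(b, table...)
	}
	b = append(b, "TRUNCATE TABLE "...)
	return append(b, table...)
}

func main() {
	fmt.Println(string(appendTruncate(nil, TableTruncate, "users"))) // TRUNCATE TABLE users
	fmt.Println(string(appendTruncate(nil, 0, "users")))             // DELETE FROM users
}
```

Building into a `[]byte` with `append` rather than string concatenation is the same allocation-conscious style the builders above use throughout.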
-func (q *UpdateQuery) Apply(fn func(*UpdateQuery) *UpdateQuery) *UpdateQuery { - if fn != nil { - return fn(q) - } - return q -} - -func (q *UpdateQuery) With(name string, query schema.QueryAppender) *UpdateQuery { - q.addWith(name, query, false) - return q -} - -func (q *UpdateQuery) WithRecursive(name string, query schema.QueryAppender) *UpdateQuery { - q.addWith(name, query, true) - return q -} - -//------------------------------------------------------------------------------ - -func (q *UpdateQuery) Table(tables ...string) *UpdateQuery { - for _, table := range tables { - q.addTable(schema.UnsafeIdent(table)) - } - return q -} - -func (q *UpdateQuery) TableExpr(query string, args ...interface{}) *UpdateQuery { - q.addTable(schema.SafeQuery(query, args)) - return q -} - -func (q *UpdateQuery) ModelTableExpr(query string, args ...interface{}) *UpdateQuery { - q.modelTableName = schema.SafeQuery(query, args) - return q -} - -//------------------------------------------------------------------------------ - -func (q *UpdateQuery) Column(columns ...string) *UpdateQuery { - for _, column := range columns { - q.addColumn(schema.UnsafeIdent(column)) - } - return q -} - -func (q *UpdateQuery) ExcludeColumn(columns ...string) *UpdateQuery { - q.excludeColumn(columns) - return q -} - -func (q *UpdateQuery) Set(query string, args ...interface{}) *UpdateQuery { - q.addSet(schema.SafeQuery(query, args)) - return q -} - -func (q *UpdateQuery) SetColumn(column string, query string, args ...interface{}) *UpdateQuery { - if q.db.HasFeature(feature.UpdateMultiTable) { - column = q.table.Alias + "." + column - } - q.addSet(schema.SafeQuery(column+" = "+query, args)) - return q -} - -// Value overwrites model value for the column. 
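The setters above (`Table`, `Set`, `SetColumn`, and the rest) all mutate the receiver and return it, which is what makes call chaining work. A minimal sketch of that fluent-builder style, using a hypothetical `updateBuilder` rather than bun's real type:

```go
package main

import (
	"fmt"
	"strings"
)

// updateBuilder mimics the chainable setter style above: every method
// mutates the receiver and returns it, so calls compose left to right.
type updateBuilder struct {
	table string
	sets  []string
}

func (q *updateBuilder) Table(name string) *updateBuilder {
	q.table = name
	return q
}

func (q *updateBuilder) Set(expr string) *updateBuilder {
	q.sets = append(q.sets, expr)
	return q
}

// String renders the accumulated statement.
func (q *updateBuilder) String() string {
	return "UPDATE " + q.table + " SET " + strings.Join(q.sets, ", ")
}

func main() {
	q := new(updateBuilder).Table("books").Set("title = ?").Set("updated_at = now()")
	fmt.Println(q) // UPDATE books SET title = ?, updated_at = now()
}
```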
-func (q *UpdateQuery) Value(column string, query string, args ...interface{}) *UpdateQuery { - if q.table == nil { - q.err = errNilModel - return q - } - q.addValue(q.table, column, query, args) - return q -} - -func (q *UpdateQuery) OmitZero() *UpdateQuery { - q.omitZero = true - return q -} - -//------------------------------------------------------------------------------ - -func (q *UpdateQuery) WherePK(cols ...string) *UpdateQuery { - q.addWhereCols(cols) - return q -} - -func (q *UpdateQuery) Where(query string, args ...interface{}) *UpdateQuery { - q.addWhere(schema.SafeQueryWithSep(query, args, " AND ")) - return q -} - -func (q *UpdateQuery) WhereOr(query string, args ...interface{}) *UpdateQuery { - q.addWhere(schema.SafeQueryWithSep(query, args, " OR ")) - return q -} - -func (q *UpdateQuery) WhereGroup(sep string, fn func(*UpdateQuery) *UpdateQuery) *UpdateQuery { - saved := q.where - q.where = nil - - q = fn(q) - - where := q.where - q.where = saved - - q.addWhereGroup(sep, where) - - return q -} - -func (q *UpdateQuery) WhereDeleted() *UpdateQuery { - q.whereDeleted() - return q -} - -func (q *UpdateQuery) WhereAllWithDeleted() *UpdateQuery { - q.whereAllWithDeleted() - return q -} - -//------------------------------------------------------------------------------ - -// Returning adds a RETURNING clause to the query. -// -// To suppress the auto-generated RETURNING clause, use `Returning("NULL")`. 
-func (q *UpdateQuery) Returning(query string, args ...interface{}) *UpdateQuery { - q.addReturning(schema.SafeQuery(query, args)) - return q -} - -//------------------------------------------------------------------------------ - -func (q *UpdateQuery) Operation() string { - return "UPDATE" -} - -func (q *UpdateQuery) AppendQuery(fmter schema.Formatter, b []byte) (_ []byte, err error) { - if q.err != nil { - return nil, q.err - } - - fmter = formatterWithModel(fmter, q) - - b, err = q.appendWith(fmter, b) - if err != nil { - return nil, err - } - - b = append(b, "UPDATE "...) - - if fmter.HasFeature(feature.UpdateMultiTable) { - b, err = q.appendTablesWithAlias(fmter, b) - } else if fmter.HasFeature(feature.UpdateTableAlias) { - b, err = q.appendFirstTableWithAlias(fmter, b) - } else { - b, err = q.appendFirstTable(fmter, b) - } - if err != nil { - return nil, err - } - - b, err = q.appendIndexHints(fmter, b) - if err != nil { - return nil, err - } - - b, err = q.mustAppendSet(fmter, b) - if err != nil { - return nil, err - } - - if !fmter.HasFeature(feature.UpdateMultiTable) { - b, err = q.appendOtherTables(fmter, b) - if err != nil { - return nil, err - } - } - - if q.hasFeature(feature.Output) && q.hasReturning() { - b = append(b, " OUTPUT "...) - b, err = q.appendOutput(fmter, b) - if err != nil { - return nil, err - } - } - - b, err = q.mustAppendWhere(fmter, b, q.hasTableAlias(fmter)) - if err != nil { - return nil, err - } - - if q.hasFeature(feature.Returning) && q.hasReturning() { - b = append(b, " RETURNING "...) - b, err = q.appendReturning(fmter, b) - if err != nil { - return nil, err - } - } - - return b, nil -} - -func (q *UpdateQuery) mustAppendSet(fmter schema.Formatter, b []byte) (_ []byte, err error) { - b = append(b, " SET "...) 
- - if len(q.set) > 0 { - return q.appendSet(fmter, b) - } - - if m, ok := q.model.(*mapModel); ok { - return m.appendSet(fmter, b), nil - } - - if q.tableModel == nil { - return nil, errNilModel - } - - switch model := q.tableModel.(type) { - case *structTableModel: - b, err = q.appendSetStruct(fmter, b, model) - if err != nil { - return nil, err - } - case *sliceTableModel: - return nil, errors.New("bun: to bulk Update, use CTE and VALUES") - default: - return nil, fmt.Errorf("bun: Update does not support %T", q.tableModel) - } - - return b, nil -} - -func (q *UpdateQuery) appendSetStruct( - fmter schema.Formatter, b []byte, model *structTableModel, -) ([]byte, error) { - fields, err := q.getDataFields() - if err != nil { - return nil, err - } - - isTemplate := fmter.IsNop() - pos := len(b) - for _, f := range fields { - if f.SkipUpdate() { - continue - } - - app, hasValue := q.modelValues[f.Name] - - if !hasValue && q.omitZero && f.HasZeroValue(model.strct) { - continue - } - - if len(b) != pos { - b = append(b, ", "...) - pos = len(b) - } - - b = append(b, f.SQLName...) - b = append(b, " = "...) - - if isTemplate { - b = append(b, '?') - continue - } - - if hasValue { - b, err = app.AppendQuery(fmter, b) - if err != nil { - return nil, err - } - } else { - b = f.AppendValue(fmter, b, model.strct) - } - } - - for i, v := range q.extraValues { - if i > 0 || len(fields) > 0 { - b = append(b, ", "...) - } - - b = append(b, v.column...) - b = append(b, " = "...) - - b, err = v.value.AppendQuery(fmter, b) - if err != nil { - return nil, err - } - } - - return b, nil -} - -func (q *UpdateQuery) appendOtherTables(fmter schema.Formatter, b []byte) (_ []byte, err error) { - if !q.hasMultiTables() { - return b, nil - } - - b = append(b, " FROM "...) 
- - b, err = q.whereBaseQuery.appendOtherTables(fmter, b) - if err != nil { - return nil, err - } - - return b, nil -} - -//------------------------------------------------------------------------------ - -func (q *UpdateQuery) Bulk() *UpdateQuery { - model, ok := q.model.(*sliceTableModel) - if !ok { - q.setErr(fmt.Errorf("bun: Bulk requires a slice, got %T", q.model)) - return q - } - - set, err := q.updateSliceSet(q.db.fmter, model) - if err != nil { - q.setErr(err) - return q - } - - values := q.db.NewValues(model) - values.customValueQuery = q.customValueQuery - - return q.With("_data", values). - Model(model). - TableExpr("_data"). - Set(set). - Where(q.updateSliceWhere(q.db.fmter, model)) -} - -func (q *UpdateQuery) updateSliceSet( - fmter schema.Formatter, model *sliceTableModel, -) (string, error) { - fields, err := q.getDataFields() - if err != nil { - return "", err - } - - var b []byte - pos := len(b) - for _, field := range fields { - if field.SkipUpdate() { - continue - } - if len(b) != pos { - b = append(b, ", "...) - pos = len(b) - } - if fmter.HasFeature(feature.UpdateMultiTable) { - b = append(b, model.table.SQLAlias...) - b = append(b, '.') - } - b = append(b, field.SQLName...) - b = append(b, " = _data."...) - b = append(b, field.SQLName...) - } - return internal.String(b), nil -} - -func (q *UpdateQuery) updateSliceWhere(fmter schema.Formatter, model *sliceTableModel) string { - var b []byte - for i, pk := range model.table.PKs { - if i > 0 { - b = append(b, " AND "...) - } - if q.hasTableAlias(fmter) { - b = append(b, model.table.SQLAlias...) - } else { - b = append(b, model.table.SQLName...) - } - b = append(b, '.') - b = append(b, pk.SQLName...) - b = append(b, " = _data."...) - b = append(b, pk.SQLName...) 
- } - return internal.String(b) -} - -//------------------------------------------------------------------------------ - -func (q *UpdateQuery) Scan(ctx context.Context, dest ...interface{}) error { - _, err := q.scanOrExec(ctx, dest, true) - return err -} - -func (q *UpdateQuery) Exec(ctx context.Context, dest ...interface{}) (sql.Result, error) { - return q.scanOrExec(ctx, dest, len(dest) > 0) -} - -func (q *UpdateQuery) scanOrExec( - ctx context.Context, dest []interface{}, hasDest bool, -) (sql.Result, error) { - if q.err != nil { - return nil, q.err - } - - if q.table != nil { - if err := q.beforeUpdateHook(ctx); err != nil { - return nil, err - } - } - - // Run append model hooks before generating the query. - if err := q.beforeAppendModel(ctx, q); err != nil { - return nil, err - } - - // Generate the query before checking hasReturning. - queryBytes, err := q.AppendQuery(q.db.fmter, q.db.makeQueryBytes()) - if err != nil { - return nil, err - } - - useScan := hasDest || (q.hasReturning() && q.hasFeature(feature.Returning|feature.Output)) - var model Model - - if useScan { - var err error - model, err = q.getModel(dest) - if err != nil { - return nil, err - } - } - - query := internal.String(queryBytes) - - var res sql.Result - - if useScan { - res, err = q.scan(ctx, q, query, model, hasDest) - if err != nil { - return nil, err - } - } else { - res, err = q.exec(ctx, q, query) - if err != nil { - return nil, err - } - } - - if q.table != nil { - if err := q.afterUpdateHook(ctx); err != nil { - return nil, err - } - } - - return res, nil -} - -func (q *UpdateQuery) beforeUpdateHook(ctx context.Context) error { - if hook, ok := q.table.ZeroIface.(BeforeUpdateHook); ok { - if err := hook.BeforeUpdate(ctx, q); err != nil { - return err - } - } - return nil -} - -func (q *UpdateQuery) afterUpdateHook(ctx context.Context) error { - if hook, ok := q.table.ZeroIface.(AfterUpdateHook); ok { - if err := hook.AfterUpdate(ctx, q); err != nil { - return err - } - } - 
return nil -} - -// FQN returns a fully qualified column name, for example, table_name.column_name or -// table_alias.column_alias. -func (q *UpdateQuery) FQN(column string) Ident { - if q.table == nil { - panic("UpdateQuery.FQN requires a model") - } - if q.hasTableAlias(q.db.fmter) { - return Ident(q.table.Alias + "." + column) - } - return Ident(q.table.Name + "." + column) -} - -func (q *UpdateQuery) hasTableAlias(fmter schema.Formatter) bool { - return fmter.HasFeature(feature.UpdateMultiTable | feature.UpdateTableAlias) -} - -func (q *UpdateQuery) String() string { - buf, err := q.AppendQuery(q.db.Formatter(), nil) - if err != nil { - panic(err) - } - - return string(buf) -} - -//------------------------------------------------------------------------------ - -func (q *UpdateQuery) QueryBuilder() QueryBuilder { - return &updateQueryBuilder{q} -} - -func (q *UpdateQuery) ApplyQueryBuilder(fn func(QueryBuilder) QueryBuilder) *UpdateQuery { - return fn(q.QueryBuilder()).Unwrap().(*UpdateQuery) -} - -type updateQueryBuilder struct { - *UpdateQuery -} - -func (q *updateQueryBuilder) WhereGroup( - sep string, fn func(QueryBuilder) QueryBuilder, -) QueryBuilder { - q.UpdateQuery = q.UpdateQuery.WhereGroup(sep, func(qs *UpdateQuery) *UpdateQuery { - return fn(q).(*updateQueryBuilder).UpdateQuery - }) - return q -} - -func (q *updateQueryBuilder) Where(query string, args ...interface{}) QueryBuilder { - q.UpdateQuery.Where(query, args...) - return q -} - -func (q *updateQueryBuilder) WhereOr(query string, args ...interface{}) QueryBuilder { - q.UpdateQuery.WhereOr(query, args...) - return q -} - -func (q *updateQueryBuilder) WhereDeleted() QueryBuilder { - q.UpdateQuery.WhereDeleted() - return q -} - -func (q *updateQueryBuilder) WhereAllWithDeleted() QueryBuilder { - q.UpdateQuery.WhereAllWithDeleted() - return q -} - -func (q *updateQueryBuilder) WherePK(cols ...string) QueryBuilder { - q.UpdateQuery.WherePK(cols...) 
- return q -} - -func (q *updateQueryBuilder) Unwrap() interface{} { - return q.UpdateQuery -} - -//------------------------------------------------------------------------------ - -func (q *UpdateQuery) UseIndex(indexes ...string) *UpdateQuery { - if q.db.dialect.Name() == dialect.MySQL { - q.addUseIndex(indexes...) - } - return q -} - -func (q *UpdateQuery) IgnoreIndex(indexes ...string) *UpdateQuery { - if q.db.dialect.Name() == dialect.MySQL { - q.addIgnoreIndex(indexes...) - } - return q -} - -func (q *UpdateQuery) ForceIndex(indexes ...string) *UpdateQuery { - if q.db.dialect.Name() == dialect.MySQL { - q.addForceIndex(indexes...) - } - return q -} diff --git a/vendor/github.com/uptrace/bun/query_values.go b/vendor/github.com/uptrace/bun/query_values.go deleted file mode 100644 index 5c2abef6..00000000 --- a/vendor/github.com/uptrace/bun/query_values.go +++ /dev/null @@ -1,227 +0,0 @@ -package bun - -import ( - "fmt" - "reflect" - "strconv" - - "github.com/uptrace/bun/dialect/feature" - "github.com/uptrace/bun/schema" -) - -type ValuesQuery struct { - baseQuery - customValueQuery - - withOrder bool -} - -var ( - _ Query = (*ValuesQuery)(nil) - _ schema.NamedArgAppender = (*ValuesQuery)(nil) -) - -func NewValuesQuery(db *DB, model interface{}) *ValuesQuery { - q := &ValuesQuery{ - baseQuery: baseQuery{ - db: db, - conn: db.DB, - }, - } - q.setModel(model) - return q -} - -func (q *ValuesQuery) Conn(db IConn) *ValuesQuery { - q.setConn(db) - return q -} - -func (q *ValuesQuery) Err(err error) *ValuesQuery { - q.setErr(err) - return q -} - -func (q *ValuesQuery) Column(columns ...string) *ValuesQuery { - for _, column := range columns { - q.addColumn(schema.UnsafeIdent(column)) - } - return q -} - -// Value overwrites model value for the column. 
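`Bulk`, `updateSliceSet`, and `updateSliceWhere` in `query_update.go` above rewrite a slice update into a CTE named `_data` that is joined back by primary key. A sketch of the statement shape they produce; the helper below is illustrative and takes the VALUES rows as a pre-rendered string, whereas bun renders them itself:

```go
package main

import (
	"fmt"
	"strings"
)

// bulkUpdateSQL sketches the statement shape produced by Bulk above:
// the slice is bound as a `_data` CTE, each column is set from _data,
// and each row is matched by primary key.
func bulkUpdateSQL(table string, pks, cols []string, rows string) string {
	sets := make([]string, len(cols))
	for i, c := range cols {
		sets[i] = fmt.Sprintf("%s = _data.%s", c, c)
	}
	conds := make([]string, len(pks))
	for i, pk := range pks {
		conds[i] = fmt.Sprintf("%s.%s = _data.%s", table, pk, pk)
	}
	return fmt.Sprintf(
		"WITH _data (%s) AS (VALUES %s) UPDATE %s SET %s FROM _data WHERE %s",
		strings.Join(append(append([]string{}, pks...), cols...), ", "),
		rows, table, strings.Join(sets, ", "), strings.Join(conds, " AND "),
	)
}

func main() {
	fmt.Println(bulkUpdateSQL("books", []string{"id"}, []string{"title"}, "(1, 'a'), (2, 'b')"))
}
```

This is why `mustAppendSet` rejects `*sliceTableModel` with "to bulk Update, use CTE and VALUES": a plain multi-row SET has no way to vary values per row, while the `_data` join does.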
-func (q *ValuesQuery) Value(column string, expr string, args ...interface{}) *ValuesQuery { - if q.table == nil { - q.err = errNilModel - return q - } - q.addValue(q.table, column, expr, args) - return q -} - -func (q *ValuesQuery) WithOrder() *ValuesQuery { - q.withOrder = true - return q -} - -func (q *ValuesQuery) AppendNamedArg(fmter schema.Formatter, b []byte, name string) ([]byte, bool) { - switch name { - case "Columns": - bb, err := q.AppendColumns(fmter, b) - if err != nil { - q.setErr(err) - return b, true - } - return bb, true - } - return b, false -} - -// AppendColumns appends the table columns. It is used by CTE. -func (q *ValuesQuery) AppendColumns(fmter schema.Formatter, b []byte) (_ []byte, err error) { - if q.err != nil { - return nil, q.err - } - if q.model == nil { - return nil, errNilModel - } - - if q.tableModel != nil { - fields, err := q.getFields() - if err != nil { - return nil, err - } - - b = appendColumns(b, "", fields) - - if q.withOrder { - b = append(b, ", _order"...) - } - - return b, nil - } - - switch model := q.model.(type) { - case *mapSliceModel: - return model.appendColumns(fmter, b) - } - - return nil, fmt.Errorf("bun: Values does not support %T", q.model) -} - -func (q *ValuesQuery) Operation() string { - return "VALUES" -} - -func (q *ValuesQuery) AppendQuery(fmter schema.Formatter, b []byte) (_ []byte, err error) { - if q.err != nil { - return nil, q.err - } - if q.model == nil { - return nil, errNilModel - } - - fmter = formatterWithModel(fmter, q) - - if q.tableModel != nil { - fields, err := q.getFields() - if err != nil { - return nil, err - } - return q.appendQuery(fmter, b, fields) - } - - switch model := q.model.(type) { - case *mapSliceModel: - return model.appendValues(fmter, b) - } - - return nil, fmt.Errorf("bun: Values does not support %T", q.model) -} - -func (q *ValuesQuery) appendQuery( - fmter schema.Formatter, - b []byte, - fields []*schema.Field, -) (_ []byte, err error) { - b = append(b, "VALUES "...) 
- if q.db.features.Has(feature.ValuesRow) { - b = append(b, "ROW("...) - } else { - b = append(b, '(') - } - - switch model := q.tableModel.(type) { - case *structTableModel: - b, err = q.appendValues(fmter, b, fields, model.strct) - if err != nil { - return nil, err - } - - if q.withOrder { - b = append(b, ", "...) - b = strconv.AppendInt(b, 0, 10) - } - case *sliceTableModel: - slice := model.slice - sliceLen := slice.Len() - for i := 0; i < sliceLen; i++ { - if i > 0 { - b = append(b, "), "...) - if q.db.features.Has(feature.ValuesRow) { - b = append(b, "ROW("...) - } else { - b = append(b, '(') - } - } - - b, err = q.appendValues(fmter, b, fields, slice.Index(i)) - if err != nil { - return nil, err - } - - if q.withOrder { - b = append(b, ", "...) - b = strconv.AppendInt(b, int64(i), 10) - } - } - default: - return nil, fmt.Errorf("bun: Values does not support %T", q.model) - } - - b = append(b, ')') - - return b, nil -} - -func (q *ValuesQuery) appendValues( - fmter schema.Formatter, b []byte, fields []*schema.Field, strct reflect.Value, -) (_ []byte, err error) { - isTemplate := fmter.IsNop() - for i, f := range fields { - if i > 0 { - b = append(b, ", "...) - } - - app, ok := q.modelValues[f.Name] - if ok { - b, err = app.AppendQuery(fmter, b) - if err != nil { - return nil, err - } - continue - } - - if isTemplate { - b = append(b, '?') - } else { - b = f.AppendValue(fmter, b, indirect(strct)) - } - - if fmter.HasFeature(feature.DoubleColonCast) { - b = append(b, "::"...) - b = append(b, f.UserSQLType...) 
- } - } - return b, nil -} diff --git a/vendor/github.com/uptrace/bun/relation_join.go b/vendor/github.com/uptrace/bun/relation_join.go deleted file mode 100644 index 200f6758..00000000 --- a/vendor/github.com/uptrace/bun/relation_join.go +++ /dev/null @@ -1,412 +0,0 @@ -package bun - -import ( - "context" - "reflect" - "time" - - "github.com/uptrace/bun/dialect/feature" - "github.com/uptrace/bun/internal" - "github.com/uptrace/bun/schema" -) - -type relationJoin struct { - Parent *relationJoin - BaseModel TableModel - JoinModel TableModel - Relation *schema.Relation - - apply func(*SelectQuery) *SelectQuery - columns []schema.QueryWithArgs -} - -func (j *relationJoin) applyTo(q *SelectQuery) { - if j.apply == nil { - return - } - - var table *schema.Table - var columns []schema.QueryWithArgs - - // Save state. - table, q.table = q.table, j.JoinModel.Table() - columns, q.columns = q.columns, nil - - q = j.apply(q) - - // Restore state. - q.table = table - j.columns, q.columns = q.columns, columns -} - -func (j *relationJoin) Select(ctx context.Context, q *SelectQuery) error { - switch j.Relation.Type { - case schema.HasManyRelation: - return j.selectMany(ctx, q) - case schema.ManyToManyRelation: - return j.selectM2M(ctx, q) - } - panic("not reached") -} - -func (j *relationJoin) selectMany(ctx context.Context, q *SelectQuery) error { - q = j.manyQuery(q) - if q == nil { - return nil - } - return q.Scan(ctx) -} - -func (j *relationJoin) manyQuery(q *SelectQuery) *SelectQuery { - hasManyModel := newHasManyModel(j) - if hasManyModel == nil { - return nil - } - - q = q.Model(hasManyModel) - - var where []byte - - if q.db.dialect.Features().Has(feature.CompositeIn) { - return j.manyQueryCompositeIn(where, q) - } - return j.manyQueryMulti(where, q) -} - -func (j *relationJoin) manyQueryCompositeIn(where []byte, q *SelectQuery) *SelectQuery { - if len(j.Relation.JoinFields) > 1 { - where = append(where, '(') - } - where = appendColumns(where, j.JoinModel.Table().SQLAlias, j.Relation.JoinFields) - if len(j.Relation.JoinFields) > 1 { - where = append(where, ')') - } - where = append(where, " IN
("...) - where = appendChildValues( - q.db.Formatter(), - where, - j.JoinModel.rootValue(), - j.JoinModel.parentIndex(), - j.Relation.BaseFields, - ) - where = append(where, ")"...) - q = q.Where(internal.String(where)) - - if j.Relation.PolymorphicField != nil { - q = q.Where("? = ?", j.Relation.PolymorphicField.SQLName, j.Relation.PolymorphicValue) - } - - j.applyTo(q) - q = q.Apply(j.hasManyColumns) - - return q -} - -func (j *relationJoin) manyQueryMulti(where []byte, q *SelectQuery) *SelectQuery { - where = appendMultiValues( - q.db.Formatter(), - where, - j.JoinModel.rootValue(), - j.JoinModel.parentIndex(), - j.Relation.BaseFields, - j.Relation.JoinFields, - j.JoinModel.Table().SQLAlias, - ) - - q = q.Where(internal.String(where)) - - if j.Relation.PolymorphicField != nil { - q = q.Where("? = ?", j.Relation.PolymorphicField.SQLName, j.Relation.PolymorphicValue) - } - - j.applyTo(q) - q = q.Apply(j.hasManyColumns) - - return q -} - -func (j *relationJoin) hasManyColumns(q *SelectQuery) *SelectQuery { - b := make([]byte, 0, 32) - - joinTable := j.JoinModel.Table() - if len(j.columns) > 0 { - for i, col := range j.columns { - if i > 0 { - b = append(b, ", "...) - } - - if col.Args == nil { - if field, ok := joinTable.FieldMap[col.Query]; ok { - b = append(b, joinTable.SQLAlias...) - b = append(b, '.') - b = append(b, field.SQLName...) 
- continue - } - } - - var err error - b, err = col.AppendQuery(q.db.fmter, b) - if err != nil { - q.setErr(err) - return q - } - - } - } else { - b = appendColumns(b, joinTable.SQLAlias, joinTable.Fields) - } - - q = q.ColumnExpr(internal.String(b)) - - return q -} - -func (j *relationJoin) selectM2M(ctx context.Context, q *SelectQuery) error { - q = j.m2mQuery(q) - if q == nil { - return nil - } - return q.Scan(ctx) -} - -func (j *relationJoin) m2mQuery(q *SelectQuery) *SelectQuery { - fmter := q.db.fmter - - m2mModel := newM2MModel(j) - if m2mModel == nil { - return nil - } - q = q.Model(m2mModel) - - index := j.JoinModel.parentIndex() - baseTable := j.BaseModel.Table() - - if j.Relation.M2MTable != nil { - q = q.ColumnExpr(string(j.Relation.M2MTable.SQLAlias) + ".*") - } - - //nolint - var join []byte - join = append(join, "JOIN "...) - join = fmter.AppendQuery(join, string(j.Relation.M2MTable.SQLName)) - join = append(join, " AS "...) - join = append(join, j.Relation.M2MTable.SQLAlias...) - join = append(join, " ON ("...) - for i, col := range j.Relation.M2MBaseFields { - if i > 0 { - join = append(join, ", "...) - } - join = append(join, j.Relation.M2MTable.SQLAlias...) - join = append(join, '.') - join = append(join, col.SQLName...) - } - join = append(join, ") IN ("...) - join = appendChildValues(fmter, join, j.BaseModel.rootValue(), index, baseTable.PKs) - join = append(join, ")"...) - q = q.Join(internal.String(join)) - - joinTable := j.JoinModel.Table() - for i, m2mJoinField := range j.Relation.M2MJoinFields { - joinField := j.Relation.JoinFields[i] - q = q.Where("?.? 
= ?.?", - joinTable.SQLAlias, joinField.SQLName, - j.Relation.M2MTable.SQLAlias, m2mJoinField.SQLName) - } - - j.applyTo(q) - q = q.Apply(j.hasManyColumns) - - return q -} - -func (j *relationJoin) hasParent() bool { - if j.Parent != nil { - switch j.Parent.Relation.Type { - case schema.HasOneRelation, schema.BelongsToRelation: - return true - } - } - return false -} - -func (j *relationJoin) appendAlias(fmter schema.Formatter, b []byte) []byte { - quote := fmter.IdentQuote() - - b = append(b, quote) - b = appendAlias(b, j) - b = append(b, quote) - return b -} - -func (j *relationJoin) appendAliasColumn(fmter schema.Formatter, b []byte, column string) []byte { - quote := fmter.IdentQuote() - - b = append(b, quote) - b = appendAlias(b, j) - b = append(b, "__"...) - b = append(b, column...) - b = append(b, quote) - return b -} - -func (j *relationJoin) appendBaseAlias(fmter schema.Formatter, b []byte) []byte { - quote := fmter.IdentQuote() - - if j.hasParent() { - b = append(b, quote) - b = appendAlias(b, j.Parent) - b = append(b, quote) - return b - } - return append(b, j.BaseModel.Table().SQLAlias...) -} - -func (j *relationJoin) appendSoftDelete(fmter schema.Formatter, b []byte, flags internal.Flag) []byte { - b = append(b, '.') - - field := j.JoinModel.Table().SoftDeleteField - b = append(b, field.SQLName...) - - if field.IsPtr || field.NullZero { - if flags.Has(deletedFlag) { - b = append(b, " IS NOT NULL"...) - } else { - b = append(b, " IS NULL"...) - } - } else { - if flags.Has(deletedFlag) { - b = append(b, " != "...) - } else { - b = append(b, " = "...) - } - b = fmter.Dialect().AppendTime(b, time.Time{}) - } - - return b -} - -func appendAlias(b []byte, j *relationJoin) []byte { - if j.hasParent() { - b = appendAlias(b, j.Parent) - b = append(b, "__"...) - } - b = append(b, j.Relation.Field.Name...) 
- return b -} - -func (j *relationJoin) appendHasOneJoin( - fmter schema.Formatter, b []byte, q *SelectQuery, -) (_ []byte, err error) { - isSoftDelete := j.JoinModel.Table().SoftDeleteField != nil && !q.flags.Has(allWithDeletedFlag) - - b = append(b, "LEFT JOIN "...) - b = fmter.AppendQuery(b, string(j.JoinModel.Table().SQLNameForSelects)) - b = append(b, " AS "...) - b = j.appendAlias(fmter, b) - - b = append(b, " ON "...) - - b = append(b, '(') - for i, baseField := range j.Relation.BaseFields { - if i > 0 { - b = append(b, " AND "...) - } - b = j.appendAlias(fmter, b) - b = append(b, '.') - b = append(b, j.Relation.JoinFields[i].SQLName...) - b = append(b, " = "...) - b = j.appendBaseAlias(fmter, b) - b = append(b, '.') - b = append(b, baseField.SQLName...) - } - b = append(b, ')') - - if isSoftDelete { - b = append(b, " AND "...) - b = j.appendAlias(fmter, b) - b = j.appendSoftDelete(fmter, b, q.flags) - } - - return b, nil -} - -func appendChildValues( - fmter schema.Formatter, b []byte, v reflect.Value, index []int, fields []*schema.Field, -) []byte { - seen := make(map[string]struct{}) - walk(v, index, func(v reflect.Value) { - start := len(b) - - if len(fields) > 1 { - b = append(b, '(') - } - for i, f := range fields { - if i > 0 { - b = append(b, ", "...) - } - b = f.AppendValue(fmter, b, v) - } - if len(fields) > 1 { - b = append(b, ')') - } - b = append(b, ", "...) - - if _, ok := seen[string(b[start:])]; ok { - b = b[:start] - } else { - seen[string(b[start:])] = struct{}{} - } - }) - if len(seen) > 0 { - b = b[:len(b)-2] // trim ", " - } - return b -} - -// appendMultiValues is an alternative to appendChildValues that doesn't use the sql keyword ID -// but instead use a old style ((k1=v1) AND (k2=v2)) OR (...) of conditions. 
-func appendMultiValues( - fmter schema.Formatter, b []byte, v reflect.Value, index []int, baseFields, joinFields []*schema.Field, joinTable schema.Safe, -) []byte { - // This is based on a mix of appendChildValues and query_base.appendColumns - - // These should never missmatch in length but nice to know if it does - if len(joinFields) != len(baseFields) { - panic("not reached") - } - - // walk the relations - b = append(b, '(') - seen := make(map[string]struct{}) - walk(v, index, func(v reflect.Value) { - start := len(b) - for i, f := range baseFields { - if i > 0 { - b = append(b, " AND "...) - } - if len(baseFields) > 1 { - b = append(b, '(') - } - // Field name - b = append(b, joinTable...) - b = append(b, '.') - b = append(b, []byte(joinFields[i].SQLName)...) - - // Equals value - b = append(b, '=') - b = f.AppendValue(fmter, b, v) - if len(baseFields) > 1 { - b = append(b, ')') - } - } - - b = append(b, ") OR ("...) - - if _, ok := seen[string(b[start:])]; ok { - b = b[:start] - } else { - seen[string(b[start:])] = struct{}{} - } - }) - if len(seen) > 0 { - b = b[:len(b)-6] // trim ") OR (" - } - b = append(b, ')') - return b -} diff --git a/vendor/github.com/uptrace/bun/schema/append.go b/vendor/github.com/uptrace/bun/schema/append.go deleted file mode 100644 index 04538c03..00000000 --- a/vendor/github.com/uptrace/bun/schema/append.go +++ /dev/null @@ -1,101 +0,0 @@ -package schema - -import ( - "fmt" - "reflect" - "strconv" - "time" - - "github.com/uptrace/bun/dialect" -) - -func Append(fmter Formatter, b []byte, v interface{}) []byte { - switch v := v.(type) { - case nil: - return dialect.AppendNull(b) - case bool: - return dialect.AppendBool(b, v) - case int: - return strconv.AppendInt(b, int64(v), 10) - case int32: - return strconv.AppendInt(b, int64(v), 10) - case int64: - return strconv.AppendInt(b, v, 10) - case uint: - return strconv.AppendInt(b, int64(v), 10) - case uint32: - return fmter.Dialect().AppendUint32(b, v) - case uint64: - return 
fmter.Dialect().AppendUint64(b, v) - case float32: - return dialect.AppendFloat32(b, v) - case float64: - return dialect.AppendFloat64(b, v) - case string: - return fmter.Dialect().AppendString(b, v) - case time.Time: - return fmter.Dialect().AppendTime(b, v) - case []byte: - return fmter.Dialect().AppendBytes(b, v) - case QueryAppender: - return AppendQueryAppender(fmter, b, v) - default: - vv := reflect.ValueOf(v) - if vv.Kind() == reflect.Ptr && vv.IsNil() { - return dialect.AppendNull(b) - } - appender := Appender(fmter.Dialect(), vv.Type()) - return appender(fmter, b, vv) - } -} - -//------------------------------------------------------------------------------ - -func In(slice interface{}) QueryAppender { - v := reflect.ValueOf(slice) - if v.Kind() != reflect.Slice { - return &inValues{ - err: fmt.Errorf("bun: In(non-slice %T)", slice), - } - } - return &inValues{ - slice: v, - } -} - -type inValues struct { - slice reflect.Value - err error -} - -var _ QueryAppender = (*inValues)(nil) - -func (in *inValues) AppendQuery(fmter Formatter, b []byte) (_ []byte, err error) { - if in.err != nil { - return nil, in.err - } - return appendIn(fmter, b, in.slice), nil -} - -func appendIn(fmter Formatter, b []byte, slice reflect.Value) []byte { - sliceLen := slice.Len() - for i := 0; i < sliceLen; i++ { - if i > 0 { - b = append(b, ", "...) 
- } - - elem := slice.Index(i) - if elem.Kind() == reflect.Interface { - elem = elem.Elem() - } - - if elem.Kind() == reflect.Slice && elem.Type() != bytesType { - b = append(b, '(') - b = appendIn(fmter, b, elem) - b = append(b, ')') - } else { - b = fmter.AppendValue(b, elem) - } - } - return b -} diff --git a/vendor/github.com/uptrace/bun/schema/append_value.go b/vendor/github.com/uptrace/bun/schema/append_value.go deleted file mode 100644 index 9f0782e0..00000000 --- a/vendor/github.com/uptrace/bun/schema/append_value.go +++ /dev/null @@ -1,317 +0,0 @@ -package schema - -import ( - "database/sql/driver" - "fmt" - "net" - "reflect" - "strconv" - "strings" - "sync" - "time" - - "github.com/uptrace/bun/dialect" - "github.com/uptrace/bun/dialect/sqltype" - "github.com/uptrace/bun/extra/bunjson" - "github.com/uptrace/bun/internal" - "github.com/vmihailenco/msgpack/v5" -) - -type ( - AppenderFunc func(fmter Formatter, b []byte, v reflect.Value) []byte - CustomAppender func(typ reflect.Type) AppenderFunc -) - -var appenders = []AppenderFunc{ - reflect.Bool: AppendBoolValue, - reflect.Int: AppendIntValue, - reflect.Int8: AppendIntValue, - reflect.Int16: AppendIntValue, - reflect.Int32: AppendIntValue, - reflect.Int64: AppendIntValue, - reflect.Uint: AppendUintValue, - reflect.Uint8: AppendUintValue, - reflect.Uint16: AppendUintValue, - reflect.Uint32: appendUint32Value, - reflect.Uint64: appendUint64Value, - reflect.Uintptr: nil, - reflect.Float32: AppendFloat32Value, - reflect.Float64: AppendFloat64Value, - reflect.Complex64: nil, - reflect.Complex128: nil, - reflect.Array: AppendJSONValue, - reflect.Chan: nil, - reflect.Func: nil, - reflect.Interface: nil, - reflect.Map: AppendJSONValue, - reflect.Ptr: nil, - reflect.Slice: AppendJSONValue, - reflect.String: AppendStringValue, - reflect.Struct: AppendJSONValue, - reflect.UnsafePointer: nil, -} - -var appenderMap sync.Map - -func FieldAppender(dialect Dialect, field *Field) AppenderFunc { - if 
field.Tag.HasOption("msgpack") { - return appendMsgpack - } - - fieldType := field.StructField.Type - - switch strings.ToUpper(field.UserSQLType) { - case sqltype.JSON, sqltype.JSONB: - if fieldType.Implements(driverValuerType) { - return appendDriverValue - } - - if fieldType.Kind() != reflect.Ptr { - if reflect.PtrTo(fieldType).Implements(driverValuerType) { - return addrAppender(appendDriverValue) - } - } - - return AppendJSONValue - } - - return Appender(dialect, fieldType) -} - -func Appender(dialect Dialect, typ reflect.Type) AppenderFunc { - if v, ok := appenderMap.Load(typ); ok { - return v.(AppenderFunc) - } - - fn := appender(dialect, typ) - - if v, ok := appenderMap.LoadOrStore(typ, fn); ok { - return v.(AppenderFunc) - } - return fn -} - -func appender(dialect Dialect, typ reflect.Type) AppenderFunc { - switch typ { - case bytesType: - return appendBytesValue - case timeType: - return appendTimeValue - case timePtrType: - return PtrAppender(appendTimeValue) - case ipType: - return appendIPValue - case ipNetType: - return appendIPNetValue - case jsonRawMessageType: - return appendJSONRawMessageValue - } - - kind := typ.Kind() - - if typ.Implements(queryAppenderType) { - if kind == reflect.Ptr { - return nilAwareAppender(appendQueryAppenderValue) - } - return appendQueryAppenderValue - } - if typ.Implements(driverValuerType) { - if kind == reflect.Ptr { - return nilAwareAppender(appendDriverValue) - } - return appendDriverValue - } - - if kind != reflect.Ptr { - ptr := reflect.PtrTo(typ) - if ptr.Implements(queryAppenderType) { - return addrAppender(appendQueryAppenderValue) - } - if ptr.Implements(driverValuerType) { - return addrAppender(appendDriverValue) - } - } - - switch kind { - case reflect.Interface: - return ifaceAppenderFunc - case reflect.Ptr: - if typ.Implements(jsonMarshalerType) { - return nilAwareAppender(AppendJSONValue) - } - if fn := Appender(dialect, typ.Elem()); fn != nil { - return PtrAppender(fn) - } - case reflect.Slice: - if 
typ.Elem().Kind() == reflect.Uint8 { - return appendBytesValue - } - case reflect.Array: - if typ.Elem().Kind() == reflect.Uint8 { - return appendArrayBytesValue - } - } - - return appenders[typ.Kind()] -} - -func ifaceAppenderFunc(fmter Formatter, b []byte, v reflect.Value) []byte { - if v.IsNil() { - return dialect.AppendNull(b) - } - elem := v.Elem() - appender := Appender(fmter.Dialect(), elem.Type()) - return appender(fmter, b, elem) -} - -func nilAwareAppender(fn AppenderFunc) AppenderFunc { - return func(fmter Formatter, b []byte, v reflect.Value) []byte { - if v.IsNil() { - return dialect.AppendNull(b) - } - return fn(fmter, b, v) - } -} - -func PtrAppender(fn AppenderFunc) AppenderFunc { - return func(fmter Formatter, b []byte, v reflect.Value) []byte { - if v.IsNil() { - return dialect.AppendNull(b) - } - return fn(fmter, b, v.Elem()) - } -} - -func AppendBoolValue(fmter Formatter, b []byte, v reflect.Value) []byte { - return fmter.Dialect().AppendBool(b, v.Bool()) -} - -func AppendIntValue(fmter Formatter, b []byte, v reflect.Value) []byte { - return strconv.AppendInt(b, v.Int(), 10) -} - -func AppendUintValue(fmter Formatter, b []byte, v reflect.Value) []byte { - return strconv.AppendUint(b, v.Uint(), 10) -} - -func appendUint32Value(fmter Formatter, b []byte, v reflect.Value) []byte { - return fmter.Dialect().AppendUint32(b, uint32(v.Uint())) -} - -func appendUint64Value(fmter Formatter, b []byte, v reflect.Value) []byte { - return fmter.Dialect().AppendUint64(b, v.Uint()) -} - -func AppendFloat32Value(fmter Formatter, b []byte, v reflect.Value) []byte { - return dialect.AppendFloat32(b, float32(v.Float())) -} - -func AppendFloat64Value(fmter Formatter, b []byte, v reflect.Value) []byte { - return dialect.AppendFloat64(b, float64(v.Float())) -} - -func appendBytesValue(fmter Formatter, b []byte, v reflect.Value) []byte { - return fmter.Dialect().AppendBytes(b, v.Bytes()) -} - -func appendArrayBytesValue(fmter Formatter, b []byte, v reflect.Value) 
[]byte { - if v.CanAddr() { - return fmter.Dialect().AppendBytes(b, v.Slice(0, v.Len()).Bytes()) - } - - tmp := make([]byte, v.Len()) - reflect.Copy(reflect.ValueOf(tmp), v) - b = fmter.Dialect().AppendBytes(b, tmp) - return b -} - -func AppendStringValue(fmter Formatter, b []byte, v reflect.Value) []byte { - return fmter.Dialect().AppendString(b, v.String()) -} - -func AppendJSONValue(fmter Formatter, b []byte, v reflect.Value) []byte { - bb, err := bunjson.Marshal(v.Interface()) - if err != nil { - return dialect.AppendError(b, err) - } - - if len(bb) > 0 && bb[len(bb)-1] == '\n' { - bb = bb[:len(bb)-1] - } - - return fmter.Dialect().AppendJSON(b, bb) -} - -func appendTimeValue(fmter Formatter, b []byte, v reflect.Value) []byte { - tm := v.Interface().(time.Time) - return fmter.Dialect().AppendTime(b, tm) -} - -func appendIPValue(fmter Formatter, b []byte, v reflect.Value) []byte { - ip := v.Interface().(net.IP) - return fmter.Dialect().AppendString(b, ip.String()) -} - -func appendIPNetValue(fmter Formatter, b []byte, v reflect.Value) []byte { - ipnet := v.Interface().(net.IPNet) - return fmter.Dialect().AppendString(b, ipnet.String()) -} - -func appendJSONRawMessageValue(fmter Formatter, b []byte, v reflect.Value) []byte { - bytes := v.Bytes() - if bytes == nil { - return dialect.AppendNull(b) - } - return fmter.Dialect().AppendString(b, internal.String(bytes)) -} - -func appendQueryAppenderValue(fmter Formatter, b []byte, v reflect.Value) []byte { - return AppendQueryAppender(fmter, b, v.Interface().(QueryAppender)) -} - -func appendDriverValue(fmter Formatter, b []byte, v reflect.Value) []byte { - value, err := v.Interface().(driver.Valuer).Value() - if err != nil { - return dialect.AppendError(b, err) - } - if _, ok := value.(driver.Valuer); ok { - return dialect.AppendError(b, fmt.Errorf("driver.Valuer returns unsupported type %T", value)) - } - return Append(fmter, b, value) -} - -func addrAppender(fn AppenderFunc) AppenderFunc { - return func(fmter 
Formatter, b []byte, v reflect.Value) []byte { - if !v.CanAddr() { - err := fmt.Errorf("bun: Append(nonaddressable %T)", v.Interface()) - return dialect.AppendError(b, err) - } - return fn(fmter, b, v.Addr()) - } -} - -func appendMsgpack(fmter Formatter, b []byte, v reflect.Value) []byte { - hexEnc := internal.NewHexEncoder(b) - - enc := msgpack.GetEncoder() - defer msgpack.PutEncoder(enc) - - enc.Reset(hexEnc) - if err := enc.EncodeValue(v); err != nil { - return dialect.AppendError(b, err) - } - - if err := hexEnc.Close(); err != nil { - return dialect.AppendError(b, err) - } - - return hexEnc.Bytes() -} - -func AppendQueryAppender(fmter Formatter, b []byte, app QueryAppender) []byte { - bb, err := app.AppendQuery(fmter, b) - if err != nil { - return dialect.AppendError(b, err) - } - return bb -} diff --git a/vendor/github.com/uptrace/bun/schema/dialect.go b/vendor/github.com/uptrace/bun/schema/dialect.go deleted file mode 100644 index fea8238d..00000000 --- a/vendor/github.com/uptrace/bun/schema/dialect.go +++ /dev/null @@ -1,179 +0,0 @@ -package schema - -import ( - "database/sql" - "encoding/hex" - "strconv" - "time" - "unicode/utf8" - - "github.com/uptrace/bun/dialect" - "github.com/uptrace/bun/dialect/feature" - "github.com/uptrace/bun/internal/parser" -) - -type Dialect interface { - Init(db *sql.DB) - - Name() dialect.Name - Features() feature.Feature - - Tables() *Tables - OnTable(table *Table) - - IdentQuote() byte - - AppendUint32(b []byte, n uint32) []byte - AppendUint64(b []byte, n uint64) []byte - AppendTime(b []byte, tm time.Time) []byte - AppendString(b []byte, s string) []byte - AppendBytes(b []byte, bs []byte) []byte - AppendJSON(b, jsonb []byte) []byte - AppendBool(b []byte, v bool) []byte - - // DefaultVarcharLen should be returned for dialects in which specifying VARCHAR length - // is mandatory in queries that modify the schema (CREATE TABLE / ADD COLUMN, etc). 
- // Dialects that do not have such requirement may return 0, which should be interpreted so by the caller. - DefaultVarcharLen() int -} - -// ------------------------------------------------------------------------------ - -type BaseDialect struct{} - -func (BaseDialect) AppendUint32(b []byte, n uint32) []byte { - return strconv.AppendUint(b, uint64(n), 10) -} - -func (BaseDialect) AppendUint64(b []byte, n uint64) []byte { - return strconv.AppendUint(b, n, 10) -} - -func (BaseDialect) AppendTime(b []byte, tm time.Time) []byte { - b = append(b, '\'') - b = tm.UTC().AppendFormat(b, "2006-01-02 15:04:05.999999-07:00") - b = append(b, '\'') - return b -} - -func (BaseDialect) AppendString(b []byte, s string) []byte { - b = append(b, '\'') - for _, r := range s { - if r == '\000' { - continue - } - - if r == '\'' { - b = append(b, '\'', '\'') - continue - } - - if r < utf8.RuneSelf { - b = append(b, byte(r)) - continue - } - - l := len(b) - if cap(b)-l < utf8.UTFMax { - b = append(b, make([]byte, utf8.UTFMax)...) - } - n := utf8.EncodeRune(b[l:l+utf8.UTFMax], r) - b = b[:l+n] - } - b = append(b, '\'') - return b -} - -func (BaseDialect) AppendBytes(b, bs []byte) []byte { - if bs == nil { - return dialect.AppendNull(b) - } - - b = append(b, `'\x`...) - - s := len(b) - b = append(b, make([]byte, hex.EncodedLen(len(bs)))...) - hex.Encode(b[s:], bs) - - b = append(b, '\'') - - return b -} - -func (BaseDialect) AppendJSON(b, jsonb []byte) []byte { - b = append(b, '\'') - - p := parser.New(jsonb) - for p.Valid() { - c := p.Read() - switch c { - case '"': - b = append(b, '"') - case '\'': - b = append(b, "''"...) - case '\000': - continue - case '\\': - if p.SkipBytes([]byte("u0000")) { - b = append(b, `\\u0000`...) 
- } else { - b = append(b, '\\') - if p.Valid() { - b = append(b, p.Read()) - } - } - default: - b = append(b, c) - } - } - - b = append(b, '\'') - - return b -} - -func (BaseDialect) AppendBool(b []byte, v bool) []byte { - return dialect.AppendBool(b, v) -} - -// ------------------------------------------------------------------------------ - -type nopDialect struct { - BaseDialect - - tables *Tables - features feature.Feature -} - -func newNopDialect() *nopDialect { - d := new(nopDialect) - d.tables = NewTables(d) - d.features = feature.Returning - return d -} - -func (d *nopDialect) Init(*sql.DB) {} - -func (d *nopDialect) Name() dialect.Name { - return dialect.Invalid -} - -func (d *nopDialect) Features() feature.Feature { - return d.features -} - -func (d *nopDialect) Tables() *Tables { - return d.tables -} - -func (d *nopDialect) OnField(field *Field) {} - -func (d *nopDialect) OnTable(table *Table) {} - -func (d *nopDialect) IdentQuote() byte { - return '"' -} - -func (d *nopDialect) DefaultVarcharLen() int { - return 0 -} diff --git a/vendor/github.com/uptrace/bun/schema/field.go b/vendor/github.com/uptrace/bun/schema/field.go deleted file mode 100644 index 283a3b99..00000000 --- a/vendor/github.com/uptrace/bun/schema/field.go +++ /dev/null @@ -1,138 +0,0 @@ -package schema - -import ( - "fmt" - "reflect" - - "github.com/uptrace/bun/dialect" - "github.com/uptrace/bun/internal/tagparser" -) - -type Field struct { - StructField reflect.StructField - IsPtr bool - - Tag tagparser.Tag - IndirectType reflect.Type - Index []int - - Name string // SQL name, .e.g. id - SQLName Safe // escaped SQL name, e.g. "id" - GoName string // struct field name, e.g. 
Id - - DiscoveredSQLType string - UserSQLType string - CreateTableSQLType string - SQLDefault string - - OnDelete string - OnUpdate string - - IsPK bool - NotNull bool - NullZero bool - AutoIncrement bool - Identity bool - - Append AppenderFunc - Scan ScannerFunc - IsZero IsZeroerFunc -} - -func (f *Field) String() string { - return f.Name -} - -func (f *Field) Clone() *Field { - cp := *f - cp.Index = cp.Index[:len(f.Index):len(f.Index)] - return &cp -} - -func (f *Field) Value(strct reflect.Value) reflect.Value { - return fieldByIndexAlloc(strct, f.Index) -} - -func (f *Field) HasNilValue(v reflect.Value) bool { - if len(f.Index) == 1 { - return v.Field(f.Index[0]).IsNil() - } - - for _, index := range f.Index { - if v.Kind() == reflect.Ptr { - if v.IsNil() { - return true - } - v = v.Elem() - } - v = v.Field(index) - } - return v.IsNil() -} - -func (f *Field) HasZeroValue(v reflect.Value) bool { - if len(f.Index) == 1 { - return f.IsZero(v.Field(f.Index[0])) - } - - for _, index := range f.Index { - if v.Kind() == reflect.Ptr { - if v.IsNil() { - return true - } - v = v.Elem() - } - v = v.Field(index) - } - return f.IsZero(v) -} - -func (f *Field) AppendValue(fmter Formatter, b []byte, strct reflect.Value) []byte { - fv, ok := fieldByIndex(strct, f.Index) - if !ok { - return dialect.AppendNull(b) - } - - if (f.IsPtr && fv.IsNil()) || (f.NullZero && f.IsZero(fv)) { - return dialect.AppendNull(b) - } - if f.Append == nil { - panic(fmt.Errorf("bun: AppendValue(unsupported %s)", fv.Type())) - } - return f.Append(fmter, b, fv) -} - -func (f *Field) ScanWithCheck(fv reflect.Value, src interface{}) error { - if f.Scan == nil { - return fmt.Errorf("bun: Scan(unsupported %s)", f.IndirectType) - } - return f.Scan(fv, src) -} - -func (f *Field) ScanValue(strct reflect.Value, src interface{}) error { - if src == nil { - if fv, ok := fieldByIndex(strct, f.Index); ok { - return f.ScanWithCheck(fv, src) - } - return nil - } - - fv := fieldByIndexAlloc(strct, f.Index) - return 
f.ScanWithCheck(fv, src) -} - -func (f *Field) SkipUpdate() bool { - return f.Tag.HasOption("skipupdate") -} - -func indexEqual(ind1, ind2 []int) bool { - if len(ind1) != len(ind2) { - return false - } - for i, ind := range ind1 { - if ind != ind2[i] { - return false - } - } - return true -} diff --git a/vendor/github.com/uptrace/bun/schema/formatter.go b/vendor/github.com/uptrace/bun/schema/formatter.go deleted file mode 100644 index 1fba1b59..00000000 --- a/vendor/github.com/uptrace/bun/schema/formatter.go +++ /dev/null @@ -1,246 +0,0 @@ -package schema - -import ( - "reflect" - "strconv" - "strings" - - "github.com/uptrace/bun/dialect" - "github.com/uptrace/bun/dialect/feature" - "github.com/uptrace/bun/internal" - "github.com/uptrace/bun/internal/parser" -) - -var nopFormatter = Formatter{ - dialect: newNopDialect(), -} - -type Formatter struct { - dialect Dialect - args *namedArgList -} - -func NewFormatter(dialect Dialect) Formatter { - return Formatter{ - dialect: dialect, - } -} - -func NewNopFormatter() Formatter { - return nopFormatter -} - -func (f Formatter) IsNop() bool { - return f.dialect.Name() == dialect.Invalid -} - -func (f Formatter) Dialect() Dialect { - return f.dialect -} - -func (f Formatter) IdentQuote() byte { - return f.dialect.IdentQuote() -} - -func (f Formatter) AppendIdent(b []byte, ident string) []byte { - return dialect.AppendIdent(b, ident, f.IdentQuote()) -} - -func (f Formatter) AppendValue(b []byte, v reflect.Value) []byte { - if v.Kind() == reflect.Ptr && v.IsNil() { - return dialect.AppendNull(b) - } - appender := Appender(f.dialect, v.Type()) - return appender(f, b, v) -} - -func (f Formatter) HasFeature(feature feature.Feature) bool { - return f.dialect.Features().Has(feature) -} - -func (f Formatter) WithArg(arg NamedArgAppender) Formatter { - return Formatter{ - dialect: f.dialect, - args: f.args.WithArg(arg), - } -} - -func (f Formatter) WithNamedArg(name string, value interface{}) Formatter { - return Formatter{ - 
dialect: f.dialect, - args: f.args.WithArg(&namedArg{name: name, value: value}), - } -} - -func (f Formatter) FormatQuery(query string, args ...interface{}) string { - if f.IsNop() || (args == nil && f.args == nil) || strings.IndexByte(query, '?') == -1 { - return query - } - return internal.String(f.AppendQuery(nil, query, args...)) -} - -func (f Formatter) AppendQuery(dst []byte, query string, args ...interface{}) []byte { - if f.IsNop() || (args == nil && f.args == nil) || strings.IndexByte(query, '?') == -1 { - return append(dst, query...) - } - return f.append(dst, parser.NewString(query), args) -} - -func (f Formatter) append(dst []byte, p *parser.Parser, args []interface{}) []byte { - var namedArgs NamedArgAppender - if len(args) == 1 { - if v, ok := args[0].(NamedArgAppender); ok { - namedArgs = v - } else if v, ok := newStructArgs(f, args[0]); ok { - namedArgs = v - } - } - - var argIndex int - for p.Valid() { - b, ok := p.ReadSep('?') - if !ok { - dst = append(dst, b...) - continue - } - if len(b) > 0 && b[len(b)-1] == '\\' { - dst = append(dst, b[:len(b)-1]...) - dst = append(dst, '?') - continue - } - dst = append(dst, b...) - - name, numeric := p.ReadIdentifier() - if name != "" { - if numeric { - idx, err := strconv.Atoi(name) - if err != nil { - goto restore_arg - } - - if idx >= len(args) { - goto restore_arg - } - - dst = f.appendArg(dst, args[idx]) - continue - } - - if namedArgs != nil { - dst, ok = namedArgs.AppendNamedArg(f, dst, name) - if ok { - continue - } - } - - dst, ok = f.args.AppendNamedArg(f, dst, name) - if ok { - continue - } - - restore_arg: - dst = append(dst, '?') - dst = append(dst, name...) 
- continue - } - - if argIndex >= len(args) { - dst = append(dst, '?') - continue - } - - arg := args[argIndex] - argIndex++ - - dst = f.appendArg(dst, arg) - } - - return dst -} - -func (f Formatter) appendArg(b []byte, arg interface{}) []byte { - switch arg := arg.(type) { - case QueryAppender: - bb, err := arg.AppendQuery(f, b) - if err != nil { - return dialect.AppendError(b, err) - } - return bb - default: - return Append(f, b, arg) - } -} - -//------------------------------------------------------------------------------ - -type NamedArgAppender interface { - AppendNamedArg(fmter Formatter, b []byte, name string) ([]byte, bool) -} - -type namedArgList struct { - arg NamedArgAppender - next *namedArgList -} - -func (l *namedArgList) WithArg(arg NamedArgAppender) *namedArgList { - return &namedArgList{ - arg: arg, - next: l, - } -} - -func (l *namedArgList) AppendNamedArg(fmter Formatter, b []byte, name string) ([]byte, bool) { - for l != nil && l.arg != nil { - if b, ok := l.arg.AppendNamedArg(fmter, b, name); ok { - return b, true - } - l = l.next - } - return b, false -} - -//------------------------------------------------------------------------------ - -type namedArg struct { - name string - value interface{} -} - -var _ NamedArgAppender = (*namedArg)(nil) - -func (a *namedArg) AppendNamedArg(fmter Formatter, b []byte, name string) ([]byte, bool) { - if a.name == name { - return fmter.appendArg(b, a.value), true - } - return b, false -} - -//------------------------------------------------------------------------------ - -type structArgs struct { - table *Table - strct reflect.Value -} - -var _ NamedArgAppender = (*structArgs)(nil) - -func newStructArgs(fmter Formatter, strct interface{}) (*structArgs, bool) { - v := reflect.ValueOf(strct) - if !v.IsValid() { - return nil, false - } - - v = reflect.Indirect(v) - if v.Kind() != reflect.Struct { - return nil, false - } - - return &structArgs{ - table: fmter.Dialect().Tables().Get(v.Type()), - strct: v, - }, 
true -} - -func (m *structArgs) AppendNamedArg(fmter Formatter, b []byte, name string) ([]byte, bool) { - return m.table.AppendNamedArg(fmter, b, name, m.strct) -} diff --git a/vendor/github.com/uptrace/bun/schema/hook.go b/vendor/github.com/uptrace/bun/schema/hook.go deleted file mode 100644 index 624601c9..00000000 --- a/vendor/github.com/uptrace/bun/schema/hook.go +++ /dev/null @@ -1,59 +0,0 @@ -package schema - -import ( - "context" - "database/sql" - "reflect" -) - -type Model interface { - ScanRows(ctx context.Context, rows *sql.Rows) (int, error) - Value() interface{} -} - -type Query interface { - QueryAppender - Operation() string - GetModel() Model - GetTableName() string -} - -//------------------------------------------------------------------------------ - -type BeforeAppendModelHook interface { - BeforeAppendModel(ctx context.Context, query Query) error -} - -var beforeAppendModelHookType = reflect.TypeOf((*BeforeAppendModelHook)(nil)).Elem() - -//------------------------------------------------------------------------------ - -type BeforeScanHook interface { - BeforeScan(context.Context) error -} - -var beforeScanHookType = reflect.TypeOf((*BeforeScanHook)(nil)).Elem() - -//------------------------------------------------------------------------------ - -type AfterScanHook interface { - AfterScan(context.Context) error -} - -var afterScanHookType = reflect.TypeOf((*AfterScanHook)(nil)).Elem() - -//------------------------------------------------------------------------------ - -type BeforeScanRowHook interface { - BeforeScanRow(context.Context) error -} - -var beforeScanRowHookType = reflect.TypeOf((*BeforeScanRowHook)(nil)).Elem() - -//------------------------------------------------------------------------------ - -type AfterScanRowHook interface { - AfterScanRow(context.Context) error -} - -var afterScanRowHookType = reflect.TypeOf((*AfterScanRowHook)(nil)).Elem() diff --git a/vendor/github.com/uptrace/bun/schema/reflect.go 
b/vendor/github.com/uptrace/bun/schema/reflect.go deleted file mode 100644 index f13826a6..00000000 --- a/vendor/github.com/uptrace/bun/schema/reflect.go +++ /dev/null @@ -1,72 +0,0 @@ -package schema - -import ( - "database/sql/driver" - "encoding/json" - "net" - "reflect" - "time" -) - -var ( - bytesType = reflect.TypeOf((*[]byte)(nil)).Elem() - timePtrType = reflect.TypeOf((*time.Time)(nil)) - timeType = timePtrType.Elem() - ipType = reflect.TypeOf((*net.IP)(nil)).Elem() - ipNetType = reflect.TypeOf((*net.IPNet)(nil)).Elem() - jsonRawMessageType = reflect.TypeOf((*json.RawMessage)(nil)).Elem() - - driverValuerType = reflect.TypeOf((*driver.Valuer)(nil)).Elem() - queryAppenderType = reflect.TypeOf((*QueryAppender)(nil)).Elem() - jsonMarshalerType = reflect.TypeOf((*json.Marshaler)(nil)).Elem() -) - -func indirectType(t reflect.Type) reflect.Type { - if t.Kind() == reflect.Ptr { - t = t.Elem() - } - return t -} - -func fieldByIndex(v reflect.Value, index []int) (_ reflect.Value, ok bool) { - if len(index) == 1 { - return v.Field(index[0]), true - } - - for i, idx := range index { - if i > 0 { - if v.Kind() == reflect.Ptr { - if v.IsNil() { - return v, false - } - v = v.Elem() - } - } - v = v.Field(idx) - } - return v, true -} - -func fieldByIndexAlloc(v reflect.Value, index []int) reflect.Value { - if len(index) == 1 { - return v.Field(index[0]) - } - - for i, idx := range index { - if i > 0 { - v = indirectNil(v) - } - v = v.Field(idx) - } - return v -} - -func indirectNil(v reflect.Value) reflect.Value { - if v.Kind() == reflect.Ptr { - if v.IsNil() { - v.Set(reflect.New(v.Type().Elem())) - } - v = v.Elem() - } - return v -} diff --git a/vendor/github.com/uptrace/bun/schema/relation.go b/vendor/github.com/uptrace/bun/schema/relation.go deleted file mode 100644 index 6636e26a..00000000 --- a/vendor/github.com/uptrace/bun/schema/relation.go +++ /dev/null @@ -1,35 +0,0 @@ -package schema - -import ( - "fmt" -) - -const ( - InvalidRelation = iota - HasOneRelation - 
BelongsToRelation - HasManyRelation - ManyToManyRelation -) - -type Relation struct { - Type int - Field *Field - JoinTable *Table - BaseFields []*Field - JoinFields []*Field - OnUpdate string - OnDelete string - Condition []string - - PolymorphicField *Field - PolymorphicValue string - - M2MTable *Table - M2MBaseFields []*Field - M2MJoinFields []*Field -} - -func (r *Relation) String() string { - return fmt.Sprintf("relation=%s", r.Field.GoName) -} diff --git a/vendor/github.com/uptrace/bun/schema/scan.go b/vendor/github.com/uptrace/bun/schema/scan.go deleted file mode 100644 index 96b31caf..00000000 --- a/vendor/github.com/uptrace/bun/schema/scan.go +++ /dev/null @@ -1,516 +0,0 @@ -package schema - -import ( - "bytes" - "database/sql" - "fmt" - "net" - "reflect" - "strconv" - "strings" - "sync" - "time" - - "github.com/vmihailenco/msgpack/v5" - - "github.com/uptrace/bun/dialect/sqltype" - "github.com/uptrace/bun/extra/bunjson" - "github.com/uptrace/bun/internal" -) - -var scannerType = reflect.TypeOf((*sql.Scanner)(nil)).Elem() - -type ScannerFunc func(dest reflect.Value, src interface{}) error - -var scanners []ScannerFunc - -func init() { - scanners = []ScannerFunc{ - reflect.Bool: scanBool, - reflect.Int: scanInt64, - reflect.Int8: scanInt64, - reflect.Int16: scanInt64, - reflect.Int32: scanInt64, - reflect.Int64: scanInt64, - reflect.Uint: scanUint64, - reflect.Uint8: scanUint64, - reflect.Uint16: scanUint64, - reflect.Uint32: scanUint64, - reflect.Uint64: scanUint64, - reflect.Uintptr: scanUint64, - reflect.Float32: scanFloat64, - reflect.Float64: scanFloat64, - reflect.Complex64: nil, - reflect.Complex128: nil, - reflect.Array: nil, - reflect.Interface: scanInterface, - reflect.Map: scanJSON, - reflect.Ptr: nil, - reflect.Slice: scanJSON, - reflect.String: scanString, - reflect.Struct: scanJSON, - reflect.UnsafePointer: nil, - } -} - -var scannerMap sync.Map - -func FieldScanner(dialect Dialect, field *Field) ScannerFunc { - if 
field.Tag.HasOption("msgpack") { - return scanMsgpack - } - if field.Tag.HasOption("json_use_number") { - return scanJSONUseNumber - } - if field.StructField.Type.Kind() == reflect.Interface { - switch strings.ToUpper(field.UserSQLType) { - case sqltype.JSON, sqltype.JSONB: - return scanJSONIntoInterface - } - } - return Scanner(field.StructField.Type) -} - -func Scanner(typ reflect.Type) ScannerFunc { - if v, ok := scannerMap.Load(typ); ok { - return v.(ScannerFunc) - } - - fn := scanner(typ) - - if v, ok := scannerMap.LoadOrStore(typ, fn); ok { - return v.(ScannerFunc) - } - return fn -} - -func scanner(typ reflect.Type) ScannerFunc { - kind := typ.Kind() - - if kind == reflect.Ptr { - if fn := Scanner(typ.Elem()); fn != nil { - return PtrScanner(fn) - } - } - - switch typ { - case bytesType: - return scanBytes - case timeType: - return scanTime - case ipType: - return scanIP - case ipNetType: - return scanIPNet - case jsonRawMessageType: - return scanBytes - } - - if typ.Implements(scannerType) { - return scanScanner - } - - if kind != reflect.Ptr { - ptr := reflect.PtrTo(typ) - if ptr.Implements(scannerType) { - return addrScanner(scanScanner) - } - } - - if typ.Kind() == reflect.Slice && typ.Elem().Kind() == reflect.Uint8 { - return scanBytes - } - - return scanners[kind] -} - -func scanBool(dest reflect.Value, src interface{}) error { - switch src := src.(type) { - case nil: - dest.SetBool(false) - return nil - case bool: - dest.SetBool(src) - return nil - case int64: - dest.SetBool(src != 0) - return nil - case []byte: - f, err := strconv.ParseBool(internal.String(src)) - if err != nil { - return err - } - dest.SetBool(f) - return nil - case string: - f, err := strconv.ParseBool(src) - if err != nil { - return err - } - dest.SetBool(f) - return nil - default: - return scanError(dest.Type(), src) - } -} - -func scanInt64(dest reflect.Value, src interface{}) error { - switch src := src.(type) { - case nil: - dest.SetInt(0) - return nil - case int64: - 
dest.SetInt(src) - return nil - case uint64: - dest.SetInt(int64(src)) - return nil - case []byte: - n, err := strconv.ParseInt(internal.String(src), 10, 64) - if err != nil { - return err - } - dest.SetInt(n) - return nil - case string: - n, err := strconv.ParseInt(src, 10, 64) - if err != nil { - return err - } - dest.SetInt(n) - return nil - default: - return scanError(dest.Type(), src) - } -} - -func scanUint64(dest reflect.Value, src interface{}) error { - switch src := src.(type) { - case nil: - dest.SetUint(0) - return nil - case uint64: - dest.SetUint(src) - return nil - case int64: - dest.SetUint(uint64(src)) - return nil - case []byte: - n, err := strconv.ParseUint(internal.String(src), 10, 64) - if err != nil { - return err - } - dest.SetUint(n) - return nil - case string: - n, err := strconv.ParseUint(src, 10, 64) - if err != nil { - return err - } - dest.SetUint(n) - return nil - default: - return scanError(dest.Type(), src) - } -} - -func scanFloat64(dest reflect.Value, src interface{}) error { - switch src := src.(type) { - case nil: - dest.SetFloat(0) - return nil - case float64: - dest.SetFloat(src) - return nil - case []byte: - f, err := strconv.ParseFloat(internal.String(src), 64) - if err != nil { - return err - } - dest.SetFloat(f) - return nil - case string: - f, err := strconv.ParseFloat(src, 64) - if err != nil { - return err - } - dest.SetFloat(f) - return nil - default: - return scanError(dest.Type(), src) - } -} - -func scanString(dest reflect.Value, src interface{}) error { - switch src := src.(type) { - case nil: - dest.SetString("") - return nil - case string: - dest.SetString(src) - return nil - case []byte: - dest.SetString(string(src)) - return nil - case time.Time: - dest.SetString(src.Format(time.RFC3339Nano)) - return nil - case int64: - dest.SetString(strconv.FormatInt(src, 10)) - return nil - case uint64: - dest.SetString(strconv.FormatUint(src, 10)) - return nil - case float64: - dest.SetString(strconv.FormatFloat(src, 'G', 
-1, 64)) - return nil - default: - return scanError(dest.Type(), src) - } -} - -func scanBytes(dest reflect.Value, src interface{}) error { - switch src := src.(type) { - case nil: - dest.SetBytes(nil) - return nil - case string: - dest.SetBytes([]byte(src)) - return nil - case []byte: - clone := make([]byte, len(src)) - copy(clone, src) - - dest.SetBytes(clone) - return nil - default: - return scanError(dest.Type(), src) - } -} - -func scanTime(dest reflect.Value, src interface{}) error { - switch src := src.(type) { - case nil: - destTime := dest.Addr().Interface().(*time.Time) - *destTime = time.Time{} - return nil - case time.Time: - destTime := dest.Addr().Interface().(*time.Time) - *destTime = src - return nil - case string: - srcTime, err := internal.ParseTime(src) - if err != nil { - return err - } - destTime := dest.Addr().Interface().(*time.Time) - *destTime = srcTime - return nil - case []byte: - srcTime, err := internal.ParseTime(internal.String(src)) - if err != nil { - return err - } - destTime := dest.Addr().Interface().(*time.Time) - *destTime = srcTime - return nil - default: - return scanError(dest.Type(), src) - } -} - -func scanScanner(dest reflect.Value, src interface{}) error { - return dest.Interface().(sql.Scanner).Scan(src) -} - -func scanMsgpack(dest reflect.Value, src interface{}) error { - if src == nil { - return scanNull(dest) - } - - b, err := toBytes(src) - if err != nil { - return err - } - - dec := msgpack.GetDecoder() - defer msgpack.PutDecoder(dec) - - dec.Reset(bytes.NewReader(b)) - return dec.DecodeValue(dest) -} - -func scanJSON(dest reflect.Value, src interface{}) error { - if src == nil { - return scanNull(dest) - } - - b, err := toBytes(src) - if err != nil { - return err - } - - return bunjson.Unmarshal(b, dest.Addr().Interface()) -} - -func scanJSONUseNumber(dest reflect.Value, src interface{}) error { - if src == nil { - return scanNull(dest) - } - - b, err := toBytes(src) - if err != nil { - return err - } - - dec := 
bunjson.NewDecoder(bytes.NewReader(b)) - dec.UseNumber() - return dec.Decode(dest.Addr().Interface()) -} - -func scanIP(dest reflect.Value, src interface{}) error { - if src == nil { - return scanNull(dest) - } - - b, err := toBytes(src) - if err != nil { - return err - } - - ip := net.ParseIP(internal.String(b)) - if ip == nil { - return fmt.Errorf("bun: invalid ip: %q", b) - } - - ptr := dest.Addr().Interface().(*net.IP) - *ptr = ip - - return nil -} - -func scanIPNet(dest reflect.Value, src interface{}) error { - if src == nil { - return scanNull(dest) - } - - b, err := toBytes(src) - if err != nil { - return err - } - - _, ipnet, err := net.ParseCIDR(internal.String(b)) - if err != nil { - return err - } - - ptr := dest.Addr().Interface().(*net.IPNet) - *ptr = *ipnet - - return nil -} - -func addrScanner(fn ScannerFunc) ScannerFunc { - return func(dest reflect.Value, src interface{}) error { - if !dest.CanAddr() { - return fmt.Errorf("bun: Scan(nonaddressable %T)", dest.Interface()) - } - return fn(dest.Addr(), src) - } -} - -func toBytes(src interface{}) ([]byte, error) { - switch src := src.(type) { - case string: - return internal.Bytes(src), nil - case []byte: - return src, nil - default: - return nil, fmt.Errorf("bun: got %T, wanted []byte or string", src) - } -} - -func PtrScanner(fn ScannerFunc) ScannerFunc { - return func(dest reflect.Value, src interface{}) error { - if src == nil { - if !dest.CanAddr() { - if dest.IsNil() { - return nil - } - return fn(dest.Elem(), src) - } - - if !dest.IsNil() { - dest.Set(reflect.New(dest.Type().Elem())) - } - return nil - } - - if dest.IsNil() { - dest.Set(reflect.New(dest.Type().Elem())) - } - - if dest.Kind() == reflect.Map { - return fn(dest, src) - } - - return fn(dest.Elem(), src) - } -} - -func scanNull(dest reflect.Value) error { - if nilable(dest.Kind()) && dest.IsNil() { - return nil - } - dest.Set(reflect.New(dest.Type()).Elem()) - return nil -} - -func scanJSONIntoInterface(dest reflect.Value, src 
interface{}) error { - if dest.IsNil() { - if src == nil { - return nil - } - - b, err := toBytes(src) - if err != nil { - return err - } - - return bunjson.Unmarshal(b, dest.Addr().Interface()) - } - - dest = dest.Elem() - if fn := Scanner(dest.Type()); fn != nil { - return fn(dest, src) - } - return scanError(dest.Type(), src) -} - -func scanInterface(dest reflect.Value, src interface{}) error { - if dest.IsNil() { - if src == nil { - return nil - } - dest.Set(reflect.ValueOf(src)) - return nil - } - - dest = dest.Elem() - if fn := Scanner(dest.Type()); fn != nil { - return fn(dest, src) - } - return scanError(dest.Type(), src) -} - -func nilable(kind reflect.Kind) bool { - switch kind { - case reflect.Chan, reflect.Func, reflect.Interface, reflect.Map, reflect.Ptr, reflect.Slice: - return true - } - return false -} - -func scanError(dest reflect.Type, src interface{}) error { - return fmt.Errorf("bun: can't scan %#v (%T) into %s", src, src, dest.String()) -} diff --git a/vendor/github.com/uptrace/bun/schema/sqlfmt.go b/vendor/github.com/uptrace/bun/schema/sqlfmt.go deleted file mode 100644 index a4ed24af..00000000 --- a/vendor/github.com/uptrace/bun/schema/sqlfmt.go +++ /dev/null @@ -1,87 +0,0 @@ -package schema - -import ( - "strings" - - "github.com/uptrace/bun/internal" -) - -type QueryAppender interface { - AppendQuery(fmter Formatter, b []byte) ([]byte, error) -} - -type ColumnsAppender interface { - AppendColumns(fmter Formatter, b []byte) ([]byte, error) -} - -//------------------------------------------------------------------------------ - -// Safe represents a safe SQL query. -type Safe string - -var _ QueryAppender = (*Safe)(nil) - -func (s Safe) AppendQuery(fmter Formatter, b []byte) ([]byte, error) { - return append(b, s...), nil -} - -//------------------------------------------------------------------------------ - -// Ident represents a SQL identifier, for example, table or column name. 
-type Ident string - -var _ QueryAppender = (*Ident)(nil) - -func (s Ident) AppendQuery(fmter Formatter, b []byte) ([]byte, error) { - return fmter.AppendIdent(b, string(s)), nil -} - -//------------------------------------------------------------------------------ - -type QueryWithArgs struct { - Query string - Args []interface{} -} - -var _ QueryAppender = QueryWithArgs{} - -func SafeQuery(query string, args []interface{}) QueryWithArgs { - if args == nil { - args = make([]interface{}, 0) - } else if len(query) > 0 && strings.IndexByte(query, '?') == -1 { - internal.Warn.Printf("query %q has %v args, but no placeholders", query, args) - } - return QueryWithArgs{ - Query: query, - Args: args, - } -} - -func UnsafeIdent(ident string) QueryWithArgs { - return QueryWithArgs{Query: ident} -} - -func (q QueryWithArgs) IsZero() bool { - return q.Query == "" && q.Args == nil -} - -func (q QueryWithArgs) AppendQuery(fmter Formatter, b []byte) ([]byte, error) { - if q.Args == nil { - return fmter.AppendIdent(b, q.Query), nil - } - return fmter.AppendQuery(b, q.Query, q.Args...), nil -} - -//------------------------------------------------------------------------------ - -type QueryWithSep struct { - QueryWithArgs - Sep string -} - -func SafeQueryWithSep(query string, args []interface{}, sep string) QueryWithSep { - return QueryWithSep{ - QueryWithArgs: SafeQuery(query, args), - Sep: sep, - } -} diff --git a/vendor/github.com/uptrace/bun/schema/sqltype.go b/vendor/github.com/uptrace/bun/schema/sqltype.go deleted file mode 100644 index 233ba641..00000000 --- a/vendor/github.com/uptrace/bun/schema/sqltype.go +++ /dev/null @@ -1,141 +0,0 @@ -package schema - -import ( - "bytes" - "database/sql" - "encoding/json" - "reflect" - "time" - - "github.com/uptrace/bun/dialect" - "github.com/uptrace/bun/dialect/sqltype" - "github.com/uptrace/bun/internal" -) - -var ( - bunNullTimeType = reflect.TypeOf((*NullTime)(nil)).Elem() - nullTimeType = reflect.TypeOf((*sql.NullTime)(nil)).Elem() 
- nullBoolType = reflect.TypeOf((*sql.NullBool)(nil)).Elem() - nullFloatType = reflect.TypeOf((*sql.NullFloat64)(nil)).Elem() - nullIntType = reflect.TypeOf((*sql.NullInt64)(nil)).Elem() - nullStringType = reflect.TypeOf((*sql.NullString)(nil)).Elem() -) - -var sqlTypes = []string{ - reflect.Bool: sqltype.Boolean, - reflect.Int: sqltype.BigInt, - reflect.Int8: sqltype.SmallInt, - reflect.Int16: sqltype.SmallInt, - reflect.Int32: sqltype.Integer, - reflect.Int64: sqltype.BigInt, - reflect.Uint: sqltype.BigInt, - reflect.Uint8: sqltype.SmallInt, - reflect.Uint16: sqltype.SmallInt, - reflect.Uint32: sqltype.Integer, - reflect.Uint64: sqltype.BigInt, - reflect.Uintptr: sqltype.BigInt, - reflect.Float32: sqltype.Real, - reflect.Float64: sqltype.DoublePrecision, - reflect.Complex64: "", - reflect.Complex128: "", - reflect.Array: "", - reflect.Interface: "", - reflect.Map: sqltype.VarChar, - reflect.Ptr: "", - reflect.Slice: sqltype.VarChar, - reflect.String: sqltype.VarChar, - reflect.Struct: sqltype.VarChar, -} - -func DiscoverSQLType(typ reflect.Type) string { - switch typ { - case timeType, nullTimeType, bunNullTimeType: - return sqltype.Timestamp - case nullBoolType: - return sqltype.Boolean - case nullFloatType: - return sqltype.DoublePrecision - case nullIntType: - return sqltype.BigInt - case nullStringType: - return sqltype.VarChar - case jsonRawMessageType: - return sqltype.JSON - } - - switch typ.Kind() { - case reflect.Slice: - if typ.Elem().Kind() == reflect.Uint8 { - return sqltype.Blob - } - } - - return sqlTypes[typ.Kind()] -} - -//------------------------------------------------------------------------------ - -var jsonNull = []byte("null") - -// NullTime is a time.Time wrapper that marshals zero time as JSON null and SQL NULL. 
-type NullTime struct { - time.Time -} - -var ( - _ json.Marshaler = (*NullTime)(nil) - _ json.Unmarshaler = (*NullTime)(nil) - _ sql.Scanner = (*NullTime)(nil) - _ QueryAppender = (*NullTime)(nil) -) - -func (tm NullTime) MarshalJSON() ([]byte, error) { - if tm.IsZero() { - return jsonNull, nil - } - return tm.Time.MarshalJSON() -} - -func (tm *NullTime) UnmarshalJSON(b []byte) error { - if bytes.Equal(b, jsonNull) { - tm.Time = time.Time{} - return nil - } - return tm.Time.UnmarshalJSON(b) -} - -func (tm NullTime) AppendQuery(fmter Formatter, b []byte) ([]byte, error) { - if tm.IsZero() { - return dialect.AppendNull(b), nil - } - return fmter.Dialect().AppendTime(b, tm.Time), nil -} - -func (tm *NullTime) Scan(src interface{}) error { - if src == nil { - tm.Time = time.Time{} - return nil - } - - switch src := src.(type) { - case time.Time: - tm.Time = src - return nil - case string: - newtm, err := internal.ParseTime(src) - if err != nil { - return err - } - tm.Time = newtm - return nil - case []byte: - newtm, err := internal.ParseTime(internal.String(src)) - if err != nil { - return err - } - tm.Time = newtm - return nil - default: - return scanError(bunNullTimeType, src) - } -} diff --git a/vendor/github.com/uptrace/bun/schema/table.go b/vendor/github.com/uptrace/bun/schema/table.go deleted file mode 100644 index ed8c517c..00000000 --- a/vendor/github.com/uptrace/bun/schema/table.go +++ /dev/null @@ -1,1035 +0,0 @@ -package schema - -import ( - "database/sql" - "fmt" - "reflect" - "strings" - "sync" - "time" - - "github.com/jinzhu/inflection" - - "github.com/uptrace/bun/internal" - "github.com/uptrace/bun/internal/tagparser" -) - -const ( - beforeAppendModelHookFlag internal.Flag = 1 << iota - beforeScanHookFlag - afterScanHookFlag - beforeScanRowHookFlag - afterScanRowHookFlag -) - -var ( - baseModelType = reflect.TypeOf((*BaseModel)(nil)).Elem() - tableNameInflector = inflection.Plural -) - -type BaseModel struct{} - -// SetTableNameInflector overrides the 
default func that pluralizes -// model name to get table name, e.g. my_article becomes my_articles. -func SetTableNameInflector(fn func(string) string) { - tableNameInflector = fn -} - -// Table represents a SQL table created from Go struct. -type Table struct { - dialect Dialect - - Type reflect.Type - ZeroValue reflect.Value // reflect.Struct - ZeroIface interface{} // struct pointer - - TypeName string - ModelName string - - Name string - SQLName Safe - SQLNameForSelects Safe - Alias string - SQLAlias Safe - - Fields []*Field // PKs + DataFields - PKs []*Field - DataFields []*Field - - fieldsMapMu sync.RWMutex - FieldMap map[string]*Field - - Relations map[string]*Relation - Unique map[string][]*Field - - SoftDeleteField *Field - UpdateSoftDeleteField func(fv reflect.Value, tm time.Time) error - - allFields []*Field // read only - - flags internal.Flag -} - -func newTable(dialect Dialect, typ reflect.Type) *Table { - t := new(Table) - t.dialect = dialect - t.Type = typ - t.ZeroValue = reflect.New(t.Type).Elem() - t.ZeroIface = reflect.New(t.Type).Interface() - t.TypeName = internal.ToExported(t.Type.Name()) - t.ModelName = internal.Underscore(t.Type.Name()) - tableName := tableNameInflector(t.ModelName) - t.setName(tableName) - t.Alias = t.ModelName - t.SQLAlias = t.quoteIdent(t.ModelName) - - hooks := []struct { - typ reflect.Type - flag internal.Flag - }{ - {beforeAppendModelHookType, beforeAppendModelHookFlag}, - - {beforeScanHookType, beforeScanHookFlag}, - {afterScanHookType, afterScanHookFlag}, - - {beforeScanRowHookType, beforeScanRowHookFlag}, - {afterScanRowHookType, afterScanRowHookFlag}, - } - - typ = reflect.PtrTo(t.Type) - for _, hook := range hooks { - if typ.Implements(hook.typ) { - t.flags = t.flags.Set(hook.flag) - } - } - - // Deprecated. 
- deprecatedHooks := []struct { - typ reflect.Type - flag internal.Flag - msg string - }{ - {beforeScanHookType, beforeScanHookFlag, "rename BeforeScan hook to BeforeScanRow"}, - {afterScanHookType, afterScanHookFlag, "rename AfterScan hook to AfterScanRow"}, - } - for _, hook := range deprecatedHooks { - if typ.Implements(hook.typ) { - internal.Deprecated.Printf("%s: %s", t.TypeName, hook.msg) - t.flags = t.flags.Set(hook.flag) - } - } - - return t -} - -func (t *Table) init1() { - t.initFields() -} - -func (t *Table) init2() { - t.initRelations() -} - -func (t *Table) setName(name string) { - t.Name = name - t.SQLName = t.quoteIdent(name) - t.SQLNameForSelects = t.quoteIdent(name) - if t.SQLAlias == "" { - t.Alias = name - t.SQLAlias = t.quoteIdent(name) - } -} - -func (t *Table) String() string { - return "model=" + t.TypeName -} - -func (t *Table) CheckPKs() error { - if len(t.PKs) == 0 { - return fmt.Errorf("bun: %s does not have primary keys", t) - } - return nil -} - -func (t *Table) addField(field *Field) { - t.Fields = append(t.Fields, field) - if field.IsPK { - t.PKs = append(t.PKs, field) - } else { - t.DataFields = append(t.DataFields, field) - } - t.FieldMap[field.Name] = field -} - -func (t *Table) removeField(field *Field) { - t.Fields = removeField(t.Fields, field) - if field.IsPK { - t.PKs = removeField(t.PKs, field) - } else { - t.DataFields = removeField(t.DataFields, field) - } - delete(t.FieldMap, field.Name) -} - -func (t *Table) fieldWithLock(name string) *Field { - t.fieldsMapMu.RLock() - field := t.FieldMap[name] - t.fieldsMapMu.RUnlock() - return field -} - -func (t *Table) HasField(name string) bool { - _, ok := t.FieldMap[name] - return ok -} - -func (t *Table) Field(name string) (*Field, error) { - field, ok := t.FieldMap[name] - if !ok { - return nil, fmt.Errorf("bun: %s does not have column=%s", t, name) - } - return field, nil -} - -func (t *Table) fieldByGoName(name string) *Field { - for _, f := range t.allFields { - if f.GoName == 
name { - return f - } - } - return nil -} - -func (t *Table) initFields() { - t.Fields = make([]*Field, 0, t.Type.NumField()) - t.FieldMap = make(map[string]*Field, t.Type.NumField()) - t.addFields(t.Type, "", nil) -} - -func (t *Table) addFields(typ reflect.Type, prefix string, index []int) { - for i := 0; i < typ.NumField(); i++ { - f := typ.Field(i) - unexported := f.PkgPath != "" - - if unexported && !f.Anonymous { // unexported - continue - } - if f.Tag.Get("bun") == "-" { - continue - } - - if f.Anonymous { - if f.Name == "BaseModel" && f.Type == baseModelType { - if len(index) == 0 { - t.processBaseModelField(f) - } - continue - } - - // If field is an embedded struct, add each field of the embedded struct. - fieldType := indirectType(f.Type) - if fieldType.Kind() == reflect.Struct { - t.addFields(fieldType, "", withIndex(index, f.Index)) - - tag := tagparser.Parse(f.Tag.Get("bun")) - if tag.HasOption("inherit") || tag.HasOption("extend") { - embeddedTable := t.dialect.Tables().Ref(fieldType) - t.TypeName = embeddedTable.TypeName - t.SQLName = embeddedTable.SQLName - t.SQLNameForSelects = embeddedTable.SQLNameForSelects - t.Alias = embeddedTable.Alias - t.SQLAlias = embeddedTable.SQLAlias - t.ModelName = embeddedTable.ModelName - } - continue - } - } - - // If field is not a struct, add it. - // This will also add any embedded non-struct type as a field. - if field := t.newField(f, prefix, index); field != nil { - t.addField(field) - } - } -} - -func (t *Table) processBaseModelField(f reflect.StructField) { - tag := tagparser.Parse(f.Tag.Get("bun")) - - if isKnownTableOption(tag.Name) { - internal.Warn.Printf( - "%s.%s tag name %q is also an option name, is it a mistake? 
Try table:%s.", - t.TypeName, f.Name, tag.Name, tag.Name, - ) - } - - for name := range tag.Options { - if !isKnownTableOption(name) { - internal.Warn.Printf("%s.%s has unknown tag option: %q", t.TypeName, f.Name, name) - } - } - - if tag.Name != "" { - t.setName(tag.Name) - } - - if s, ok := tag.Option("table"); ok { - t.setName(s) - } - - if s, ok := tag.Option("select"); ok { - t.SQLNameForSelects = t.quoteTableName(s) - } - - if s, ok := tag.Option("alias"); ok { - t.Alias = s - t.SQLAlias = t.quoteIdent(s) - } -} - -// nolint -func (t *Table) newField(f reflect.StructField, prefix string, index []int) *Field { - tag := tagparser.Parse(f.Tag.Get("bun")) - - if nextPrefix, ok := tag.Option("embed"); ok { - fieldType := indirectType(f.Type) - if fieldType.Kind() != reflect.Struct { - panic(fmt.Errorf("bun: embed %s.%s: got %s, wanted reflect.Struct", - t.TypeName, f.Name, fieldType.Kind())) - } - t.addFields(fieldType, prefix+nextPrefix, withIndex(index, f.Index)) - return nil - } - - sqlName := internal.Underscore(f.Name) - if tag.Name != "" && tag.Name != sqlName { - if isKnownFieldOption(tag.Name) { - internal.Warn.Printf( - "%s.%s tag name %q is also an option name, is it a mistake? 
Try column:%s.", - t.TypeName, f.Name, tag.Name, tag.Name, - ) - } - sqlName = tag.Name - } - if s, ok := tag.Option("column"); ok { - sqlName = s - } - sqlName = prefix + sqlName - - for name := range tag.Options { - if !isKnownFieldOption(name) { - internal.Warn.Printf("%s.%s has unknown tag option: %q", t.TypeName, f.Name, name) - } - } - - index = withIndex(index, f.Index) - if field := t.fieldWithLock(sqlName); field != nil { - if indexEqual(field.Index, index) { - return field - } - t.removeField(field) - } - - field := &Field{ - StructField: f, - IsPtr: f.Type.Kind() == reflect.Ptr, - - Tag: tag, - IndirectType: indirectType(f.Type), - Index: index, - - Name: sqlName, - GoName: f.Name, - SQLName: t.quoteIdent(sqlName), - } - - field.NotNull = tag.HasOption("notnull") - field.NullZero = tag.HasOption("nullzero") - if tag.HasOption("pk") { - field.IsPK = true - field.NotNull = true - } - if tag.HasOption("autoincrement") { - field.AutoIncrement = true - field.NullZero = true - } - if tag.HasOption("identity") { - field.Identity = true - } - - if v, ok := tag.Options["unique"]; ok { - var names []string - if len(v) == 1 { - // Split the value by comma, this will allow multiple names to be specified. - // We can use this to create multiple named unique constraints where a single column - // might be included in multiple constraints. 
- names = strings.Split(v[0], ",") - } else { - names = v - } - - for _, uniqueName := range names { - if t.Unique == nil { - t.Unique = make(map[string][]*Field) - } - t.Unique[uniqueName] = append(t.Unique[uniqueName], field) - } - } - if s, ok := tag.Option("default"); ok { - field.SQLDefault = s - field.NullZero = true - } - if s, ok := field.Tag.Option("type"); ok { - field.UserSQLType = s - } - field.DiscoveredSQLType = DiscoverSQLType(field.IndirectType) - field.Append = FieldAppender(t.dialect, field) - field.Scan = FieldScanner(t.dialect, field) - field.IsZero = zeroChecker(field.StructField.Type) - - if v, ok := tag.Option("alt"); ok { - t.FieldMap[v] = field - } - - t.allFields = append(t.allFields, field) - if tag.HasOption("scanonly") { - t.FieldMap[field.Name] = field - if field.IndirectType.Kind() == reflect.Struct { - t.inlineFields(field, nil) - } - return nil - } - - if _, ok := tag.Options["soft_delete"]; ok { - t.SoftDeleteField = field - t.UpdateSoftDeleteField = softDeleteFieldUpdater(field) - } - - return field -} - -//--------------------------------------------------------------------------------------- - -func (t *Table) initRelations() { - for i := 0; i < len(t.Fields); { - f := t.Fields[i] - if t.tryRelation(f) { - t.Fields = removeField(t.Fields, f) - t.DataFields = removeField(t.DataFields, f) - } else { - i++ - } - - if f.IndirectType.Kind() == reflect.Struct { - t.inlineFields(f, nil) - } - } -} - -func (t *Table) tryRelation(field *Field) bool { - if rel, ok := field.Tag.Option("rel"); ok { - t.initRelation(field, rel) - return true - } - if field.Tag.HasOption("m2m") { - t.addRelation(t.m2mRelation(field)) - return true - } - - if field.Tag.HasOption("join") { - internal.Warn.Printf( - `%s.%s "join" option must come together with "rel" option`, - t.TypeName, field.GoName, - ) - } - - return false -} - -func (t *Table) initRelation(field *Field, rel string) { - switch rel { - case "belongs-to": - 
t.addRelation(t.belongsToRelation(field)) - case "has-one": - t.addRelation(t.hasOneRelation(field)) - case "has-many": - t.addRelation(t.hasManyRelation(field)) - default: - panic(fmt.Errorf("bun: unknown relation=%s on field=%s", rel, field.GoName)) - } -} - -func (t *Table) addRelation(rel *Relation) { - if t.Relations == nil { - t.Relations = make(map[string]*Relation) - } - _, ok := t.Relations[rel.Field.GoName] - if ok { - panic(fmt.Errorf("%s already has %s", t, rel)) - } - t.Relations[rel.Field.GoName] = rel -} - -func (t *Table) belongsToRelation(field *Field) *Relation { - joinTable := t.dialect.Tables().Ref(field.IndirectType) - if err := joinTable.CheckPKs(); err != nil { - panic(err) - } - - rel := &Relation{ - Type: HasOneRelation, - Field: field, - JoinTable: joinTable, - } - - if field.Tag.HasOption("join_on") { - rel.Condition = field.Tag.Options["join_on"] - } - - rel.OnUpdate = "ON UPDATE NO ACTION" - if onUpdate, ok := field.Tag.Options["on_update"]; ok { - if len(onUpdate) > 1 { - panic(fmt.Errorf("bun: %s belongs-to %s: on_update option must be a single field", t.TypeName, field.GoName)) - } - - rule := strings.ToUpper(onUpdate[0]) - if !isKnownFKRule(rule) { - internal.Warn.Printf("bun: %s belongs-to %s: unknown on_update rule %s", t.TypeName, field.GoName, rule) - } - - s := fmt.Sprintf("ON UPDATE %s", rule) - rel.OnUpdate = s - } - - rel.OnDelete = "ON DELETE NO ACTION" - if onDelete, ok := field.Tag.Options["on_delete"]; ok { - if len(onDelete) > 1 { - panic(fmt.Errorf("bun: %s belongs-to %s: on_delete option must be a single field", t.TypeName, field.GoName)) - } - - rule := strings.ToUpper(onDelete[0]) - if !isKnownFKRule(rule) { - internal.Warn.Printf("bun: %s belongs-to %s: unknown on_delete rule %s", t.TypeName, field.GoName, rule) - } - s := fmt.Sprintf("ON DELETE %s", rule) - rel.OnDelete = s - } - - if join, ok := field.Tag.Options["join"]; ok { - baseColumns, joinColumns := parseRelationJoin(join) - for i, baseColumn := range 
baseColumns { - joinColumn := joinColumns[i] - - if f := t.fieldWithLock(baseColumn); f != nil { - rel.BaseFields = append(rel.BaseFields, f) - } else { - panic(fmt.Errorf( - "bun: %s belongs-to %s: %s must have column %s", - t.TypeName, field.GoName, t.TypeName, baseColumn, - )) - } - - if f := joinTable.fieldWithLock(joinColumn); f != nil { - rel.JoinFields = append(rel.JoinFields, f) - } else { - panic(fmt.Errorf( - "bun: %s belongs-to %s: %s must have column %s", - t.TypeName, field.GoName, t.TypeName, baseColumn, - )) - } - } - return rel - } - - rel.JoinFields = joinTable.PKs - fkPrefix := internal.Underscore(field.GoName) + "_" - for _, joinPK := range joinTable.PKs { - fkName := fkPrefix + joinPK.Name - if fk := t.fieldWithLock(fkName); fk != nil { - rel.BaseFields = append(rel.BaseFields, fk) - continue - } - - if fk := t.fieldWithLock(joinPK.Name); fk != nil { - rel.BaseFields = append(rel.BaseFields, fk) - continue - } - - panic(fmt.Errorf( - "bun: %s belongs-to %s: %s must have column %s "+ - "(to override, use join:base_column=join_column tag on %s field)", - t.TypeName, field.GoName, t.TypeName, fkName, field.GoName, - )) - } - return rel -} - -func (t *Table) hasOneRelation(field *Field) *Relation { - if err := t.CheckPKs(); err != nil { - panic(err) - } - - joinTable := t.dialect.Tables().Ref(field.IndirectType) - rel := &Relation{ - Type: BelongsToRelation, - Field: field, - JoinTable: joinTable, - } - - if field.Tag.HasOption("join_on") { - rel.Condition = field.Tag.Options["join_on"] - } - - if join, ok := field.Tag.Options["join"]; ok { - baseColumns, joinColumns := parseRelationJoin(join) - for i, baseColumn := range baseColumns { - if f := t.fieldWithLock(baseColumn); f != nil { - rel.BaseFields = append(rel.BaseFields, f) - } else { - panic(fmt.Errorf( - "bun: %s has-one %s: %s must have column %s", - field.GoName, t.TypeName, joinTable.TypeName, baseColumn, - )) - } - - joinColumn := joinColumns[i] - if f := 
joinTable.fieldWithLock(joinColumn); f != nil { - rel.JoinFields = append(rel.JoinFields, f) - } else { - panic(fmt.Errorf( - "bun: %s has-one %s: %s must have column %s", - field.GoName, t.TypeName, joinTable.TypeName, baseColumn, - )) - } - } - return rel - } - - rel.BaseFields = t.PKs - fkPrefix := internal.Underscore(t.ModelName) + "_" - for _, pk := range t.PKs { - fkName := fkPrefix + pk.Name - if f := joinTable.fieldWithLock(fkName); f != nil { - rel.JoinFields = append(rel.JoinFields, f) - continue - } - - if f := joinTable.fieldWithLock(pk.Name); f != nil { - rel.JoinFields = append(rel.JoinFields, f) - continue - } - - panic(fmt.Errorf( - "bun: %s has-one %s: %s must have column %s "+ - "(to override, use join:base_column=join_column tag on %s field)", - field.GoName, t.TypeName, joinTable.TypeName, fkName, field.GoName, - )) - } - return rel -} - -func (t *Table) hasManyRelation(field *Field) *Relation { - if err := t.CheckPKs(); err != nil { - panic(err) - } - if field.IndirectType.Kind() != reflect.Slice { - panic(fmt.Errorf( - "bun: %s.%s has-many relation requires slice, got %q", - t.TypeName, field.GoName, field.IndirectType.Kind(), - )) - } - - joinTable := t.dialect.Tables().Ref(indirectType(field.IndirectType.Elem())) - polymorphicValue, isPolymorphic := field.Tag.Option("polymorphic") - rel := &Relation{ - Type: HasManyRelation, - Field: field, - JoinTable: joinTable, - } - - if field.Tag.HasOption("join_on") { - rel.Condition = field.Tag.Options["join_on"] - } - - var polymorphicColumn string - - if join, ok := field.Tag.Options["join"]; ok { - baseColumns, joinColumns := parseRelationJoin(join) - for i, baseColumn := range baseColumns { - joinColumn := joinColumns[i] - - if isPolymorphic && baseColumn == "type" { - polymorphicColumn = joinColumn - continue - } - - if f := t.fieldWithLock(baseColumn); f != nil { - rel.BaseFields = append(rel.BaseFields, f) - } else { - panic(fmt.Errorf( - "bun: %s has-many %s: %s must have column %s", - 
t.TypeName, field.GoName, t.TypeName, baseColumn, - )) - } - - if f := joinTable.fieldWithLock(joinColumn); f != nil { - rel.JoinFields = append(rel.JoinFields, f) - } else { - panic(fmt.Errorf( - "bun: %s has-many %s: %s must have column %s", - t.TypeName, field.GoName, t.TypeName, baseColumn, - )) - } - } - } else { - rel.BaseFields = t.PKs - fkPrefix := internal.Underscore(t.ModelName) + "_" - if isPolymorphic { - polymorphicColumn = fkPrefix + "type" - } - - for _, pk := range t.PKs { - joinColumn := fkPrefix + pk.Name - if fk := joinTable.fieldWithLock(joinColumn); fk != nil { - rel.JoinFields = append(rel.JoinFields, fk) - continue - } - - if fk := joinTable.fieldWithLock(pk.Name); fk != nil { - rel.JoinFields = append(rel.JoinFields, fk) - continue - } - - panic(fmt.Errorf( - "bun: %s has-many %s: %s must have column %s "+ - "(to override, use join:base_column=join_column tag on the field %s)", - t.TypeName, field.GoName, joinTable.TypeName, joinColumn, field.GoName, - )) - } - } - - if isPolymorphic { - rel.PolymorphicField = joinTable.fieldWithLock(polymorphicColumn) - if rel.PolymorphicField == nil { - panic(fmt.Errorf( - "bun: %s has-many %s: %s must have polymorphic column %s", - t.TypeName, field.GoName, joinTable.TypeName, polymorphicColumn, - )) - } - - if polymorphicValue == "" { - polymorphicValue = t.ModelName - } - rel.PolymorphicValue = polymorphicValue - } - - return rel -} - -func (t *Table) m2mRelation(field *Field) *Relation { - if field.IndirectType.Kind() != reflect.Slice { - panic(fmt.Errorf( - "bun: %s.%s m2m relation requires slice, got %q", - t.TypeName, field.GoName, field.IndirectType.Kind(), - )) - } - joinTable := t.dialect.Tables().Ref(indirectType(field.IndirectType.Elem())) - - if err := t.CheckPKs(); err != nil { - panic(err) - } - if err := joinTable.CheckPKs(); err != nil { - panic(err) - } - - m2mTableName, ok := field.Tag.Option("m2m") - if !ok { - panic(fmt.Errorf("bun: %s must have m2m tag option", field.GoName)) - } - - 
m2mTable := t.dialect.Tables().ByName(m2mTableName) - if m2mTable == nil { - panic(fmt.Errorf( - "bun: can't find m2m %s table (use db.RegisterModel)", - m2mTableName, - )) - } - - rel := &Relation{ - Type: ManyToManyRelation, - Field: field, - JoinTable: joinTable, - M2MTable: m2mTable, - } - - if field.Tag.HasOption("join_on") { - rel.Condition = field.Tag.Options["join_on"] - } - - var leftColumn, rightColumn string - - if join, ok := field.Tag.Options["join"]; ok { - left, right := parseRelationJoin(join) - leftColumn = left[0] - rightColumn = right[0] - } else { - leftColumn = t.TypeName - rightColumn = joinTable.TypeName - } - - leftField := m2mTable.fieldByGoName(leftColumn) - if leftField == nil { - panic(fmt.Errorf( - "bun: %s many-to-many %s: %s must have field %s "+ - "(to override, use tag join:LeftField=RightField on field %s.%s", - t.TypeName, field.GoName, m2mTable.TypeName, leftColumn, t.TypeName, field.GoName, - )) - } - - rightField := m2mTable.fieldByGoName(rightColumn) - if rightField == nil { - panic(fmt.Errorf( - "bun: %s many-to-many %s: %s must have field %s "+ - "(to override, use tag join:LeftField=RightField on field %s.%s", - t.TypeName, field.GoName, m2mTable.TypeName, rightColumn, t.TypeName, field.GoName, - )) - } - - leftRel := m2mTable.belongsToRelation(leftField) - rel.BaseFields = leftRel.JoinFields - rel.M2MBaseFields = leftRel.BaseFields - - rightRel := m2mTable.belongsToRelation(rightField) - rel.JoinFields = rightRel.JoinFields - rel.M2MJoinFields = rightRel.BaseFields - - return rel -} - -func (t *Table) inlineFields(field *Field, seen map[reflect.Type]struct{}) { - if seen == nil { - seen = map[reflect.Type]struct{}{t.Type: {}} - } - - if _, ok := seen[field.IndirectType]; ok { - return - } - seen[field.IndirectType] = struct{}{} - - joinTable := t.dialect.Tables().Ref(field.IndirectType) - for _, f := range joinTable.allFields { - f = f.Clone() - f.GoName = field.GoName + "_" + f.GoName - f.Name = field.Name + "__" + f.Name 
- f.SQLName = t.quoteIdent(f.Name) - f.Index = withIndex(field.Index, f.Index) - - t.fieldsMapMu.Lock() - if _, ok := t.FieldMap[f.Name]; !ok { - t.FieldMap[f.Name] = f - } - t.fieldsMapMu.Unlock() - - if f.IndirectType.Kind() != reflect.Struct { - continue - } - - if _, ok := seen[f.IndirectType]; !ok { - t.inlineFields(f, seen) - } - } -} - -//------------------------------------------------------------------------------ - -func (t *Table) Dialect() Dialect { return t.dialect } - -func (t *Table) HasBeforeAppendModelHook() bool { return t.flags.Has(beforeAppendModelHookFlag) } - -// DEPRECATED. Use HasBeforeScanRowHook. -func (t *Table) HasBeforeScanHook() bool { return t.flags.Has(beforeScanHookFlag) } - -// DEPRECATED. Use HasAfterScanRowHook. -func (t *Table) HasAfterScanHook() bool { return t.flags.Has(afterScanHookFlag) } - -func (t *Table) HasBeforeScanRowHook() bool { return t.flags.Has(beforeScanRowHookFlag) } -func (t *Table) HasAfterScanRowHook() bool { return t.flags.Has(afterScanRowHookFlag) } - -//------------------------------------------------------------------------------ - -func (t *Table) AppendNamedArg( - fmter Formatter, b []byte, name string, strct reflect.Value, -) ([]byte, bool) { - if field, ok := t.FieldMap[name]; ok { - return field.AppendValue(fmter, b, strct), true - } - return b, false -} - -func (t *Table) quoteTableName(s string) Safe { - // Don't quote if table name contains placeholder (?) or parentheses. 
- if strings.IndexByte(s, '?') >= 0 || - strings.IndexByte(s, '(') >= 0 || - strings.IndexByte(s, ')') >= 0 { - return Safe(s) - } - return t.quoteIdent(s) -} - -func (t *Table) quoteIdent(s string) Safe { - return Safe(NewFormatter(t.dialect).AppendIdent(nil, s)) -} - -func isKnownTableOption(name string) bool { - switch name { - case "table", "alias", "select": - return true - } - return false -} - -func isKnownFieldOption(name string) bool { - switch name { - case "column", - "alias", - "type", - "array", - "hstore", - "composite", - "json_use_number", - "msgpack", - "notnull", - "nullzero", - "default", - "unique", - "soft_delete", - "scanonly", - "skipupdate", - - "pk", - "autoincrement", - "rel", - "join", - "join_on", - "on_update", - "on_delete", - "m2m", - "polymorphic", - "identity": - return true - } - return false -} - -func isKnownFKRule(name string) bool { - switch name { - case "CASCADE", - "RESTRICT", - "SET NULL", - "SET DEFAULT": - return true - } - return false -} - -func removeField(fields []*Field, field *Field) []*Field { - for i, f := range fields { - if f == field { - return append(fields[:i], fields[i+1:]...) 
- } - } - return fields -} - -func parseRelationJoin(join []string) ([]string, []string) { - var ss []string - if len(join) == 1 { - ss = strings.Split(join[0], ",") - } else { - ss = join - } - - baseColumns := make([]string, len(ss)) - joinColumns := make([]string, len(ss)) - for i, s := range ss { - ss := strings.Split(strings.TrimSpace(s), "=") - if len(ss) != 2 { - panic(fmt.Errorf("can't parse relation join: %q", join)) - } - baseColumns[i] = ss[0] - joinColumns[i] = ss[1] - } - return baseColumns, joinColumns -} - -//------------------------------------------------------------------------------ - -func softDeleteFieldUpdater(field *Field) func(fv reflect.Value, tm time.Time) error { - typ := field.StructField.Type - - switch typ { - case timeType: - return func(fv reflect.Value, tm time.Time) error { - ptr := fv.Addr().Interface().(*time.Time) - *ptr = tm - return nil - } - case nullTimeType: - return func(fv reflect.Value, tm time.Time) error { - ptr := fv.Addr().Interface().(*sql.NullTime) - *ptr = sql.NullTime{Time: tm} - return nil - } - case nullIntType: - return func(fv reflect.Value, tm time.Time) error { - ptr := fv.Addr().Interface().(*sql.NullInt64) - *ptr = sql.NullInt64{Int64: tm.UnixNano()} - return nil - } - } - - switch field.IndirectType.Kind() { - case reflect.Int64: - return func(fv reflect.Value, tm time.Time) error { - ptr := fv.Addr().Interface().(*int64) - *ptr = tm.UnixNano() - return nil - } - case reflect.Ptr: - typ = typ.Elem() - default: - return softDeleteFieldUpdaterFallback(field) - } - - switch typ { //nolint:gocritic - case timeType: - return func(fv reflect.Value, tm time.Time) error { - fv.Set(reflect.ValueOf(&tm)) - return nil - } - } - - switch typ.Kind() { //nolint:gocritic - case reflect.Int64: - return func(fv reflect.Value, tm time.Time) error { - utime := tm.UnixNano() - fv.Set(reflect.ValueOf(&utime)) - return nil - } - } - - return softDeleteFieldUpdaterFallback(field) -} - -func softDeleteFieldUpdaterFallback(field 
*Field) func(fv reflect.Value, tm time.Time) error { - return func(fv reflect.Value, tm time.Time) error { - return field.ScanWithCheck(fv, tm) - } -} - -func withIndex(a, b []int) []int { - dest := make([]int, 0, len(a)+len(b)) - dest = append(dest, a...) - dest = append(dest, b...) - return dest -} diff --git a/vendor/github.com/uptrace/bun/schema/tables.go b/vendor/github.com/uptrace/bun/schema/tables.go deleted file mode 100644 index b6215a14..00000000 --- a/vendor/github.com/uptrace/bun/schema/tables.go +++ /dev/null @@ -1,151 +0,0 @@ -package schema - -import ( - "fmt" - "reflect" - "sync" -) - -type tableInProgress struct { - table *Table - - init1Once sync.Once - init2Once sync.Once -} - -func newTableInProgress(table *Table) *tableInProgress { - return &tableInProgress{ - table: table, - } -} - -func (inp *tableInProgress) init1() bool { - var inited bool - inp.init1Once.Do(func() { - inp.table.init1() - inited = true - }) - return inited -} - -func (inp *tableInProgress) init2() bool { - var inited bool - inp.init2Once.Do(func() { - inp.table.init2() - inited = true - }) - return inited -} - -type Tables struct { - dialect Dialect - tables sync.Map - - mu sync.RWMutex - inProgress map[reflect.Type]*tableInProgress -} - -func NewTables(dialect Dialect) *Tables { - return &Tables{ - dialect: dialect, - inProgress: make(map[reflect.Type]*tableInProgress), - } -} - -func (t *Tables) Register(models ...interface{}) { - for _, model := range models { - _ = t.Get(reflect.TypeOf(model).Elem()) - } -} - -func (t *Tables) Get(typ reflect.Type) *Table { - return t.table(typ, false) -} - -func (t *Tables) Ref(typ reflect.Type) *Table { - return t.table(typ, true) -} - -func (t *Tables) table(typ reflect.Type, allowInProgress bool) *Table { - typ = indirectType(typ) - if typ.Kind() != reflect.Struct { - panic(fmt.Errorf("got %s, wanted %s", typ.Kind(), reflect.Struct)) - } - - if v, ok := t.tables.Load(typ); ok { - return v.(*Table) - } - - t.mu.Lock() - - if v, ok := 
t.tables.Load(typ); ok { - t.mu.Unlock() - return v.(*Table) - } - - var table *Table - - inProgress := t.inProgress[typ] - if inProgress == nil { - table = newTable(t.dialect, typ) - inProgress = newTableInProgress(table) - t.inProgress[typ] = inProgress - } else { - table = inProgress.table - } - - t.mu.Unlock() - - inProgress.init1() - if allowInProgress { - return table - } - - if !inProgress.init2() { - return table - } - - t.mu.Lock() - delete(t.inProgress, typ) - t.tables.Store(typ, table) - t.mu.Unlock() - - t.dialect.OnTable(table) - - for _, field := range table.FieldMap { - if field.UserSQLType == "" { - field.UserSQLType = field.DiscoveredSQLType - } - if field.CreateTableSQLType == "" { - field.CreateTableSQLType = field.UserSQLType - } - } - - return table -} - -func (t *Tables) ByModel(name string) *Table { - var found *Table - t.tables.Range(func(key, value interface{}) bool { - t := value.(*Table) - if t.TypeName == name { - found = t - return false - } - return true - }) - return found -} - -func (t *Tables) ByName(name string) *Table { - var found *Table - t.tables.Range(func(key, value interface{}) bool { - t := value.(*Table) - if t.Name == name { - found = t - return false - } - return true - }) - return found -} diff --git a/vendor/github.com/uptrace/bun/schema/zerochecker.go b/vendor/github.com/uptrace/bun/schema/zerochecker.go deleted file mode 100644 index f088b8c2..00000000 --- a/vendor/github.com/uptrace/bun/schema/zerochecker.go +++ /dev/null @@ -1,122 +0,0 @@ -package schema - -import ( - "database/sql/driver" - "reflect" -) - -var isZeroerType = reflect.TypeOf((*isZeroer)(nil)).Elem() - -type isZeroer interface { - IsZero() bool -} - -type IsZeroerFunc func(reflect.Value) bool - -func zeroChecker(typ reflect.Type) IsZeroerFunc { - if typ.Implements(isZeroerType) { - return isZeroInterface - } - - kind := typ.Kind() - - if kind != reflect.Ptr { - ptr := reflect.PtrTo(typ) - if ptr.Implements(isZeroerType) { - return 
addrChecker(isZeroInterface) - } - } - - switch kind { - case reflect.Array: - if typ.Elem().Kind() == reflect.Uint8 { - return isZeroBytes - } - return isZeroLen - case reflect.String: - return isZeroLen - case reflect.Bool: - return isZeroBool - case reflect.Int, reflect.Int8, reflect.Int16, reflect.Int32, reflect.Int64: - return isZeroInt - case reflect.Uint, reflect.Uint8, reflect.Uint16, reflect.Uint32, reflect.Uint64, reflect.Uintptr: - return isZeroUint - case reflect.Float32, reflect.Float64: - return isZeroFloat - case reflect.Interface, reflect.Ptr, reflect.Slice, reflect.Map: - return isNil - } - - if typ.Implements(driverValuerType) { - return isZeroDriverValue - } - - return notZero -} - -func addrChecker(fn IsZeroerFunc) IsZeroerFunc { - return func(v reflect.Value) bool { - if !v.CanAddr() { - return false - } - return fn(v.Addr()) - } -} - -func isZeroInterface(v reflect.Value) bool { - if v.Kind() == reflect.Ptr && v.IsNil() { - return true - } - return v.Interface().(isZeroer).IsZero() -} - -func isZeroDriverValue(v reflect.Value) bool { - if v.Kind() == reflect.Ptr { - return v.IsNil() - } - - valuer := v.Interface().(driver.Valuer) - value, err := valuer.Value() - if err != nil { - return false - } - return value == nil -} - -func isZeroLen(v reflect.Value) bool { - return v.Len() == 0 -} - -func isNil(v reflect.Value) bool { - return v.IsNil() -} - -func isZeroBool(v reflect.Value) bool { - return !v.Bool() -} - -func isZeroInt(v reflect.Value) bool { - return v.Int() == 0 -} - -func isZeroUint(v reflect.Value) bool { - return v.Uint() == 0 -} - -func isZeroFloat(v reflect.Value) bool { - return v.Float() == 0 -} - -func isZeroBytes(v reflect.Value) bool { - b := v.Slice(0, v.Len()).Bytes() - for _, c := range b { - if c != 0 { - return false - } - } - return true -} - -func notZero(v reflect.Value) bool { - return false -} diff --git a/vendor/github.com/uptrace/bun/util.go b/vendor/github.com/uptrace/bun/util.go deleted file mode 100644 index 
09ffbb99..00000000 --- a/vendor/github.com/uptrace/bun/util.go +++ /dev/null @@ -1,68 +0,0 @@ -package bun - -import "reflect" - -func indirect(v reflect.Value) reflect.Value { - switch v.Kind() { - case reflect.Interface: - return indirect(v.Elem()) - case reflect.Ptr: - return v.Elem() - default: - return v - } -} - -func walk(v reflect.Value, index []int, fn func(reflect.Value)) { - v = reflect.Indirect(v) - switch v.Kind() { - case reflect.Slice: - sliceLen := v.Len() - for i := 0; i < sliceLen; i++ { - visitField(v.Index(i), index, fn) - } - default: - visitField(v, index, fn) - } -} - -func visitField(v reflect.Value, index []int, fn func(reflect.Value)) { - v = reflect.Indirect(v) - if len(index) > 0 { - v = v.Field(index[0]) - if v.Kind() == reflect.Ptr && v.IsNil() { - return - } - walk(v, index[1:], fn) - } else { - fn(v) - } -} - -func typeByIndex(t reflect.Type, index []int) reflect.Type { - for _, x := range index { - switch t.Kind() { - case reflect.Ptr: - t = t.Elem() - case reflect.Slice: - t = indirectType(t.Elem()) - } - t = t.Field(x).Type - } - return indirectType(t) -} - -func indirectType(t reflect.Type) reflect.Type { - if t.Kind() == reflect.Ptr { - t = t.Elem() - } - return t -} - -func sliceElemType(v reflect.Value) reflect.Type { - elemType := v.Type().Elem() - if elemType.Kind() == reflect.Interface && v.Len() > 0 { - return indirect(v.Index(0).Elem()).Type() - } - return indirectType(elemType) -} diff --git a/vendor/github.com/uptrace/bun/version.go b/vendor/github.com/uptrace/bun/version.go deleted file mode 100644 index daa7f929..00000000 --- a/vendor/github.com/uptrace/bun/version.go +++ /dev/null @@ -1,6 +0,0 @@ -package bun - -// Version is the current release version. 
-func Version() string { - return "1.1.12" -} diff --git a/vendor/github.com/vmihailenco/msgpack/v5/.prettierrc b/vendor/github.com/vmihailenco/msgpack/v5/.prettierrc deleted file mode 100644 index 8b7f044a..00000000 --- a/vendor/github.com/vmihailenco/msgpack/v5/.prettierrc +++ /dev/null @@ -1,4 +0,0 @@ -semi: false -singleQuote: true -proseWrap: always -printWidth: 100 diff --git a/vendor/github.com/vmihailenco/msgpack/v5/.travis.yml b/vendor/github.com/vmihailenco/msgpack/v5/.travis.yml deleted file mode 100644 index e2ce06c4..00000000 --- a/vendor/github.com/vmihailenco/msgpack/v5/.travis.yml +++ /dev/null @@ -1,20 +0,0 @@ -sudo: false -language: go - -go: - - 1.15.x - - 1.16.x - - tip - -matrix: - allow_failures: - - go: tip - -env: - - GO111MODULE=on - -go_import_path: github.com/vmihailenco/msgpack - -before_install: - - curl -sfL https://install.goreleaser.com/github.com/golangci/golangci-lint.sh | sh -s -- -b $(go - env GOPATH)/bin v1.31.0 diff --git a/vendor/github.com/vmihailenco/msgpack/v5/CHANGELOG.md b/vendor/github.com/vmihailenco/msgpack/v5/CHANGELOG.md deleted file mode 100644 index f6b19d5b..00000000 --- a/vendor/github.com/vmihailenco/msgpack/v5/CHANGELOG.md +++ /dev/null @@ -1,51 +0,0 @@ -## [5.3.5](https://github.com/vmihailenco/msgpack/compare/v5.3.4...v5.3.5) (2021-10-22) - - - -## v5 - -### Added - -- `DecodeMap` is split into `DecodeMap`, `DecodeTypedMap`, and `DecodeUntypedMap`. -- New msgpack extensions API. - -### Changed - -- `Reset*` functions also reset flags. -- `SetMapDecodeFunc` is renamed to `SetMapDecoder`. -- `StructAsArray` is renamed to `UseArrayEncodedStructs`. -- `SortMapKeys` is renamed to `SetSortMapKeys`. - -### Removed - -- `UseJSONTag` is removed. Use `SetCustomStructTag("json")` instead. - -## v4 - -- Encode, Decode, Marshal, and Unmarshal are changed to accept single argument. EncodeMulti and - DecodeMulti are added as replacement. -- Added EncodeInt8/16/32/64 and EncodeUint8/16/32/64. 
-- Encoder changed to preserve type of numbers instead of chosing most compact encoding. The old - behavior can be achieved with Encoder.UseCompactEncoding. - -## v3.3 - -- `msgpack:",inline"` tag is restored to force inlining structs. - -## v3.2 - -- Decoding extension types returns pointer to the value instead of the value. Fixes #153 - -## v3 - -- gopkg.in is not supported any more. Update import path to github.com/vmihailenco/msgpack. -- Msgpack maps are decoded into map[string]interface{} by default. -- EncodeSliceLen is removed in favor of EncodeArrayLen. DecodeSliceLen is removed in favor of - DecodeArrayLen. -- Embedded structs are automatically inlined where possible. -- Time is encoded using extension as described in https://github.com/msgpack/msgpack/pull/209. Old - format is supported as well. -- EncodeInt8/16/32/64 is replaced with EncodeInt. EncodeUint8/16/32/64 is replaced with EncodeUint. - There should be no performance differences. -- DecodeInterface can now return int8/16/32 and uint8/16/32. -- PeekCode returns codes.Code instead of byte. diff --git a/vendor/github.com/vmihailenco/msgpack/v5/LICENSE b/vendor/github.com/vmihailenco/msgpack/v5/LICENSE deleted file mode 100644 index b749d070..00000000 --- a/vendor/github.com/vmihailenco/msgpack/v5/LICENSE +++ /dev/null @@ -1,25 +0,0 @@ -Copyright (c) 2013 The github.com/vmihailenco/msgpack Authors. -All rights reserved. - -Redistribution and use in source and binary forms, with or without -modification, are permitted provided that the following conditions are -met: - - * Redistributions of source code must retain the above copyright -notice, this list of conditions and the following disclaimer. - * Redistributions in binary form must reproduce the above -copyright notice, this list of conditions and the following disclaimer -in the documentation and/or other materials provided with the -distribution. 
- -THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS -"AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT -LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR -A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT -OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, -SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT -LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, -DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY -THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT -(INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE -OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. diff --git a/vendor/github.com/vmihailenco/msgpack/v5/Makefile b/vendor/github.com/vmihailenco/msgpack/v5/Makefile deleted file mode 100644 index e9aade78..00000000 --- a/vendor/github.com/vmihailenco/msgpack/v5/Makefile +++ /dev/null @@ -1,6 +0,0 @@ -test: - go test ./... - go test ./... -short -race - go test ./... -run=NONE -bench=. -benchmem - env GOOS=linux GOARCH=386 go test ./... 
- go vet diff --git a/vendor/github.com/vmihailenco/msgpack/v5/README.md b/vendor/github.com/vmihailenco/msgpack/v5/README.md deleted file mode 100644 index 66ad98b9..00000000 --- a/vendor/github.com/vmihailenco/msgpack/v5/README.md +++ /dev/null @@ -1,86 +0,0 @@ -# MessagePack encoding for Golang - -[![Build Status](https://travis-ci.org/vmihailenco/msgpack.svg)](https://travis-ci.org/vmihailenco/msgpack) -[![PkgGoDev](https://pkg.go.dev/badge/github.com/vmihailenco/msgpack/v5)](https://pkg.go.dev/github.com/vmihailenco/msgpack/v5) -[![Documentation](https://img.shields.io/badge/msgpack-documentation-informational)](https://msgpack.uptrace.dev/) -[![Chat](https://discordapp.com/api/guilds/752070105847955518/widget.png)](https://discord.gg/rWtp5Aj) - -> :heart: -> [**Uptrace.dev** - All-in-one tool to optimize performance and monitor errors & logs](https://uptrace.dev/?utm_source=gh-msgpack&utm_campaign=gh-msgpack-var2) - -- Join [Discord](https://discord.gg/rWtp5Aj) to ask questions. -- [Documentation](https://msgpack.uptrace.dev) -- [Reference](https://pkg.go.dev/github.com/vmihailenco/msgpack/v5) -- [Examples](https://pkg.go.dev/github.com/vmihailenco/msgpack/v5#pkg-examples) - -Other projects you may like: - -- [Bun](https://bun.uptrace.dev) - fast and simple SQL client for PostgreSQL, MySQL, and SQLite. -- [BunRouter](https://bunrouter.uptrace.dev/) - fast and flexible HTTP router for Go. - -## Features - -- Primitives, arrays, maps, structs, time.Time and interface{}. -- Appengine \*datastore.Key and datastore.Cursor. -- [CustomEncoder]/[CustomDecoder] interfaces for custom encoding. -- [Extensions](https://pkg.go.dev/github.com/vmihailenco/msgpack/v5#example-RegisterExt) to encode - type information. -- Renaming fields via `msgpack:"my_field_name"` and alias via `msgpack:"alias:another_name"`. 
-- Omitting individual empty fields via `msgpack:",omitempty"` tag or all - [empty fields in a struct](https://pkg.go.dev/github.com/vmihailenco/msgpack/v5#example-Marshal-OmitEmpty). -- [Map keys sorting](https://pkg.go.dev/github.com/vmihailenco/msgpack/v5#Encoder.SetSortMapKeys). -- Encoding/decoding all - [structs as arrays](https://pkg.go.dev/github.com/vmihailenco/msgpack/v5#Encoder.UseArrayEncodedStructs) - or - [individual structs](https://pkg.go.dev/github.com/vmihailenco/msgpack/v5#example-Marshal-AsArray). -- [Encoder.SetCustomStructTag] with [Decoder.SetCustomStructTag] can turn msgpack into drop-in - replacement for any tag. -- Simple but very fast and efficient - [queries](https://pkg.go.dev/github.com/vmihailenco/msgpack/v5#example-Decoder.Query). - -[customencoder]: https://pkg.go.dev/github.com/vmihailenco/msgpack/v5#CustomEncoder -[customdecoder]: https://pkg.go.dev/github.com/vmihailenco/msgpack/v5#CustomDecoder -[encoder.setcustomstructtag]: - https://pkg.go.dev/github.com/vmihailenco/msgpack/v5#Encoder.SetCustomStructTag -[decoder.setcustomstructtag]: - https://pkg.go.dev/github.com/vmihailenco/msgpack/v5#Decoder.SetCustomStructTag - -## Installation - -msgpack supports 2 last Go versions and requires support for -[Go modules](https://github.com/golang/go/wiki/Modules). 
So make sure to initialize a Go module: - -```shell -go mod init github.com/my/repo -``` - -And then install msgpack/v5 (note _v5_ in the import; omitting it is a popular mistake): - -```shell -go get github.com/vmihailenco/msgpack/v5 -``` - -## Quickstart - -```go -import "github.com/vmihailenco/msgpack/v5" - -func ExampleMarshal() { - type Item struct { - Foo string - } - - b, err := msgpack.Marshal(&Item{Foo: "bar"}) - if err != nil { - panic(err) - } - - var item Item - err = msgpack.Unmarshal(b, &item) - if err != nil { - panic(err) - } - fmt.Println(item.Foo) - // Output: bar -} -``` diff --git a/vendor/github.com/vmihailenco/msgpack/v5/commitlint.config.js b/vendor/github.com/vmihailenco/msgpack/v5/commitlint.config.js deleted file mode 100644 index 4fedde6d..00000000 --- a/vendor/github.com/vmihailenco/msgpack/v5/commitlint.config.js +++ /dev/null @@ -1 +0,0 @@ -module.exports = { extends: ['@commitlint/config-conventional'] } diff --git a/vendor/github.com/vmihailenco/msgpack/v5/decode.go b/vendor/github.com/vmihailenco/msgpack/v5/decode.go deleted file mode 100644 index 5df40e5d..00000000 --- a/vendor/github.com/vmihailenco/msgpack/v5/decode.go +++ /dev/null @@ -1,663 +0,0 @@ -package msgpack - -import ( - "bufio" - "bytes" - "errors" - "fmt" - "io" - "reflect" - "sync" - "time" - - "github.com/vmihailenco/msgpack/v5/msgpcode" -) - -const ( - looseInterfaceDecodingFlag uint32 = 1 << iota - disallowUnknownFieldsFlag -) - -const ( - bytesAllocLimit = 1e6 // 1mb - sliceAllocLimit = 1e4 - maxMapSize = 1e6 -) - -type bufReader interface { - io.Reader - io.ByteScanner -} - -//------------------------------------------------------------------------------ - -var decPool = sync.Pool{ - New: func() interface{} { - return NewDecoder(nil) - }, -} - -func GetDecoder() *Decoder { - return decPool.Get().(*Decoder) -} - -func PutDecoder(dec *Decoder) { - dec.r = nil - dec.s = nil - decPool.Put(dec) -} - 
-//------------------------------------------------------------------------------ - -// Unmarshal decodes the MessagePack-encoded data and stores the result -// in the value pointed to by v. -func Unmarshal(data []byte, v interface{}) error { - dec := GetDecoder() - - dec.Reset(bytes.NewReader(data)) - err := dec.Decode(v) - - PutDecoder(dec) - - return err -} - -// A Decoder reads and decodes MessagePack values from an input stream. -type Decoder struct { - r io.Reader - s io.ByteScanner - buf []byte - - rec []byte // accumulates read data if not nil - - dict []string - flags uint32 - structTag string - mapDecoder func(*Decoder) (interface{}, error) -} - -// NewDecoder returns a new decoder that reads from r. -// -// The decoder introduces its own buffering and may read data from r -// beyond the requested msgpack values. Buffering can be disabled -// by passing a reader that implements io.ByteScanner interface. -func NewDecoder(r io.Reader) *Decoder { - d := new(Decoder) - d.Reset(r) - return d -} - -// Reset discards any buffered data, resets all state, and switches the buffered -// reader to read from r. -func (d *Decoder) Reset(r io.Reader) { - d.ResetDict(r, nil) -} - -// ResetDict is like Reset, but also resets the dict. -func (d *Decoder) ResetDict(r io.Reader, dict []string) { - d.resetReader(r) - d.flags = 0 - d.structTag = "" - d.mapDecoder = nil - d.dict = dict -} - -func (d *Decoder) WithDict(dict []string, fn func(*Decoder) error) error { - oldDict := d.dict - d.dict = dict - err := fn(d) - d.dict = oldDict - return err -} - -func (d *Decoder) resetReader(r io.Reader) { - if br, ok := r.(bufReader); ok { - d.r = br - d.s = br - } else { - br := bufio.NewReader(r) - d.r = br - d.s = br - } -} - -func (d *Decoder) SetMapDecoder(fn func(*Decoder) (interface{}, error)) { - d.mapDecoder = fn -} - -// UseLooseInterfaceDecoding causes decoder to use DecodeInterfaceLoose -// to decode msgpack value into Go interface{}. 
-func (d *Decoder) UseLooseInterfaceDecoding(on bool) { - if on { - d.flags |= looseInterfaceDecodingFlag - } else { - d.flags &= ^looseInterfaceDecodingFlag - } -} - -// SetCustomStructTag causes the decoder to use the supplied tag as a fallback option -// if there is no msgpack tag. -func (d *Decoder) SetCustomStructTag(tag string) { - d.structTag = tag -} - -// DisallowUnknownFields causes the Decoder to return an error when the destination -// is a struct and the input contains object keys which do not match any -// non-ignored, exported fields in the destination. -func (d *Decoder) DisallowUnknownFields(on bool) { - if on { - d.flags |= disallowUnknownFieldsFlag - } else { - d.flags &= ^disallowUnknownFieldsFlag - } -} - -// UseInternedStrings enables support for decoding interned strings. -func (d *Decoder) UseInternedStrings(on bool) { - if on { - d.flags |= useInternedStringsFlag - } else { - d.flags &= ^useInternedStringsFlag - } -} - -// Buffered returns a reader of the data remaining in the Decoder's buffer. -// The reader is valid until the next call to Decode. 
-func (d *Decoder) Buffered() io.Reader { - return d.r -} - -//nolint:gocyclo -func (d *Decoder) Decode(v interface{}) error { - var err error - switch v := v.(type) { - case *string: - if v != nil { - *v, err = d.DecodeString() - return err - } - case *[]byte: - if v != nil { - return d.decodeBytesPtr(v) - } - case *int: - if v != nil { - *v, err = d.DecodeInt() - return err - } - case *int8: - if v != nil { - *v, err = d.DecodeInt8() - return err - } - case *int16: - if v != nil { - *v, err = d.DecodeInt16() - return err - } - case *int32: - if v != nil { - *v, err = d.DecodeInt32() - return err - } - case *int64: - if v != nil { - *v, err = d.DecodeInt64() - return err - } - case *uint: - if v != nil { - *v, err = d.DecodeUint() - return err - } - case *uint8: - if v != nil { - *v, err = d.DecodeUint8() - return err - } - case *uint16: - if v != nil { - *v, err = d.DecodeUint16() - return err - } - case *uint32: - if v != nil { - *v, err = d.DecodeUint32() - return err - } - case *uint64: - if v != nil { - *v, err = d.DecodeUint64() - return err - } - case *bool: - if v != nil { - *v, err = d.DecodeBool() - return err - } - case *float32: - if v != nil { - *v, err = d.DecodeFloat32() - return err - } - case *float64: - if v != nil { - *v, err = d.DecodeFloat64() - return err - } - case *[]string: - return d.decodeStringSlicePtr(v) - case *map[string]string: - return d.decodeMapStringStringPtr(v) - case *map[string]interface{}: - return d.decodeMapStringInterfacePtr(v) - case *time.Duration: - if v != nil { - vv, err := d.DecodeInt64() - *v = time.Duration(vv) - return err - } - case *time.Time: - if v != nil { - *v, err = d.DecodeTime() - return err - } - } - - vv := reflect.ValueOf(v) - if !vv.IsValid() { - return errors.New("msgpack: Decode(nil)") - } - if vv.Kind() != reflect.Ptr { - return fmt.Errorf("msgpack: Decode(non-pointer %T)", v) - } - if vv.IsNil() { - return fmt.Errorf("msgpack: Decode(non-settable %T)", v) - } - - vv = vv.Elem() - if vv.Kind() == 
reflect.Interface { - if !vv.IsNil() { - vv = vv.Elem() - if vv.Kind() != reflect.Ptr { - return fmt.Errorf("msgpack: Decode(non-pointer %s)", vv.Type().String()) - } - } - } - - return d.DecodeValue(vv) -} - -func (d *Decoder) DecodeMulti(v ...interface{}) error { - for _, vv := range v { - if err := d.Decode(vv); err != nil { - return err - } - } - return nil -} - -func (d *Decoder) decodeInterfaceCond() (interface{}, error) { - if d.flags&looseInterfaceDecodingFlag != 0 { - return d.DecodeInterfaceLoose() - } - return d.DecodeInterface() -} - -func (d *Decoder) DecodeValue(v reflect.Value) error { - decode := getDecoder(v.Type()) - return decode(d, v) -} - -func (d *Decoder) DecodeNil() error { - c, err := d.readCode() - if err != nil { - return err - } - if c != msgpcode.Nil { - return fmt.Errorf("msgpack: invalid code=%x decoding nil", c) - } - return nil -} - -func (d *Decoder) decodeNilValue(v reflect.Value) error { - err := d.DecodeNil() - if v.IsNil() { - return err - } - if v.Kind() == reflect.Ptr { - v = v.Elem() - } - v.Set(reflect.Zero(v.Type())) - return err -} - -func (d *Decoder) DecodeBool() (bool, error) { - c, err := d.readCode() - if err != nil { - return false, err - } - return d.bool(c) -} - -func (d *Decoder) bool(c byte) (bool, error) { - if c == msgpcode.Nil { - return false, nil - } - if c == msgpcode.False { - return false, nil - } - if c == msgpcode.True { - return true, nil - } - return false, fmt.Errorf("msgpack: invalid code=%x decoding bool", c) -} - -func (d *Decoder) DecodeDuration() (time.Duration, error) { - n, err := d.DecodeInt64() - if err != nil { - return 0, err - } - return time.Duration(n), nil -} - -// DecodeInterface decodes value into interface. It returns following types: -// - nil, -// - bool, -// - int8, int16, int32, int64, -// - uint8, uint16, uint32, uint64, -// - float32 and float64, -// - string, -// - []byte, -// - slices of any of the above, -// - maps of any of the above. 
-// -// DecodeInterface should be used only when you don't know the type of value -// you are decoding. For example, if you are decoding number it is better to use -// DecodeInt64 for negative numbers and DecodeUint64 for positive numbers. -func (d *Decoder) DecodeInterface() (interface{}, error) { - c, err := d.readCode() - if err != nil { - return nil, err - } - - if msgpcode.IsFixedNum(c) { - return int8(c), nil - } - if msgpcode.IsFixedMap(c) { - err = d.s.UnreadByte() - if err != nil { - return nil, err - } - return d.decodeMapDefault() - } - if msgpcode.IsFixedArray(c) { - return d.decodeSlice(c) - } - if msgpcode.IsFixedString(c) { - return d.string(c) - } - - switch c { - case msgpcode.Nil: - return nil, nil - case msgpcode.False, msgpcode.True: - return d.bool(c) - case msgpcode.Float: - return d.float32(c) - case msgpcode.Double: - return d.float64(c) - case msgpcode.Uint8: - return d.uint8() - case msgpcode.Uint16: - return d.uint16() - case msgpcode.Uint32: - return d.uint32() - case msgpcode.Uint64: - return d.uint64() - case msgpcode.Int8: - return d.int8() - case msgpcode.Int16: - return d.int16() - case msgpcode.Int32: - return d.int32() - case msgpcode.Int64: - return d.int64() - case msgpcode.Bin8, msgpcode.Bin16, msgpcode.Bin32: - return d.bytes(c, nil) - case msgpcode.Str8, msgpcode.Str16, msgpcode.Str32: - return d.string(c) - case msgpcode.Array16, msgpcode.Array32: - return d.decodeSlice(c) - case msgpcode.Map16, msgpcode.Map32: - err = d.s.UnreadByte() - if err != nil { - return nil, err - } - return d.decodeMapDefault() - case msgpcode.FixExt1, msgpcode.FixExt2, msgpcode.FixExt4, msgpcode.FixExt8, msgpcode.FixExt16, - msgpcode.Ext8, msgpcode.Ext16, msgpcode.Ext32: - return d.decodeInterfaceExt(c) - } - - return 0, fmt.Errorf("msgpack: unknown code %x decoding interface{}", c) -} - -// DecodeInterfaceLoose is like DecodeInterface except that: -// - int8, int16, and int32 are converted to int64, -// - uint8, uint16, and uint32 are converted 
to uint64, -// - float32 is converted to float64. -// - []byte is converted to string. -func (d *Decoder) DecodeInterfaceLoose() (interface{}, error) { - c, err := d.readCode() - if err != nil { - return nil, err - } - - if msgpcode.IsFixedNum(c) { - return int64(int8(c)), nil - } - if msgpcode.IsFixedMap(c) { - err = d.s.UnreadByte() - if err != nil { - return nil, err - } - return d.decodeMapDefault() - } - if msgpcode.IsFixedArray(c) { - return d.decodeSlice(c) - } - if msgpcode.IsFixedString(c) { - return d.string(c) - } - - switch c { - case msgpcode.Nil: - return nil, nil - case msgpcode.False, msgpcode.True: - return d.bool(c) - case msgpcode.Float, msgpcode.Double: - return d.float64(c) - case msgpcode.Uint8, msgpcode.Uint16, msgpcode.Uint32, msgpcode.Uint64: - return d.uint(c) - case msgpcode.Int8, msgpcode.Int16, msgpcode.Int32, msgpcode.Int64: - return d.int(c) - case msgpcode.Str8, msgpcode.Str16, msgpcode.Str32, - msgpcode.Bin8, msgpcode.Bin16, msgpcode.Bin32: - return d.string(c) - case msgpcode.Array16, msgpcode.Array32: - return d.decodeSlice(c) - case msgpcode.Map16, msgpcode.Map32: - err = d.s.UnreadByte() - if err != nil { - return nil, err - } - return d.decodeMapDefault() - case msgpcode.FixExt1, msgpcode.FixExt2, msgpcode.FixExt4, msgpcode.FixExt8, msgpcode.FixExt16, - msgpcode.Ext8, msgpcode.Ext16, msgpcode.Ext32: - return d.decodeInterfaceExt(c) - } - - return 0, fmt.Errorf("msgpack: unknown code %x decoding interface{}", c) -} - -// Skip skips next value. 
-func (d *Decoder) Skip() error { - c, err := d.readCode() - if err != nil { - return err - } - - if msgpcode.IsFixedNum(c) { - return nil - } - if msgpcode.IsFixedMap(c) { - return d.skipMap(c) - } - if msgpcode.IsFixedArray(c) { - return d.skipSlice(c) - } - if msgpcode.IsFixedString(c) { - return d.skipBytes(c) - } - - switch c { - case msgpcode.Nil, msgpcode.False, msgpcode.True: - return nil - case msgpcode.Uint8, msgpcode.Int8: - return d.skipN(1) - case msgpcode.Uint16, msgpcode.Int16: - return d.skipN(2) - case msgpcode.Uint32, msgpcode.Int32, msgpcode.Float: - return d.skipN(4) - case msgpcode.Uint64, msgpcode.Int64, msgpcode.Double: - return d.skipN(8) - case msgpcode.Bin8, msgpcode.Bin16, msgpcode.Bin32: - return d.skipBytes(c) - case msgpcode.Str8, msgpcode.Str16, msgpcode.Str32: - return d.skipBytes(c) - case msgpcode.Array16, msgpcode.Array32: - return d.skipSlice(c) - case msgpcode.Map16, msgpcode.Map32: - return d.skipMap(c) - case msgpcode.FixExt1, msgpcode.FixExt2, msgpcode.FixExt4, msgpcode.FixExt8, msgpcode.FixExt16, - msgpcode.Ext8, msgpcode.Ext16, msgpcode.Ext32: - return d.skipExt(c) - } - - return fmt.Errorf("msgpack: unknown code %x", c) -} - -func (d *Decoder) DecodeRaw() (RawMessage, error) { - d.rec = make([]byte, 0) - if err := d.Skip(); err != nil { - return nil, err - } - msg := RawMessage(d.rec) - d.rec = nil - return msg, nil -} - -// PeekCode returns the next MessagePack code without advancing the reader. -// Subpackage msgpack/codes defines the list of available msgpcode. -func (d *Decoder) PeekCode() (byte, error) { - c, err := d.s.ReadByte() - if err != nil { - return 0, err - } - return c, d.s.UnreadByte() -} - -// ReadFull reads exactly len(buf) bytes into the buf. 
-func (d *Decoder) ReadFull(buf []byte) error { - _, err := readN(d.r, buf, len(buf)) - return err -} - -func (d *Decoder) hasNilCode() bool { - code, err := d.PeekCode() - return err == nil && code == msgpcode.Nil -} - -func (d *Decoder) readCode() (byte, error) { - c, err := d.s.ReadByte() - if err != nil { - return 0, err - } - if d.rec != nil { - d.rec = append(d.rec, c) - } - return c, nil -} - -func (d *Decoder) readFull(b []byte) error { - _, err := io.ReadFull(d.r, b) - if err != nil { - return err - } - if d.rec != nil { - d.rec = append(d.rec, b...) - } - return nil -} - -func (d *Decoder) readN(n int) ([]byte, error) { - var err error - d.buf, err = readN(d.r, d.buf, n) - if err != nil { - return nil, err - } - if d.rec != nil { - // TODO: read directly into d.rec? - d.rec = append(d.rec, d.buf...) - } - return d.buf, nil -} - -func readN(r io.Reader, b []byte, n int) ([]byte, error) { - if b == nil { - if n == 0 { - return make([]byte, 0), nil - } - switch { - case n < 64: - b = make([]byte, 0, 64) - case n <= bytesAllocLimit: - b = make([]byte, 0, n) - default: - b = make([]byte, 0, bytesAllocLimit) - } - } - - if n <= cap(b) { - b = b[:n] - _, err := io.ReadFull(r, b) - return b, err - } - b = b[:cap(b)] - - var pos int - for { - alloc := min(n-len(b), bytesAllocLimit) - b = append(b, make([]byte, alloc)...) 
- - _, err := io.ReadFull(r, b[pos:]) - if err != nil { - return b, err - } - - if len(b) == n { - break - } - pos = len(b) - } - - return b, nil -} - -func min(a, b int) int { //nolint:unparam - if a <= b { - return a - } - return b -} diff --git a/vendor/github.com/vmihailenco/msgpack/v5/decode_map.go b/vendor/github.com/vmihailenco/msgpack/v5/decode_map.go deleted file mode 100644 index 52e0526c..00000000 --- a/vendor/github.com/vmihailenco/msgpack/v5/decode_map.go +++ /dev/null @@ -1,339 +0,0 @@ -package msgpack - -import ( - "errors" - "fmt" - "reflect" - - "github.com/vmihailenco/msgpack/v5/msgpcode" -) - -var errArrayStruct = errors.New("msgpack: number of fields in array-encoded struct has changed") - -var ( - mapStringStringPtrType = reflect.TypeOf((*map[string]string)(nil)) - mapStringStringType = mapStringStringPtrType.Elem() -) - -var ( - mapStringInterfacePtrType = reflect.TypeOf((*map[string]interface{})(nil)) - mapStringInterfaceType = mapStringInterfacePtrType.Elem() -) - -func decodeMapValue(d *Decoder, v reflect.Value) error { - n, err := d.DecodeMapLen() - if err != nil { - return err - } - - typ := v.Type() - if n == -1 { - v.Set(reflect.Zero(typ)) - return nil - } - - if v.IsNil() { - v.Set(reflect.MakeMap(typ)) - } - if n == 0 { - return nil - } - - return d.decodeTypedMapValue(v, n) -} - -func (d *Decoder) decodeMapDefault() (interface{}, error) { - if d.mapDecoder != nil { - return d.mapDecoder(d) - } - return d.DecodeMap() -} - -// DecodeMapLen decodes map length. Length is -1 when map is nil. 
-func (d *Decoder) DecodeMapLen() (int, error) { - c, err := d.readCode() - if err != nil { - return 0, err - } - - if msgpcode.IsExt(c) { - if err = d.skipExtHeader(c); err != nil { - return 0, err - } - - c, err = d.readCode() - if err != nil { - return 0, err - } - } - return d.mapLen(c) -} - -func (d *Decoder) mapLen(c byte) (int, error) { - if c == msgpcode.Nil { - return -1, nil - } - if c >= msgpcode.FixedMapLow && c <= msgpcode.FixedMapHigh { - return int(c & msgpcode.FixedMapMask), nil - } - if c == msgpcode.Map16 { - size, err := d.uint16() - return int(size), err - } - if c == msgpcode.Map32 { - size, err := d.uint32() - return int(size), err - } - return 0, unexpectedCodeError{code: c, hint: "map length"} -} - -func decodeMapStringStringValue(d *Decoder, v reflect.Value) error { - mptr := v.Addr().Convert(mapStringStringPtrType).Interface().(*map[string]string) - return d.decodeMapStringStringPtr(mptr) -} - -func (d *Decoder) decodeMapStringStringPtr(ptr *map[string]string) error { - size, err := d.DecodeMapLen() - if err != nil { - return err - } - if size == -1 { - *ptr = nil - return nil - } - - m := *ptr - if m == nil { - *ptr = make(map[string]string, min(size, maxMapSize)) - m = *ptr - } - - for i := 0; i < size; i++ { - mk, err := d.DecodeString() - if err != nil { - return err - } - mv, err := d.DecodeString() - if err != nil { - return err - } - m[mk] = mv - } - - return nil -} - -func decodeMapStringInterfaceValue(d *Decoder, v reflect.Value) error { - ptr := v.Addr().Convert(mapStringInterfacePtrType).Interface().(*map[string]interface{}) - return d.decodeMapStringInterfacePtr(ptr) -} - -func (d *Decoder) decodeMapStringInterfacePtr(ptr *map[string]interface{}) error { - m, err := d.DecodeMap() - if err != nil { - return err - } - *ptr = m - return nil -} - -func (d *Decoder) DecodeMap() (map[string]interface{}, error) { - n, err := d.DecodeMapLen() - if err != nil { - return nil, err - } - - if n == -1 { - return nil, nil - } - - m := 
make(map[string]interface{}, min(n, maxMapSize)) - - for i := 0; i < n; i++ { - mk, err := d.DecodeString() - if err != nil { - return nil, err - } - mv, err := d.decodeInterfaceCond() - if err != nil { - return nil, err - } - m[mk] = mv - } - - return m, nil -} - -func (d *Decoder) DecodeUntypedMap() (map[interface{}]interface{}, error) { - n, err := d.DecodeMapLen() - if err != nil { - return nil, err - } - - if n == -1 { - return nil, nil - } - - m := make(map[interface{}]interface{}, min(n, maxMapSize)) - - for i := 0; i < n; i++ { - mk, err := d.decodeInterfaceCond() - if err != nil { - return nil, err - } - - mv, err := d.decodeInterfaceCond() - if err != nil { - return nil, err - } - - m[mk] = mv - } - - return m, nil -} - -// DecodeTypedMap decodes a typed map. Typed map is a map that has a fixed type for keys and values. -// Key and value types may be different. -func (d *Decoder) DecodeTypedMap() (interface{}, error) { - n, err := d.DecodeMapLen() - if err != nil { - return nil, err - } - if n <= 0 { - return nil, nil - } - - key, err := d.decodeInterfaceCond() - if err != nil { - return nil, err - } - - value, err := d.decodeInterfaceCond() - if err != nil { - return nil, err - } - - keyType := reflect.TypeOf(key) - valueType := reflect.TypeOf(value) - - if !keyType.Comparable() { - return nil, fmt.Errorf("msgpack: unsupported map key: %s", keyType.String()) - } - - mapType := reflect.MapOf(keyType, valueType) - mapValue := reflect.MakeMap(mapType) - mapValue.SetMapIndex(reflect.ValueOf(key), reflect.ValueOf(value)) - - n-- - if err := d.decodeTypedMapValue(mapValue, n); err != nil { - return nil, err - } - - return mapValue.Interface(), nil -} - -func (d *Decoder) decodeTypedMapValue(v reflect.Value, n int) error { - typ := v.Type() - keyType := typ.Key() - valueType := typ.Elem() - - for i := 0; i < n; i++ { - mk := reflect.New(keyType).Elem() - if err := d.DecodeValue(mk); err != nil { - return err - } - - mv := reflect.New(valueType).Elem() - if err 
:= d.DecodeValue(mv); err != nil { - return err - } - - v.SetMapIndex(mk, mv) - } - - return nil -} - -func (d *Decoder) skipMap(c byte) error { - n, err := d.mapLen(c) - if err != nil { - return err - } - for i := 0; i < n; i++ { - if err := d.Skip(); err != nil { - return err - } - if err := d.Skip(); err != nil { - return err - } - } - return nil -} - -func decodeStructValue(d *Decoder, v reflect.Value) error { - c, err := d.readCode() - if err != nil { - return err - } - - n, err := d.mapLen(c) - if err == nil { - return d.decodeStruct(v, n) - } - - var err2 error - n, err2 = d.arrayLen(c) - if err2 != nil { - return err - } - - if n <= 0 { - v.Set(reflect.Zero(v.Type())) - return nil - } - - fields := structs.Fields(v.Type(), d.structTag) - if n != len(fields.List) { - return errArrayStruct - } - - for _, f := range fields.List { - if err := f.DecodeValue(d, v); err != nil { - return err - } - } - - return nil -} - -func (d *Decoder) decodeStruct(v reflect.Value, n int) error { - if n == -1 { - v.Set(reflect.Zero(v.Type())) - return nil - } - - fields := structs.Fields(v.Type(), d.structTag) - for i := 0; i < n; i++ { - name, err := d.decodeStringTemp() - if err != nil { - return err - } - - if f := fields.Map[name]; f != nil { - if err := f.DecodeValue(d, v); err != nil { - return err - } - continue - } - - if d.flags&disallowUnknownFieldsFlag != 0 { - return fmt.Errorf("msgpack: unknown field %q", name) - } - if err := d.Skip(); err != nil { - return err - } - } - - return nil -} diff --git a/vendor/github.com/vmihailenco/msgpack/v5/decode_number.go b/vendor/github.com/vmihailenco/msgpack/v5/decode_number.go deleted file mode 100644 index 45d6a741..00000000 --- a/vendor/github.com/vmihailenco/msgpack/v5/decode_number.go +++ /dev/null @@ -1,295 +0,0 @@ -package msgpack - -import ( - "fmt" - "math" - "reflect" - - "github.com/vmihailenco/msgpack/v5/msgpcode" -) - -func (d *Decoder) skipN(n int) error { - _, err := d.readN(n) - return err -} - -func (d 
*Decoder) uint8() (uint8, error) { - c, err := d.readCode() - if err != nil { - return 0, err - } - return c, nil -} - -func (d *Decoder) int8() (int8, error) { - n, err := d.uint8() - return int8(n), err -} - -func (d *Decoder) uint16() (uint16, error) { - b, err := d.readN(2) - if err != nil { - return 0, err - } - return (uint16(b[0]) << 8) | uint16(b[1]), nil -} - -func (d *Decoder) int16() (int16, error) { - n, err := d.uint16() - return int16(n), err -} - -func (d *Decoder) uint32() (uint32, error) { - b, err := d.readN(4) - if err != nil { - return 0, err - } - n := (uint32(b[0]) << 24) | - (uint32(b[1]) << 16) | - (uint32(b[2]) << 8) | - uint32(b[3]) - return n, nil -} - -func (d *Decoder) int32() (int32, error) { - n, err := d.uint32() - return int32(n), err -} - -func (d *Decoder) uint64() (uint64, error) { - b, err := d.readN(8) - if err != nil { - return 0, err - } - n := (uint64(b[0]) << 56) | - (uint64(b[1]) << 48) | - (uint64(b[2]) << 40) | - (uint64(b[3]) << 32) | - (uint64(b[4]) << 24) | - (uint64(b[5]) << 16) | - (uint64(b[6]) << 8) | - uint64(b[7]) - return n, nil -} - -func (d *Decoder) int64() (int64, error) { - n, err := d.uint64() - return int64(n), err -} - -// DecodeUint64 decodes msgpack int8/16/32/64 and uint8/16/32/64 -// into Go uint64. 
-func (d *Decoder) DecodeUint64() (uint64, error) { - c, err := d.readCode() - if err != nil { - return 0, err - } - return d.uint(c) -} - -func (d *Decoder) uint(c byte) (uint64, error) { - if c == msgpcode.Nil { - return 0, nil - } - if msgpcode.IsFixedNum(c) { - return uint64(int8(c)), nil - } - switch c { - case msgpcode.Uint8: - n, err := d.uint8() - return uint64(n), err - case msgpcode.Int8: - n, err := d.int8() - return uint64(n), err - case msgpcode.Uint16: - n, err := d.uint16() - return uint64(n), err - case msgpcode.Int16: - n, err := d.int16() - return uint64(n), err - case msgpcode.Uint32: - n, err := d.uint32() - return uint64(n), err - case msgpcode.Int32: - n, err := d.int32() - return uint64(n), err - case msgpcode.Uint64, msgpcode.Int64: - return d.uint64() - } - return 0, fmt.Errorf("msgpack: invalid code=%x decoding uint64", c) -} - -// DecodeInt64 decodes msgpack int8/16/32/64 and uint8/16/32/64 -// into Go int64. -func (d *Decoder) DecodeInt64() (int64, error) { - c, err := d.readCode() - if err != nil { - return 0, err - } - return d.int(c) -} - -func (d *Decoder) int(c byte) (int64, error) { - if c == msgpcode.Nil { - return 0, nil - } - if msgpcode.IsFixedNum(c) { - return int64(int8(c)), nil - } - switch c { - case msgpcode.Uint8: - n, err := d.uint8() - return int64(n), err - case msgpcode.Int8: - n, err := d.uint8() - return int64(int8(n)), err - case msgpcode.Uint16: - n, err := d.uint16() - return int64(n), err - case msgpcode.Int16: - n, err := d.uint16() - return int64(int16(n)), err - case msgpcode.Uint32: - n, err := d.uint32() - return int64(n), err - case msgpcode.Int32: - n, err := d.uint32() - return int64(int32(n)), err - case msgpcode.Uint64, msgpcode.Int64: - n, err := d.uint64() - return int64(n), err - } - return 0, fmt.Errorf("msgpack: invalid code=%x decoding int64", c) -} - -func (d *Decoder) DecodeFloat32() (float32, error) { - c, err := d.readCode() - if err != nil { - return 0, err - } - return d.float32(c) -} - 
-func (d *Decoder) float32(c byte) (float32, error) { - if c == msgpcode.Float { - n, err := d.uint32() - if err != nil { - return 0, err - } - return math.Float32frombits(n), nil - } - - n, err := d.int(c) - if err != nil { - return 0, fmt.Errorf("msgpack: invalid code=%x decoding float32", c) - } - return float32(n), nil -} - -// DecodeFloat64 decodes msgpack float32/64 into Go float64. -func (d *Decoder) DecodeFloat64() (float64, error) { - c, err := d.readCode() - if err != nil { - return 0, err - } - return d.float64(c) -} - -func (d *Decoder) float64(c byte) (float64, error) { - switch c { - case msgpcode.Float: - n, err := d.float32(c) - if err != nil { - return 0, err - } - return float64(n), nil - case msgpcode.Double: - n, err := d.uint64() - if err != nil { - return 0, err - } - return math.Float64frombits(n), nil - } - - n, err := d.int(c) - if err != nil { - return 0, fmt.Errorf("msgpack: invalid code=%x decoding float32", c) - } - return float64(n), nil -} - -func (d *Decoder) DecodeUint() (uint, error) { - n, err := d.DecodeUint64() - return uint(n), err -} - -func (d *Decoder) DecodeUint8() (uint8, error) { - n, err := d.DecodeUint64() - return uint8(n), err -} - -func (d *Decoder) DecodeUint16() (uint16, error) { - n, err := d.DecodeUint64() - return uint16(n), err -} - -func (d *Decoder) DecodeUint32() (uint32, error) { - n, err := d.DecodeUint64() - return uint32(n), err -} - -func (d *Decoder) DecodeInt() (int, error) { - n, err := d.DecodeInt64() - return int(n), err -} - -func (d *Decoder) DecodeInt8() (int8, error) { - n, err := d.DecodeInt64() - return int8(n), err -} - -func (d *Decoder) DecodeInt16() (int16, error) { - n, err := d.DecodeInt64() - return int16(n), err -} - -func (d *Decoder) DecodeInt32() (int32, error) { - n, err := d.DecodeInt64() - return int32(n), err -} - -func decodeFloat32Value(d *Decoder, v reflect.Value) error { - f, err := d.DecodeFloat32() - if err != nil { - return err - } - v.SetFloat(float64(f)) - return nil 
-} - -func decodeFloat64Value(d *Decoder, v reflect.Value) error { - f, err := d.DecodeFloat64() - if err != nil { - return err - } - v.SetFloat(f) - return nil -} - -func decodeInt64Value(d *Decoder, v reflect.Value) error { - n, err := d.DecodeInt64() - if err != nil { - return err - } - v.SetInt(n) - return nil -} - -func decodeUint64Value(d *Decoder, v reflect.Value) error { - n, err := d.DecodeUint64() - if err != nil { - return err - } - v.SetUint(n) - return nil -} diff --git a/vendor/github.com/vmihailenco/msgpack/v5/decode_query.go b/vendor/github.com/vmihailenco/msgpack/v5/decode_query.go deleted file mode 100644 index c302ed1f..00000000 --- a/vendor/github.com/vmihailenco/msgpack/v5/decode_query.go +++ /dev/null @@ -1,158 +0,0 @@ -package msgpack - -import ( - "fmt" - "strconv" - "strings" - - "github.com/vmihailenco/msgpack/v5/msgpcode" -) - -type queryResult struct { - query string - key string - hasAsterisk bool - - values []interface{} -} - -func (q *queryResult) nextKey() { - ind := strings.IndexByte(q.query, '.') - if ind == -1 { - q.key = q.query - q.query = "" - return - } - q.key = q.query[:ind] - q.query = q.query[ind+1:] -} - -// Query extracts data specified by the query from the msgpack stream skipping -// any other data. Query consists of map keys and array indexes separated with dot, -// e.g. key1.0.key2. 
-func (d *Decoder) Query(query string) ([]interface{}, error) { - res := queryResult{ - query: query, - } - if err := d.query(&res); err != nil { - return nil, err - } - return res.values, nil -} - -func (d *Decoder) query(q *queryResult) error { - q.nextKey() - if q.key == "" { - v, err := d.decodeInterfaceCond() - if err != nil { - return err - } - q.values = append(q.values, v) - return nil - } - - code, err := d.PeekCode() - if err != nil { - return err - } - - switch { - case code == msgpcode.Map16 || code == msgpcode.Map32 || msgpcode.IsFixedMap(code): - err = d.queryMapKey(q) - case code == msgpcode.Array16 || code == msgpcode.Array32 || msgpcode.IsFixedArray(code): - err = d.queryArrayIndex(q) - default: - err = fmt.Errorf("msgpack: unsupported code=%x decoding key=%q", code, q.key) - } - return err -} - -func (d *Decoder) queryMapKey(q *queryResult) error { - n, err := d.DecodeMapLen() - if err != nil { - return err - } - if n == -1 { - return nil - } - - for i := 0; i < n; i++ { - key, err := d.decodeStringTemp() - if err != nil { - return err - } - - if key == q.key { - if err := d.query(q); err != nil { - return err - } - if q.hasAsterisk { - return d.skipNext((n - i - 1) * 2) - } - return nil - } - - if err := d.Skip(); err != nil { - return err - } - } - - return nil -} - -func (d *Decoder) queryArrayIndex(q *queryResult) error { - n, err := d.DecodeArrayLen() - if err != nil { - return err - } - if n == -1 { - return nil - } - - if q.key == "*" { - q.hasAsterisk = true - - query := q.query - for i := 0; i < n; i++ { - q.query = query - if err := d.query(q); err != nil { - return err - } - } - - q.hasAsterisk = false - return nil - } - - ind, err := strconv.Atoi(q.key) - if err != nil { - return err - } - - for i := 0; i < n; i++ { - if i == ind { - if err := d.query(q); err != nil { - return err - } - if q.hasAsterisk { - return d.skipNext(n - i - 1) - } - return nil - } - - if err := d.Skip(); err != nil { - return err - } - } - - return nil -} - 
-func (d *Decoder) skipNext(n int) error { - for i := 0; i < n; i++ { - if err := d.Skip(); err != nil { - return err - } - } - return nil -} diff --git a/vendor/github.com/vmihailenco/msgpack/v5/decode_slice.go b/vendor/github.com/vmihailenco/msgpack/v5/decode_slice.go deleted file mode 100644 index db6f7c54..00000000 --- a/vendor/github.com/vmihailenco/msgpack/v5/decode_slice.go +++ /dev/null @@ -1,191 +0,0 @@ -package msgpack - -import ( - "fmt" - "reflect" - - "github.com/vmihailenco/msgpack/v5/msgpcode" -) - -var sliceStringPtrType = reflect.TypeOf((*[]string)(nil)) - -// DecodeArrayLen decodes array length. Length is -1 when array is nil. -func (d *Decoder) DecodeArrayLen() (int, error) { - c, err := d.readCode() - if err != nil { - return 0, err - } - return d.arrayLen(c) -} - -func (d *Decoder) arrayLen(c byte) (int, error) { - if c == msgpcode.Nil { - return -1, nil - } else if c >= msgpcode.FixedArrayLow && c <= msgpcode.FixedArrayHigh { - return int(c & msgpcode.FixedArrayMask), nil - } - switch c { - case msgpcode.Array16: - n, err := d.uint16() - return int(n), err - case msgpcode.Array32: - n, err := d.uint32() - return int(n), err - } - return 0, fmt.Errorf("msgpack: invalid code=%x decoding array length", c) -} - -func decodeStringSliceValue(d *Decoder, v reflect.Value) error { - ptr := v.Addr().Convert(sliceStringPtrType).Interface().(*[]string) - return d.decodeStringSlicePtr(ptr) -} - -func (d *Decoder) decodeStringSlicePtr(ptr *[]string) error { - n, err := d.DecodeArrayLen() - if err != nil { - return err - } - if n == -1 { - return nil - } - - ss := makeStrings(*ptr, n) - for i := 0; i < n; i++ { - s, err := d.DecodeString() - if err != nil { - return err - } - ss = append(ss, s) - } - *ptr = ss - - return nil -} - -func makeStrings(s []string, n int) []string { - if n > sliceAllocLimit { - n = sliceAllocLimit - } - - if s == nil { - return make([]string, 0, n) - } - - if cap(s) >= n { - return s[:0] - } - - s = s[:cap(s)] - s = append(s, 
make([]string, n-len(s))...) - return s[:0] -} - -func decodeSliceValue(d *Decoder, v reflect.Value) error { - n, err := d.DecodeArrayLen() - if err != nil { - return err - } - - if n == -1 { - v.Set(reflect.Zero(v.Type())) - return nil - } - if n == 0 && v.IsNil() { - v.Set(reflect.MakeSlice(v.Type(), 0, 0)) - return nil - } - - if v.Cap() >= n { - v.Set(v.Slice(0, n)) - } else if v.Len() < v.Cap() { - v.Set(v.Slice(0, v.Cap())) - } - - for i := 0; i < n; i++ { - if i >= v.Len() { - v.Set(growSliceValue(v, n)) - } - elem := v.Index(i) - if err := d.DecodeValue(elem); err != nil { - return err - } - } - - return nil -} - -func growSliceValue(v reflect.Value, n int) reflect.Value { - diff := n - v.Len() - if diff > sliceAllocLimit { - diff = sliceAllocLimit - } - v = reflect.AppendSlice(v, reflect.MakeSlice(v.Type(), diff, diff)) - return v -} - -func decodeArrayValue(d *Decoder, v reflect.Value) error { - n, err := d.DecodeArrayLen() - if err != nil { - return err - } - - if n == -1 { - return nil - } - if n > v.Len() { - return fmt.Errorf("%s len is %d, but msgpack has %d elements", v.Type(), v.Len(), n) - } - - for i := 0; i < n; i++ { - sv := v.Index(i) - if err := d.DecodeValue(sv); err != nil { - return err - } - } - - return nil -} - -func (d *Decoder) DecodeSlice() ([]interface{}, error) { - c, err := d.readCode() - if err != nil { - return nil, err - } - return d.decodeSlice(c) -} - -func (d *Decoder) decodeSlice(c byte) ([]interface{}, error) { - n, err := d.arrayLen(c) - if err != nil { - return nil, err - } - if n == -1 { - return nil, nil - } - - s := make([]interface{}, 0, min(n, sliceAllocLimit)) - for i := 0; i < n; i++ { - v, err := d.decodeInterfaceCond() - if err != nil { - return nil, err - } - s = append(s, v) - } - - return s, nil -} - -func (d *Decoder) skipSlice(c byte) error { - n, err := d.arrayLen(c) - if err != nil { - return err - } - - for i := 0; i < n; i++ { - if err := d.Skip(); err != nil { - return err - } - } - - return nil -} 
diff --git a/vendor/github.com/vmihailenco/msgpack/v5/decode_string.go b/vendor/github.com/vmihailenco/msgpack/v5/decode_string.go deleted file mode 100644 index e837e08b..00000000 --- a/vendor/github.com/vmihailenco/msgpack/v5/decode_string.go +++ /dev/null @@ -1,192 +0,0 @@ -package msgpack - -import ( - "fmt" - "reflect" - - "github.com/vmihailenco/msgpack/v5/msgpcode" -) - -func (d *Decoder) bytesLen(c byte) (int, error) { - if c == msgpcode.Nil { - return -1, nil - } - - if msgpcode.IsFixedString(c) { - return int(c & msgpcode.FixedStrMask), nil - } - - switch c { - case msgpcode.Str8, msgpcode.Bin8: - n, err := d.uint8() - return int(n), err - case msgpcode.Str16, msgpcode.Bin16: - n, err := d.uint16() - return int(n), err - case msgpcode.Str32, msgpcode.Bin32: - n, err := d.uint32() - return int(n), err - } - - return 0, fmt.Errorf("msgpack: invalid code=%x decoding string/bytes length", c) -} - -func (d *Decoder) DecodeString() (string, error) { - if intern := d.flags&useInternedStringsFlag != 0; intern || len(d.dict) > 0 { - return d.decodeInternedString(intern) - } - - c, err := d.readCode() - if err != nil { - return "", err - } - return d.string(c) -} - -func (d *Decoder) string(c byte) (string, error) { - n, err := d.bytesLen(c) - if err != nil { - return "", err - } - return d.stringWithLen(n) -} - -func (d *Decoder) stringWithLen(n int) (string, error) { - if n <= 0 { - return "", nil - } - b, err := d.readN(n) - return string(b), err -} - -func decodeStringValue(d *Decoder, v reflect.Value) error { - s, err := d.DecodeString() - if err != nil { - return err - } - v.SetString(s) - return nil -} - -func (d *Decoder) DecodeBytesLen() (int, error) { - c, err := d.readCode() - if err != nil { - return 0, err - } - return d.bytesLen(c) -} - -func (d *Decoder) DecodeBytes() ([]byte, error) { - c, err := d.readCode() - if err != nil { - return nil, err - } - return d.bytes(c, nil) -} - -func (d *Decoder) bytes(c byte, b []byte) ([]byte, error) { - n, err := 
d.bytesLen(c) - if err != nil { - return nil, err - } - if n == -1 { - return nil, nil - } - return readN(d.r, b, n) -} - -func (d *Decoder) decodeStringTemp() (string, error) { - if intern := d.flags&useInternedStringsFlag != 0; intern || len(d.dict) > 0 { - return d.decodeInternedString(intern) - } - - c, err := d.readCode() - if err != nil { - return "", err - } - - n, err := d.bytesLen(c) - if err != nil { - return "", err - } - if n == -1 { - return "", nil - } - - b, err := d.readN(n) - if err != nil { - return "", err - } - - return bytesToString(b), nil -} - -func (d *Decoder) decodeBytesPtr(ptr *[]byte) error { - c, err := d.readCode() - if err != nil { - return err - } - return d.bytesPtr(c, ptr) -} - -func (d *Decoder) bytesPtr(c byte, ptr *[]byte) error { - n, err := d.bytesLen(c) - if err != nil { - return err - } - if n == -1 { - *ptr = nil - return nil - } - - *ptr, err = readN(d.r, *ptr, n) - return err -} - -func (d *Decoder) skipBytes(c byte) error { - n, err := d.bytesLen(c) - if err != nil { - return err - } - if n <= 0 { - return nil - } - return d.skipN(n) -} - -func decodeBytesValue(d *Decoder, v reflect.Value) error { - c, err := d.readCode() - if err != nil { - return err - } - - b, err := d.bytes(c, v.Bytes()) - if err != nil { - return err - } - - v.SetBytes(b) - - return nil -} - -func decodeByteArrayValue(d *Decoder, v reflect.Value) error { - c, err := d.readCode() - if err != nil { - return err - } - - n, err := d.bytesLen(c) - if err != nil { - return err - } - if n == -1 { - return nil - } - if n > v.Len() { - return fmt.Errorf("%s len is %d, but msgpack has %d elements", v.Type(), v.Len(), n) - } - - b := v.Slice(0, n).Bytes() - return d.readFull(b) -} diff --git a/vendor/github.com/vmihailenco/msgpack/v5/decode_value.go b/vendor/github.com/vmihailenco/msgpack/v5/decode_value.go deleted file mode 100644 index d2ff2aea..00000000 --- a/vendor/github.com/vmihailenco/msgpack/v5/decode_value.go +++ /dev/null @@ -1,250 +0,0 @@ -package 
msgpack - -import ( - "encoding" - "errors" - "fmt" - "reflect" -) - -var ( - interfaceType = reflect.TypeOf((*interface{})(nil)).Elem() - stringType = reflect.TypeOf((*string)(nil)).Elem() -) - -var valueDecoders []decoderFunc - -//nolint:gochecknoinits -func init() { - valueDecoders = []decoderFunc{ - reflect.Bool: decodeBoolValue, - reflect.Int: decodeInt64Value, - reflect.Int8: decodeInt64Value, - reflect.Int16: decodeInt64Value, - reflect.Int32: decodeInt64Value, - reflect.Int64: decodeInt64Value, - reflect.Uint: decodeUint64Value, - reflect.Uint8: decodeUint64Value, - reflect.Uint16: decodeUint64Value, - reflect.Uint32: decodeUint64Value, - reflect.Uint64: decodeUint64Value, - reflect.Float32: decodeFloat32Value, - reflect.Float64: decodeFloat64Value, - reflect.Complex64: decodeUnsupportedValue, - reflect.Complex128: decodeUnsupportedValue, - reflect.Array: decodeArrayValue, - reflect.Chan: decodeUnsupportedValue, - reflect.Func: decodeUnsupportedValue, - reflect.Interface: decodeInterfaceValue, - reflect.Map: decodeMapValue, - reflect.Ptr: decodeUnsupportedValue, - reflect.Slice: decodeSliceValue, - reflect.String: decodeStringValue, - reflect.Struct: decodeStructValue, - reflect.UnsafePointer: decodeUnsupportedValue, - } -} - -func getDecoder(typ reflect.Type) decoderFunc { - if v, ok := typeDecMap.Load(typ); ok { - return v.(decoderFunc) - } - fn := _getDecoder(typ) - typeDecMap.Store(typ, fn) - return fn -} - -func _getDecoder(typ reflect.Type) decoderFunc { - kind := typ.Kind() - - if kind == reflect.Ptr { - if _, ok := typeDecMap.Load(typ.Elem()); ok { - return ptrValueDecoder(typ) - } - } - - if typ.Implements(customDecoderType) { - return nilAwareDecoder(typ, decodeCustomValue) - } - if typ.Implements(unmarshalerType) { - return nilAwareDecoder(typ, unmarshalValue) - } - if typ.Implements(binaryUnmarshalerType) { - return nilAwareDecoder(typ, unmarshalBinaryValue) - } - if typ.Implements(textUnmarshalerType) { - return nilAwareDecoder(typ, 
unmarshalTextValue) - } - - // Addressable struct field value. - if kind != reflect.Ptr { - ptr := reflect.PtrTo(typ) - if ptr.Implements(customDecoderType) { - return addrDecoder(nilAwareDecoder(typ, decodeCustomValue)) - } - if ptr.Implements(unmarshalerType) { - return addrDecoder(nilAwareDecoder(typ, unmarshalValue)) - } - if ptr.Implements(binaryUnmarshalerType) { - return addrDecoder(nilAwareDecoder(typ, unmarshalBinaryValue)) - } - if ptr.Implements(textUnmarshalerType) { - return addrDecoder(nilAwareDecoder(typ, unmarshalTextValue)) - } - } - - switch kind { - case reflect.Ptr: - return ptrValueDecoder(typ) - case reflect.Slice: - elem := typ.Elem() - if elem.Kind() == reflect.Uint8 { - return decodeBytesValue - } - if elem == stringType { - return decodeStringSliceValue - } - case reflect.Array: - if typ.Elem().Kind() == reflect.Uint8 { - return decodeByteArrayValue - } - case reflect.Map: - if typ.Key() == stringType { - switch typ.Elem() { - case stringType: - return decodeMapStringStringValue - case interfaceType: - return decodeMapStringInterfaceValue - } - } - } - - return valueDecoders[kind] -} - -func ptrValueDecoder(typ reflect.Type) decoderFunc { - decoder := getDecoder(typ.Elem()) - return func(d *Decoder, v reflect.Value) error { - if d.hasNilCode() { - if !v.IsNil() { - v.Set(reflect.Zero(v.Type())) - } - return d.DecodeNil() - } - if v.IsNil() { - v.Set(reflect.New(v.Type().Elem())) - } - return decoder(d, v.Elem()) - } -} - -func addrDecoder(fn decoderFunc) decoderFunc { - return func(d *Decoder, v reflect.Value) error { - if !v.CanAddr() { - return fmt.Errorf("msgpack: Decode(nonaddressable %T)", v.Interface()) - } - return fn(d, v.Addr()) - } -} - -func nilAwareDecoder(typ reflect.Type, fn decoderFunc) decoderFunc { - if nilable(typ.Kind()) { - return func(d *Decoder, v reflect.Value) error { - if d.hasNilCode() { - return d.decodeNilValue(v) - } - if v.IsNil() { - v.Set(reflect.New(v.Type().Elem())) - } - return fn(d, v) - } - } - - return 
func(d *Decoder, v reflect.Value) error { - if d.hasNilCode() { - return d.decodeNilValue(v) - } - return fn(d, v) - } -} - -func decodeBoolValue(d *Decoder, v reflect.Value) error { - flag, err := d.DecodeBool() - if err != nil { - return err - } - v.SetBool(flag) - return nil -} - -func decodeInterfaceValue(d *Decoder, v reflect.Value) error { - if v.IsNil() { - return d.interfaceValue(v) - } - return d.DecodeValue(v.Elem()) -} - -func (d *Decoder) interfaceValue(v reflect.Value) error { - vv, err := d.decodeInterfaceCond() - if err != nil { - return err - } - - if vv != nil { - if v.Type() == errorType { - if vv, ok := vv.(string); ok { - v.Set(reflect.ValueOf(errors.New(vv))) - return nil - } - } - - v.Set(reflect.ValueOf(vv)) - } - - return nil -} - -func decodeUnsupportedValue(d *Decoder, v reflect.Value) error { - return fmt.Errorf("msgpack: Decode(unsupported %s)", v.Type()) -} - -//------------------------------------------------------------------------------ - -func decodeCustomValue(d *Decoder, v reflect.Value) error { - decoder := v.Interface().(CustomDecoder) - return decoder.DecodeMsgpack(d) -} - -func unmarshalValue(d *Decoder, v reflect.Value) error { - var b []byte - - d.rec = make([]byte, 0, 64) - if err := d.Skip(); err != nil { - return err - } - b = d.rec - d.rec = nil - - unmarshaler := v.Interface().(Unmarshaler) - return unmarshaler.UnmarshalMsgpack(b) -} - -func unmarshalBinaryValue(d *Decoder, v reflect.Value) error { - data, err := d.DecodeBytes() - if err != nil { - return err - } - - unmarshaler := v.Interface().(encoding.BinaryUnmarshaler) - return unmarshaler.UnmarshalBinary(data) -} - -func unmarshalTextValue(d *Decoder, v reflect.Value) error { - data, err := d.DecodeBytes() - if err != nil { - return err - } - - unmarshaler := v.Interface().(encoding.TextUnmarshaler) - return unmarshaler.UnmarshalText(data) -} diff --git a/vendor/github.com/vmihailenco/msgpack/v5/encode.go b/vendor/github.com/vmihailenco/msgpack/v5/encode.go 
deleted file mode 100644 index 0ef6212e..00000000 --- a/vendor/github.com/vmihailenco/msgpack/v5/encode.go +++ /dev/null @@ -1,269 +0,0 @@ -package msgpack - -import ( - "bytes" - "io" - "reflect" - "sync" - "time" - - "github.com/vmihailenco/msgpack/v5/msgpcode" -) - -const ( - sortMapKeysFlag uint32 = 1 << iota - arrayEncodedStructsFlag - useCompactIntsFlag - useCompactFloatsFlag - useInternedStringsFlag - omitEmptyFlag -) - -type writer interface { - io.Writer - WriteByte(byte) error -} - -type byteWriter struct { - io.Writer -} - -func newByteWriter(w io.Writer) byteWriter { - return byteWriter{ - Writer: w, - } -} - -func (bw byteWriter) WriteByte(c byte) error { - _, err := bw.Write([]byte{c}) - return err -} - -//------------------------------------------------------------------------------ - -var encPool = sync.Pool{ - New: func() interface{} { - return NewEncoder(nil) - }, -} - -func GetEncoder() *Encoder { - return encPool.Get().(*Encoder) -} - -func PutEncoder(enc *Encoder) { - enc.w = nil - encPool.Put(enc) -} - -// Marshal returns the MessagePack encoding of v. -func Marshal(v interface{}) ([]byte, error) { - enc := GetEncoder() - - var buf bytes.Buffer - enc.Reset(&buf) - - err := enc.Encode(v) - b := buf.Bytes() - - PutEncoder(enc) - - if err != nil { - return nil, err - } - return b, err -} - -type Encoder struct { - w writer - - buf []byte - timeBuf []byte - - dict map[string]int - - flags uint32 - structTag string -} - -// NewEncoder returns a new encoder that writes to w. -func NewEncoder(w io.Writer) *Encoder { - e := &Encoder{ - buf: make([]byte, 9), - } - e.Reset(w) - return e -} - -// Writer returns the Encoder's writer. -func (e *Encoder) Writer() io.Writer { - return e.w -} - -// Reset discards any buffered data, resets all state, and switches the writer to write to w. -func (e *Encoder) Reset(w io.Writer) { - e.ResetDict(w, nil) -} - -// ResetDict is like Reset, but also resets the dict. 
-func (e *Encoder) ResetDict(w io.Writer, dict map[string]int) { - e.resetWriter(w) - e.flags = 0 - e.structTag = "" - e.dict = dict -} - -func (e *Encoder) WithDict(dict map[string]int, fn func(*Encoder) error) error { - oldDict := e.dict - e.dict = dict - err := fn(e) - e.dict = oldDict - return err -} - -func (e *Encoder) resetWriter(w io.Writer) { - if bw, ok := w.(writer); ok { - e.w = bw - } else { - e.w = newByteWriter(w) - } -} - -// SetSortMapKeys causes the Encoder to encode map keys in increasing order. -// Supported map types are: -// - map[string]string -// - map[string]interface{} -func (e *Encoder) SetSortMapKeys(on bool) *Encoder { - if on { - e.flags |= sortMapKeysFlag - } else { - e.flags &= ^sortMapKeysFlag - } - return e -} - -// SetCustomStructTag causes the Encoder to use a custom struct tag as -// a fallback option if there is no msgpack tag. -func (e *Encoder) SetCustomStructTag(tag string) { - e.structTag = tag -} - -// SetOmitEmpty causes the Encoder to omit empty values by default. -func (e *Encoder) SetOmitEmpty(on bool) { - if on { - e.flags |= omitEmptyFlag - } else { - e.flags &= ^omitEmptyFlag - } -} - -// UseArrayEncodedStructs causes the Encoder to encode Go structs as msgpack arrays. -func (e *Encoder) UseArrayEncodedStructs(on bool) { - if on { - e.flags |= arrayEncodedStructsFlag - } else { - e.flags &= ^arrayEncodedStructsFlag - } -} - -// UseCompactInts causes the Encoder to choose the most compact integer encoding. -// For example, it allows encoding a small Go int64 as a msgpack int8, saving 7 bytes. -func (e *Encoder) UseCompactInts(on bool) { - if on { - e.flags |= useCompactIntsFlag - } else { - e.flags &= ^useCompactIntsFlag - } -} - -// UseCompactFloats causes the Encoder to choose a compact integer encoding -// for floats that can be represented as integers.
-func (e *Encoder) UseCompactFloats(on bool) { - if on { - e.flags |= useCompactFloatsFlag - } else { - e.flags &= ^useCompactFloatsFlag - } -} - -// UseInternedStrings causes the Encoder to intern strings. -func (e *Encoder) UseInternedStrings(on bool) { - if on { - e.flags |= useInternedStringsFlag - } else { - e.flags &= ^useInternedStringsFlag - } -} - -func (e *Encoder) Encode(v interface{}) error { - switch v := v.(type) { - case nil: - return e.EncodeNil() - case string: - return e.EncodeString(v) - case []byte: - return e.EncodeBytes(v) - case int: - return e.EncodeInt(int64(v)) - case int64: - return e.encodeInt64Cond(v) - case uint: - return e.EncodeUint(uint64(v)) - case uint64: - return e.encodeUint64Cond(v) - case bool: - return e.EncodeBool(v) - case float32: - return e.EncodeFloat32(v) - case float64: - return e.EncodeFloat64(v) - case time.Duration: - return e.encodeInt64Cond(int64(v)) - case time.Time: - return e.EncodeTime(v) - } - return e.EncodeValue(reflect.ValueOf(v)) -} - -func (e *Encoder) EncodeMulti(v ...interface{}) error { - for _, vv := range v { - if err := e.Encode(vv); err != nil { - return err - } - } - return nil -} - -func (e *Encoder) EncodeValue(v reflect.Value) error { - fn := getEncoder(v.Type()) - return fn(e, v) -} - -func (e *Encoder) EncodeNil() error { - return e.writeCode(msgpcode.Nil) -} - -func (e *Encoder) EncodeBool(value bool) error { - if value { - return e.writeCode(msgpcode.True) - } - return e.writeCode(msgpcode.False) -} - -func (e *Encoder) EncodeDuration(d time.Duration) error { - return e.EncodeInt(int64(d)) -} - -func (e *Encoder) writeCode(c byte) error { - return e.w.WriteByte(c) -} - -func (e *Encoder) write(b []byte) error { - _, err := e.w.Write(b) - return err -} - -func (e *Encoder) writeString(s string) error { - _, err := e.w.Write(stringToBytes(s)) - return err -} diff --git a/vendor/github.com/vmihailenco/msgpack/v5/encode_map.go b/vendor/github.com/vmihailenco/msgpack/v5/encode_map.go deleted 
file mode 100644 index ba4c61be..00000000 --- a/vendor/github.com/vmihailenco/msgpack/v5/encode_map.go +++ /dev/null @@ -1,179 +0,0 @@ -package msgpack - -import ( - "math" - "reflect" - "sort" - - "github.com/vmihailenco/msgpack/v5/msgpcode" -) - -func encodeMapValue(e *Encoder, v reflect.Value) error { - if v.IsNil() { - return e.EncodeNil() - } - - if err := e.EncodeMapLen(v.Len()); err != nil { - return err - } - - iter := v.MapRange() - for iter.Next() { - if err := e.EncodeValue(iter.Key()); err != nil { - return err - } - if err := e.EncodeValue(iter.Value()); err != nil { - return err - } - } - - return nil -} - -func encodeMapStringStringValue(e *Encoder, v reflect.Value) error { - if v.IsNil() { - return e.EncodeNil() - } - - if err := e.EncodeMapLen(v.Len()); err != nil { - return err - } - - m := v.Convert(mapStringStringType).Interface().(map[string]string) - if e.flags&sortMapKeysFlag != 0 { - return e.encodeSortedMapStringString(m) - } - - for mk, mv := range m { - if err := e.EncodeString(mk); err != nil { - return err - } - if err := e.EncodeString(mv); err != nil { - return err - } - } - - return nil -} - -func encodeMapStringInterfaceValue(e *Encoder, v reflect.Value) error { - if v.IsNil() { - return e.EncodeNil() - } - m := v.Convert(mapStringInterfaceType).Interface().(map[string]interface{}) - if e.flags&sortMapKeysFlag != 0 { - return e.EncodeMapSorted(m) - } - return e.EncodeMap(m) -} - -func (e *Encoder) EncodeMap(m map[string]interface{}) error { - if m == nil { - return e.EncodeNil() - } - if err := e.EncodeMapLen(len(m)); err != nil { - return err - } - for mk, mv := range m { - if err := e.EncodeString(mk); err != nil { - return err - } - if err := e.Encode(mv); err != nil { - return err - } - } - return nil -} - -func (e *Encoder) EncodeMapSorted(m map[string]interface{}) error { - if m == nil { - return e.EncodeNil() - } - if err := e.EncodeMapLen(len(m)); err != nil { - return err - } - - keys := make([]string, 0, len(m)) - - for k 
:= range m { - keys = append(keys, k) - } - - sort.Strings(keys) - - for _, k := range keys { - if err := e.EncodeString(k); err != nil { - return err - } - if err := e.Encode(m[k]); err != nil { - return err - } - } - - return nil -} - -func (e *Encoder) encodeSortedMapStringString(m map[string]string) error { - keys := make([]string, 0, len(m)) - for k := range m { - keys = append(keys, k) - } - sort.Strings(keys) - - for _, k := range keys { - err := e.EncodeString(k) - if err != nil { - return err - } - if err = e.EncodeString(m[k]); err != nil { - return err - } - } - - return nil -} - -func (e *Encoder) EncodeMapLen(l int) error { - if l < 16 { - return e.writeCode(msgpcode.FixedMapLow | byte(l)) - } - if l <= math.MaxUint16 { - return e.write2(msgpcode.Map16, uint16(l)) - } - return e.write4(msgpcode.Map32, uint32(l)) -} - -func encodeStructValue(e *Encoder, strct reflect.Value) error { - structFields := structs.Fields(strct.Type(), e.structTag) - if e.flags&arrayEncodedStructsFlag != 0 || structFields.AsArray { - return encodeStructValueAsArray(e, strct, structFields.List) - } - fields := structFields.OmitEmpty(strct, e.flags&omitEmptyFlag != 0) - - if err := e.EncodeMapLen(len(fields)); err != nil { - return err - } - - for _, f := range fields { - if err := e.EncodeString(f.name); err != nil { - return err - } - if err := f.EncodeValue(e, strct); err != nil { - return err - } - } - - return nil -} - -func encodeStructValueAsArray(e *Encoder, strct reflect.Value, fields []*field) error { - if err := e.EncodeArrayLen(len(fields)); err != nil { - return err - } - for _, f := range fields { - if err := f.EncodeValue(e, strct); err != nil { - return err - } - } - return nil -} diff --git a/vendor/github.com/vmihailenco/msgpack/v5/encode_number.go b/vendor/github.com/vmihailenco/msgpack/v5/encode_number.go deleted file mode 100644 index 63c311bf..00000000 --- a/vendor/github.com/vmihailenco/msgpack/v5/encode_number.go +++ /dev/null @@ -1,252 +0,0 @@ -package 
msgpack - -import ( - "math" - "reflect" - - "github.com/vmihailenco/msgpack/v5/msgpcode" -) - -// EncodeUint8 encodes a uint8 in 2 bytes, preserving the type of the number. -func (e *Encoder) EncodeUint8(n uint8) error { - return e.write1(msgpcode.Uint8, n) -} - -func (e *Encoder) encodeUint8Cond(n uint8) error { - if e.flags&useCompactIntsFlag != 0 { - return e.EncodeUint(uint64(n)) - } - return e.EncodeUint8(n) -} - -// EncodeUint16 encodes a uint16 in 3 bytes, preserving the type of the number. -func (e *Encoder) EncodeUint16(n uint16) error { - return e.write2(msgpcode.Uint16, n) -} - -func (e *Encoder) encodeUint16Cond(n uint16) error { - if e.flags&useCompactIntsFlag != 0 { - return e.EncodeUint(uint64(n)) - } - return e.EncodeUint16(n) -} - -// EncodeUint32 encodes a uint32 in 5 bytes, preserving the type of the number. -func (e *Encoder) EncodeUint32(n uint32) error { - return e.write4(msgpcode.Uint32, n) -} - -func (e *Encoder) encodeUint32Cond(n uint32) error { - if e.flags&useCompactIntsFlag != 0 { - return e.EncodeUint(uint64(n)) - } - return e.EncodeUint32(n) -} - -// EncodeUint64 encodes a uint64 in 9 bytes, preserving the type of the number. -func (e *Encoder) EncodeUint64(n uint64) error { - return e.write8(msgpcode.Uint64, n) -} - -func (e *Encoder) encodeUint64Cond(n uint64) error { - if e.flags&useCompactIntsFlag != 0 { - return e.EncodeUint(n) - } - return e.EncodeUint64(n) -} - -// EncodeInt8 encodes an int8 in 2 bytes, preserving the type of the number. -func (e *Encoder) EncodeInt8(n int8) error { - return e.write1(msgpcode.Int8, uint8(n)) -} - -func (e *Encoder) encodeInt8Cond(n int8) error { - if e.flags&useCompactIntsFlag != 0 { - return e.EncodeInt(int64(n)) - } - return e.EncodeInt8(n) -} - -// EncodeInt16 encodes an int16 in 3 bytes, preserving the type of the number.
-func (e *Encoder) EncodeInt16(n int16) error { - return e.write2(msgpcode.Int16, uint16(n)) -} - -func (e *Encoder) encodeInt16Cond(n int16) error { - if e.flags&useCompactIntsFlag != 0 { - return e.EncodeInt(int64(n)) - } - return e.EncodeInt16(n) -} - -// EncodeInt32 encodes an int32 in 5 bytes, preserving the type of the number. -func (e *Encoder) EncodeInt32(n int32) error { - return e.write4(msgpcode.Int32, uint32(n)) -} - -func (e *Encoder) encodeInt32Cond(n int32) error { - if e.flags&useCompactIntsFlag != 0 { - return e.EncodeInt(int64(n)) - } - return e.EncodeInt32(n) -} - -// EncodeInt64 encodes an int64 in 9 bytes, preserving the type of the number. -func (e *Encoder) EncodeInt64(n int64) error { - return e.write8(msgpcode.Int64, uint64(n)) -} - -func (e *Encoder) encodeInt64Cond(n int64) error { - if e.flags&useCompactIntsFlag != 0 { - return e.EncodeInt(n) - } - return e.EncodeInt64(n) -} - -// EncodeUint encodes a uint64 in 1, 2, 3, 5, or 9 bytes. -// The type of the number is lost during encoding. -func (e *Encoder) EncodeUint(n uint64) error { - if n <= math.MaxInt8 { - return e.w.WriteByte(byte(n)) - } - if n <= math.MaxUint8 { - return e.EncodeUint8(uint8(n)) - } - if n <= math.MaxUint16 { - return e.EncodeUint16(uint16(n)) - } - if n <= math.MaxUint32 { - return e.EncodeUint32(uint32(n)) - } - return e.EncodeUint64(n) -} - -// EncodeInt encodes an int64 in 1, 2, 3, 5, or 9 bytes. -// The type of the number is lost during encoding.
-func (e *Encoder) EncodeInt(n int64) error { - if n >= 0 { - return e.EncodeUint(uint64(n)) - } - if n >= int64(int8(msgpcode.NegFixedNumLow)) { - return e.w.WriteByte(byte(n)) - } - if n >= math.MinInt8 { - return e.EncodeInt8(int8(n)) - } - if n >= math.MinInt16 { - return e.EncodeInt16(int16(n)) - } - if n >= math.MinInt32 { - return e.EncodeInt32(int32(n)) - } - return e.EncodeInt64(n) -} - -func (e *Encoder) EncodeFloat32(n float32) error { - if e.flags&useCompactFloatsFlag != 0 { - if float32(int64(n)) == n { - return e.EncodeInt(int64(n)) - } - } - return e.write4(msgpcode.Float, math.Float32bits(n)) -} - -func (e *Encoder) EncodeFloat64(n float64) error { - if e.flags&useCompactFloatsFlag != 0 { - // Both NaN and Inf convert to int64(-0x8000000000000000) - // If n is NaN then it never compares true with any other value - // If n is Inf then it doesn't convert from int64 back to +/-Inf - // In both cases the comparison works. - if float64(int64(n)) == n { - return e.EncodeInt(int64(n)) - } - } - return e.write8(msgpcode.Double, math.Float64bits(n)) -} - -func (e *Encoder) write1(code byte, n uint8) error { - e.buf = e.buf[:2] - e.buf[0] = code - e.buf[1] = n - return e.write(e.buf) -} - -func (e *Encoder) write2(code byte, n uint16) error { - e.buf = e.buf[:3] - e.buf[0] = code - e.buf[1] = byte(n >> 8) - e.buf[2] = byte(n) - return e.write(e.buf) -} - -func (e *Encoder) write4(code byte, n uint32) error { - e.buf = e.buf[:5] - e.buf[0] = code - e.buf[1] = byte(n >> 24) - e.buf[2] = byte(n >> 16) - e.buf[3] = byte(n >> 8) - e.buf[4] = byte(n) - return e.write(e.buf) -} - -func (e *Encoder) write8(code byte, n uint64) error { - e.buf = e.buf[:9] - e.buf[0] = code - e.buf[1] = byte(n >> 56) - e.buf[2] = byte(n >> 48) - e.buf[3] = byte(n >> 40) - e.buf[4] = byte(n >> 32) - e.buf[5] = byte(n >> 24) - e.buf[6] = byte(n >> 16) - e.buf[7] = byte(n >> 8) - e.buf[8] = byte(n) - return e.write(e.buf) -} - -func encodeUintValue(e *Encoder, v reflect.Value) error { - 
return e.EncodeUint(v.Uint()) -} - -func encodeIntValue(e *Encoder, v reflect.Value) error { - return e.EncodeInt(v.Int()) -} - -func encodeUint8CondValue(e *Encoder, v reflect.Value) error { - return e.encodeUint8Cond(uint8(v.Uint())) -} - -func encodeUint16CondValue(e *Encoder, v reflect.Value) error { - return e.encodeUint16Cond(uint16(v.Uint())) -} - -func encodeUint32CondValue(e *Encoder, v reflect.Value) error { - return e.encodeUint32Cond(uint32(v.Uint())) -} - -func encodeUint64CondValue(e *Encoder, v reflect.Value) error { - return e.encodeUint64Cond(v.Uint()) -} - -func encodeInt8CondValue(e *Encoder, v reflect.Value) error { - return e.encodeInt8Cond(int8(v.Int())) -} - -func encodeInt16CondValue(e *Encoder, v reflect.Value) error { - return e.encodeInt16Cond(int16(v.Int())) -} - -func encodeInt32CondValue(e *Encoder, v reflect.Value) error { - return e.encodeInt32Cond(int32(v.Int())) -} - -func encodeInt64CondValue(e *Encoder, v reflect.Value) error { - return e.encodeInt64Cond(v.Int()) -} - -func encodeFloat32Value(e *Encoder, v reflect.Value) error { - return e.EncodeFloat32(float32(v.Float())) -} - -func encodeFloat64Value(e *Encoder, v reflect.Value) error { - return e.EncodeFloat64(v.Float()) -} diff --git a/vendor/github.com/vmihailenco/msgpack/v5/encode_slice.go b/vendor/github.com/vmihailenco/msgpack/v5/encode_slice.go deleted file mode 100644 index ca46eada..00000000 --- a/vendor/github.com/vmihailenco/msgpack/v5/encode_slice.go +++ /dev/null @@ -1,139 +0,0 @@ -package msgpack - -import ( - "math" - "reflect" - - "github.com/vmihailenco/msgpack/v5/msgpcode" -) - -var stringSliceType = reflect.TypeOf(([]string)(nil)) - -func encodeStringValue(e *Encoder, v reflect.Value) error { - return e.EncodeString(v.String()) -} - -func encodeByteSliceValue(e *Encoder, v reflect.Value) error { - return e.EncodeBytes(v.Bytes()) -} - -func encodeByteArrayValue(e *Encoder, v reflect.Value) error { - if err := e.EncodeBytesLen(v.Len()); err != nil { - return 
err - } - - if v.CanAddr() { - b := v.Slice(0, v.Len()).Bytes() - return e.write(b) - } - - e.buf = grow(e.buf, v.Len()) - reflect.Copy(reflect.ValueOf(e.buf), v) - return e.write(e.buf) -} - -func grow(b []byte, n int) []byte { - if cap(b) >= n { - return b[:n] - } - b = b[:cap(b)] - b = append(b, make([]byte, n-len(b))...) - return b -} - -func (e *Encoder) EncodeBytesLen(l int) error { - if l < 256 { - return e.write1(msgpcode.Bin8, uint8(l)) - } - if l <= math.MaxUint16 { - return e.write2(msgpcode.Bin16, uint16(l)) - } - return e.write4(msgpcode.Bin32, uint32(l)) -} - -func (e *Encoder) encodeStringLen(l int) error { - if l < 32 { - return e.writeCode(msgpcode.FixedStrLow | byte(l)) - } - if l < 256 { - return e.write1(msgpcode.Str8, uint8(l)) - } - if l <= math.MaxUint16 { - return e.write2(msgpcode.Str16, uint16(l)) - } - return e.write4(msgpcode.Str32, uint32(l)) -} - -func (e *Encoder) EncodeString(v string) error { - if intern := e.flags&useInternedStringsFlag != 0; intern || len(e.dict) > 0 { - return e.encodeInternedString(v, intern) - } - return e.encodeNormalString(v) -} - -func (e *Encoder) encodeNormalString(v string) error { - if err := e.encodeStringLen(len(v)); err != nil { - return err - } - return e.writeString(v) -} - -func (e *Encoder) EncodeBytes(v []byte) error { - if v == nil { - return e.EncodeNil() - } - if err := e.EncodeBytesLen(len(v)); err != nil { - return err - } - return e.write(v) -} - -func (e *Encoder) EncodeArrayLen(l int) error { - if l < 16 { - return e.writeCode(msgpcode.FixedArrayLow | byte(l)) - } - if l <= math.MaxUint16 { - return e.write2(msgpcode.Array16, uint16(l)) - } - return e.write4(msgpcode.Array32, uint32(l)) -} - -func encodeStringSliceValue(e *Encoder, v reflect.Value) error { - ss := v.Convert(stringSliceType).Interface().([]string) - return e.encodeStringSlice(ss) -} - -func (e *Encoder) encodeStringSlice(s []string) error { - if s == nil { - return e.EncodeNil() - } - if err := e.EncodeArrayLen(len(s)); 
err != nil { - return err - } - for _, v := range s { - if err := e.EncodeString(v); err != nil { - return err - } - } - return nil -} - -func encodeSliceValue(e *Encoder, v reflect.Value) error { - if v.IsNil() { - return e.EncodeNil() - } - return encodeArrayValue(e, v) -} - -func encodeArrayValue(e *Encoder, v reflect.Value) error { - l := v.Len() - if err := e.EncodeArrayLen(l); err != nil { - return err - } - for i := 0; i < l; i++ { - if err := e.EncodeValue(v.Index(i)); err != nil { - return err - } - } - return nil -} diff --git a/vendor/github.com/vmihailenco/msgpack/v5/encode_value.go b/vendor/github.com/vmihailenco/msgpack/v5/encode_value.go deleted file mode 100644 index 48cf489f..00000000 --- a/vendor/github.com/vmihailenco/msgpack/v5/encode_value.go +++ /dev/null @@ -1,245 +0,0 @@ -package msgpack - -import ( - "encoding" - "fmt" - "reflect" -) - -var valueEncoders []encoderFunc - -//nolint:gochecknoinits -func init() { - valueEncoders = []encoderFunc{ - reflect.Bool: encodeBoolValue, - reflect.Int: encodeIntValue, - reflect.Int8: encodeInt8CondValue, - reflect.Int16: encodeInt16CondValue, - reflect.Int32: encodeInt32CondValue, - reflect.Int64: encodeInt64CondValue, - reflect.Uint: encodeUintValue, - reflect.Uint8: encodeUint8CondValue, - reflect.Uint16: encodeUint16CondValue, - reflect.Uint32: encodeUint32CondValue, - reflect.Uint64: encodeUint64CondValue, - reflect.Float32: encodeFloat32Value, - reflect.Float64: encodeFloat64Value, - reflect.Complex64: encodeUnsupportedValue, - reflect.Complex128: encodeUnsupportedValue, - reflect.Array: encodeArrayValue, - reflect.Chan: encodeUnsupportedValue, - reflect.Func: encodeUnsupportedValue, - reflect.Interface: encodeInterfaceValue, - reflect.Map: encodeMapValue, - reflect.Ptr: encodeUnsupportedValue, - reflect.Slice: encodeSliceValue, - reflect.String: encodeStringValue, - reflect.Struct: encodeStructValue, - reflect.UnsafePointer: encodeUnsupportedValue, - } -} - -func getEncoder(typ reflect.Type) 
encoderFunc { - if v, ok := typeEncMap.Load(typ); ok { - return v.(encoderFunc) - } - fn := _getEncoder(typ) - typeEncMap.Store(typ, fn) - return fn -} - -func _getEncoder(typ reflect.Type) encoderFunc { - kind := typ.Kind() - - if kind == reflect.Ptr { - if _, ok := typeEncMap.Load(typ.Elem()); ok { - return ptrEncoderFunc(typ) - } - } - - if typ.Implements(customEncoderType) { - return encodeCustomValue - } - if typ.Implements(marshalerType) { - return marshalValue - } - if typ.Implements(binaryMarshalerType) { - return marshalBinaryValue - } - if typ.Implements(textMarshalerType) { - return marshalTextValue - } - - // Addressable struct field value. - if kind != reflect.Ptr { - ptr := reflect.PtrTo(typ) - if ptr.Implements(customEncoderType) { - return encodeCustomValuePtr - } - if ptr.Implements(marshalerType) { - return marshalValuePtr - } - if ptr.Implements(binaryMarshalerType) { - return marshalBinaryValueAddr - } - if ptr.Implements(textMarshalerType) { - return marshalTextValueAddr - } - } - - if typ == errorType { - return encodeErrorValue - } - - switch kind { - case reflect.Ptr: - return ptrEncoderFunc(typ) - case reflect.Slice: - elem := typ.Elem() - if elem.Kind() == reflect.Uint8 { - return encodeByteSliceValue - } - if elem == stringType { - return encodeStringSliceValue - } - case reflect.Array: - if typ.Elem().Kind() == reflect.Uint8 { - return encodeByteArrayValue - } - case reflect.Map: - if typ.Key() == stringType { - switch typ.Elem() { - case stringType: - return encodeMapStringStringValue - case interfaceType: - return encodeMapStringInterfaceValue - } - } - } - - return valueEncoders[kind] -} - -func ptrEncoderFunc(typ reflect.Type) encoderFunc { - encoder := getEncoder(typ.Elem()) - return func(e *Encoder, v reflect.Value) error { - if v.IsNil() { - return e.EncodeNil() - } - return encoder(e, v.Elem()) - } -} - -func encodeCustomValuePtr(e *Encoder, v reflect.Value) error { - if !v.CanAddr() { - return fmt.Errorf("msgpack: 
Encode(non-addressable %T)", v.Interface()) - } - encoder := v.Addr().Interface().(CustomEncoder) - return encoder.EncodeMsgpack(e) -} - -func encodeCustomValue(e *Encoder, v reflect.Value) error { - if nilable(v.Kind()) && v.IsNil() { - return e.EncodeNil() - } - - encoder := v.Interface().(CustomEncoder) - return encoder.EncodeMsgpack(e) -} - -func marshalValuePtr(e *Encoder, v reflect.Value) error { - if !v.CanAddr() { - return fmt.Errorf("msgpack: Encode(non-addressable %T)", v.Interface()) - } - return marshalValue(e, v.Addr()) -} - -func marshalValue(e *Encoder, v reflect.Value) error { - if nilable(v.Kind()) && v.IsNil() { - return e.EncodeNil() - } - - marshaler := v.Interface().(Marshaler) - b, err := marshaler.MarshalMsgpack() - if err != nil { - return err - } - _, err = e.w.Write(b) - return err -} - -func encodeBoolValue(e *Encoder, v reflect.Value) error { - return e.EncodeBool(v.Bool()) -} - -func encodeInterfaceValue(e *Encoder, v reflect.Value) error { - if v.IsNil() { - return e.EncodeNil() - } - return e.EncodeValue(v.Elem()) -} - -func encodeErrorValue(e *Encoder, v reflect.Value) error { - if v.IsNil() { - return e.EncodeNil() - } - return e.EncodeString(v.Interface().(error).Error()) -} - -func encodeUnsupportedValue(e *Encoder, v reflect.Value) error { - return fmt.Errorf("msgpack: Encode(unsupported %s)", v.Type()) -} - -func nilable(kind reflect.Kind) bool { - switch kind { - case reflect.Chan, reflect.Func, reflect.Interface, reflect.Map, reflect.Ptr, reflect.Slice: - return true - } - return false -} - -//------------------------------------------------------------------------------ - -func marshalBinaryValueAddr(e *Encoder, v reflect.Value) error { - if !v.CanAddr() { - return fmt.Errorf("msgpack: Encode(non-addressable %T)", v.Interface()) - } - return marshalBinaryValue(e, v.Addr()) -} - -func marshalBinaryValue(e *Encoder, v reflect.Value) error { - if nilable(v.Kind()) && v.IsNil() { - return e.EncodeNil() - } - - marshaler := 
v.Interface().(encoding.BinaryMarshaler) - data, err := marshaler.MarshalBinary() - if err != nil { - return err - } - - return e.EncodeBytes(data) -} - -//------------------------------------------------------------------------------ - -func marshalTextValueAddr(e *Encoder, v reflect.Value) error { - if !v.CanAddr() { - return fmt.Errorf("msgpack: Encode(non-addressable %T)", v.Interface()) - } - return marshalTextValue(e, v.Addr()) -} - -func marshalTextValue(e *Encoder, v reflect.Value) error { - if nilable(v.Kind()) && v.IsNil() { - return e.EncodeNil() - } - - marshaler := v.Interface().(encoding.TextMarshaler) - data, err := marshaler.MarshalText() - if err != nil { - return err - } - - return e.EncodeBytes(data) -} diff --git a/vendor/github.com/vmihailenco/msgpack/v5/ext.go b/vendor/github.com/vmihailenco/msgpack/v5/ext.go deleted file mode 100644 index 76e11603..00000000 --- a/vendor/github.com/vmihailenco/msgpack/v5/ext.go +++ /dev/null @@ -1,303 +0,0 @@ -package msgpack - -import ( - "fmt" - "math" - "reflect" - - "github.com/vmihailenco/msgpack/v5/msgpcode" -) - -type extInfo struct { - Type reflect.Type - Decoder func(d *Decoder, v reflect.Value, extLen int) error -} - -var extTypes = make(map[int8]*extInfo) - -type MarshalerUnmarshaler interface { - Marshaler - Unmarshaler -} - -func RegisterExt(extID int8, value MarshalerUnmarshaler) { - RegisterExtEncoder(extID, value, func(e *Encoder, v reflect.Value) ([]byte, error) { - marshaler := v.Interface().(Marshaler) - return marshaler.MarshalMsgpack() - }) - RegisterExtDecoder(extID, value, func(d *Decoder, v reflect.Value, extLen int) error { - b, err := d.readN(extLen) - if err != nil { - return err - } - return v.Interface().(Unmarshaler).UnmarshalMsgpack(b) - }) -} - -func UnregisterExt(extID int8) { - unregisterExtEncoder(extID) - unregisterExtDecoder(extID) -} - -func RegisterExtEncoder( - extID int8, - value interface{}, - encoder func(enc *Encoder, v reflect.Value) ([]byte, error), -) { - 
unregisterExtEncoder(extID) - - typ := reflect.TypeOf(value) - extEncoder := makeExtEncoder(extID, typ, encoder) - typeEncMap.Store(extID, typ) - typeEncMap.Store(typ, extEncoder) - if typ.Kind() == reflect.Ptr { - typeEncMap.Store(typ.Elem(), makeExtEncoderAddr(extEncoder)) - } -} - -func unregisterExtEncoder(extID int8) { - t, ok := typeEncMap.Load(extID) - if !ok { - return - } - typeEncMap.Delete(extID) - typ := t.(reflect.Type) - typeEncMap.Delete(typ) - if typ.Kind() == reflect.Ptr { - typeEncMap.Delete(typ.Elem()) - } -} - -func makeExtEncoder( - extID int8, - typ reflect.Type, - encoder func(enc *Encoder, v reflect.Value) ([]byte, error), -) encoderFunc { - nilable := typ.Kind() == reflect.Ptr - - return func(e *Encoder, v reflect.Value) error { - if nilable && v.IsNil() { - return e.EncodeNil() - } - - b, err := encoder(e, v) - if err != nil { - return err - } - - if err := e.EncodeExtHeader(extID, len(b)); err != nil { - return err - } - - return e.write(b) - } -} - -func makeExtEncoderAddr(extEncoder encoderFunc) encoderFunc { - return func(e *Encoder, v reflect.Value) error { - if !v.CanAddr() { - return fmt.Errorf("msgpack: Decode(nonaddressable %T)", v.Interface()) - } - return extEncoder(e, v.Addr()) - } -} - -func RegisterExtDecoder( - extID int8, - value interface{}, - decoder func(dec *Decoder, v reflect.Value, extLen int) error, -) { - unregisterExtDecoder(extID) - - typ := reflect.TypeOf(value) - extDecoder := makeExtDecoder(extID, typ, decoder) - extTypes[extID] = &extInfo{ - Type: typ, - Decoder: decoder, - } - - typeDecMap.Store(extID, typ) - typeDecMap.Store(typ, extDecoder) - if typ.Kind() == reflect.Ptr { - typeDecMap.Store(typ.Elem(), makeExtDecoderAddr(extDecoder)) - } -} - -func unregisterExtDecoder(extID int8) { - t, ok := typeDecMap.Load(extID) - if !ok { - return - } - typeDecMap.Delete(extID) - delete(extTypes, extID) - typ := t.(reflect.Type) - typeDecMap.Delete(typ) - if typ.Kind() == reflect.Ptr { - typeDecMap.Delete(typ.Elem()) 
- } -} - -func makeExtDecoder( - wantedExtID int8, - typ reflect.Type, - decoder func(d *Decoder, v reflect.Value, extLen int) error, -) decoderFunc { - return nilAwareDecoder(typ, func(d *Decoder, v reflect.Value) error { - extID, extLen, err := d.DecodeExtHeader() - if err != nil { - return err - } - if extID != wantedExtID { - return fmt.Errorf("msgpack: got ext type=%d, wanted %d", extID, wantedExtID) - } - return decoder(d, v, extLen) - }) -} - -func makeExtDecoderAddr(extDecoder decoderFunc) decoderFunc { - return func(d *Decoder, v reflect.Value) error { - if !v.CanAddr() { - return fmt.Errorf("msgpack: Decode(nonaddressable %T)", v.Interface()) - } - return extDecoder(d, v.Addr()) - } -} - -func (e *Encoder) EncodeExtHeader(extID int8, extLen int) error { - if err := e.encodeExtLen(extLen); err != nil { - return err - } - if err := e.w.WriteByte(byte(extID)); err != nil { - return err - } - return nil -} - -func (e *Encoder) encodeExtLen(l int) error { - switch l { - case 1: - return e.writeCode(msgpcode.FixExt1) - case 2: - return e.writeCode(msgpcode.FixExt2) - case 4: - return e.writeCode(msgpcode.FixExt4) - case 8: - return e.writeCode(msgpcode.FixExt8) - case 16: - return e.writeCode(msgpcode.FixExt16) - } - if l <= math.MaxUint8 { - return e.write1(msgpcode.Ext8, uint8(l)) - } - if l <= math.MaxUint16 { - return e.write2(msgpcode.Ext16, uint16(l)) - } - return e.write4(msgpcode.Ext32, uint32(l)) -} - -func (d *Decoder) DecodeExtHeader() (extID int8, extLen int, err error) { - c, err := d.readCode() - if err != nil { - return - } - return d.extHeader(c) -} - -func (d *Decoder) extHeader(c byte) (int8, int, error) { - extLen, err := d.parseExtLen(c) - if err != nil { - return 0, 0, err - } - - extID, err := d.readCode() - if err != nil { - return 0, 0, err - } - - return int8(extID), extLen, nil -} - -func (d *Decoder) parseExtLen(c byte) (int, error) { - switch c { - case msgpcode.FixExt1: - return 1, nil - case msgpcode.FixExt2: - return 2, nil - case 
msgpcode.FixExt4: - return 4, nil - case msgpcode.FixExt8: - return 8, nil - case msgpcode.FixExt16: - return 16, nil - case msgpcode.Ext8: - n, err := d.uint8() - return int(n), err - case msgpcode.Ext16: - n, err := d.uint16() - return int(n), err - case msgpcode.Ext32: - n, err := d.uint32() - return int(n), err - default: - return 0, fmt.Errorf("msgpack: invalid code=%x decoding ext len", c) - } -} - -func (d *Decoder) decodeInterfaceExt(c byte) (interface{}, error) { - extID, extLen, err := d.extHeader(c) - if err != nil { - return nil, err - } - - info, ok := extTypes[extID] - if !ok { - return nil, fmt.Errorf("msgpack: unknown ext id=%d", extID) - } - - v := reflect.New(info.Type).Elem() - if nilable(v.Kind()) && v.IsNil() { - v.Set(reflect.New(info.Type.Elem())) - } - - if err := info.Decoder(d, v, extLen); err != nil { - return nil, err - } - - return v.Interface(), nil -} - -func (d *Decoder) skipExt(c byte) error { - n, err := d.parseExtLen(c) - if err != nil { - return err - } - return d.skipN(n + 1) -} - -func (d *Decoder) skipExtHeader(c byte) error { - // Read ext type. - _, err := d.readCode() - if err != nil { - return err - } - // Read ext body len. 
- for i := 0; i < extHeaderLen(c); i++ { - _, err := d.readCode() - if err != nil { - return err - } - } - return nil -} - -func extHeaderLen(c byte) int { - switch c { - case msgpcode.Ext8: - return 1 - case msgpcode.Ext16: - return 2 - case msgpcode.Ext32: - return 4 - } - return 0 -} diff --git a/vendor/github.com/vmihailenco/msgpack/v5/intern.go b/vendor/github.com/vmihailenco/msgpack/v5/intern.go deleted file mode 100644 index be0316a8..00000000 --- a/vendor/github.com/vmihailenco/msgpack/v5/intern.go +++ /dev/null @@ -1,238 +0,0 @@ -package msgpack - -import ( - "fmt" - "math" - "reflect" - - "github.com/vmihailenco/msgpack/v5/msgpcode" -) - -const ( - minInternedStringLen = 3 - maxDictLen = math.MaxUint16 -) - -var internedStringExtID = int8(math.MinInt8) - -func init() { - extTypes[internedStringExtID] = &extInfo{ - Type: stringType, - Decoder: decodeInternedStringExt, - } -} - -func decodeInternedStringExt(d *Decoder, v reflect.Value, extLen int) error { - idx, err := d.decodeInternedStringIndex(extLen) - if err != nil { - return err - } - - s, err := d.internedStringAtIndex(idx) - if err != nil { - return err - } - - v.SetString(s) - return nil -} - -//------------------------------------------------------------------------------ - -func encodeInternedInterfaceValue(e *Encoder, v reflect.Value) error { - if v.IsNil() { - return e.EncodeNil() - } - - v = v.Elem() - if v.Kind() == reflect.String { - return e.encodeInternedString(v.String(), true) - } - return e.EncodeValue(v) -} - -func encodeInternedStringValue(e *Encoder, v reflect.Value) error { - return e.encodeInternedString(v.String(), true) -} - -func (e *Encoder) encodeInternedString(s string, intern bool) error { - // Interned string takes at least 3 bytes. Plain string 1 byte + string len. 
- if len(s) >= minInternedStringLen { - if idx, ok := e.dict[s]; ok { - return e.encodeInternedStringIndex(idx) - } - - if intern && len(e.dict) < maxDictLen { - if e.dict == nil { - e.dict = make(map[string]int) - } - idx := len(e.dict) - e.dict[s] = idx - } - } - - return e.encodeNormalString(s) -} - -func (e *Encoder) encodeInternedStringIndex(idx int) error { - if idx <= math.MaxUint8 { - if err := e.writeCode(msgpcode.FixExt1); err != nil { - return err - } - return e.write1(byte(internedStringExtID), uint8(idx)) - } - - if idx <= math.MaxUint16 { - if err := e.writeCode(msgpcode.FixExt2); err != nil { - return err - } - return e.write2(byte(internedStringExtID), uint16(idx)) - } - - if uint64(idx) <= math.MaxUint32 { - if err := e.writeCode(msgpcode.FixExt4); err != nil { - return err - } - return e.write4(byte(internedStringExtID), uint32(idx)) - } - - return fmt.Errorf("msgpack: interned string index=%d is too large", idx) -} - -//------------------------------------------------------------------------------ - -func decodeInternedInterfaceValue(d *Decoder, v reflect.Value) error { - s, err := d.decodeInternedString(true) - if err == nil { - v.Set(reflect.ValueOf(s)) - return nil - } - if err != nil { - if _, ok := err.(unexpectedCodeError); !ok { - return err - } - } - - if err := d.s.UnreadByte(); err != nil { - return err - } - return decodeInterfaceValue(d, v) -} - -func decodeInternedStringValue(d *Decoder, v reflect.Value) error { - s, err := d.decodeInternedString(true) - if err != nil { - return err - } - - v.SetString(s) - return nil -} - -func (d *Decoder) decodeInternedString(intern bool) (string, error) { - c, err := d.readCode() - if err != nil { - return "", err - } - - if msgpcode.IsFixedString(c) { - n := int(c & msgpcode.FixedStrMask) - return d.decodeInternedStringWithLen(n, intern) - } - - switch c { - case msgpcode.Nil: - return "", nil - case msgpcode.FixExt1, msgpcode.FixExt2, msgpcode.FixExt4: - typeID, extLen, err := d.extHeader(c) - 
if err != nil { - return "", err - } - if typeID != internedStringExtID { - err := fmt.Errorf("msgpack: got ext type=%d, wanted %d", - typeID, internedStringExtID) - return "", err - } - - idx, err := d.decodeInternedStringIndex(extLen) - if err != nil { - return "", err - } - - return d.internedStringAtIndex(idx) - case msgpcode.Str8, msgpcode.Bin8: - n, err := d.uint8() - if err != nil { - return "", err - } - return d.decodeInternedStringWithLen(int(n), intern) - case msgpcode.Str16, msgpcode.Bin16: - n, err := d.uint16() - if err != nil { - return "", err - } - return d.decodeInternedStringWithLen(int(n), intern) - case msgpcode.Str32, msgpcode.Bin32: - n, err := d.uint32() - if err != nil { - return "", err - } - return d.decodeInternedStringWithLen(int(n), intern) - } - - return "", unexpectedCodeError{ - code: c, - hint: "interned string", - } -} - -func (d *Decoder) decodeInternedStringIndex(extLen int) (int, error) { - switch extLen { - case 1: - n, err := d.uint8() - if err != nil { - return 0, err - } - return int(n), nil - case 2: - n, err := d.uint16() - if err != nil { - return 0, err - } - return int(n), nil - case 4: - n, err := d.uint32() - if err != nil { - return 0, err - } - return int(n), nil - } - - err := fmt.Errorf("msgpack: unsupported ext len=%d decoding interned string", extLen) - return 0, err -} - -func (d *Decoder) internedStringAtIndex(idx int) (string, error) { - if idx >= len(d.dict) { - err := fmt.Errorf("msgpack: interned string at index=%d does not exist", idx) - return "", err - } - return d.dict[idx], nil -} - -func (d *Decoder) decodeInternedStringWithLen(n int, intern bool) (string, error) { - if n <= 0 { - return "", nil - } - - s, err := d.stringWithLen(n) - if err != nil { - return "", err - } - - if intern && len(s) >= minInternedStringLen && len(d.dict) < maxDictLen { - d.dict = append(d.dict, s) - } - - return s, nil -} diff --git a/vendor/github.com/vmihailenco/msgpack/v5/msgpack.go 
b/vendor/github.com/vmihailenco/msgpack/v5/msgpack.go deleted file mode 100644 index 4db2fa2c..00000000 --- a/vendor/github.com/vmihailenco/msgpack/v5/msgpack.go +++ /dev/null @@ -1,52 +0,0 @@ -package msgpack - -import "fmt" - -type Marshaler interface { - MarshalMsgpack() ([]byte, error) -} - -type Unmarshaler interface { - UnmarshalMsgpack([]byte) error -} - -type CustomEncoder interface { - EncodeMsgpack(*Encoder) error -} - -type CustomDecoder interface { - DecodeMsgpack(*Decoder) error -} - -//------------------------------------------------------------------------------ - -type RawMessage []byte - -var ( - _ CustomEncoder = (RawMessage)(nil) - _ CustomDecoder = (*RawMessage)(nil) -) - -func (m RawMessage) EncodeMsgpack(enc *Encoder) error { - return enc.write(m) -} - -func (m *RawMessage) DecodeMsgpack(dec *Decoder) error { - msg, err := dec.DecodeRaw() - if err != nil { - return err - } - *m = msg - return nil -} - -//------------------------------------------------------------------------------ - -type unexpectedCodeError struct { - code byte - hint string -} - -func (err unexpectedCodeError) Error() string { - return fmt.Sprintf("msgpack: unexpected code=%x decoding %s", err.code, err.hint) -} diff --git a/vendor/github.com/vmihailenco/msgpack/v5/msgpcode/msgpcode.go b/vendor/github.com/vmihailenco/msgpack/v5/msgpcode/msgpcode.go deleted file mode 100644 index e35389cc..00000000 --- a/vendor/github.com/vmihailenco/msgpack/v5/msgpcode/msgpcode.go +++ /dev/null @@ -1,88 +0,0 @@ -package msgpcode - -var ( - PosFixedNumHigh byte = 0x7f - NegFixedNumLow byte = 0xe0 - - Nil byte = 0xc0 - - False byte = 0xc2 - True byte = 0xc3 - - Float byte = 0xca - Double byte = 0xcb - - Uint8 byte = 0xcc - Uint16 byte = 0xcd - Uint32 byte = 0xce - Uint64 byte = 0xcf - - Int8 byte = 0xd0 - Int16 byte = 0xd1 - Int32 byte = 0xd2 - Int64 byte = 0xd3 - - FixedStrLow byte = 0xa0 - FixedStrHigh byte = 0xbf - FixedStrMask byte = 0x1f - Str8 byte = 0xd9 - Str16 byte = 0xda - Str32 
byte = 0xdb - - Bin8 byte = 0xc4 - Bin16 byte = 0xc5 - Bin32 byte = 0xc6 - - FixedArrayLow byte = 0x90 - FixedArrayHigh byte = 0x9f - FixedArrayMask byte = 0xf - Array16 byte = 0xdc - Array32 byte = 0xdd - - FixedMapLow byte = 0x80 - FixedMapHigh byte = 0x8f - FixedMapMask byte = 0xf - Map16 byte = 0xde - Map32 byte = 0xdf - - FixExt1 byte = 0xd4 - FixExt2 byte = 0xd5 - FixExt4 byte = 0xd6 - FixExt8 byte = 0xd7 - FixExt16 byte = 0xd8 - Ext8 byte = 0xc7 - Ext16 byte = 0xc8 - Ext32 byte = 0xc9 -) - -func IsFixedNum(c byte) bool { - return c <= PosFixedNumHigh || c >= NegFixedNumLow -} - -func IsFixedMap(c byte) bool { - return c >= FixedMapLow && c <= FixedMapHigh -} - -func IsFixedArray(c byte) bool { - return c >= FixedArrayLow && c <= FixedArrayHigh -} - -func IsFixedString(c byte) bool { - return c >= FixedStrLow && c <= FixedStrHigh -} - -func IsString(c byte) bool { - return IsFixedString(c) || c == Str8 || c == Str16 || c == Str32 -} - -func IsBin(c byte) bool { - return c == Bin8 || c == Bin16 || c == Bin32 -} - -func IsFixedExt(c byte) bool { - return c >= FixExt1 && c <= FixExt16 -} - -func IsExt(c byte) bool { - return IsFixedExt(c) || c == Ext8 || c == Ext16 || c == Ext32 -} diff --git a/vendor/github.com/vmihailenco/msgpack/v5/package.json b/vendor/github.com/vmihailenco/msgpack/v5/package.json deleted file mode 100644 index 298910d4..00000000 --- a/vendor/github.com/vmihailenco/msgpack/v5/package.json +++ /dev/null @@ -1,4 +0,0 @@ -{ - "name": "msgpack", - "version": "5.3.5" -} diff --git a/vendor/github.com/vmihailenco/msgpack/v5/safe.go b/vendor/github.com/vmihailenco/msgpack/v5/safe.go deleted file mode 100644 index 8352c9dc..00000000 --- a/vendor/github.com/vmihailenco/msgpack/v5/safe.go +++ /dev/null @@ -1,13 +0,0 @@ -// +build appengine - -package msgpack - -// bytesToString converts byte slice to string. -func bytesToString(b []byte) string { - return string(b) -} - -// stringToBytes converts string to byte slice. 
-func stringToBytes(s string) []byte { - return []byte(s) -} diff --git a/vendor/github.com/vmihailenco/msgpack/v5/time.go b/vendor/github.com/vmihailenco/msgpack/v5/time.go deleted file mode 100644 index 44566ec0..00000000 --- a/vendor/github.com/vmihailenco/msgpack/v5/time.go +++ /dev/null @@ -1,145 +0,0 @@ -package msgpack - -import ( - "encoding/binary" - "fmt" - "reflect" - "time" - - "github.com/vmihailenco/msgpack/v5/msgpcode" -) - -var timeExtID int8 = -1 - -func init() { - RegisterExtEncoder(timeExtID, time.Time{}, timeEncoder) - RegisterExtDecoder(timeExtID, time.Time{}, timeDecoder) -} - -func timeEncoder(e *Encoder, v reflect.Value) ([]byte, error) { - return e.encodeTime(v.Interface().(time.Time)), nil -} - -func timeDecoder(d *Decoder, v reflect.Value, extLen int) error { - tm, err := d.decodeTime(extLen) - if err != nil { - return err - } - - ptr := v.Addr().Interface().(*time.Time) - *ptr = tm - - return nil -} - -func (e *Encoder) EncodeTime(tm time.Time) error { - b := e.encodeTime(tm) - if err := e.encodeExtLen(len(b)); err != nil { - return err - } - if err := e.w.WriteByte(byte(timeExtID)); err != nil { - return err - } - return e.write(b) -} - -func (e *Encoder) encodeTime(tm time.Time) []byte { - if e.timeBuf == nil { - e.timeBuf = make([]byte, 12) - } - - secs := uint64(tm.Unix()) - if secs>>34 == 0 { - data := uint64(tm.Nanosecond())<<34 | secs - - if data&0xffffffff00000000 == 0 { - b := e.timeBuf[:4] - binary.BigEndian.PutUint32(b, uint32(data)) - return b - } - - b := e.timeBuf[:8] - binary.BigEndian.PutUint64(b, data) - return b - } - - b := e.timeBuf[:12] - binary.BigEndian.PutUint32(b, uint32(tm.Nanosecond())) - binary.BigEndian.PutUint64(b[4:], secs) - return b -} - -func (d *Decoder) DecodeTime() (time.Time, error) { - c, err := d.readCode() - if err != nil { - return time.Time{}, err - } - - // Legacy format. 
- if c == msgpcode.FixedArrayLow|2 { - sec, err := d.DecodeInt64() - if err != nil { - return time.Time{}, err - } - - nsec, err := d.DecodeInt64() - if err != nil { - return time.Time{}, err - } - - return time.Unix(sec, nsec), nil - } - - if msgpcode.IsString(c) { - s, err := d.string(c) - if err != nil { - return time.Time{}, err - } - return time.Parse(time.RFC3339Nano, s) - } - - extID, extLen, err := d.extHeader(c) - if err != nil { - return time.Time{}, err - } - - if extID != timeExtID { - return time.Time{}, fmt.Errorf("msgpack: invalid time ext id=%d", extID) - } - - tm, err := d.decodeTime(extLen) - if err != nil { - return tm, err - } - - if tm.IsZero() { - // Zero time does not have timezone information. - return tm.UTC(), nil - } - return tm, nil -} - -func (d *Decoder) decodeTime(extLen int) (time.Time, error) { - b, err := d.readN(extLen) - if err != nil { - return time.Time{}, err - } - - switch len(b) { - case 4: - sec := binary.BigEndian.Uint32(b) - return time.Unix(int64(sec), 0), nil - case 8: - sec := binary.BigEndian.Uint64(b) - nsec := int64(sec >> 34) - sec &= 0x00000003ffffffff - return time.Unix(int64(sec), nsec), nil - case 12: - nsec := binary.BigEndian.Uint32(b) - sec := binary.BigEndian.Uint64(b[4:]) - return time.Unix(int64(sec), int64(nsec)), nil - default: - err = fmt.Errorf("msgpack: invalid ext len=%d decoding time", extLen) - return time.Time{}, err - } -} diff --git a/vendor/github.com/vmihailenco/msgpack/v5/types.go b/vendor/github.com/vmihailenco/msgpack/v5/types.go deleted file mode 100644 index 69aca611..00000000 --- a/vendor/github.com/vmihailenco/msgpack/v5/types.go +++ /dev/null @@ -1,407 +0,0 @@ -package msgpack - -import ( - "encoding" - "fmt" - "log" - "reflect" - "sync" - - "github.com/vmihailenco/tagparser/v2" -) - -var errorType = reflect.TypeOf((*error)(nil)).Elem() - -var ( - customEncoderType = reflect.TypeOf((*CustomEncoder)(nil)).Elem() - customDecoderType = reflect.TypeOf((*CustomDecoder)(nil)).Elem() -) - 
-var ( - marshalerType = reflect.TypeOf((*Marshaler)(nil)).Elem() - unmarshalerType = reflect.TypeOf((*Unmarshaler)(nil)).Elem() -) - -var ( - binaryMarshalerType = reflect.TypeOf((*encoding.BinaryMarshaler)(nil)).Elem() - binaryUnmarshalerType = reflect.TypeOf((*encoding.BinaryUnmarshaler)(nil)).Elem() -) - -var ( - textMarshalerType = reflect.TypeOf((*encoding.TextMarshaler)(nil)).Elem() - textUnmarshalerType = reflect.TypeOf((*encoding.TextUnmarshaler)(nil)).Elem() -) - -type ( - encoderFunc func(*Encoder, reflect.Value) error - decoderFunc func(*Decoder, reflect.Value) error -) - -var ( - typeEncMap sync.Map - typeDecMap sync.Map -) - -// Register registers encoder and decoder functions for a value. -// This is low level API and in most cases you should prefer implementing -// CustomEncoder/CustomDecoder or Marshaler/Unmarshaler interfaces. -func Register(value interface{}, enc encoderFunc, dec decoderFunc) { - typ := reflect.TypeOf(value) - if enc != nil { - typeEncMap.Store(typ, enc) - } - if dec != nil { - typeDecMap.Store(typ, dec) - } -} - -//------------------------------------------------------------------------------ - -const defaultStructTag = "msgpack" - -var structs = newStructCache() - -type structCache struct { - m sync.Map -} - -type structCacheKey struct { - tag string - typ reflect.Type -} - -func newStructCache() *structCache { - return new(structCache) -} - -func (m *structCache) Fields(typ reflect.Type, tag string) *fields { - key := structCacheKey{tag: tag, typ: typ} - - if v, ok := m.m.Load(key); ok { - return v.(*fields) - } - - fs := getFields(typ, tag) - m.m.Store(key, fs) - - return fs -} - -//------------------------------------------------------------------------------ - -type field struct { - name string - index []int - omitEmpty bool - encoder encoderFunc - decoder decoderFunc -} - -func (f *field) Omit(strct reflect.Value, forced bool) bool { - v, ok := fieldByIndex(strct, f.index) - if !ok { - return true - } - return (f.omitEmpty 
|| forced) && isEmptyValue(v) -} - -func (f *field) EncodeValue(e *Encoder, strct reflect.Value) error { - v, ok := fieldByIndex(strct, f.index) - if !ok { - return e.EncodeNil() - } - return f.encoder(e, v) -} - -func (f *field) DecodeValue(d *Decoder, strct reflect.Value) error { - v := fieldByIndexAlloc(strct, f.index) - return f.decoder(d, v) -} - -//------------------------------------------------------------------------------ - -type fields struct { - Type reflect.Type - Map map[string]*field - List []*field - AsArray bool - - hasOmitEmpty bool -} - -func newFields(typ reflect.Type) *fields { - return &fields{ - Type: typ, - Map: make(map[string]*field, typ.NumField()), - List: make([]*field, 0, typ.NumField()), - } -} - -func (fs *fields) Add(field *field) { - fs.warnIfFieldExists(field.name) - fs.Map[field.name] = field - fs.List = append(fs.List, field) - if field.omitEmpty { - fs.hasOmitEmpty = true - } -} - -func (fs *fields) warnIfFieldExists(name string) { - if _, ok := fs.Map[name]; ok { - log.Printf("msgpack: %s already has field=%s", fs.Type, name) - } -} - -func (fs *fields) OmitEmpty(strct reflect.Value, forced bool) []*field { - if !fs.hasOmitEmpty && !forced { - return fs.List - } - - fields := make([]*field, 0, len(fs.List)) - - for _, f := range fs.List { - if !f.Omit(strct, forced) { - fields = append(fields, f) - } - } - - return fields -} - -func getFields(typ reflect.Type, fallbackTag string) *fields { - fs := newFields(typ) - - var omitEmpty bool - for i := 0; i < typ.NumField(); i++ { - f := typ.Field(i) - - tagStr := f.Tag.Get(defaultStructTag) - if tagStr == "" && fallbackTag != "" { - tagStr = f.Tag.Get(fallbackTag) - } - - tag := tagparser.Parse(tagStr) - if tag.Name == "-" { - continue - } - - if f.Name == "_msgpack" { - fs.AsArray = tag.HasOption("as_array") || tag.HasOption("asArray") - if tag.HasOption("omitempty") { - omitEmpty = true - } - } - - if f.PkgPath != "" && !f.Anonymous { - continue - } - - field := &field{ - name: 
tag.Name, - index: f.Index, - omitEmpty: omitEmpty || tag.HasOption("omitempty"), - } - - if tag.HasOption("intern") { - switch f.Type.Kind() { - case reflect.Interface: - field.encoder = encodeInternedInterfaceValue - field.decoder = decodeInternedInterfaceValue - case reflect.String: - field.encoder = encodeInternedStringValue - field.decoder = decodeInternedStringValue - default: - err := fmt.Errorf("msgpack: intern strings are not supported on %s", f.Type) - panic(err) - } - } else { - field.encoder = getEncoder(f.Type) - field.decoder = getDecoder(f.Type) - } - - if field.name == "" { - field.name = f.Name - } - - if f.Anonymous && !tag.HasOption("noinline") { - inline := tag.HasOption("inline") - if inline { - inlineFields(fs, f.Type, field, fallbackTag) - } else { - inline = shouldInline(fs, f.Type, field, fallbackTag) - } - - if inline { - if _, ok := fs.Map[field.name]; ok { - log.Printf("msgpack: %s already has field=%s", fs.Type, field.name) - } - fs.Map[field.name] = field - continue - } - } - - fs.Add(field) - - if alias, ok := tag.Options["alias"]; ok { - fs.warnIfFieldExists(alias) - fs.Map[alias] = field - } - } - return fs -} - -var ( - encodeStructValuePtr uintptr - decodeStructValuePtr uintptr -) - -//nolint:gochecknoinits -func init() { - encodeStructValuePtr = reflect.ValueOf(encodeStructValue).Pointer() - decodeStructValuePtr = reflect.ValueOf(decodeStructValue).Pointer() -} - -func inlineFields(fs *fields, typ reflect.Type, f *field, tag string) { - inlinedFields := getFields(typ, tag).List - for _, field := range inlinedFields { - if _, ok := fs.Map[field.name]; ok { - // Don't inline shadowed fields. - continue - } - field.index = append(f.index, field.index...) 
- fs.Add(field) - } -} - -func shouldInline(fs *fields, typ reflect.Type, f *field, tag string) bool { - var encoder encoderFunc - var decoder decoderFunc - - if typ.Kind() == reflect.Struct { - encoder = f.encoder - decoder = f.decoder - } else { - for typ.Kind() == reflect.Ptr { - typ = typ.Elem() - encoder = getEncoder(typ) - decoder = getDecoder(typ) - } - if typ.Kind() != reflect.Struct { - return false - } - } - - if reflect.ValueOf(encoder).Pointer() != encodeStructValuePtr { - return false - } - if reflect.ValueOf(decoder).Pointer() != decodeStructValuePtr { - return false - } - - inlinedFields := getFields(typ, tag).List - for _, field := range inlinedFields { - if _, ok := fs.Map[field.name]; ok { - // Don't auto inline if there are shadowed fields. - return false - } - } - - for _, field := range inlinedFields { - field.index = append(f.index, field.index...) - fs.Add(field) - } - return true -} - -type isZeroer interface { - IsZero() bool -} - -func isEmptyValue(v reflect.Value) bool { - kind := v.Kind() - - for kind == reflect.Interface { - if v.IsNil() { - return true - } - v = v.Elem() - kind = v.Kind() - } - - if z, ok := v.Interface().(isZeroer); ok { - return nilable(kind) && v.IsNil() || z.IsZero() - } - - switch kind { - case reflect.Array, reflect.Map, reflect.Slice, reflect.String: - return v.Len() == 0 - case reflect.Bool: - return !v.Bool() - case reflect.Int, reflect.Int8, reflect.Int16, reflect.Int32, reflect.Int64: - return v.Int() == 0 - case reflect.Uint, reflect.Uint8, reflect.Uint16, reflect.Uint32, reflect.Uint64, reflect.Uintptr: - return v.Uint() == 0 - case reflect.Float32, reflect.Float64: - return v.Float() == 0 - case reflect.Ptr: - return v.IsNil() - default: - return false - } -} - -func fieldByIndex(v reflect.Value, index []int) (_ reflect.Value, ok bool) { - if len(index) == 1 { - return v.Field(index[0]), true - } - - for i, idx := range index { - if i > 0 { - if v.Kind() == reflect.Ptr { - if v.IsNil() { - return v, false 
- } - v = v.Elem() - } - } - v = v.Field(idx) - } - - return v, true -} - -func fieldByIndexAlloc(v reflect.Value, index []int) reflect.Value { - if len(index) == 1 { - return v.Field(index[0]) - } - - for i, idx := range index { - if i > 0 { - var ok bool - v, ok = indirectNil(v) - if !ok { - return v - } - } - v = v.Field(idx) - } - - return v -} - -func indirectNil(v reflect.Value) (reflect.Value, bool) { - if v.Kind() == reflect.Ptr { - if v.IsNil() { - if !v.CanSet() { - return v, false - } - elemType := v.Type().Elem() - if elemType.Kind() != reflect.Struct { - return v, false - } - v.Set(reflect.New(elemType)) - } - v = v.Elem() - } - return v, true -} diff --git a/vendor/github.com/vmihailenco/msgpack/v5/unsafe.go b/vendor/github.com/vmihailenco/msgpack/v5/unsafe.go deleted file mode 100644 index 192ac479..00000000 --- a/vendor/github.com/vmihailenco/msgpack/v5/unsafe.go +++ /dev/null @@ -1,22 +0,0 @@ -// +build !appengine - -package msgpack - -import ( - "unsafe" -) - -// bytesToString converts byte slice to string. -func bytesToString(b []byte) string { - return *(*string)(unsafe.Pointer(&b)) -} - -// stringToBytes converts string to byte slice. -func stringToBytes(s string) []byte { - return *(*[]byte)(unsafe.Pointer( - &struct { - string - Cap int - }{s, len(s)}, - )) -} diff --git a/vendor/github.com/vmihailenco/msgpack/v5/version.go b/vendor/github.com/vmihailenco/msgpack/v5/version.go deleted file mode 100644 index 1d49337c..00000000 --- a/vendor/github.com/vmihailenco/msgpack/v5/version.go +++ /dev/null @@ -1,6 +0,0 @@ -package msgpack - -// Version is the current release version. 
-func Version() string { - return "5.3.5" -} diff --git a/vendor/github.com/vmihailenco/tagparser/v2/.travis.yml b/vendor/github.com/vmihailenco/tagparser/v2/.travis.yml deleted file mode 100644 index 7194cd00..00000000 --- a/vendor/github.com/vmihailenco/tagparser/v2/.travis.yml +++ /dev/null @@ -1,19 +0,0 @@ -dist: xenial -language: go - -go: - - 1.14.x - - 1.15.x - - tip - -matrix: - allow_failures: - - go: tip - -env: - - GO111MODULE=on - -go_import_path: github.com/vmihailenco/tagparser - -before_install: - - curl -sfL https://install.goreleaser.com/github.com/golangci/golangci-lint.sh | sh -s -- -b $(go env GOPATH)/bin v1.17.1 diff --git a/vendor/github.com/vmihailenco/tagparser/v2/LICENSE b/vendor/github.com/vmihailenco/tagparser/v2/LICENSE deleted file mode 100644 index 3fc93fdf..00000000 --- a/vendor/github.com/vmihailenco/tagparser/v2/LICENSE +++ /dev/null @@ -1,25 +0,0 @@ -Copyright (c) 2019 The github.com/vmihailenco/tagparser Authors. -All rights reserved. - -Redistribution and use in source and binary forms, with or without -modification, are permitted provided that the following conditions are -met: - - * Redistributions of source code must retain the above copyright -notice, this list of conditions and the following disclaimer. - * Redistributions in binary form must reproduce the above -copyright notice, this list of conditions and the following disclaimer -in the documentation and/or other materials provided with the -distribution. - -THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS -"AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT -LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR -A PARTICULAR PURPOSE ARE DISCLAIMED. 
IN NO EVENT SHALL THE COPYRIGHT -OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, -SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT -LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, -DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY -THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT -(INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE -OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. diff --git a/vendor/github.com/vmihailenco/tagparser/v2/Makefile b/vendor/github.com/vmihailenco/tagparser/v2/Makefile deleted file mode 100644 index 0b1b5959..00000000 --- a/vendor/github.com/vmihailenco/tagparser/v2/Makefile +++ /dev/null @@ -1,9 +0,0 @@ -all: - go test ./... - go test ./... -short -race - go test ./... -run=NONE -bench=. -benchmem - env GOOS=linux GOARCH=386 go test ./... - go vet ./... - go get github.com/gordonklaus/ineffassign - ineffassign . - golangci-lint run diff --git a/vendor/github.com/vmihailenco/tagparser/v2/README.md b/vendor/github.com/vmihailenco/tagparser/v2/README.md deleted file mode 100644 index c0259de5..00000000 --- a/vendor/github.com/vmihailenco/tagparser/v2/README.md +++ /dev/null @@ -1,24 +0,0 @@ -# Opinionated Golang tag parser - -[![Build Status](https://travis-ci.org/vmihailenco/tagparser.png?branch=master)](https://travis-ci.org/vmihailenco/tagparser) -[![GoDoc](https://godoc.org/github.com/vmihailenco/tagparser?status.svg)](https://godoc.org/github.com/vmihailenco/tagparser) - -## Installation - -Install: - -```shell -go get github.com/vmihailenco/tagparser/v2 -``` - -## Quickstart - -```go -func ExampleParse() { - tag := tagparser.Parse("some_name,key:value,key2:'complex value'") - fmt.Println(tag.Name) - fmt.Println(tag.Options) - // Output: some_name - // map[key:value key2:'complex value'] -} -``` diff --git a/vendor/github.com/vmihailenco/tagparser/v2/internal/parser/parser.go 
b/vendor/github.com/vmihailenco/tagparser/v2/internal/parser/parser.go deleted file mode 100644 index 21a9bc7f..00000000 --- a/vendor/github.com/vmihailenco/tagparser/v2/internal/parser/parser.go +++ /dev/null @@ -1,82 +0,0 @@ -package parser - -import ( - "bytes" - - "github.com/vmihailenco/tagparser/v2/internal" -) - -type Parser struct { - b []byte - i int -} - -func New(b []byte) *Parser { - return &Parser{ - b: b, - } -} - -func NewString(s string) *Parser { - return New(internal.StringToBytes(s)) -} - -func (p *Parser) Bytes() []byte { - return p.b[p.i:] -} - -func (p *Parser) Valid() bool { - return p.i < len(p.b) -} - -func (p *Parser) Read() byte { - if p.Valid() { - c := p.b[p.i] - p.Advance() - return c - } - return 0 -} - -func (p *Parser) Peek() byte { - if p.Valid() { - return p.b[p.i] - } - return 0 -} - -func (p *Parser) Advance() { - p.i++ -} - -func (p *Parser) Skip(skip byte) bool { - if p.Peek() == skip { - p.Advance() - return true - } - return false -} - -func (p *Parser) SkipBytes(skip []byte) bool { - if len(skip) > len(p.b[p.i:]) { - return false - } - if !bytes.Equal(p.b[p.i:p.i+len(skip)], skip) { - return false - } - p.i += len(skip) - return true -} - -func (p *Parser) ReadSep(sep byte) ([]byte, bool) { - ind := bytes.IndexByte(p.b[p.i:], sep) - if ind == -1 { - b := p.b[p.i:] - p.i = len(p.b) - return b, false - } - - b := p.b[p.i : p.i+ind] - p.i += ind + 1 - return b, true -} diff --git a/vendor/github.com/vmihailenco/tagparser/v2/internal/safe.go b/vendor/github.com/vmihailenco/tagparser/v2/internal/safe.go deleted file mode 100644 index 870fe541..00000000 --- a/vendor/github.com/vmihailenco/tagparser/v2/internal/safe.go +++ /dev/null @@ -1,11 +0,0 @@ -// +build appengine - -package internal - -func BytesToString(b []byte) string { - return string(b) -} - -func StringToBytes(s string) []byte { - return []byte(s) -} diff --git a/vendor/github.com/vmihailenco/tagparser/v2/internal/unsafe.go 
b/vendor/github.com/vmihailenco/tagparser/v2/internal/unsafe.go deleted file mode 100644 index f8bc18d9..00000000 --- a/vendor/github.com/vmihailenco/tagparser/v2/internal/unsafe.go +++ /dev/null @@ -1,22 +0,0 @@ -// +build !appengine - -package internal - -import ( - "unsafe" -) - -// BytesToString converts byte slice to string. -func BytesToString(b []byte) string { - return *(*string)(unsafe.Pointer(&b)) -} - -// StringToBytes converts string to byte slice. -func StringToBytes(s string) []byte { - return *(*[]byte)(unsafe.Pointer( - &struct { - string - Cap int - }{s, len(s)}, - )) -} diff --git a/vendor/github.com/vmihailenco/tagparser/v2/tagparser.go b/vendor/github.com/vmihailenco/tagparser/v2/tagparser.go deleted file mode 100644 index 5002e645..00000000 --- a/vendor/github.com/vmihailenco/tagparser/v2/tagparser.go +++ /dev/null @@ -1,166 +0,0 @@ -package tagparser - -import ( - "strings" - - "github.com/vmihailenco/tagparser/v2/internal/parser" -) - -type Tag struct { - Name string - Options map[string]string -} - -func (t *Tag) HasOption(name string) bool { - _, ok := t.Options[name] - return ok -} - -func Parse(s string) *Tag { - p := &tagParser{ - Parser: parser.NewString(s), - } - p.parseKey() - return &p.Tag -} - -type tagParser struct { - *parser.Parser - - Tag Tag - hasName bool - key string -} - -func (p *tagParser) setTagOption(key, value string) { - key = strings.TrimSpace(key) - value = strings.TrimSpace(value) - - if !p.hasName { - p.hasName = true - if key == "" { - p.Tag.Name = value - return - } - } - if p.Tag.Options == nil { - p.Tag.Options = make(map[string]string) - } - if key == "" { - p.Tag.Options[value] = "" - } else { - p.Tag.Options[key] = value - } -} - -func (p *tagParser) parseKey() { - p.key = "" - - var b []byte - for p.Valid() { - c := p.Read() - switch c { - case ',': - p.Skip(' ') - p.setTagOption("", string(b)) - p.parseKey() - return - case ':': - p.key = string(b) - p.parseValue() - return - case '\'': - 
p.parseQuotedValue() - return - default: - b = append(b, c) - } - } - - if len(b) > 0 { - p.setTagOption("", string(b)) - } -} - -func (p *tagParser) parseValue() { - const quote = '\'' - c := p.Peek() - if c == quote { - p.Skip(quote) - p.parseQuotedValue() - return - } - - var b []byte - for p.Valid() { - c = p.Read() - switch c { - case '\\': - b = append(b, p.Read()) - case '(': - b = append(b, c) - b = p.readBrackets(b) - case ',': - p.Skip(' ') - p.setTagOption(p.key, string(b)) - p.parseKey() - return - default: - b = append(b, c) - } - } - p.setTagOption(p.key, string(b)) -} - -func (p *tagParser) readBrackets(b []byte) []byte { - var lvl int -loop: - for p.Valid() { - c := p.Read() - switch c { - case '\\': - b = append(b, p.Read()) - case '(': - b = append(b, c) - lvl++ - case ')': - b = append(b, c) - lvl-- - if lvl < 0 { - break loop - } - default: - b = append(b, c) - } - } - return b -} - -func (p *tagParser) parseQuotedValue() { - const quote = '\'' - var b []byte - for p.Valid() { - bb, ok := p.ReadSep(quote) - if !ok { - b = append(b, bb...) - break - } - - // keep the escaped single-quote, and continue until we've found the - // one that isn't. - if len(bb) > 0 && bb[len(bb)-1] == '\\' { - b = append(b, bb[:len(bb)-1]...) - b = append(b, quote) - continue - } - - b = append(b, bb...) - break - } - - p.setTagOption(p.key, string(b)) - if p.Skip(',') { - p.Skip(' ') - } - p.parseKey() -} diff --git a/vendor/mellium.im/sasl/.gitignore b/vendor/mellium.im/sasl/.gitignore deleted file mode 100644 index ec356a1a..00000000 --- a/vendor/mellium.im/sasl/.gitignore +++ /dev/null @@ -1,6 +0,0 @@ -*.sw[op] -*.svg -*.xml -*.out -Gopkg.lock -vendor/ diff --git a/vendor/mellium.im/sasl/CHANGELOG.md b/vendor/mellium.im/sasl/CHANGELOG.md deleted file mode 100644 index f5eab4de..00000000 --- a/vendor/mellium.im/sasl/CHANGELOG.md +++ /dev/null @@ -1,28 +0,0 @@ -# Changelog - -All notable changes to this project will be documented in this file. 
- - -## v0.3.1 — 2022-12-28 - -### Fixed - -- Sometimes the nonce was not set on the SASL state machine, resulting in - authentication failing - - -## v0.3.0 — 2022-08-15 - -### Added - -- Support for tls-exporter channel binding method as defined in [RFC 9266] -- Support for fast XOR using SIMD/VSX on more architectures - - -### Fixed - -- Return an error if no tls-unique channel binding (CB) data is present in the - TLS connection state (or no connection state exists) and we use SCRAM with CB - - -[RFC 9266]: https://datatracker.ietf.org/doc/html/rfc9266 diff --git a/vendor/mellium.im/sasl/DCO b/vendor/mellium.im/sasl/DCO deleted file mode 100644 index 8201f992..00000000 --- a/vendor/mellium.im/sasl/DCO +++ /dev/null @@ -1,37 +0,0 @@ -Developer Certificate of Origin -Version 1.1 - -Copyright (C) 2004, 2006 The Linux Foundation and its contributors. -1 Letterman Drive -Suite D4700 -San Francisco, CA, 94129 - -Everyone is permitted to copy and distribute verbatim copies of this -license document, but changing it is not allowed. - - -Developer's Certificate of Origin 1.1 - -By making a contribution to this project, I certify that: - -(a) The contribution was created in whole or in part by me and I - have the right to submit it under the open source license - indicated in the file; or - -(b) The contribution is based upon previous work that, to the best - of my knowledge, is covered under an appropriate open source - license and I have the right under that license to submit that - work with modifications, whether created in whole or in part - by me, under the same open source license (unless I am - permitted to submit under a different license), as indicated - in the file; or - -(c) The contribution was provided directly to me by some other - person who certified (a), (b) or (c) and I have not modified - it. 
- -(d) I understand and agree that this project and the contribution - are public and that a record of the contribution (including all - personal information I submit with it, including my sign-off) is - maintained indefinitely and may be redistributed consistent with - this project or the open source license(s) involved. diff --git a/vendor/mellium.im/sasl/LICENSE b/vendor/mellium.im/sasl/LICENSE deleted file mode 100644 index 08ed8f4d..00000000 --- a/vendor/mellium.im/sasl/LICENSE +++ /dev/null @@ -1,23 +0,0 @@ -Copyright © 2014 The Mellium Contributors. -All rights reserved. - -Redistribution and use in source and binary forms, with or without -modification, are permitted provided that the following conditions are met: - -1. Redistributions of source code must retain the above copyright notice, this -list of conditions and the following disclaimer. - -2. Redistributions in binary form must reproduce the above copyright notice, -this list of conditions and the following disclaimer in the documentation -and/or other materials provided with the distribution. - -THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" AND -ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED -WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE -DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE -FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL -DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR -SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER -CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, -OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE -OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. 
diff --git a/vendor/mellium.im/sasl/README.md b/vendor/mellium.im/sasl/README.md deleted file mode 100644 index af6983c7..00000000 --- a/vendor/mellium.im/sasl/README.md +++ /dev/null @@ -1,21 +0,0 @@ -# SASL - -[![Issue Tracker][badge]](https://mellium.im/issue) -[![Docs](https://pkg.go.dev/badge/mellium.im/sasl)](https://pkg.go.dev/mellium.im/sasl) -[![Chat](https://img.shields.io/badge/XMPP-users@mellium.chat-orange.svg)](https://mellium.chat) -[![License](https://img.shields.io/badge/license-FreeBSD-blue.svg)](https://opensource.org/licenses/BSD-2-Clause) - - - -A Go library implementing the Simple Authentication and Security Layer (SASL) as -defined by [RFC 4422][rfc4422]. - - -## License - -The package may be used under the terms of the BSD 2-Clause License a copy of -which may be found in the file [LICENSE.md][LICENSE]. - -[badge]: https://img.shields.io/badge/style-mellium%2fxmpp-green.svg?longCache=true&style=popout-square&label=issues -[rfc4422]: https://tools.ietf.org/html/rfc4422 -[LICENSE]: https://codeberg.org/mellium/xmpp/src/branch/main/LICENSE diff --git a/vendor/mellium.im/sasl/doc.go b/vendor/mellium.im/sasl/doc.go deleted file mode 100644 index a725cb54..00000000 --- a/vendor/mellium.im/sasl/doc.go +++ /dev/null @@ -1,15 +0,0 @@ -// Copyright 2016 The Mellium Contributors. -// Use of this source code is governed by the BSD 2-clause -// license that can be found in the LICENSE file. - -// Package sasl implements the Simple Authentication and Security Layer (SASL) -// as defined by RFC 4422. -// -// Most users of this package will only need to create a Negotiator using -// NewClient or NewServer and call its Step method repeatedly. -// Authors implementing SASL mechanisms other than the builtin ones will want to -// create a Mechanism struct which will likely use the other methods on the -// Negotiator. -// -// Be advised: This API is still unstable and is subject to change. 
-package sasl // import "mellium.im/sasl" diff --git a/vendor/mellium.im/sasl/mechanism.go b/vendor/mellium.im/sasl/mechanism.go deleted file mode 100644 index d8bbc5a9..00000000 --- a/vendor/mellium.im/sasl/mechanism.go +++ /dev/null @@ -1,61 +0,0 @@ -// Copyright 2016 The Mellium Contributors. -// Use of this source code is governed by the BSD 2-clause -// license that can be found in the LICENSE file. - -package sasl - -import ( - /* #nosec */ - "crypto/sha1" - "crypto/sha256" - "errors" -) - -// Define common errors used by SASL mechanisms and negotiators. -var ( - ErrInvalidState = errors.New("invalid state") - ErrInvalidChallenge = errors.New("invalid or missing challenge") - ErrAuthn = errors.New("authentication error") - ErrTooManySteps = errors.New("step called too many times") -) - -var ( - // Plain is a Mechanism that implements the PLAIN authentication mechanism - // as defined by RFC 4616. - Plain Mechanism = plain - - // ScramSha256Plus is a Mechanism that implements the SCRAM-SHA-256-PLUS - // authentication mechanism defined in RFC 7677. - // The only supported channel binding types are tls-unique as defined in RFC - // 5929 and tls-exporter defined in RFC 9266. - ScramSha256Plus Mechanism = scram("SCRAM-SHA-256-PLUS", sha256.New) - - // ScramSha256 is a Mechanism that implements the SCRAM-SHA-256 - // authentication mechanism defined in RFC 7677. - ScramSha256 Mechanism = scram("SCRAM-SHA-256", sha256.New) - - // ScramSha1Plus is a Mechanism that implements the SCRAM-SHA-1-PLUS - // authentication mechanism defined in RFC 5802. - // The only supported channel binding types are tls-unique as defined in RFC - // 5929 and tls-exporter defined in RFC 9266. - ScramSha1Plus Mechanism = scram("SCRAM-SHA-1-PLUS", sha1.New) - - // ScramSha1 is a Mechanism that implements the SCRAM-SHA-1 authentication - // mechanism defined in RFC 5802. 
- ScramSha1 Mechanism = scram("SCRAM-SHA-1", sha1.New) -) - -// Mechanism represents a SASL mechanism that can be used by a Client or Server -// to perform the actual negotiation. Base64 encoding the final challenges and -// responses should not be performed by the mechanism. -// -// Mechanisms must be stateless and may be shared between goroutines. When a -// mechanism needs to store state between the different steps it can return -// anything that it needs to store and the value will be cached by the -// negotiator and passed in as the data parameter when the next challenge is -// received. -type Mechanism struct { - Name string - Start func(n *Negotiator) (more bool, resp []byte, cache interface{}, err error) - Next func(n *Negotiator, challenge []byte, data interface{}) (more bool, resp []byte, cache interface{}, err error) -} diff --git a/vendor/mellium.im/sasl/negotiator.go b/vendor/mellium.im/sasl/negotiator.go deleted file mode 100644 index 8b4c3de0..00000000 --- a/vendor/mellium.im/sasl/negotiator.go +++ /dev/null @@ -1,196 +0,0 @@ -// Copyright 2016 The Mellium Contributors. -// Use of this source code is governed by the BSD 2-clause -// license that can be found in the LICENSE file. - -package sasl - -import ( - "crypto/rand" - "crypto/tls" - "strings" -) - -// State represents the current state of a Negotiator. -// The first two bits represent the actual state of the state machine and the -// last 3 bits are a bitmask that define the machines behavior. -// The remaining bits should not be used. -type State uint8 - -// The current step of the Server or Client (represented by the first two bits -// of the state byte). -const ( - Initial State = iota - AuthTextSent - ResponseSent - ValidServerResponse - - // Bitmask used for extracting the step from the state byte. - StepMask = 0x3 -) - -const ( - // RemoteCB bit is on if the remote client or server supports channel binding. 
- RemoteCB State = 1 << (iota + 3) - - // Errored bit is on if the machine has errored. - Errored - - // Receiving bit is on if the machine is a server. - Receiving -) - -// NewClient creates a new SASL Negotiator that supports creating authentication -// requests using the given mechanism. -func NewClient(m Mechanism, opts ...Option) *Negotiator { - machine := &Negotiator{ - mechanism: m, - } - getOpts(machine, opts...) - for _, rname := range machine.remoteMechanisms { - lname := m.Name - if lname == rname && strings.HasSuffix(lname, "-PLUS") { - machine.state |= RemoteCB - break - } - } - if len(machine.nonce) == 0 { - machine.nonce = nonce(noncerandlen, rand.Reader) - } - return machine -} - -// NewServer creates a new SASL Negotiator that supports receiving -// authentication requests using the given mechanism. -// A nil permissions function is the same as a function that always returns -// false. -func NewServer(m Mechanism, permissions func(*Negotiator) bool, opts ...Option) *Negotiator { - machine := &Negotiator{ - mechanism: m, - state: AuthTextSent | Receiving, - } - getOpts(machine, opts...) - if permissions != nil { - machine.permissions = permissions - } - for _, rname := range machine.remoteMechanisms { - lname := m.Name - if lname == rname && strings.HasSuffix(lname, "-PLUS") { - machine.state |= RemoteCB - break - } - } - if len(machine.nonce) == 0 { - machine.nonce = nonce(noncerandlen, rand.Reader) - } - return machine -} - -// A Negotiator represents a SASL client or server state machine that can -// attempt to negotiate auth. Negotiators should not be used from multiple -// goroutines, and must be reset between negotiation attempts. 
-type Negotiator struct { - tlsState *tls.ConnectionState - remoteMechanisms []string - credentials func() (Username, Password, Identity []byte) - permissions func(*Negotiator) bool - mechanism Mechanism - state State - nonce []byte - cache interface{} -} - -// Nonce returns a unique nonce that is reset for each negotiation attempt. It -// is used by SASL Mechanisms and should generally not be called directly. -func (c *Negotiator) Nonce() []byte { - return c.nonce -} - -// Step attempts to transition the state machine to its next state. If Step is -// called after a previous invocation generates an error (and the state machine -// has not been reset to its initial state), Step panics. -func (c *Negotiator) Step(challenge []byte) (more bool, resp []byte, err error) { - if c.state&Errored == Errored { - panic("sasl: Step called on a SASL state machine that has errored") - } - defer func() { - if err != nil { - c.state |= Errored - } - }() - - switch c.state & StepMask { - case Initial: - more, resp, c.cache, err = c.mechanism.Start(c) - c.state = c.state&^StepMask | AuthTextSent - case AuthTextSent: - more, resp, c.cache, err = c.mechanism.Next(c, challenge, c.cache) - c.state = c.state&^StepMask | ResponseSent - case ResponseSent: - more, resp, c.cache, err = c.mechanism.Next(c, challenge, c.cache) - c.state = c.state&^StepMask | ValidServerResponse - case ValidServerResponse: - more, resp, c.cache, err = c.mechanism.Next(c, challenge, c.cache) - } - - if err != nil { - return false, nil, err - } - - return more, resp, err -} - -// State returns the internal state of the SASL state machine. -func (c *Negotiator) State() State { - return c.state -} - -// Reset resets the state machine to its initial state so that it can be reused -// in another SASL exchange. 
-func (c *Negotiator) Reset() { - c.state = c.state & (Receiving | RemoteCB) - - // Skip the start step for servers - if c.state&Receiving == Receiving { - c.state = c.state&^StepMask | AuthTextSent - } - - c.nonce = nonce(noncerandlen, rand.Reader) - c.cache = nil -} - -// Credentials returns a username, and password for authentication and optional -// identity for authorization. -func (c *Negotiator) Credentials() (username, password, identity []byte) { - if c.credentials != nil { - return c.credentials() - } - return -} - -// Permissions is the callback used by the server to authenticate the user. -func (c *Negotiator) Permissions(opts ...Option) bool { - if c.permissions != nil { - nn := *c - getOpts(&nn, opts...) - return c.permissions(&nn) - } - return false -} - -// TLSState is the state of any TLS connections being used to negotiate SASL -// (it can be used for channel binding). -func (c *Negotiator) TLSState() *tls.ConnectionState { - if c.tlsState != nil { - return c.tlsState - } - return nil -} - -// RemoteMechanisms is a list of mechanisms as advertised by the other side of a -// SASL negotiation. -func (c *Negotiator) RemoteMechanisms() []string { - if c.remoteMechanisms != nil { - return c.remoteMechanisms - } - return nil -} diff --git a/vendor/mellium.im/sasl/nonce.go b/vendor/mellium.im/sasl/nonce.go deleted file mode 100644 index e944977b..00000000 --- a/vendor/mellium.im/sasl/nonce.go +++ /dev/null @@ -1,30 +0,0 @@ -// Copyright 2016 The Mellium Contributors. -// Use of this source code is governed by the BSD 2-clause -// license that can be found in the LICENSE file. - -package sasl - -import ( - "encoding/base64" - "io" -) - -// Generates a nonce with n random bytes base64 encoded to ensure that it meets -// the criteria for inclusion in a SCRAM message. 
-func nonce(n int, r io.Reader) []byte { - if n < 1 { - panic("Cannot generate zero or negative length nonce") - } - b := make([]byte, n) - n2, err := r.Read(b) - switch { - case err != nil: - panic(err) - case n2 != n: - panic("Could not read enough randomness to generate nonce") - } - val := make([]byte, base64.RawStdEncoding.EncodedLen(n)) - base64.RawStdEncoding.Encode(val, b) - - return val -} diff --git a/vendor/mellium.im/sasl/options.go b/vendor/mellium.im/sasl/options.go deleted file mode 100644 index 86c295df..00000000 --- a/vendor/mellium.im/sasl/options.go +++ /dev/null @@ -1,61 +0,0 @@ -// Copyright 2016 The Mellium Contributors. -// Use of this source code is governed by the BSD 2-clause -// license that can be found in the LICENSE file. - -package sasl - -import ( - "crypto/tls" -) - -// An Option represents an input to a SASL state machine. -type Option func(*Negotiator) - -func getOpts(n *Negotiator, o ...Option) { - n.credentials = func() (username, password, identity []byte) { - return - } - n.permissions = func(_ *Negotiator) bool { - return false - } - for _, f := range o { - f(n) - } -} - -// TLSState lets the state machine negotiate channel binding with a TLS session -// if supported by the underlying mechanism. -func TLSState(cs tls.ConnectionState) Option { - return func(n *Negotiator) { - n.tlsState = &cs - } -} - -// nonce overrides the nonce used for authentication attempts. -// This defaults to a random value and should not be changed. -func setNonce(v []byte) Option { - return func(n *Negotiator) { - n.nonce = v - } -} - -// RemoteMechanisms sets a list of mechanisms supported by the remote client or -// server with which the state machine will be negotiating. -// It is used to determine if the server supports channel binding. 
-func RemoteMechanisms(m ...string) Option { - return func(n *Negotiator) { - n.remoteMechanisms = m - } -} - -// Credentials provides the negotiator with a username and password to -// authenticate with and (optionally) an authorization identity. -// Identity will normally be left empty to act as the username. -// The Credentials function is called lazily and may be called multiple times by -// the mechanism. -// It is not memoized by the negotiator. -func Credentials(f func() (Username, Password, Identity []byte)) Option { - return func(n *Negotiator) { - n.credentials = f - } -} diff --git a/vendor/mellium.im/sasl/plain.go b/vendor/mellium.im/sasl/plain.go deleted file mode 100644 index 7d3d1a81..00000000 --- a/vendor/mellium.im/sasl/plain.go +++ /dev/null @@ -1,52 +0,0 @@ -// Copyright 2016 The Mellium Contributors. -// Use of this source code is governed by the BSD 2-clause -// license that can be found in the LICENSE file. - -package sasl - -import ( - "bytes" -) - -var plainSep = []byte{0} - -var plain = Mechanism{ - Name: "PLAIN", - Start: func(m *Negotiator) (more bool, resp []byte, _ interface{}, err error) { - username, password, identity := m.credentials() - payload := make([]byte, 0, len(identity)+len(username)+len(password)+2) - payload = append(payload, identity...) - payload = append(payload, '\x00') - payload = append(payload, username...) - payload = append(payload, '\x00') - payload = append(payload, password...) - return false, payload, nil, nil - }, - Next: func(m *Negotiator, challenge []byte, _ interface{}) (more bool, resp []byte, _ interface{}, err error) { - // If we're a client, or we're a server that's past the AuthTextSent step, - // we should never actually hit this step. 
- if m.State()&Receiving != Receiving || m.State()&StepMask != AuthTextSent { - err = ErrTooManySteps - return - } - - // If we're a server, validate that the challenge looks like: - // "Identity\x00Username\x00Password" - parts := bytes.Split(challenge, plainSep) - if len(parts) != 3 { - err = ErrInvalidChallenge - return - } - - if m.Permissions(Credentials(func() (Username, Password, Identity []byte) { - return parts[1], parts[2], parts[0] - })) { - // Everything checks out as far as we know and the server should continue - // to authenticate the user. - return - } - - err = ErrAuthn - return - }, -} diff --git a/vendor/mellium.im/sasl/scram.go b/vendor/mellium.im/sasl/scram.go deleted file mode 100644 index 17473970..00000000 --- a/vendor/mellium.im/sasl/scram.go +++ /dev/null @@ -1,286 +0,0 @@ -// Copyright 2016 The Mellium Contributors. -// Use of this source code is governed by the BSD 2-clause -// license that can be found in the LICENSE file. - -package sasl - -import ( - "bytes" - "crypto/hmac" - "crypto/tls" - "encoding/base64" - "errors" - "hash" - "strconv" - "strings" - - "golang.org/x/crypto/pbkdf2" -) - -const ( - exporterLen = 32 - exporterLabel = "EXPORTER-Channel-Binding" - gs2HeaderCBSupportUnique = "p=tls-unique," - gs2HeaderCBSupportExporter = "p=tls-exporter," - gs2HeaderNoServerCBSupport = "y," - gs2HeaderNoCBSupport = "n," -) - -var ( - clientKeyInput = []byte("Client Key") - serverKeyInput = []byte("Server Key") -) - -// The number of random bytes to generate for a nonce. 
-const noncerandlen = 16 - -func getGS2Header(name string, n *Negotiator) (gs2Header []byte) { - _, _, identity := n.Credentials() - tlsState := n.TLSState() - switch { - case tlsState == nil || !strings.HasSuffix(name, "-PLUS"): - // We do not support channel binding - gs2Header = []byte(gs2HeaderNoCBSupport) - case n.State()&RemoteCB == RemoteCB: - // We support channel binding and the server does too - if tlsState.Version >= tls.VersionTLS13 { - gs2Header = []byte(gs2HeaderCBSupportExporter) - } else { - gs2Header = []byte(gs2HeaderCBSupportUnique) - } - case n.State()&RemoteCB != RemoteCB: - // We support channel binding but the server does not - gs2Header = []byte(gs2HeaderNoServerCBSupport) - } - if len(identity) > 0 { - gs2Header = append(gs2Header, []byte(`a=`)...) - gs2Header = append(gs2Header, identity...) - } - gs2Header = append(gs2Header, ',') - return -} - -func scram(name string, fn func() hash.Hash) Mechanism { - // BUG(ssw): We need a way to cache the SCRAM client and server key - // calculations. - return Mechanism{ - Name: name, - Start: func(m *Negotiator) (bool, []byte, interface{}, error) { - user, _, _ := m.Credentials() - - // Escape "=" and ",". This is mostly the same as bytes.Replace but - // faster because we can do both replacements in a single pass. 
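The comment above describes escaping `=` and `,` in the SCRAM username in a single pass; per RFC 5802's `saslname` rule, `=` becomes `=3D` and `,` becomes `=2C`. A sketch of the same substitution using `strings.NewReplacer` instead of the deleted hand-rolled loop (the `escapeSCRAMUsername` name is ours):

```go
package main

import (
	"fmt"
	"strings"
)

// scramEscaper applies the RFC 5802 saslname escaping: "=" -> "=3D"
// and "," -> "=2C", both in one pass.
var scramEscaper = strings.NewReplacer("=", "=3D", ",", "=2C")

func escapeSCRAMUsername(user string) string {
	return scramEscaper.Replace(user)
}

func main() {
	fmt.Println(escapeSCRAMUsername("a=b,c")) // a=3Db=2Cc
}
```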
- n := bytes.Count(user, []byte{'='}) + bytes.Count(user, []byte{','}) - username := make([]byte, len(user)+(n*2)) - w := 0 - start := 0 - for i := 0; i < n; i++ { - j := start - j += bytes.IndexAny(user[start:], "=,") - w += copy(username[w:], user[start:j]) - switch user[j] { - case '=': - w += copy(username[w:], "=3D") - case ',': - w += copy(username[w:], "=2C") - } - start = j + 1 - } - copy(username[w:], user[start:]) - - clientFirstMessage := make([]byte, 5+len(m.Nonce())+len(username)) - copy(clientFirstMessage, "n=") - copy(clientFirstMessage[2:], username) - copy(clientFirstMessage[2+len(username):], ",r=") - copy(clientFirstMessage[5+len(username):], m.Nonce()) - - return true, append(getGS2Header(name, m), clientFirstMessage...), clientFirstMessage, nil - }, - Next: func(m *Negotiator, challenge []byte, data interface{}) (more bool, resp []byte, cache interface{}, err error) { - if len(challenge) == 0 { - return more, resp, cache, ErrInvalidChallenge - } - - if m.State()&Receiving == Receiving { - panic("not yet implemented") - } - return scramClientNext(name, fn, m, challenge, data) - }, - } -} - -func scramClientNext(name string, fn func() hash.Hash, m *Negotiator, challenge []byte, data interface{}) (more bool, resp []byte, cache interface{}, err error) { - _, password, _ := m.Credentials() - state := m.State() - - switch state & StepMask { - case AuthTextSent: - iter := -1 - var salt, nonce []byte - remain := challenge - for { - var field []byte - field, remain = nextParam(remain) - if len(field) < 3 || (len(field) >= 2 && field[1] != '=') { - continue - } - switch field[0] { - case 'i': - ival := string(bytes.TrimRight(field[2:], "\x00")) - - if iter, err = strconv.Atoi(ival); err != nil { - return - } - case 's': - salt = make([]byte, base64.StdEncoding.DecodedLen(len(field)-2)) - var n int - n, err = base64.StdEncoding.Decode(salt, field[2:]) - salt = salt[:n] - if err != nil { - return - } - case 'r': - nonce = field[2:] - case 'm': - // RFC 
5802: - // m: This attribute is reserved for future extensibility. In this - // version of SCRAM, its presence in a client or a server message - // MUST cause authentication failure when the attribute is parsed by - // the other end. - err = errors.New("server sent reserved attribute `m'") - return - } - if remain == nil { - break - } - } - - switch { - case iter < 0: - err = errors.New("iteration count is invalid") - return - case nonce == nil || !bytes.HasPrefix(nonce, m.Nonce()): - err = errors.New("server nonce does not match client nonce") - return - case salt == nil: - err = errors.New("server sent empty salt") - return - } - - gs2Header := getGS2Header(name, m) - tlsState := m.TLSState() - var channelBinding []byte - switch plus := strings.HasSuffix(name, "-PLUS"); { - case plus && tlsState == nil: - err = errors.New("sasl: SCRAM with channel binding requires a TLS connection") - return - case bytes.Contains(gs2Header, []byte(gs2HeaderCBSupportExporter)): - keying, err := tlsState.ExportKeyingMaterial(exporterLabel, nil, exporterLen) - if err != nil { - return false, nil, nil, err - } - if len(keying) == 0 { - err = errors.New("sasl: SCRAM with channel binding requires valid TLS keying material") - return false, nil, nil, err - } - channelBinding = make([]byte, 2+base64.StdEncoding.EncodedLen(len(gs2Header)+len(keying))) - channelBinding[0] = 'c' - channelBinding[1] = '=' - base64.StdEncoding.Encode(channelBinding[2:], append(gs2Header, keying...)) - case bytes.Contains(gs2Header, []byte(gs2HeaderCBSupportUnique)): - //lint:ignore SA1019 TLS unique must be supported by SCRAM - if len(tlsState.TLSUnique) == 0 { - err = errors.New("sasl: SCRAM with channel binding requires valid tls-unique data") - return false, nil, nil, err - } - channelBinding = make( - []byte, - //lint:ignore SA1019 TLS unique must be supported by SCRAM - 2+base64.StdEncoding.EncodedLen(len(gs2Header)+len(tlsState.TLSUnique)), - ) - channelBinding[0] = 'c' - channelBinding[1] = '=' - 
//lint:ignore SA1019 TLS unique must be supported by SCRAM - base64.StdEncoding.Encode(channelBinding[2:], append(gs2Header, tlsState.TLSUnique...)) - default: - channelBinding = make( - []byte, - 2+base64.StdEncoding.EncodedLen(len(gs2Header)), - ) - channelBinding[0] = 'c' - channelBinding[1] = '=' - base64.StdEncoding.Encode(channelBinding[2:], gs2Header) - } - clientFinalMessageWithoutProof := append(channelBinding, []byte(",r=")...) - clientFinalMessageWithoutProof = append(clientFinalMessageWithoutProof, nonce...) - - clientFirstMessage := data.([]byte) - authMessage := append(clientFirstMessage, ',') - authMessage = append(authMessage, challenge...) - authMessage = append(authMessage, ',') - authMessage = append(authMessage, clientFinalMessageWithoutProof...) - - saltedPassword := pbkdf2.Key(password, salt, iter, fn().Size(), fn) - - h := hmac.New(fn, saltedPassword) - _, err = h.Write(serverKeyInput) - if err != nil { - return - } - serverKey := h.Sum(nil) - h.Reset() - - _, err = h.Write(clientKeyInput) - if err != nil { - return - } - clientKey := h.Sum(nil) - - h = hmac.New(fn, serverKey) - _, err = h.Write(authMessage) - if err != nil { - return - } - serverSignature := h.Sum(nil) - - h = fn() - _, err = h.Write(clientKey) - if err != nil { - return - } - storedKey := h.Sum(nil) - h = hmac.New(fn, storedKey) - _, err = h.Write(authMessage) - if err != nil { - return - } - clientSignature := h.Sum(nil) - clientProof := make([]byte, len(clientKey)) - goXORBytes(clientProof, clientKey, clientSignature) - - encodedClientProof := make([]byte, base64.StdEncoding.EncodedLen(len(clientProof))) - base64.StdEncoding.Encode(encodedClientProof, clientProof) - clientFinalMessage := append(clientFinalMessageWithoutProof, []byte(",p=")...) - clientFinalMessage = append(clientFinalMessage, encodedClientProof...) 
- - return true, clientFinalMessage, serverSignature, nil - case ResponseSent: - clientCalculatedServerFinalMessage := "v=" + base64.StdEncoding.EncodeToString(data.([]byte)) - if clientCalculatedServerFinalMessage != string(challenge) { - err = ErrAuthn - return - } - // Success! - return false, nil, nil, nil - } - err = ErrInvalidState - return -} - -func nextParam(params []byte) ([]byte, []byte) { - idx := bytes.IndexByte(params, ',') - if idx == -1 { - return params, nil - } - return params[:idx], params[idx+1:] -} diff --git a/vendor/mellium.im/sasl/xor.go b/vendor/mellium.im/sasl/xor.go deleted file mode 100644 index 90d21a82..00000000 --- a/vendor/mellium.im/sasl/xor.go +++ /dev/null @@ -1,26 +0,0 @@ -// Copyright 2022 The Go Authors. All rights reserved. -// Use of this source code is governed by a BSD-style -// license that can be found in the LICENSE file. - -//go:build !go1.20 - -package sasl - -// TODO: remove all the specialized XOR code and use "crypto/subtle".XORBytes -// when Go v1.21 comes out. For more information see: -// https://mellium.im/issue/338 - -func goXORBytes(dst, x, y []byte) int { - n := len(x) - if len(y) < n { - n = len(y) - } - if n == 0 { - return 0 - } - if n > len(dst) { - panic("subtle.XORBytes: dst too short") - } - xorBytes(&dst[0], &x[0], &y[0], n) // arch-specific - return n -} diff --git a/vendor/mellium.im/sasl/xor_amd64.go b/vendor/mellium.im/sasl/xor_amd64.go deleted file mode 100644 index d424bf4d..00000000 --- a/vendor/mellium.im/sasl/xor_amd64.go +++ /dev/null @@ -1,10 +0,0 @@ -// Copyright 2018 The Go Authors. All rights reserved. -// Use of this source code is governed by a BSD-style -// license that can be found in the LICENSE file. 
- -//go:build !purego - -package sasl - -//go:noescape -func xorBytes(dst, a, b *byte, n int) diff --git a/vendor/mellium.im/sasl/xor_amd64.s b/vendor/mellium.im/sasl/xor_amd64.s deleted file mode 100644 index 8b04b587..00000000 --- a/vendor/mellium.im/sasl/xor_amd64.s +++ /dev/null @@ -1,56 +0,0 @@ -// Copyright 2018 The Go Authors. All rights reserved. -// Use of this source code is governed by a BSD-style -// license that can be found in the LICENSE file. - -//go:build !purego - -#include "textflag.h" - -// func xorBytes(dst, a, b *byte, n int) -TEXT ·xorBytes(SB), NOSPLIT, $0 - MOVQ dst+0(FP), BX - MOVQ a+8(FP), SI - MOVQ b+16(FP), CX - MOVQ n+24(FP), DX - TESTQ $15, DX // AND 15 & len, if not zero jump to not_aligned. - JNZ not_aligned - -aligned: - MOVQ $0, AX // position in slices - -loop16b: - MOVOU (SI)(AX*1), X0 // XOR 16byte forwards. - MOVOU (CX)(AX*1), X1 - PXOR X1, X0 - MOVOU X0, (BX)(AX*1) - ADDQ $16, AX - CMPQ DX, AX - JNE loop16b - RET - -loop_1b: - SUBQ $1, DX // XOR 1byte backwards. - MOVB (SI)(DX*1), DI - MOVB (CX)(DX*1), AX - XORB AX, DI - MOVB DI, (BX)(DX*1) - TESTQ $7, DX // AND 7 & len, if not zero jump to loop_1b. - JNZ loop_1b - CMPQ DX, $0 // if len is 0, ret. - JE ret - TESTQ $15, DX // AND 15 & len, if zero jump to aligned. - JZ aligned - -not_aligned: - TESTQ $7, DX // AND $7 & len, if not zero jump to loop_1b. - JNE loop_1b - SUBQ $8, DX // XOR 8bytes backwards. - MOVQ (SI)(DX*1), DI - MOVQ (CX)(DX*1), AX - XORQ AX, DI - MOVQ DI, (BX)(DX*1) - CMPQ DX, $16 // if len is greater or equal 16 here, it must be aligned. - JGE aligned - -ret: - RET diff --git a/vendor/mellium.im/sasl/xor_arm64.go b/vendor/mellium.im/sasl/xor_arm64.go deleted file mode 100644 index 08525c6d..00000000 --- a/vendor/mellium.im/sasl/xor_arm64.go +++ /dev/null @@ -1,10 +0,0 @@ -// Copyright 2020 The Go Authors. All rights reserved. -// Use of this source code is governed by a BSD-style -// license that can be found in the LICENSE file. 
- -//go:build !purego - -package sasl - -//go:noescape -func xorBytes(dst, a, b *byte, n int) diff --git a/vendor/mellium.im/sasl/xor_arm64.s b/vendor/mellium.im/sasl/xor_arm64.s deleted file mode 100644 index 76321645..00000000 --- a/vendor/mellium.im/sasl/xor_arm64.s +++ /dev/null @@ -1,69 +0,0 @@ -// Copyright 2020 The Go Authors. All rights reserved. -// Use of this source code is governed by a BSD-style -// license that can be found in the LICENSE file. - -//go:build !purego - -#include "textflag.h" - -// func xorBytes(dst, a, b *byte, n int) -TEXT ·xorBytes(SB), NOSPLIT|NOFRAME, $0 - MOVD dst+0(FP), R0 - MOVD a+8(FP), R1 - MOVD b+16(FP), R2 - MOVD n+24(FP), R3 - CMP $64, R3 - BLT tail -loop_64: - VLD1.P 64(R1), [V0.B16, V1.B16, V2.B16, V3.B16] - VLD1.P 64(R2), [V4.B16, V5.B16, V6.B16, V7.B16] - VEOR V0.B16, V4.B16, V4.B16 - VEOR V1.B16, V5.B16, V5.B16 - VEOR V2.B16, V6.B16, V6.B16 - VEOR V3.B16, V7.B16, V7.B16 - VST1.P [V4.B16, V5.B16, V6.B16, V7.B16], 64(R0) - SUBS $64, R3 - CMP $64, R3 - BGE loop_64 -tail: - // quick end - CBZ R3, end - TBZ $5, R3, less_than32 - VLD1.P 32(R1), [V0.B16, V1.B16] - VLD1.P 32(R2), [V2.B16, V3.B16] - VEOR V0.B16, V2.B16, V2.B16 - VEOR V1.B16, V3.B16, V3.B16 - VST1.P [V2.B16, V3.B16], 32(R0) -less_than32: - TBZ $4, R3, less_than16 - LDP.P 16(R1), (R11, R12) - LDP.P 16(R2), (R13, R14) - EOR R11, R13, R13 - EOR R12, R14, R14 - STP.P (R13, R14), 16(R0) -less_than16: - TBZ $3, R3, less_than8 - MOVD.P 8(R1), R11 - MOVD.P 8(R2), R12 - EOR R11, R12, R12 - MOVD.P R12, 8(R0) -less_than8: - TBZ $2, R3, less_than4 - MOVWU.P 4(R1), R13 - MOVWU.P 4(R2), R14 - EORW R13, R14, R14 - MOVWU.P R14, 4(R0) -less_than4: - TBZ $1, R3, less_than2 - MOVHU.P 2(R1), R15 - MOVHU.P 2(R2), R16 - EORW R15, R16, R16 - MOVHU.P R16, 2(R0) -less_than2: - TBZ $0, R3, end - MOVBU (R1), R17 - MOVBU (R2), R19 - EORW R17, R19, R19 - MOVBU R19, (R0) -end: - RET diff --git a/vendor/mellium.im/sasl/xor_generic.go b/vendor/mellium.im/sasl/xor_generic.go deleted file mode 
100644 index 1b49158e..00000000 --- a/vendor/mellium.im/sasl/xor_generic.go +++ /dev/null @@ -1,58 +0,0 @@ -// Copyright 2013 The Go Authors. All rights reserved. -// Use of this source code is governed by a BSD-style -// license that can be found in the LICENSE file. - -//go:build (!amd64 && !arm64 && !ppc64 && !ppc64le) || purego - -package sasl - -import ( - "runtime" - "unsafe" -) - -const wordSize = unsafe.Sizeof(uintptr(0)) - -const supportsUnaligned = runtime.GOARCH == "386" || - runtime.GOARCH == "amd64" || - runtime.GOARCH == "ppc64" || - runtime.GOARCH == "ppc64le" || - runtime.GOARCH == "s390x" - -func xorBytes(dstb, xb, yb *byte, n int) { - // xorBytes assembly is written using pointers and n. Back to slices. - dst := unsafe.Slice(dstb, n) - x := unsafe.Slice(xb, n) - y := unsafe.Slice(yb, n) - - if supportsUnaligned || aligned(dstb, xb, yb) { - xorLoop(words(dst), words(x), words(y)) - if uintptr(n)%wordSize == 0 { - return - } - done := n &^ int(wordSize-1) - dst = dst[done:] - x = x[done:] - y = y[done:] - } - xorLoop(dst, x, y) -} - -// aligned reports whether dst, x, and y are all word-aligned pointers. -func aligned(dst, x, y *byte) bool { - return (uintptr(unsafe.Pointer(dst))|uintptr(unsafe.Pointer(x))|uintptr(unsafe.Pointer(y)))&(wordSize-1) == 0 -} - -// words returns a []uintptr pointing at the same data as x, -// with any trailing partial word removed. -func words(x []byte) []uintptr { - return unsafe.Slice((*uintptr)(unsafe.Pointer(&x[0])), uintptr(len(x))/wordSize) -} - -func xorLoop[T byte | uintptr](dst, x, y []T) { - x = x[:len(dst)] // remove bounds check in loop - y = y[:len(dst)] // remove bounds check in loop - for i := range dst { - dst[i] = x[i] ^ y[i] - } -} diff --git a/vendor/mellium.im/sasl/xor_go.go b/vendor/mellium.im/sasl/xor_go.go deleted file mode 100644 index 3d742f50..00000000 --- a/vendor/mellium.im/sasl/xor_go.go +++ /dev/null @@ -1,15 +0,0 @@ -// Copyright 2022 The Mellium Contributors. 
-// Use of this source code is governed by the BSD 2-clause -// license that can be found in the LICENSE file. - -//go:build go1.20 - -package sasl - -import ( - "crypto/subtle" -) - -func goXORBytes(dst, x, y []byte) int { - return subtle.XORBytes(dst, x, y) -} diff --git a/vendor/mellium.im/sasl/xor_ppc64x.go b/vendor/mellium.im/sasl/xor_ppc64x.go deleted file mode 100644 index 0148b300..00000000 --- a/vendor/mellium.im/sasl/xor_ppc64x.go +++ /dev/null @@ -1,10 +0,0 @@ -// Copyright 2018 The Go Authors. All rights reserved. -// Use of this source code is governed by a BSD-style -// license that can be found in the LICENSE file. - -//go:build (ppc64 || ppc64le) && !purego - -package sasl - -//go:noescape -func xorBytes(dst, a, b *byte, n int) diff --git a/vendor/mellium.im/sasl/xor_ppc64x.s b/vendor/mellium.im/sasl/xor_ppc64x.s deleted file mode 100644 index 72bb80d2..00000000 --- a/vendor/mellium.im/sasl/xor_ppc64x.s +++ /dev/null @@ -1,87 +0,0 @@ -// Copyright 2018 The Go Authors. All rights reserved. -// Use of this source code is governed by a BSD-style -// license that can be found in the LICENSE file. 
- -//go:build (ppc64 || ppc64le) && !purego - -#include "textflag.h" - -// func xorBytes(dst, a, b *byte, n int) -TEXT ·xorBytes(SB), NOSPLIT, $0 - MOVD dst+0(FP), R3 // R3 = dst - MOVD a+8(FP), R4 // R4 = a - MOVD b+16(FP), R5 // R5 = b - MOVD n+24(FP), R6 // R6 = n - - CMPU R6, $32, CR7 // Check if n ≥ 32 bytes - MOVD R0, R8 // R8 = index - CMPU R6, $8, CR6 // Check if 8 ≤ n < 32 bytes - BLT CR6, small // Smaller than 8 - BLT CR7, xor16 // Case for 16 ≤ n < 32 bytes - - // Case for n ≥ 32 bytes -preloop32: - SRD $5, R6, R7 // Setup loop counter - MOVD R7, CTR - MOVD $16, R10 - ANDCC $31, R6, R9 // Check for tailing bytes for later -loop32: - LXVD2X (R4)(R8), VS32 // VS32 = a[i,...,i+15] - LXVD2X (R4)(R10), VS34 - LXVD2X (R5)(R8), VS33 // VS33 = b[i,...,i+15] - LXVD2X (R5)(R10), VS35 - XXLXOR VS32, VS33, VS32 // VS34 = a[] ^ b[] - XXLXOR VS34, VS35, VS34 - STXVD2X VS32, (R3)(R8) // Store to dst - STXVD2X VS34, (R3)(R10) - ADD $32, R8 // Update index - ADD $32, R10 - BC 16, 0, loop32 // bdnz loop16 - - BEQ CR0, done - - MOVD R9, R6 - CMP R6, $8 - BLT small -xor16: - CMP R6, $16 - BLT xor8 - LXVD2X (R4)(R8), VS32 - LXVD2X (R5)(R8), VS33 - XXLXOR VS32, VS33, VS32 - STXVD2X VS32, (R3)(R8) - ADD $16, R8 - ADD $-16, R6 - CMP R6, $8 - BLT small -xor8: - // Case for 8 ≤ n < 16 bytes - MOVD (R4)(R8), R14 // R14 = a[i,...,i+7] - MOVD (R5)(R8), R15 // R15 = b[i,...,i+7] - XOR R14, R15, R16 // R16 = a[] ^ b[] - SUB $8, R6 // n = n - 8 - MOVD R16, (R3)(R8) // Store to dst - ADD $8, R8 - - // Check if we're finished - CMP R6, R0 - BGT small - RET - - // Case for n < 8 bytes and tailing bytes from the - // previous cases. 
-small: - CMP R6, R0 - BEQ done - MOVD R6, CTR // Setup loop counter - -loop: - MOVBZ (R4)(R8), R14 // R14 = a[i] - MOVBZ (R5)(R8), R15 // R15 = b[i] - XOR R14, R15, R16 // R16 = a[i] ^ b[i] - MOVB R16, (R3)(R8) // Store to dst - ADD $1, R8 - BC 16, 0, loop // bdnz loop - -done: - RET diff --git a/vendor/modules.txt b/vendor/modules.txt index 4e27b6f2..d1a6ddb2 100644 --- a/vendor/modules.txt +++ b/vendor/modules.txt @@ -1,7 +1,3 @@ -# gitee.com/chunanyong/zorm v1.6.6 -## explicit; go 1.13 -gitee.com/chunanyong/zorm -gitee.com/chunanyong/zorm/decimal # github.com/aliyun/aliyun-oss-go-sdk v2.2.6+incompatible ## explicit github.com/aliyun/aliyun-oss-go-sdk/oss @@ -25,10 +21,7 @@ github.com/baidubce/bce-sdk-go/util/log github.com/basgys/goxml2json # github.com/bitly/go-simplejson v0.5.0 ## explicit -# github.com/bmizerany/pq v0.0.0-20131128184720-da2b95e392c1 -## explicit -github.com/bmizerany/pq -# github.com/bytedance/sonic v1.8.1 +# github.com/bytedance/sonic v1.8.2 ## explicit; go 1.15 github.com/bytedance/sonic github.com/bytedance/sonic/ast @@ -109,34 +102,12 @@ github.com/golang/snappy # github.com/google/go-querystring v1.1.0 ## explicit; go 1.10 github.com/google/go-querystring/query -# github.com/jackc/chunkreader/v2 v2.0.1 -## explicit; go 1.12 -github.com/jackc/chunkreader/v2 -# github.com/jackc/pgconn v1.14.0 -## explicit; go 1.12 -github.com/jackc/pgconn -github.com/jackc/pgconn/internal/ctxwatch -github.com/jackc/pgconn/stmtcache -# github.com/jackc/pgio v1.0.0 -## explicit; go 1.12 -github.com/jackc/pgio # github.com/jackc/pgpassfile v1.0.0 ## explicit; go 1.12 github.com/jackc/pgpassfile -# github.com/jackc/pgproto3/v2 v2.3.2 -## explicit; go 1.12 -github.com/jackc/pgproto3/v2 # github.com/jackc/pgservicefile v0.0.0-20221227161230-091c0ba34f0a ## explicit; go 1.14 github.com/jackc/pgservicefile -# github.com/jackc/pgtype v1.14.0 -## explicit; go 1.13 -github.com/jackc/pgtype -# github.com/jackc/pgx/v4 v4.18.0 -## explicit; go 1.13 
-github.com/jackc/pgx/v4 -github.com/jackc/pgx/v4/internal/sanitize -github.com/jackc/pgx/v4/stdlib # github.com/jackc/pgx/v5 v5.3.0 ## explicit; go 1.19 github.com/jackc/pgx/v5 @@ -257,9 +228,6 @@ github.com/saracen/go7z/headers # github.com/saracen/solidblock v0.0.0-20190426153529-45df20abab6f ## explicit github.com/saracen/solidblock -# github.com/segmentio/fasthash v1.0.3 -## explicit; go 1.11 -github.com/segmentio/fasthash/fnv1a # github.com/shirou/gopsutil v3.21.11+incompatible ## explicit github.com/shirou/gopsutil/cpu @@ -296,9 +264,6 @@ github.com/tklauser/go-sysconf # github.com/tklauser/numcpus v0.6.0 ## explicit; go 1.13 github.com/tklauser/numcpus -# github.com/tmthrgd/go-hex v0.0.0-20190904060850-447a3041c3bc -## explicit -github.com/tmthrgd/go-hex # github.com/twitchyliquid64/golang-asm v0.15.1 ## explicit; go 1.13 github.com/twitchyliquid64/golang-asm/asm/arch @@ -326,48 +291,6 @@ github.com/ugorji/go/codec github.com/ulikunitz/xz/internal/hash github.com/ulikunitz/xz/internal/xlog github.com/ulikunitz/xz/lzma -# github.com/upper/db/v4 v4.6.0 -## explicit; go 1.15 -github.com/upper/db/v4 -github.com/upper/db/v4/adapter/mysql -github.com/upper/db/v4/adapter/postgresql -github.com/upper/db/v4/internal/adapter -github.com/upper/db/v4/internal/cache -github.com/upper/db/v4/internal/immutable -github.com/upper/db/v4/internal/reflectx -github.com/upper/db/v4/internal/sqladapter -github.com/upper/db/v4/internal/sqladapter/compat -github.com/upper/db/v4/internal/sqladapter/exql -github.com/upper/db/v4/internal/sqlbuilder -# github.com/uptrace/bun v1.1.12 -## explicit; go 1.18 -github.com/uptrace/bun -github.com/uptrace/bun/dialect -github.com/uptrace/bun/dialect/feature -github.com/uptrace/bun/dialect/sqltype -github.com/uptrace/bun/extra/bunjson -github.com/uptrace/bun/internal -github.com/uptrace/bun/internal/parser -github.com/uptrace/bun/internal/tagparser -github.com/uptrace/bun/schema -# github.com/uptrace/bun/dialect/mysqldialect v1.1.12 -## 
explicit; go 1.18 -github.com/uptrace/bun/dialect/mysqldialect -# github.com/uptrace/bun/dialect/pgdialect v1.1.12 -## explicit; go 1.18 -github.com/uptrace/bun/dialect/pgdialect -# github.com/uptrace/bun/driver/pgdriver v1.1.12 -## explicit; go 1.18 -github.com/uptrace/bun/driver/pgdriver -# github.com/vmihailenco/msgpack/v5 v5.3.5 -## explicit; go 1.11 -github.com/vmihailenco/msgpack/v5 -github.com/vmihailenco/msgpack/v5/msgpcode -# github.com/vmihailenco/tagparser/v2 v2.0.0 -## explicit; go 1.15 -github.com/vmihailenco/tagparser/v2 -github.com/vmihailenco/tagparser/v2/internal -github.com/vmihailenco/tagparser/v2/internal/parser # github.com/xdg-go/pbkdf2 v1.0.0 ## explicit; go 1.9 github.com/xdg-go/pbkdf2 @@ -552,6 +475,8 @@ gopkg.in/alexcesaro/quotedprintable.v3 gopkg.in/gomail.v2 # gopkg.in/natefinch/lumberjack.v2 v2.0.0 ## explicit +# gopkg.in/yaml.v2 v2.4.0 +## explicit; go 1.15 # gopkg.in/yaml.v3 v3.0.1 ## explicit gopkg.in/yaml.v3 @@ -590,9 +515,6 @@ gorm.io/hints # gorm.io/plugin/dbresolver v1.4.1 ## explicit; go 1.14 gorm.io/plugin/dbresolver -# mellium.im/sasl v0.3.1 -## explicit; go 1.18 -mellium.im/sasl # xorm.io/builder v0.3.12 ## explicit; go 1.11 xorm.io/builder