update vendor

master
李光春 2 years ago
parent bc71e2ab0f
commit ec312764d8

@ -0,0 +1,34 @@
# idea ignore
.idea/
*.ipr
*.iml
*.iws
.vscode/
*.swp
# temp ignore
*.log
*.cache
*.diff
*.exe
*.exe~
*.patch
*.tmp
*debug.test
debug.test
go.sum
# system ignore
.DS_Store
Thumbs.db
# project
*.cert
*.key
.test
iprepo.txt
_output

@ -0,0 +1,198 @@
v1.5.5
Changes:
- Added the CloseDB function to close the database connection pool
- Improved documentation and comments
v1.5.4
Changes:
- Fixed QueryRow: when the single queried field was null in the database, an exception occurred instead of assigning the default value
- Changed reflect.Type parameters to *reflect.Type pointers, including the parameters of the CustomDriverValueConver interface
- Improved documentation and comments
v1.5.3
Changes:
- Thanks to @Howard.TSE for the suggestion to check whether the configuration is empty
- Thanks to @haming123 for the performance report. zorm 1.2.x implemented the basic features and read twice as fast as gorm and xorm; as features kept growing, performance dropped, and reads are currently only 50% faster.
- Performance optimization: removed unnecessary reflection
- Improved documentation and comments
v1.5.2
Changes:
- Thanks to 奔跑 (@zeqjone) for the regular expression that matches the from outside parentheses; it already covers the vast majority of scenarios
- Thanks to 奔跑 (@zeqjone) for the PR fixing Kingbase model definitions: column tags that clash with database reserved keywords are now wrapped in double quotes
- Upgraded decimal to 1.3.1
- Improved documentation and comments
v1.5.1
Changes:
- Improved documentation and comments
- Commented out unused code
- Check the error first, then run defer rows.Close()
- Added WeChat community support (maintained by @zhou-a-xing, the single guy with eight-pack abs)
v1.5.0
Changes:
- Improved documentation and comments
- ClickHouse support; UPDATE and DELETE statements use SQL92 standard syntax
- IDs default to timestamp + random number instead of the UUID implementation
- Optimized the regular expression for SQL extraction
- Integrated seata-golang: globally managed, zero-intrusion distributed transactions without modifying business code
v1.4.9
Changes:
- Improved documentation and comments
- Cards on the table: this release only touches comments, to keep the version activity up
v1.4.8
Changes:
- Improved documentation and comments
- Extra mappings between database fields and entity classes support _ underscore-to-camelCase conversion
v1.4.7
Changes:
- Valentine's Day release: when returning a map, if a value cannot be converted to the expected type, the original value is returned instead of nil
v1.4.6
Changes:
- Improved documentation and comments
- A thousand lines of our code do the work of a hundred thousand: zorm stays zero-dependency (the uuid and decimal utility packages alone run to 1,700 lines)
- In classified intranet development environments, zero dependencies save a great deal of trouble; if you can't achieve it, please don't call it unnecessary......
v1.4.5
Changes:
- Enhanced custom type conversion
- Improved documentation and comments
- Many thanks to @anxuanzi for improving the code generator
- Many thanks to @chien_tung for adding this changelog; every release will be recorded here from now on
v1.4.4
Changes:
- If a queried field is not found among the column tags, it is mapped to the struct attribute by name (case-insensitive)
- Added a has return value to QueryRow to indicate whether the database has a matching record; users should adjust their code when upgrading. Apologies ×3
v1.4.3
Changes:
- Official support for the GBase database, completing adaptation of the four major domestic databases
- Added setting of the global transaction isolation level and per-transaction isolation levels
- Fixed a logic bug with trigger-driven auto-increment primary keys
- Documentation improvements and minor adjustments
v1.4.2
Changes:
- Official support for the ShenTong database
- Improved auto-increment primary key return values for pgsql and kingbase
- Developers from seven companies suggested keeping query method names consistent with golang's sql package. After a difficult decision, some zorm method names were changed; a global search-and-replace of the following strings is all that is needed:
zorm.Query( becomes zorm.QueryRow(
zorm.QuerySlice( becomes zorm.Query(
zorm.QueryMap( becomes zorm.QueryRowMap(
zorm.QueryMapSlice( becomes zorm.QueryMap(
v1.4.1
Changes:
- Support custom extended field-mapping logic
v1.4.0
Changes:
- Changed the logic for detecting multiple rows
v1.3.9
Changes:
- Support custom data types, including json/jsonb
- Many thanks to @chien_tung for the report; QuerySlice supports the *[]*struct type, simplifying migration from xorm
- Other minor code optimizations.
v1.3.7
Changes:
- Many thanks to @zhou-a-xing (the single guy with eight-pack abs) for the English translation; zorm's core code comments are now bilingual (Chinese and English).
- Many thanks to @chien_tung for the report; fixed compatibility of auto-increment primary keys with the int and int64 types.
- Other minor code optimizations.
v1.3.6
Changes:
- Improved comments and documentation
- Fixed a parameter type error in the Delete method
- Other minor code optimizations.
v1.3.5
Changes:
- Improved comments and documentation
- When a database value is null, basic types now take their default values; thanks to @fastabler for the PR
- Fixed a bug in batch save: a slice of length 1 raised an exception on pgsql and oracle
- Other minor code optimizations.
v1.3.4
Changes:
- Improved comments and documentation
- Removed the requirement that pagination statements contain an order by clause
- Support for the Kingbase database
- Kingbase driver notes: https://help.kingbase.com.cn/doc-view-8108.html
- The Kingbase 8 core is based on postgresql 9.6; https://github.com/lib/pq can be used for testing, but the official driver is recommended for production
v1.3.3
Changes:
- Improved comments and documentation
- Added a method for batch-saving struct objects
- Official support for the DM database
- Published a go mod project based on the official DM driver: https://gitee.com/chunanyong/dm
v1.3.2
Changes:
- Added pagination adaptation for DM
- Improved and adjusted code comments
- Added examples of calling stored procedures and functions
v1.3.1
Changes:
- Renamed methods to stay close to gorm and xorm, lowering migration and learning costs
- Updated the test case documentation
v1.3.0
Changes:
- Removed the zap logging dependency; custom logging is done by overriding FuncLogError, FuncLogPanic, and FuncPrintSQL
- Raised the required golang version to v1.13
- Moved tests to readygo; the zorm project depends on no database driver packages
v1.2.9
Changes:
- IEntityMap supports auto-increment or sequence primary keys
- Update methods return the number of affected rows
- Fixed a bug where querying an IEntityMap with no matching records raised an exception
- Test cases double as documentation: https://gitee.com/chunanyong/readygo/blob/master/test/testzorm/BaseDao_test.go
v1.2.8
Changes:
- Exposed the FuncGenerateStringID function for customizing string primary key IDs
- Finder.Append adds a leading space by default, avoiding syntax errors from accidental string concatenation
- Use a plain map instead of sync.Map when caching field information, improving performance
- Third-party benchmark results
v1.2.6
Changes:
- DataSourceConfig distinguishes DriverName from DBType, supporting multiple driver packages for a single database
- No longer depends on database drivers explicitly; the user decides which driver package to depend on
v1.2.5
Changes:
- Pagination statements must have an explicit order by, avoiding incompatible pagination syntax when migrating databases.
- Fixed a bug where the page object was nil in list queries
v1.2.3
Changes:
- Broadened database support: MySQL, SQLServer, Oracle, PostgreSQL, and SQLite3 are currently supported
- Simplified read/write splitting; exposed the zorm.FuncReadWriteBaseDao function property for custom read/write strategies
- Trimmed the zorm.DataSourceConfig properties; added the PrintSQL property
v1.2.2
Changes:
- NewPage() now returns a pointer to Page, saving an & when passing it
- Removed the GetDBConnection() method; use BindContextConnection() to bind multiple databases
- Hid the DBConnection object; database objects are no longer exposed, avoiding exceptions from manual initialization
v1.1.8
Changes:
- Fixed UUID support
- Database connections and transactions are hidden inside context.Context as the unified parameter, following golang conventions, with better performance
- Wrapped the logger implementation so the log package can be swapped easily
- Added the zorm.UpdateStructNotZeroValue method, updating only non-zero fields
- Improved test cases

File diff suppressed because it is too large

@ -0,0 +1,268 @@
package zorm
import (
"errors"
"reflect"
"strconv"
"strings"
)
//Finder 查询数据库的载体,所有的sql语句都要通过Finder执行.
//Finder To query the database carrier, all SQL statements must be executed through Finder
type Finder struct {
//拼接SQL
//Splicing SQL.
sqlBuilder strings.Builder
//SQL的参数值
//SQL parameter values.
values []interface{}
//注入检查,默认true 不允许SQL注入的 ' 单引号
//Injection check. Defaults to true, which disallows the SQL-injection-prone ' single quote
InjectionCheck bool
//CountFinder 自定义的查询总条数Finder,使用指针默认为nil.主要是为了在'group by'等复杂情况下,为了性能,手动编写总条数语句
//CountFinder A custom Finder for querying the total row count; the pointer defaults to nil. It is mainly for hand-writing the count statement for performance in complex cases such as 'group by'
CountFinder *Finder
//是否自动查询总条数,默认true.同时需要Page不为nil,才查询总条数
//Whether to automatically query the total row count; defaults to true. The total count is only queried when Page is also not nil
SelectTotalCount bool
//SQL语句
//SQL statement
sqlstr string
}
//NewFinder 初始化一个Finder,生成一个空的Finder
//NewFinder Initialize a Finder and generate an empty Finder
func NewFinder() *Finder {
finder := Finder{}
finder.SelectTotalCount = true
finder.InjectionCheck = true
finder.values = make([]interface{}, 0)
return &finder
}
//NewSelectFinder 根据表名初始化查询的Finder | Finder that initializes the query based on the table name
//NewSelectFinder("tableName") SELECT * FROM tableName
//NewSelectFinder("tableName", "id,name") SELECT id,name FROM tableName
func NewSelectFinder(tableName string, strs ...string) *Finder {
finder := NewFinder()
finder.sqlBuilder.WriteString("SELECT ")
if len(strs) > 0 {
for _, str := range strs {
finder.sqlBuilder.WriteString(str)
}
} else {
finder.sqlBuilder.WriteString("*")
}
finder.sqlBuilder.WriteString(" FROM ")
finder.sqlBuilder.WriteString(tableName)
return finder
}
//NewUpdateFinder 根据表名初始化更新的Finder, UPDATE tableName SET
//NewUpdateFinder Initialize the updated Finder according to the table name, UPDATE tableName SET
func NewUpdateFinder(tableName string) *Finder {
finder := NewFinder()
finder.sqlBuilder.WriteString("UPDATE ")
finder.sqlBuilder.WriteString(tableName)
finder.sqlBuilder.WriteString(" SET ")
return finder
}
//NewDeleteFinder 根据表名初始化删除的'Finder', DELETE FROM tableName
//NewDeleteFinder Initialize a Finder for deletion based on the table name. DELETE FROM tableName
func NewDeleteFinder(tableName string) *Finder {
finder := NewFinder()
finder.sqlBuilder.WriteString("DELETE FROM ")
finder.sqlBuilder.WriteString(tableName)
//所有的 WHERE 都不加,规则统一,好记
//WHERE is never appended automatically; the rule is uniform and easy to remember
//finder.sqlBuilder.WriteString(" WHERE ")
return finder
}
//Append 添加SQL和参数的值,第一个参数是语句,后面的参数[可选]是参数的值,顺序要正确
//例如: finder.Append(" and id=? and name=? ",23123,"abc")
//只拼接SQL,例如: finder.Append(" and name=123 ")
//Append:Add SQL and parameter values, the first parameter is the statement, and the following parameter (optional) is the value of the parameter, in the correct order
//E.g: finder.Append(" and id=? and name=? ",23123,"abc")
//Only splice SQL, E.g: finder.Append(" and name=123 ")
func (finder *Finder) Append(s string, values ...interface{}) *Finder {
//不要自己构建finder,使用Newxxx方法
//Don't build finder by yourself, use Newxxx method
if finder.values == nil {
return nil
}
if len(s) > 0 {
if len(finder.sqlstr) > 0 {
finder.sqlstr = ""
}
//默认加一个空格,避免手误两个字符串连接再一起
//A space is added by default to avoid accidentally running two strings together
finder.sqlBuilder.WriteString(" ")
finder.sqlBuilder.WriteString(s)
}
if values == nil || len(values) < 1 {
return finder
}
//for _, v := range values {
// finder.Values = append(finder.Values, v)
//}
finder.values = append(finder.values, values...)
return finder
}
//AppendFinder 添加另一个Finder finder.AppendFinder(f)
//AppendFinder Add another Finder . finder.AppendFinder(f)
func (finder *Finder) AppendFinder(f *Finder) (*Finder, error) {
if f == nil {
return nil, errors.New("finder-->AppendFinder参数是nil")
}
//不要自己构建finder,使用Newxxx方法
//Don't build finder by yourself, use Newxxx method
if finder.values == nil {
return nil, errors.New("finder-->AppendFinder不要自己构建finder,使用Newxxx方法")
}
//添加f的SQL
//Append f's SQL
sqlstr, err := f.GetSQL()
if err != nil {
return nil, err
}
finder.sqlstr = ""
finder.sqlBuilder.WriteString(sqlstr)
//添加f的值
//Add the value of f
finder.values = append(finder.values, f.values...)
return finder, nil
}
//GetSQL 返回Finder封装的SQL语句
//GetSQL Return the SQL statement encapsulated by the Finder
func (finder *Finder) GetSQL() (string, error) {
//不要自己构建finder,使用Newxxx方法
//Don't build finder by yourself, use Newxxx method
if finder.values == nil {
return "", errors.New("finder-->GetSQL不要自己构建finder,使用Newxxx方法")
}
if len(finder.sqlstr) > 0 {
return finder.sqlstr, nil
}
sqlstr := finder.sqlBuilder.String()
finder.sqlstr = sqlstr
//包含单引号,属于非法字符串
//Contains single quotes, which are illegal strings
if finder.InjectionCheck && (strings.Contains(sqlstr, "'")) {
return "", errors.New("finder-->GetSQL SQL语句请不要直接拼接字符串参数!!!使用标准的占位符实现,例如 finder.Append(' and id=? and name=? ','123','abc')")
}
//处理sql语句中的in,实际就是把数组变量展开,例如 id in(?) ["1","2","3"] 语句变更为 id in (?,?,?) 参数也展开到参数数组里
//这里认为 slice类型的参数就是in
//Processing an in-clause in the SQL statement means expanding the array variable:
//for example, id in(?) with ["1","2","3"] becomes id in (?,?,?),
//and the values are expanded into the parameter array as well
//A slice-typed parameter is treated as an in-clause
if finder.values == nil || len(finder.values) < 1 { //如果没有参数
return sqlstr, nil
}
//?问号切割的数组
//The statement split into fragments on the ? placeholders
questions := strings.Split(sqlstr, "?")
//语句中没有?问号
//The statement contains no ? placeholder (Split always returns at least one fragment, so check for fewer than 2)
if len(questions) < 2 {
return sqlstr, nil
}
//重新记录参数值
//Re-record the parameter value
newValues := make([]interface{}, 0)
//新的sql
//new sql
var newSQLStr strings.Builder
//问号切割的语句片段个数比问号个数多1个,先把第一个语句片段加上,后面的片段索引比参数的索引大1
//Splitting on ? yields one more fragment than there are ? marks; write the first fragment first, then fragment i+1 follows parameter i
newSQLStr.WriteString(questions[0])
//遍历所有的参数
//Traverse all parameters
for i, v := range finder.values {
//先拼接问号,问号切割之后,问号就丢失了,先补充上
//Write the ? back first; splitting removed the ? marks, so restore it before the next fragment
newSQLStr.WriteString("?")
valueOf := reflect.ValueOf(v)
typeOf := reflect.TypeOf(v)
kind := valueOf.Kind()
//如果参数是个指针类型
//If the parameter is a pointer type
if kind == reflect.Ptr { //如果是指针 If it is a pointer
valueOf = valueOf.Elem()
typeOf = typeOf.Elem()
kind = valueOf.Kind()
}
//如果不是数组或者slice
//If it is not an array or slice
if !(kind == reflect.Array || kind == reflect.Slice) {
//记录新值
//Record new value.
newValues = append(newValues, v)
//记录SQL
//Log SQL.
newSQLStr.WriteString(questions[i+1])
continue
}
//字节数组是特殊的情况
//Byte array is a special case
if typeOf == reflect.TypeOf([]byte{}) {
//记录新值
//Record new value
newValues = append(newValues, v)
//记录SQL
//Log SQL
newSQLStr.WriteString(questions[i+1])
continue
}
//如果不是字符串类型的值,无法取长度,这个是个bug,先注释了
//获取数组类型参数值的长度
//Non-string values could not report a length; that was a bug, so that code is commented out for now
//Get the length of the slice-typed parameter value
sliceLen := valueOf.Len()
//数组类型的参数长度小于1,认为是有异常的参数
//The parameter length of the array type is less than 1, which is considered to be an abnormal parameter
if sliceLen < 1 {
return sqlstr, errors.New("finder-->GetSQL语句:" + sqlstr + ",第" + strconv.Itoa(i+1) + "个参数,类型是Array或者Slice,值的长度为0,请检查sql参数有效性")
}
for j := 0; j < sliceLen; j++ {
//每多一个参数,对应",?" 两个符号.增加的问号长度总计是(sliceLen-1)*2
//Each additional parameter appends ",?" (two characters); the total added length is (sliceLen-1)*2
if j >= 1 {
//记录SQL
//Log SQL.
newSQLStr.WriteString(",?")
}
//记录新值
//Record new value
sliceValue := valueOf.Index(j).Interface()
newValues = append(newValues, sliceValue)
}
//记录SQL
//Log SQL
newSQLStr.WriteString(questions[i+1])
}
//重新赋值
//Reassign
finder.sqlstr = newSQLStr.String()
finder.values = newValues
return finder.sqlstr, nil
}
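//Usage sketch (illustrative, not part of the source file): GetSQL expands slice parameters into the in-clause
//finder := NewSelectFinder("t_demo", "id,userName")
//finder.Append("WHERE id=? and active in(?)", "id001", []int{0, 1})
//sqlstr, _ := finder.GetSQL()
//sqlstr is now "SELECT id,userName FROM t_demo WHERE id=? and active in(?,?)" and finder.values is ["id001", 0, 1]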

@ -0,0 +1,143 @@
package zorm
//IEntityStruct "struct"实体类的接口,所有的struct实体类都要实现这个接口
//IEntityStruct The interface of the "struct" entity class, all struct entity classes must implement this interface
type IEntityStruct interface {
//获取表名称
//Get the table name.
GetTableName() string
//获取数据库表的主键字段名称.因为要兼容Map,只能是数据库的字段名称
//Get the primary key field name of the database table. Because it is compatible with Map, it can only be the field name of the database
GetPKColumnName() string
//GetPkSequence 主键序列,因为需要兼容多种数据库的序列,所以使用map
//key是DBType,value是序列的值,例如oracle的TESTSEQ.NEXTVAL,如果有值,优先级最高
//如果key对应的value是 "",则代表是触发器触发的序列,兼容自增关键字,例如 ["oracle"]""
//GetPkSequence Primary key sequence, because it needs to be compatible with multiple database sequences, map is used
//The key is the DB Type, and the value is the value of the sequence,
//such as Oracle's TESTSEQ.NEXTVAL. If there is a value, the priority is the highest
//If the value corresponding to the key is "", it means the sequence triggered by the trigger
//Compatible with auto-increment keywords, such as ["oracle"]""
GetPkSequence() map[string]string
}
//IEntityMap 使用Map保存数据,用于不方便使用struct的场景,如果主键是自增或者序列,不要"entityMap.Set"主键的值
//IEntityMap Uses a Map to hold data, for scenarios where a struct is inconvenient
//If the primary key is auto-increment or a sequence, do not "entityMap.Set" the primary key value
type IEntityMap interface {
//获取表名称
//Get the table name
GetTableName() string
//获取数据库表的主键字段名称.因为要兼容Map,只能是数据库的字段名称.
//Get the primary key field name of the database table. Because it is compatible with Map, it can only be the field name of the database.
GetPKColumnName() string
//GetPkSequence 主键序列,因为需要兼容多种数据库的序列,所以使用map
//key是DBType,value是序列的值,例如oracle的TESTSEQ.NEXTVAL,如果有值,优先级最高
//如果key对应的value是 "",则代表是触发器触发的序列,兼容自增关键字,例如 ["oracle"]""
//GetPkSequence Primary key sequence, because it needs to be compatible with multiple database sequences, map is used
//The key is the DB Type, and the value is the value of the sequence,
//such as Oracle's TESTSEQ.NEXTVAL. If there is a value, the priority is the highest
//If the value corresponding to the key is "", it means the sequence triggered by the trigger
//Compatible with auto-increment keywords, such as ["oracle"]""
GetPkSequence() map[string]string
//针对Map类型,记录数据库字段
//For Map type, record database fields.
GetDBFieldMap() map[string]interface{}
//设置数据库字段的值
//Set the value of a database field.
Set(key string, value interface{}) map[string]interface{}
}
//EntityStruct "IBaseEntity" 的基础实现,所有的实体类都匿名注入.这样就类似实现继承了,如果接口增加方法,调整这个默认实现即可
//EntityStruct The basic implementation of "IBaseEntity", all entity classes are injected anonymously
//This is similar to implementation inheritance. If the interface adds methods, adjust the default implementation
type EntityStruct struct {
}
//默认数据库的主键列名
//Primary key column name of the default database
const defaultPkName = "id"
//获取表名称 Get the table name
/*
func (entity *EntityStruct) GetTableName() string {
return ""
}
*/
//GetPKColumnName 获取数据库表的主键字段名称.因为要兼容Map,只能是数据库的字段名称
//GetPKColumnName Get the primary key field name of the database table
//Because it is compatible with Map, it can only be the field name of the database
func (entity *EntityStruct) GetPKColumnName() string {
return defaultPkName
}
//var defaultPkSequence = make(map[string]string, 0)
//GetPkSequence 主键序列,需要兼容多种数据库的序列,使用map,key是DBType,value是序列的值,例如oracle的TESTSEQ.NEXTVAL,如果有值,优先级最高
//如果key对应的value是 "",则代表是触发器触发的序列,兼容自增关键字,例如 ["oracle"]""
//GetPkSequence Primary key sequence; a map is used for compatibility with multiple databases. The key is the DBType and the value is the sequence, e.g. Oracle's TESTSEQ.NEXTVAL; when present it takes the highest priority
//If the value for a key is "", it denotes a trigger-driven sequence, compatible with auto-increment keywords, e.g. ["oracle"]""
func (entity *EntityStruct) GetPkSequence() map[string]string {
return nil
}
//-------------------------------------------------------------------------//
//EntityMap IEntityMap的基础实现,可以直接使用或者匿名注入
//EntityMap The basic implementation of IEntityMap; use it directly or embed it anonymously
type EntityMap struct {
//表名 Table name
tableName string
//主键列名 Primary key column name
PkColumnName string
//主键序列,需要兼容多种数据库的序列,使用map,key是DBType,value是序列的值,例如oracle的TESTSEQ.NEXTVAL,如果有值,优先级最高
//Primary key sequence; a map is used for multi-database compatibility. The key is the DBType and the value is the sequence, e.g. Oracle's TESTSEQ.NEXTVAL; when present it takes the highest priority
PkSequence map[string]string
//数据库字段,不暴露外部 Database fields, not exposed externally
dbFieldMap map[string]interface{}
}
//NewEntityMap 初始化Map,必须传入表名称 | NewEntityMap Initialize the Map; the table name must be passed in
func NewEntityMap(tbName string) *EntityMap {
entityMap := EntityMap{}
entityMap.dbFieldMap = map[string]interface{}{}
entityMap.tableName = tbName
entityMap.PkColumnName = defaultPkName
return &entityMap
}
//GetTableName 获取表名称 | GetTableName Get the table name
func (entity *EntityMap) GetTableName() string {
return entity.tableName
}
//GetPKColumnName 获取数据库表的主键字段名称.因为要兼容Map,只能是数据库的字段名称 | GetPKColumnName Get the primary key column name; because of Map compatibility, it can only be the database column name
func (entity *EntityMap) GetPKColumnName() string {
return entity.PkColumnName
}
//GetPkSequence 主键序列,因为需要兼容多种数据库的序列,所以使用map
//key是DBType,value是序列的值,例如oracle的TESTSEQ.NEXTVAL,如果有值,优先级最高
//如果key对应的value是 "",则代表是触发器触发的序列,兼容自增关键字,例如 ["oracle"]""
//GetPkSequence Primary key sequence, because it needs to be compatible with multiple database sequences, map is used
//The key is the DB Type, and the value is the value of the sequence,
//such as Oracle's TESTSEQ.NEXTVAL. If there is a value, the priority is the highest
//If the value corresponding to the key is "", it means the sequence triggered by the trigger
//Compatible with auto-increment keywords, such as ["oracle"]""
func (entity *EntityMap) GetPkSequence() map[string]string {
return entity.PkSequence
}
//GetDBFieldMap 针对Map类型,记录数据库字段
//GetDBFieldMap For Map type, record database fields
func (entity *EntityMap) GetDBFieldMap() map[string]interface{} {
return entity.dbFieldMap
}
//Set 设置数据库字段
//Set Set database fields
func (entity *EntityMap) Set(key string, value interface{}) map[string]interface{} {
entity.dbFieldMap[key] = value
return entity.dbFieldMap
}
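//Usage sketch (illustrative, not part of the source file): a minimal entity embedding EntityStruct
//type userStruct struct {
//zorm.EntityStruct
//Id string `column:"id"`
//Name string `column:"name"`
//}
//func (entity *userStruct) GetTableName() string { return "t_user" }
//GetPKColumnName and GetPkSequence are inherited from EntityStruct; the primary key column defaults to "id"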

@ -0,0 +1,110 @@
package zorm
import "context"
// ISeataGlobalTransaction is a wrapper interface for seata-golang that isolates the seata-golang dependency
// Declare a struct that implements this interface, and configure the FuncSeataGlobalTransaction function with your implementation
/**
//Without the proxy pattern: globally managed, zero-intrusion distributed transactions with no business-code changes
//tm.Implement(svc.ProxySvc)
// Sample code for a distributed transaction
_, err := zorm.Transaction(ctx, func(ctx context.Context) (interface{}, error) {
// Get the XID of the current distributed transaction. Don't worry where it comes from; in a distributed transaction environment the value is set automatically
// xid := ctx.Value("XID").(string)
// Pass the xid on to the third-party application
// req.Header.Set("XID", xid)
// If the returned err is not nil, the local and the distributed transaction are rolled back
return nil, err
})
///----------Third-party application-------///
// Before the third-party application starts a transaction, the ctx must carry the XID, e.g. when using the gin framework
// Receive the XID passed in and bind it to the local ctx
// xid:=c.Request.Header.Get("XID")
// Get the ctx
// ctx := c.Request.Context()
// ctx = context.WithValue(ctx,"XID",xid)
// After the ctx carries the XID, invoke the business transaction
_, err := zorm.Transaction(ctx, func(ctx context.Context) (interface{}, error) {
// Business code......
// If the returned err is not nil, the local and the distributed transaction are rolled back
return nil, err
})
// It is recommended to put the following code in a separate file
//................//
// ZormSeataGlobalTransaction wraps seata's *tm.DefaultGlobalTransaction and implements the zorm.ISeataGlobalTransaction interface
type ZormSeataGlobalTransaction struct {
*tm.DefaultGlobalTransaction
}
// MyFuncSeataGlobalTransaction zorm's adapter function for seata distributed transactions
// Important!!!! You must configure zorm.DataSourceConfig.FuncSeataGlobalTransaction=MyFuncSeataGlobalTransaction Important!!!
func MyFuncSeataGlobalTransaction(ctx context.Context) (zorm.ISeataGlobalTransaction, context.Context, error) {
//Get seata's rootContext
rootContext := seataContext.NewRootContext(ctx)
//Create the seata transaction
seataTx := tm.GetCurrentOrCreate(rootContext)
//Wrap the seata transaction in a zorm.ISeataGlobalTransaction interface object to isolate the seata-golang dependency
seataGlobalTransaction := ZormSeataGlobalTransaction{seataTx}
return seataGlobalTransaction, rootContext, nil
}
//Implement the zorm.ISeataGlobalTransaction interface
func (gtx ZormSeataGlobalTransaction) SeataBegin(ctx context.Context) error {
rootContext := ctx.(*seataContext.RootContext)
return gtx.BeginWithTimeout(int32(6000), rootContext)
}
func (gtx ZormSeataGlobalTransaction) SeataCommit(ctx context.Context) error {
rootContext := ctx.(*seataContext.RootContext)
return gtx.Commit(rootContext)
}
func (gtx ZormSeataGlobalTransaction) SeataRollback(ctx context.Context) error {
rootContext := ctx.(*seataContext.RootContext)
//If the role is Participant, change it to Launcher, allowing a branch transaction to commit the global transaction.
if gtx.Role != tm.Launcher {
gtx.Role = tm.Launcher
}
return gtx.Rollback(rootContext)
}
func (gtx ZormSeataGlobalTransaction) GetSeataXID(ctx context.Context) string {
rootContext := ctx.(*seataContext.RootContext)
return rootContext.GetXID()
}
//................//
**/
type ISeataGlobalTransaction interface {
//Begin the seata global transaction
SeataBegin(ctx context.Context) error
//Commit the seata global transaction
SeataCommit(ctx context.Context) error
//Roll back the seata global transaction
SeataRollback(ctx context.Context) error
//Get the XID of the seata transaction
GetSeataXID(ctx context.Context) string
//Re-wrap as seata's context.RootContext
//If context.WithValue is later applied to a context.RootContext, the type becomes context.valueCtx and it can no longer be type-asserted back to context.RootContext
//That is why DBDao keeps a seataRootContext variable to distinguish the business ctx from seata's RootContext
//SeataNewRootContext(ctx context.Context) context.Context
}

@ -0,0 +1,201 @@
Apache License
Version 2.0, January 2004
http://www.apache.org/licenses/
TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION
1. Definitions.
"License" shall mean the terms and conditions for use, reproduction,
and distribution as defined by Sections 1 through 9 of this document.
"Licensor" shall mean the copyright owner or entity authorized by
the copyright owner that is granting the License.
"Legal Entity" shall mean the union of the acting entity and all
other entities that control, are controlled by, or are under common
control with that entity. For the purposes of this definition,
"control" means (i) the power, direct or indirect, to cause the
direction or management of such entity, whether by contract or
otherwise, or (ii) ownership of fifty percent (50%) or more of the
outstanding shares, or (iii) beneficial ownership of such entity.
"You" (or "Your") shall mean an individual or Legal Entity
exercising permissions granted by this License.
"Source" form shall mean the preferred form for making modifications,
including but not limited to software source code, documentation
source, and configuration files.
"Object" form shall mean any form resulting from mechanical
transformation or translation of a Source form, including but
not limited to compiled object code, generated documentation,
and conversions to other media types.
"Work" shall mean the work of authorship, whether in Source or
Object form, made available under the License, as indicated by a
copyright notice that is included in or attached to the work
(an example is provided in the Appendix below).
"Derivative Works" shall mean any work, whether in Source or Object
form, that is based on (or derived from) the Work and for which the
editorial revisions, annotations, elaborations, or other modifications
represent, as a whole, an original work of authorship. For the purposes
of this License, Derivative Works shall not include works that remain
separable from, or merely link (or bind by name) to the interfaces of,
the Work and Derivative Works thereof.
"Contribution" shall mean any work of authorship, including
the original version of the Work and any modifications or additions
to that Work or Derivative Works thereof, that is intentionally
submitted to Licensor for inclusion in the Work by the copyright owner
or by an individual or Legal Entity authorized to submit on behalf of
the copyright owner. For the purposes of this definition, "submitted"
means any form of electronic, verbal, or written communication sent
to the Licensor or its representatives, including but not limited to
communication on electronic mailing lists, source code control systems,
and issue tracking systems that are managed by, or on behalf of, the
Licensor for the purpose of discussing and improving the Work, but
excluding communication that is conspicuously marked or otherwise
designated in writing by the copyright owner as "Not a Contribution."
"Contributor" shall mean Licensor and any individual or Legal Entity
on behalf of whom a Contribution has been received by Licensor and
subsequently incorporated within the Work.
2. Grant of Copyright License. Subject to the terms and conditions of
this License, each Contributor hereby grants to You a perpetual,
worldwide, non-exclusive, no-charge, royalty-free, irrevocable
copyright license to reproduce, prepare Derivative Works of,
publicly display, publicly perform, sublicense, and distribute the
Work and such Derivative Works in Source or Object form.
3. Grant of Patent License. Subject to the terms and conditions of
this License, each Contributor hereby grants to You a perpetual,
worldwide, non-exclusive, no-charge, royalty-free, irrevocable
(except as stated in this section) patent license to make, have made,
use, offer to sell, sell, import, and otherwise transfer the Work,
where such license applies only to those patent claims licensable
by such Contributor that are necessarily infringed by their
Contribution(s) alone or by combination of their Contribution(s)
with the Work to which such Contribution(s) was submitted. If You
institute patent litigation against any entity (including a
cross-claim or counterclaim in a lawsuit) alleging that the Work
or a Contribution incorporated within the Work constitutes direct
or contributory patent infringement, then any patent licenses
granted to You under this License for that Work shall terminate
as of the date such litigation is filed.
4. Redistribution. You may reproduce and distribute copies of the
Work or Derivative Works thereof in any medium, with or without
modifications, and in Source or Object form, provided that You
meet the following conditions:
(a) You must give any other recipients of the Work or
Derivative Works a copy of this License; and
(b) You must cause any modified files to carry prominent notices
stating that You changed the files; and
(c) You must retain, in the Source form of any Derivative Works
that You distribute, all copyright, patent, trademark, and
attribution notices from the Source form of the Work,
excluding those notices that do not pertain to any part of
the Derivative Works; and
(d) If the Work includes a "NOTICE" text file as part of its
distribution, then any Derivative Works that You distribute must
include a readable copy of the attribution notices contained
within such NOTICE file, excluding those notices that do not
pertain to any part of the Derivative Works, in at least one
of the following places: within a NOTICE text file distributed
as part of the Derivative Works; within the Source form or
documentation, if provided along with the Derivative Works; or,
within a display generated by the Derivative Works, if and
wherever such third-party notices normally appear. The contents
of the NOTICE file are for informational purposes only and
do not modify the License. You may add Your own attribution
notices within Derivative Works that You distribute, alongside
or as an addendum to the NOTICE text from the Work, provided
that such additional attribution notices cannot be construed
as modifying the License.
You may add Your own copyright statement to Your modifications and
may provide additional or different license terms and conditions
for use, reproduction, or distribution of Your modifications, or
for any such Derivative Works as a whole, provided Your use,
reproduction, and distribution of the Work otherwise complies with
the conditions stated in this License.
5. Submission of Contributions. Unless You explicitly state otherwise,
any Contribution intentionally submitted for inclusion in the Work
by You to the Licensor shall be under the terms and conditions of
this License, without any additional terms or conditions.
Notwithstanding the above, nothing herein shall supersede or modify
the terms of any separate license agreement you may have executed
with Licensor regarding such Contributions.
6. Trademarks. This License does not grant permission to use the trade
names, trademarks, service marks, or product names of the Licensor,
except as required for reasonable and customary use in describing the
origin of the Work and reproducing the content of the NOTICE file.
7. Disclaimer of Warranty. Unless required by applicable law or
agreed to in writing, Licensor provides the Work (and each
Contributor provides its Contributions) on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
implied, including, without limitation, any warranties or conditions
of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A
PARTICULAR PURPOSE. You are solely responsible for determining the
appropriateness of using or redistributing the Work and assume any
risks associated with Your exercise of permissions under this License.
8. Limitation of Liability. In no event and under no legal theory,
whether in tort (including negligence), contract, or otherwise,
unless required by applicable law (such as deliberate and grossly
negligent acts) or agreed to in writing, shall any Contributor be
liable to You for damages, including any direct, indirect, special,
incidental, or consequential damages of any character arising as a
result of this License or out of the use or inability to use the
Work (including but not limited to damages for loss of goodwill,
work stoppage, computer failure or malfunction, or any and all
other commercial damages or losses), even if such Contributor
has been advised of the possibility of such damages.
9. Accepting Warranty or Additional Liability. While redistributing
the Work or Derivative Works thereof, You may choose to offer,
and charge a fee for, acceptance of support, warranty, indemnity,
or other liability obligations and/or rights consistent with this
License. However, in accepting such obligations, You may act only
on Your own behalf and on Your sole responsibility, not on behalf
of any other Contributor, and only if You agree to indemnify,
defend, and hold each Contributor harmless for any liability
incurred by, or claims asserted against, such Contributor by reason
of your accepting any such warranty or additional liability.
END OF TERMS AND CONDITIONS
APPENDIX: How to apply the Apache License to your work.
To apply the Apache License to your work, attach the following
boilerplate notice, with the fields enclosed by brackets "[]"
replaced with your own identifying information. (Don't include
the brackets!) The text should be enclosed in the appropriate
comment syntax for the file format. We also recommend that a
file or class name and description of purpose be included on the
same "printed page" as the copyright notice for easier
identification within third-party archives.
Copyright [yyyy] [name of copyright owner]
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.

@ -0,0 +1,43 @@
package zorm
import (
"fmt"
"log"
)
func init() {
//设置默认的日志显示信息,显示文件和行号
//Set the default log display information, display file and line number.
log.SetFlags(log.Llongfile | log.LstdFlags)
}
//LogCallDepth 记录日志调用层级,用于定位到业务层代码
//LogCallDepth Records the log call depth, used to locate the business-layer code
var LogCallDepth = 4
//FuncLogError 记录error日志
//FuncLogError Record error log
var FuncLogError func(err error) = defaultLogError
//FuncLogPanic 记录panic日志,默认使用"defaultLogError"实现
//FuncLogPanic Record panic log, using "defaultLogError" by default
var FuncLogPanic func(err error) = defaultLogPanic
//FuncPrintSQL 打印sql语句和参数
//FuncPrintSQL Print sql statement and parameters
var FuncPrintSQL func(sqlstr string, args []interface{}) = defaultPrintSQL
func defaultLogError(err error) {
log.Output(LogCallDepth, fmt.Sprintln(err))
}
func defaultLogPanic(err error) {
defaultLogError(err)
}
func defaultPrintSQL(sqlstr string, args []interface{}) {
if args != nil {
log.Output(LogCallDepth, fmt.Sprintln("sql:", sqlstr, ",args:", args))
} else {
log.Output(LogCallDepth, fmt.Sprintln("sql:", sqlstr))
}
}
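//Usage sketch (illustrative, not part of the source file): override the log functions from your own package, usually in an init function
//func init() {
//zorm.FuncPrintSQL = func(sqlstr string, args []interface{}) {
//log.Output(zorm.LogCallDepth, fmt.Sprintln("customSQL:", sqlstr, ",args:", args))
//}
//}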

@ -0,0 +1,64 @@
package zorm
//Page 分页对象
//Page Pagination object
type Page struct {
//当前页码,从1开始
//Current page number, starting from 1
PageNo int
//每页多少条,默认20条
//How many items per page, 20 items by default
PageSize int
//数据总条数
//Total number of rows
TotalCount int
//共多少页
//How many pages
PageCount int
//是否是第一页
//Is it the first page
FirstPage bool
//是否有上一页
//Whether there is a previous page
HasPrev bool
//是否有下一页
//Is there a next page
HasNext bool
//是否是最后一页
//Is it the last page
LastPage bool
}
//NewPage 创建Page对象
//NewPage Create Page object
func NewPage() *Page {
page := Page{}
page.PageNo = 1
page.PageSize = 20
return &page
}
//setTotalCount 设置总条数,计算其他值
//setTotalCount Set the total row count and compute the other fields
func (page *Page) setTotalCount(total int) {
page.TotalCount = total
page.PageCount = (page.TotalCount + page.PageSize - 1) / page.PageSize
if page.PageNo >= page.PageCount {
page.LastPage = true
} else {
page.HasNext = true
}
if page.PageNo > 1 {
page.HasPrev = true
} else {
page.FirstPage = true
}
}
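//Usage sketch (illustrative, not part of the source file): how setTotalCount derives the other fields
//page := NewPage() //PageNo=1, PageSize=20
//page.PageNo = 3
//page.setTotalCount(45) //called by zorm after the count query
//now page.PageCount == 3 ((45+20-1)/20), page.LastPage == true, page.HasPrev == true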

@ -0,0 +1,760 @@
## Introduction
A lightweight Go (golang) ORM: zero dependencies, zero-intrusion distributed transactions, supporting the DM (dm), Kingbase (kingbase), ShenTong (shentong), and GBase (gbase) databases as well as mysql, postgresql, oracle, mssql, sqlite, and clickhouse.
Source address: https://gitee.com/chunanyong/zorm
Author's blog: [https://www.jiagou.com](https://www.jiagou.com)
QQ group [727723736]() Join the community group chat for questions and technical discussion
Community WeChat: [LAUV927]()
```
go get gitee.com/chunanyong/zorm
```
* Written with native SQL statements; a streamlined and optimized version of [springrain](https://gitee.com/chunanyong/springrain).
* [Code generator](https://gitee.com/zhou-a-xing/wsgt)
* Lean code: about 2,500 lines in the core plus 4,000 lines of zero-dependency utilities, with detailed comments for easy customization.
* <font color=red>Supports transaction propagation, the main reason zorm was born</font> (see the quick-start sketch after this list)
* Supports the mysql, postgresql, oracle, mssql, sqlite, clickhouse, dm (DM), kingbase (Kingbase), shentong (ShenTong), and gbase (GBase) databases
* Supports multiple databases and read/write splitting
* Update performance of zorm, gorm, and xorm is comparable; zorm's read performance is 50% faster than gorm and xorm
* Composite primary keys are not supported; treat the table as having no primary key and handle it in business logic (a difficult trade-off)
* Integrates seata-golang: globally managed, zero-intrusion distributed transactions without modifying business code
* Supports clickhouse; UPDATE and DELETE statements use SQL92 standard syntax. The official clickhouse-go driver does not support batch insert syntax; https://github.com/mailru/go-clickhouse is recommended
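A minimal quick-start sketch (illustrative only; it assumes a MySQL DSN and the t_demo table defined later in this README, and mirrors the test cases below):
```go
package main

import (
	"context"
	"fmt"

	"gitee.com/chunanyong/zorm"
	//The user chooses and imports the database driver
	_ "github.com/go-sql-driver/mysql"
)

func main() {
	//The first DBDao created becomes the default used by the zorm.xxx functions
	dbDaoConfig := zorm.DataSourceConfig{
		DSN:        "root:root@tcp(127.0.0.1:3306)/readygo?charset=utf8&parseTime=true",
		DriverName: "mysql",
		DBType:     "mysql",
	}
	_, err := zorm.NewDBDao(&dbDaoConfig)
	if err != nil {
		panic(err)
	}

	//Placeholders are always ?; zorm adapts them to the database type
	finder := zorm.NewSelectFinder("t_demo").Append("WHERE active=?", 1)
	resultMap, err := zorm.QueryRowMap(context.Background(), finder)
	fmt.Println(resultMap, err)
}
```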
For production usage of zorm, see [UserStructService.go](https://gitee.com/chunanyong/readygo/tree/master/permission/permservice)
## About the source repositories
The main repositories of the open-source projects I lead are on gitee; the project descriptions on github redirect to gitee. This slows the growth of github stars, since the github community is stronger after all.
**Open source has no borders, but developers have their own homeland.**
Strictly speaking, github is governed by US law: https://www.infoq.cn/article/SA72SsSeZBpUSH_ZH8XB
I support the domestic open-source community as best I can. No offense meant; thank you!
## Domestic database support
### DM (dm)
Configure zorm.DataSourceConfig with DriverName:dm, DBType:dm
DM database driver: [https://gitee.com/chunanyong/dm](https://gitee.com/chunanyong/dm)
DM's text type maps to dm.DmClob and cannot be received by a string; implement the zorm.CustomDriverValueConver interface for custom extended handling
### Kingbase (kingbase)
Configure zorm.DataSourceConfig with DriverName:kingbase, DBType:kingbase
Kingbase driver notes: [https://help.kingbase.com.cn/doc-view-8108.html](https://help.kingbase.com.cn/doc-view-8108.html)
The Kingbase 8 core is based on postgresql 9.6; [https://github.com/lib/pq](https://github.com/lib/pq) can be used for testing, but the official driver is recommended for production.
Be sure to set ```ora_input_emptystr_isnull = false``` in data/kingbase.conf: golang has no null value and database columns are usually not null, while golang's string defaults to ''. If this option is true, the database stores the value as null, which conflicts with a not null column and raises an error.
### ShenTong (shentong)
The official driver is recommended; configure zorm.DataSourceConfig with DriverName:aci, DBType:shentong
### GBase (gbase)
~~No official golang driver has been found yet; configure zorm.DataSourceConfig with DriverName:gbase, DBType:gbase~~
Use the odbc driver for now: DriverName:odbc, DBType:gbase
## Test cases
https://gitee.com/chunanyong/readygo/blob/master/test/testzorm/BaseDao_test.go
```go
// zorm uses native SQL statements and places no restrictions on SQL syntax; statements are carried by a Finder
// Placeholders are always ?; zorm replaces them automatically based on the database type, e.g. postgresql replaces ? with $1, $2...
// zorm uses the ctx context.Context parameter to propagate transactions; pass ctx in from the web layer, e.g. gin's c.Request.Context()
// zorm transactions must be explicitly started with zorm.Transaction(ctx, func(ctx context.Context) (interface{}, error) {})
```
## Database scripts and entity classes
https://gitee.com/chunanyong/readygo/blob/master/test/testzorm/demoStruct.go
Generate the entity classes or write them by hand; the code generator is recommended: https://gitee.com/zhou-a-xing/wsgt
```go
package testzorm
import (
"time"
"gitee.com/chunanyong/zorm"
)
//Table creation statement
/*
DROP TABLE IF EXISTS `t_demo`;
CREATE TABLE `t_demo` (
`id` varchar(50) NOT NULL COMMENT 'Primary key',
`userName` varchar(30) NOT NULL COMMENT 'Name',
`password` varchar(50) NOT NULL COMMENT 'Password',
`createTime` datetime NOT NULL DEFAULT CURRENT_TIMESTAMP(0),
`active` int COMMENT 'Valid or not (0 no, 1 yes)',
PRIMARY KEY (`id`)
) ENGINE = InnoDB CHARACTER SET = utf8mb4 COMMENT = 'example' ;
*/
//demoStructTableName Table name constant, for direct use
const demoStructTableName = "t_demo"
// demoStruct example
type demoStruct struct {
//Embed the default struct to insulate against changes to the IEntityStruct methods
zorm.EntityStruct
//Id Primary key
Id string `column:"id"`
//UserName Name
UserName string `column:"userName"`
//Password Password
Password string `column:"password"`
//CreateTime <no value>
CreateTime time.Time `column:"createTime"`
//Active Valid or not (0 no, 1 yes)
//Active int `column:"active"`
//------------------End of database fields; custom fields go below---------------//
//If a queried field is not found among the column tags, it is mapped to the struct attribute by name (case-insensitive, with _ underscore-to-camelCase support)
//Simulated custom field Active
Active int
}
//GetTableName Get the table name
//Method of the IEntityStruct interface; entity classes must implement it!!!
func (entity *demoStruct) GetTableName() string {
return demoStructTableName
}
//GetPKColumnName Get the primary key column name of the database table. Because of Map compatibility, it can only be the database column name
//Composite primary keys are not supported; treat the table as having no primary key and handle it in business logic (a difficult trade-off)
//If there is no primary key, this method must still be implemented; just return ""
//Method of the IEntityStruct interface; entity classes must implement it!!!
func (entity *demoStruct) GetPKColumnName() string {
//If there is no primary key
//return ""
return "id"
}
//newDemoStruct Create a default object
func newDemoStruct() demoStruct {
demo := demoStruct{
//If Id=="", zorm calls zorm.FuncGenerateStringID() on save; the default is timestamp + random number. You can supply your own implementation, e.g. zorm.FuncGenerateStringID=funcmyId
Id: zorm.FuncGenerateStringID(),
UserName: "defaultUserName",
Password: "defaultPassword",
Active: 1,
CreateTime: time.Now(),
}
return demo
}
```
## Test cases double as documentation
```go
// testzorm uses native SQL statements and places no restrictions on SQL syntax; statements are carried by a Finder
// Placeholders are always ?; zorm replaces them automatically based on the database type, e.g. postgresql replaces ? with $1, $2...
// zorm uses the ctx context.Context parameter to propagate transactions; pass ctx in from the web layer, e.g. gin's c.Request.Context()
// zorm transactions must be explicitly started with zorm.Transaction(ctx, func(ctx context.Context) (interface{}, error) {})
package testzorm
import (
"context"
"fmt"
"testing"
"time"
"gitee.com/chunanyong/zorm"
//00. Import the database driver
_ "github.com/go-sql-driver/mysql"
)
//dbDao represents one database; if there are multiple databases, declare one DBDao per database
var dbDao *zorm.DBDao
// ctx should normally come from the web layer, e.g. gin's c.Request.Context(); this is only a simulation
var ctx = context.Background()
//01. Initialize the DBDao
func init() {
//Customize zorm's log output
//zorm.LogCallDepth = 4 //Call depth for logging
//zorm.FuncLogError = myFuncLogError //Function for logging errors
//zorm.FuncLogPanic = myFuncLogPanic //Function for logging panics; defaults to defaultLogError
//zorm.FuncPrintSQL = myFuncPrintSQL //Function for printing SQL
//To customize the log output format, reassign the FuncPrintSQL function
//log.SetFlags(log.LstdFlags)
//zorm.FuncPrintSQL = zorm.FuncPrintSQL
//dbDaoConfig Database configuration. This is only a simulation; in production, read a config file and build the DataSourceConfig
dbDaoConfig := zorm.DataSourceConfig{
//DSN Database connection string
DSN: "root:root@tcp(127.0.0.1:3306)/readygo?charset=utf8&parseTime=true",
//Driver name: mysql,postgres,oci8,sqlserver,sqlite3,clickhouse,dm,kingbase,aci. Corresponds to DBType; handles multiple drivers for one database
DriverName: "mysql",
//Database type (dialect selector): mysql,postgresql,oracle,mssql,sqlite,clickhouse,dm,kingbase,shentong. Corresponds to DriverName; handles multiple drivers for one database
DBType: "mysql",
//MaxOpenConns Maximum number of database connections; default 50
MaxOpenConns: 50,
//MaxIdleConns Maximum number of idle database connections; default 50
MaxIdleConns: 50,
//ConnMaxLifetimeSecond Connection lifetime in seconds. By default a connection is destroyed and rebuilt after 600 seconds (10 minutes), preventing dead connections when the database closes them; MySQL's default wait_timeout is 28800 seconds (8 hours)
ConnMaxLifetimeSecond: 600,
//PrintSQL Print SQL; uses FuncPrintSQL to record SQL
PrintSQL: true,
//DefaultTxOptions Default transaction isolation configuration; defaults to nil
//DefaultTxOptions: nil,
//For seata-golang distributed transactions, the default configuration is recommended
//DefaultTxOptions: &sql.TxOptions{Isolation: sql.LevelDefault, ReadOnly: false},
//FuncSeataGlobalTransaction The seata-golang adapter function, returning an implementation of the ISeataGlobalTransaction interface
//FuncSeataGlobalTransaction : MyFuncSeataGlobalTransaction,
}
// Create the dbDao from dbDaoConfig; do this once per database. The first database created becomes defaultDao, and subsequent zorm.xxx functions use it by default
dbDao, _ = zorm.NewDBDao(&dbDaoConfig)
}
//TestInsert 02. Test saving a struct object
func TestInsert(t *testing.T) {
//The transaction must be started manually; if the error returned by the anonymous function is not nil, the transaction is rolled back
//If the global DefaultTxOptions does not meet your needs, set the transaction isolation level before calling zorm.Transaction,
//e.g. ctx, _ := dbDao.BindContextTxOptions(ctx, &sql.TxOptions{Isolation: sql.LevelDefault, ReadOnly: false}); if txOptions is nil, the global DefaultTxOptions is used
_, err := zorm.Transaction(ctx, func(ctx context.Context) (interface{}, error) {
//Create a demo object
demo := newDemoStruct()
//Save the object; the parameter is a pointer to the object. If the primary key is auto-increment, it is assigned to the object's primary key attribute
_, err := zorm.Insert(ctx, &demo)
//If the returned err is not nil, the transaction is rolled back
return nil, err
})
//Mark the test as failed
if err != nil {
t.Errorf("error:%v", err)
}
}
//TestInsertSlice 03. Test batch-saving a slice of struct objects
//If the primary key is auto-increment, the primary key attributes of the struct objects cannot be assigned
func TestInsertSlice(t *testing.T) {
//The transaction must be started manually; if the error returned by the anonymous function is not nil, the transaction is rolled back
//If the global DefaultTxOptions does not meet your needs, set the transaction isolation level before calling zorm.Transaction,
//e.g. ctx, _ := dbDao.BindContextTxOptions(ctx, &sql.TxOptions{Isolation: sql.LevelDefault, ReadOnly: false}); if txOptions is nil, the global DefaultTxOptions is used
_, err := zorm.Transaction(ctx, func(ctx context.Context) (interface{}, error) {
//The slice element type is zorm.IEntityStruct!!! golang has no generics yet; the IEntityStruct interface is used for struct entity compatibility
demoSlice := make([]zorm.IEntityStruct, 0)
//Create object 1
demo1 := newDemoStruct()
demo1.UserName = "demo1"
//Create object 2
demo2 := newDemoStruct()
demo2.UserName = "demo2"
demoSlice = append(demoSlice, &demo1, &demo2)
//Batch-save the objects; if the primary key is auto-increment, the auto-increment IDs cannot be saved back into the objects.
_, err := zorm.InsertSlice(ctx, demoSlice)
//If the returned err is not nil, the transaction is rolled back
return nil, err
})
//Mark the test as failed
if err != nil {
t.Errorf("error:%v", err)
}
}
//TestInsertEntityMap 04. Test saving an EntityMap object, for scenarios where a struct is inconvenient; a Map is the carrier
func TestInsertEntityMap(t *testing.T) {
//The transaction must be started manually; if the error returned by the anonymous function is not nil, the transaction is rolled back
//If the global DefaultTxOptions does not meet your needs, set the transaction isolation level before calling zorm.Transaction,
//e.g. ctx, _ := dbDao.BindContextTxOptions(ctx, &sql.TxOptions{Isolation: sql.LevelDefault, ReadOnly: false}); if txOptions is nil, the global DefaultTxOptions is used
_, err := zorm.Transaction(ctx, func(ctx context.Context) (interface{}, error) {
//Create an EntityMap; the table name is required
entityMap := zorm.NewEntityMap(demoStructTableName)
//Set the primary key name
entityMap.PkColumnName = "id"
//If the primary key uses a sequence, set the sequence value
//entityMap.PkSequence = "mySequence"
//Set Sets a database field value
//If the primary key is auto-increment or a sequence, do not entityMap.Set the primary key value
entityMap.Set("id", zorm.FuncGenerateStringID())
entityMap.Set("userName", "entityMap-userName")
entityMap.Set("password", "entityMap-password")
entityMap.Set("createTime", time.Now())
entityMap.Set("active", 1)
//Execute
_, err := zorm.InsertEntityMap(ctx, entityMap)
//If the returned err is not nil, the transaction is rolled back
return nil, err
})
//Mark the test as failed
if err != nil {
t.Errorf("error:%v", err)
}
}
//TestQueryRow 05. Test querying a single struct object
func TestQueryRow(t *testing.T) {
//Declare a pointer to an object to receive the returned data
demo := &demoStruct{}
//Build the finder for the query
finder := zorm.NewSelectFinder(demoStructTableName) // select * from t_demo
//finder = zorm.NewSelectFinder(demoStructTableName, "id,userName") // select id,userName from t_demo
//finder = zorm.NewFinder().Append("SELECT * FROM " + demoStructTableName) // select * from t_demo
//finder enables SQL injection checking by default and forbids concatenating ' single quotes into the statement; set finder.InjectionCheck = false to lift the restriction
//finder.Append The first parameter is the statement, the following parameters are the corresponding values in the correct order. Statements always use ?; zorm handles database differences
finder.Append("WHERE id=? and active in(?)", "20210630163227149563000042432429", []int{0, 1})
//Run the query
_, err := zorm.QueryRow(ctx, finder, demo)
if err != nil { //Mark the test as failed
t.Errorf("error:%v", err)
}
//Print the result
fmt.Println(demo)
}
//TestQueryRowMap 06. Test receiving the result with a map, for scenarios that don't fit a struct; more flexible
func TestQueryRowMap(t *testing.T) {
//Build the finder for the query
finder := zorm.NewSelectFinder(demoStructTableName) // select * from t_demo
//finder.Append The first parameter is the statement, the following parameters are the corresponding values in the correct order. Statements always use ?; zorm handles database differences
finder.Append("WHERE id=? and active in(?)", "20210630163227149563000042432429", []int{0, 1})
//Run the query
resultMap, err := zorm.QueryRowMap(ctx, finder)
if err != nil { //Mark the test as failed
t.Errorf("error:%v", err)
}
//Print the result
fmt.Println(resultMap)
}
//TestQuery 07. Test querying a list of objects
func TestQuery(t *testing.T) {
//Create a slice to receive the results
list := make([]*demoStruct, 0)
//Build the finder for the query
finder := zorm.NewSelectFinder(demoStructTableName) // select * from t_demo
//Create the pagination object; after the query completes, page can be handed directly to a front-end pagination component
page := zorm.NewPage()
page.PageNo = 2 //Query page 2; the default is 1
page.PageSize = 2 //2 rows per page; the default is 20
//Run the query
err := zorm.Query(ctx, finder, &list, page)
if err != nil { //Mark the test as failed
t.Errorf("error:%v", err)
}
//Print the result
fmt.Println("total:", page.TotalCount, " list:", list)
}
//TestQueryMap 08. Test querying a list of maps, for scenarios where a struct is inconvenient; each record is one map
func TestQueryMap(t *testing.T) {
//Build the finder for the query
finder := zorm.NewSelectFinder(demoStructTableName) // select * from t_demo
//Create the pagination object; after the query completes, page can be handed directly to a front-end pagination component
page := zorm.NewPage()
page.PageNo = 1 //Query page 1; the default is 1
page.PageSize = 2 //2 rows per page; the default is 20
//Run the query
listMap, err := zorm.QueryMap(ctx, finder, page)
if err != nil { //Mark the test as failed
t.Errorf("error:%v", err)
}
//Print the result
fmt.Println("total:", page.TotalCount, " list:", listMap)
}
//TestUpdateNotZeroValue 09. Update a struct object; only non-zero fields are updated. The primary key must have a value
func TestUpdateNotZeroValue(t *testing.T) {
//The transaction must be started manually; if the error returned by the anonymous function is not nil, the transaction is rolled back
//If the global DefaultTxOptions does not meet your needs, set the transaction isolation level before calling zorm.Transaction,
//e.g. ctx, _ := dbDao.BindContextTxOptions(ctx, &sql.TxOptions{Isolation: sql.LevelDefault, ReadOnly: false}); if txOptions is nil, the global DefaultTxOptions is used
_, err := zorm.Transaction(ctx, func(ctx context.Context) (interface{}, error) {
//Declare a pointer to an object for the update
demo := &demoStruct{}
demo.Id = "20210630163227149563000042432429"
demo.UserName = "UpdateNotZeroValue"
//Update: "sql":"UPDATE t_demo SET userName=? WHERE id=?","args":["UpdateNotZeroValue","41b2aa4f-379a-4319-8af9-08472b6e514e"]
_, err := zorm.UpdateNotZeroValue(ctx, demo)
//If the returned err is not nil, the transaction is rolled back
return nil, err
})
if err != nil { //Mark the test as failed
t.Errorf("error:%v", err)
}
}
//TestUpdate 10. Update a struct object; all fields are updated. The primary key must have a value
func TestUpdate(t *testing.T) {
//The transaction must be started manually; if the error returned by the anonymous function is not nil, the transaction is rolled back
//If the global DefaultTxOptions does not meet your needs, set the transaction isolation level before calling zorm.Transaction,
//e.g. ctx, _ := dbDao.BindContextTxOptions(ctx, &sql.TxOptions{Isolation: sql.LevelDefault, ReadOnly: false}); if txOptions is nil, the global DefaultTxOptions is used
_, err := zorm.Transaction(ctx, func(ctx context.Context) (interface{}, error) {
//Declare a pointer to an object for the update
demo := &demoStruct{}
demo.Id = "20210630163227149563000042432429"
demo.UserName = "TestUpdate"
_, err := zorm.Update(ctx, demo)
//If the returned err is not nil, the transaction is rolled back
return nil, err
})
if err != nil { //Mark the test as failed
t.Errorf("error:%v", err)
}
}
//TestUpdateFinder 11. Update via a finder, zorm's most flexible mode; any update statement can be written, even a hand-written insert
func TestUpdateFinder(t *testing.T) {
//The transaction must be started manually; if the error returned by the anonymous function is not nil, the transaction is rolled back
//If the global DefaultTxOptions does not meet your needs, set the transaction isolation level before calling zorm.Transaction,
//e.g. ctx, _ := dbDao.BindContextTxOptions(ctx, &sql.TxOptions{Isolation: sql.LevelDefault, ReadOnly: false}); if txOptions is nil, the global DefaultTxOptions is used
_, err := zorm.Transaction(ctx, func(ctx context.Context) (interface{}, error) {
finder := zorm.NewUpdateFinder(demoStructTableName) // UPDATE t_demo SET
//finder = zorm.NewDeleteFinder(demoStructTableName) // DELETE FROM t_demo
//finder = zorm.NewFinder().Append("UPDATE").Append(demoStructTableName).Append("SET") // UPDATE t_demo SET
finder.Append("userName=?,active=?", "TestUpdateFinder", 1).Append("WHERE id=?", "20210630163227149563000042432429")
//Update: "sql":"UPDATE t_demo SET userName=?,active=? WHERE id=?","args":["TestUpdateFinder",1,"41b2aa4f-379a-4319-8af9-08472b6e514e"]
_, err := zorm.UpdateFinder(ctx, finder)
//If the returned err is not nil, the transaction is rolled back
return nil, err
})
if err != nil { //Mark the test as failed
t.Errorf("error:%v", err)
}
}
//TestUpdateEntityMap 12. Update an EntityMap; the primary key must have a value
func TestUpdateEntityMap(t *testing.T) {
//The transaction must be started manually; if the error returned by the anonymous function is not nil, the transaction is rolled back
//If the global DefaultTxOptions does not meet your needs, set the transaction isolation level before calling zorm.Transaction,
//e.g. ctx, _ := dbDao.BindContextTxOptions(ctx, &sql.TxOptions{Isolation: sql.LevelDefault, ReadOnly: false}); if txOptions is nil, the global DefaultTxOptions is used
_, err := zorm.Transaction(ctx, func(ctx context.Context) (interface{}, error) {
//Create an EntityMap; the table name is required
entityMap := zorm.NewEntityMap(demoStructTableName)
//Set the primary key name
entityMap.PkColumnName = "id"
//Set Sets a database field value; the primary key must have a value
entityMap.Set("id", "20210630163227149563000042432429")
entityMap.Set("userName", "TestUpdateEntityMap")
//Update: "sql":"UPDATE t_demo SET userName=? WHERE id=?","args":["TestUpdateEntityMap","41b2aa4f-379a-4319-8af9-08472b6e514e"]
_, err := zorm.UpdateEntityMap(ctx, entityMap)
//If the returned err is not nil, the transaction is rolled back
return nil, err
})
if err != nil { //Mark the test as failed
t.Errorf("error:%v", err)
}
}
//TestDelete 13. Delete a struct object; the primary key must have a value
func TestDelete(t *testing.T) {
//The transaction must be started manually; if the error returned by the anonymous function is not nil, the transaction is rolled back
//If the global DefaultTxOptions does not meet your needs, set the transaction isolation level before calling zorm.Transaction,
//e.g. ctx, _ := dbDao.BindContextTxOptions(ctx, &sql.TxOptions{Isolation: sql.LevelDefault, ReadOnly: false}); if txOptions is nil, the global DefaultTxOptions is used
_, err := zorm.Transaction(ctx, func(ctx context.Context) (interface{}, error) {
demo := &demoStruct{}
demo.Id = "20210630163227149563000042432429"
//Delete: "sql":"DELETE FROM t_demo WHERE id=?","args":["ae9987ac-0467-4fe2-a260-516c89292684"]
_, err := zorm.Delete(ctx, demo)
//If the returned err is not nil, the transaction is rolled back
return nil, err
})
if err != nil { //Mark the test as failed
t.Errorf("error:%v", err)
}
}
//TestProc 14. Test calling a stored procedure
func TestProc(t *testing.T) {
demo := &demoStruct{}
finder := zorm.NewFinder().Append("call testproc(?) ", "u_10001")
zorm.QueryRow(ctx, finder, demo)
fmt.Println(demo)
}
//TestFunc 15. Test calling a custom database function
func TestFunc(t *testing.T) {
userName := ""
finder := zorm.NewFinder().Append("select testfunc(?) ", "u_10001")
zorm.QueryRow(ctx, finder, &userName)
fmt.Println(userName)
}
//TestOther 16. Some other notes. Thank you very much for reading this far
func TestOther(t *testing.T) {
//Scenario 1: multiple databases. Call BindContextDBConnection on the dbDao of the target database to bind that database's connection to the returned ctx, then pass the ctx into zorm's functions.
newCtx, err := dbDao.BindContextDBConnection(ctx)
if err != nil { //Mark the test as failed
t.Errorf("error:%v", err)
}
finder := zorm.NewSelectFinder(demoStructTableName)
//Pass the newly created newCtx into zorm's functions
list, _ := zorm.QueryMap(newCtx, finder, nil)
fmt.Println(list)
//Scenario 2: read/write splitting on a single database. Set the read/write strategy function.
zorm.FuncReadWriteStrategy = myReadWriteStrategy
//Scenario 3: multiple databases, each with read/write splitting; handle as in scenario 1
}
//Read/write strategy for a single database: rwType=0 read, rwType=1 write
func myReadWriteStrategy(rwType int) *zorm.DBDao {
//Return the read or write dao for your business scenario; this function is called whenever a database connection is needed
return dbDao
}
//---------------------------------//
//Implement the CustomDriverValueConver interface to extend custom types, e.g. the DM database's text type maps to dm.DmClob, which cannot be received directly by a string
type CustomDMText struct{}
//GetDriverValue Returns a driver.Value instance based on the database column type, the entity attribute type, and the Finder
//If structFieldType cannot be determined, e.g. in a Map query, nil is passed in
//If the return value is nil, the extension logic is skipped and the database value is received natively
func (dmtext CustomDMText) GetDriverValue(columnType *sql.ColumnType, structFieldType *reflect.Type, finder *zorm.Finder) (driver.Value, error) {
return &dm.DmClob{}, nil
}
//ConverDriverValue Takes the database column type, the entity attribute type, the temporary driver.Value returned by GetDriverValue, and the Finder
//If structFieldType cannot be determined, e.g. in a Map query, nil is passed in
//Returns a pointer to a value of the receiving type: a pointer, a pointer, a pointer!!!!
func (dmtext CustomDMText) ConverDriverValue(columnType *sql.ColumnType, structFieldType *reflect.Type, tempDriverValue driver.Value, finder *zorm.Finder) (interface{}, error) {
//Type conversion
dmClob, isok := tempDriverValue.(*dm.DmClob)
if !isok {
return tempDriverValue, errors.New("conversion to *dm.DmClob failed")
}
//Get the length
dmlen, errLength := dmClob.GetLength()
if errLength != nil {
return dmClob, errLength
}
//Convert int64 to int
strInt64 := strconv.FormatInt(dmlen, 10)
dmlenInt, errAtoi := strconv.Atoi(strInt64)
if errAtoi != nil {
return dmClob, errAtoi
}
//Read the string
str, errReadString := dmClob.ReadString(1, dmlenInt)
return &str, errReadString
}
//CustomDriverValueMap configures the mapping between a driver.Value and its handler; the key is the string of the driver.Value type, e.g. *dm.DmClob
//Registration is usually done in an init function:
func init() {
zorm.CustomDriverValueMap["*dm.DmClob"] = CustomDMText{}
}
```
## Distributed transactions
Distributed transactions implemented on seata-golang.
### Proxy mode
```golang
//DataSourceConfig: configure DefaultTxOptions
//DefaultTxOptions: &sql.TxOptions{Isolation: sql.LevelDefault, ReadOnly: false},
// Import the V1 dependencies; for V2 see the official examples
import (
"github.com/opentrx/mysql"
"github.com/transaction-wg/seata-golang/pkg/client"
"github.com/transaction-wg/seata-golang/pkg/client/config"
"github.com/transaction-wg/seata-golang/pkg/client/rm"
"github.com/transaction-wg/seata-golang/pkg/client/tm"
seataContext "github.com/transaction-wg/seata-golang/pkg/client/context"
)
//Configuration file path
var configPath = "./conf/client.yml"
func main() {
//Initialize the configuration
conf := config.InitConf(configPath)
//Initialize the RPC client
client.NewRpcClient()
//Register the mysql driver
mysql.InitDataResourceManager()
mysql.RegisterResource(config.GetATConfig().DSN)
//sqlDB, err := sql.Open("mysql", config.GetATConfig().DSN)
//Then initialize zorm normally; this must come after the seata mysql initialization!!!
//................//
//tm registers the transaction service; see the official example. (Global hosting mainly removes the proxy, for zero business intrusion)
tm.Implement(svc.ProxySvc)
//................//
//Get seata's rootContext
//rootContext := seataContext.NewRootContext(ctx)
//rootContext := ctx.(*seataContext.RootContext)
//Create the seata transaction
//seataTx := tm.GetCurrentOrCreate(rootContext)
//Begin the transaction
//seataTx.BeginWithTimeoutAndName(int32(6000), "transaction name", rootContext)
//Get the XID after the transaction starts; it can be passed via a gin header or by other means
//xid:=rootContext.GetXID()
// If using the gin framework, get the ctx
// ctx := c.Request.Context()
// Receive the XID passed in and bind it to the local ctx
//ctx =context.WithValue(ctx,mysql.XID,xid)
}
```
### Globally managed mode
```golang
//Without the proxy pattern: globally managed, zero-intrusion distributed transactions with no business-code changes
//tm.Implement(svc.ProxySvc)
// Sample code for a distributed transaction
_, err := zorm.Transaction(ctx, func(ctx context.Context) (interface{}, error) {
// Get the XID of the current distributed transaction. Don't worry where it comes from; in a distributed transaction environment the value is set automatically
// xid := ctx.Value("XID").(string)
// Pass the xid on to the third-party application
// req.Header.Set("XID", xid)
// If the returned err is not nil, the local and the distributed transaction are rolled back
return nil, err
})
///----------Third-party application-------///
// Before the third-party application starts a transaction, the ctx must carry the XID, e.g. when using the gin framework
// Receive the XID passed in and bind it to the local ctx
// xid:=c.Request.Header.Get("XID")
// Get the ctx
// ctx := c.Request.Context()
// ctx = context.WithValue(ctx,"XID",xid)
// After the ctx carries the XID, invoke the business transaction
_, err := zorm.Transaction(ctx, func(ctx context.Context) (interface{}, error) {
// Business code......
// If the returned err is not nil, the local and the distributed transaction are rolled back
return nil, err
})
// It is recommended to put the following code in a separate file
//................//
// ZormSeataGlobalTransaction wraps seata's *tm.DefaultGlobalTransaction and implements the zorm.ISeataGlobalTransaction interface
type ZormSeataGlobalTransaction struct {
*tm.DefaultGlobalTransaction
}
// MyFuncSeataGlobalTransaction zorm's adapter function for seata distributed transactions
// Important!!!! You must configure zorm.DataSourceConfig.FuncSeataGlobalTransaction=MyFuncSeataGlobalTransaction Important!!!
func MyFuncSeataGlobalTransaction(ctx context.Context) (zorm.ISeataGlobalTransaction, context.Context, error) {
//Get seata's rootContext
rootContext := seataContext.NewRootContext(ctx)
//Create the seata transaction
seataTx := tm.GetCurrentOrCreate(rootContext)
//Wrap the seata transaction in a zorm.ISeataGlobalTransaction interface object to isolate the seata-golang dependency
seataGlobalTransaction := ZormSeataGlobalTransaction{seataTx}
return seataGlobalTransaction, rootContext, nil
}
//Implement the zorm.ISeataGlobalTransaction interface
func (gtx ZormSeataGlobalTransaction) SeataBegin(ctx context.Context) error {
rootContext := ctx.(*seataContext.RootContext)
return gtx.BeginWithTimeout(int32(6000), rootContext)
}
func (gtx ZormSeataGlobalTransaction) SeataCommit(ctx context.Context) error {
rootContext := ctx.(*seataContext.RootContext)
return gtx.Commit(rootContext)
}
func (gtx ZormSeataGlobalTransaction) SeataRollback(ctx context.Context) error {
rootContext := ctx.(*seataContext.RootContext)
//If the role is Participant, change it to Launcher, allowing a branch transaction to commit the global transaction.
if gtx.Role != tm.Launcher {
gtx.Role = tm.Launcher
}
return gtx.Rollback(rootContext)
}
func (gtx ZormSeataGlobalTransaction) GetSeataXID(ctx context.Context) string {
rootContext := ctx.(*seataContext.RootContext)
return rootContext.GetXID()
}
//................//
```
## Benchmarks
Test code: https://github.com/springrain/goormbenchmark
zorm 1.2.x implemented the basic features, and reads were twice as fast as gorm and xorm. As features kept growing, performance dropped; reads are currently only 50% faster.
zorm will keep optimizing and improving performance.

@ -0,0 +1,744 @@
## Introduction
This is a lightweight Go ORM with zero dependencies that supports the DM, Kingbase, ShenTong, and GBase databases as well as mysql, postgresql, oracle, mssql, sqlite, and clickhouse.
Source address:https://gitee.com/chunanyong/zorm
Author blog:[https://www.jiagou.com](https://www.jiagou.com)
```
go get gitee.com/chunanyong/zorm
```
* Written based on native SQL statements,It is the streamlining and optimization of [springrain](https://gitee.com/chunanyong/springrain).
* [Built-in code generator](https://gitee.com/chunanyong/readygo/tree/master/codegenerator)
* The code is streamlined, main part 2500 lines, zero dependency 4000 lines, detailed comments, convenient for customization and modification.
* <font color=red>Support transaction propagation, which is the main reason for the birth of zorm</font>
* Support mysql, postgresql, oracle, mssql, sqlite, clickhouse, dm (DM), kingbase (Kingbase), shentong (ShenTong), and gbase (GBase)
* Support multiple databases and read/write splitting.
* The update performance of zorm, gorm, and xorm is comparable. The read performance of zorm is 50% faster than that of gorm and xorm.
* Does not support joint primary keys, alternatively thinks that there is no primary key, and business control is implemented (difficult choice)
* Integrate seata-golang, support global hosting, do not modify business code, and zero intrusive distributed transactions
* Support clickhouse, update and delete statements use SQL92 standard syntax. The official clickhouse-go driver does not support batch insert syntax, it is recommended to use https://github.com/mailru/go-clickhouse
zorm Production environment reference: [UserStructService.go](https://gitee.com/chunanyong/readygo/tree/master/permission/permservice)
## Support domestic database
DM(Da Meng) database driver: [https://gitee.com/chunanyong/dm](https://gitee.com/chunanyong/dm)
kingbase(Ren Da Jincang)Driver Instructions: [https://help.kingbase.com.cn/doc-view-8108.html](https://help.kingbase.com.cn/doc-view-8108.html)
The core of Kingbase(Ren Da Jincang) 8 is based on postgresql 9.6. You can use [https://github.com/lib/pq](https://github.com/lib/pq) for testing. The official driver is recommended for the production environment.
Note: set ora_input_emptystr_isnull = false in the data/kingbase.conf file, because golang has no null value and the default value of a golang string is ''. Database columns are generally declared not null.
If this option is set to true, the database stores '' as null, which conflicts with a not-null column constraint and therefore reports an error.
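A minimal sketch of the relevant setting (the exact location of data/kingbase.conf may vary by installation):
```
# data/kingbase.conf
ora_input_emptystr_isnull = false
```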
shentong (Shenzhou General Data) instructions:
It is recommended to use the official driver; configure zorm.DataSourceConfig with DriverName: aci, DBType: shentong
gbase(GENERAL DATA)
~~The official golang driver has not been found yet. Please configure it zorm.DataSourceConfig DriverName:gbase ,DBType:gbase~~
Use the odbc driver for the time being: DriverName: odbc, DBType: gbase
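For reference, a minimal zorm.DataSourceConfig sketch for shentong; the DSN format is an assumption and depends on the official aci driver:

```go
dbDaoConfig := zorm.DataSourceConfig{
	DSN:        "user/password@127.0.0.1:2003/OSRDB", //assumed DSN format, check the official aci driver documentation
	DriverName: "aci",      //official shentong driver name
	DBType:     "shentong", //dialect used by zorm
}
```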
## Test case
https://gitee.com/chunanyong/readygo/blob/master/test/testzorm/BaseDao_test.go
```go
// zorm uses native SQL statements and does not impose restrictions on SQL syntax. Statements use Finder as the carrier.
// Use "?" as the placeholder; zorm replaces placeholders based on the database type,
// for example "?" in a PostgreSQL database is replaced with $1, $2... (see the sketch after this block)
// zorm uses the ctx context.Context parameter to propagate the transaction; ctx is passed in from the web layer, such as gin's c.Request.Context().
// Transactions must be opened explicitly with zorm.Transaction(ctx, func(ctx context.Context) (interface{}, error) {...})
```
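For example, the same Finder runs unchanged on different databases; zorm rewrites the "?" placeholders per dialect (the rendered SQL below is illustrative):

```go
finder := zorm.NewFinder().Append("SELECT * FROM t_demo WHERE id=? and active=?", "u_10001", 1)
// mysql:      SELECT * FROM t_demo WHERE id=? and active=?
// postgresql: SELECT * FROM t_demo WHERE id=$1 and active=$2
```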
## Database scripts and entity classes
https://gitee.com/chunanyong/readygo/blob/master/test/testzorm/demoStruct.go
Generate entity classes or write them manually; a code generator is recommended:
https://gitee.com/chunanyong/readygo/tree/master/codegenerator
```go
package testzorm
import (
"time"
"gitee.com/chunanyong/zorm"
)
//Table building statement
/*
DROP TABLE IF EXISTS `t_demo`;
CREATE TABLE `t_demo` (
`id` varchar(50) NOT NULL COMMENT 'Primary key',
`userName` varchar(30) NOT NULL COMMENT 'Name',
`password` varchar(50) NOT NULL COMMENT 'password',
`createTime` datetime NOT NULL DEFAULT CURRENT_TIMESTAMP(0),
`active` int COMMENT 'Is it valid (0 no, 1 yes)',
PRIMARY KEY (`id`)
) ENGINE = InnoDB CHARACTER SET = utf8mb4 COMMENT = 'example' ;
*/
//demoStructTableName Table name constant, easy to call directly
const demoStructTableName = "t_demo"
// demoStruct example
type demoStruct struct {
//Embed the default struct to insulate the entity from changes to the IEntityStruct interface methods
zorm.EntityStruct
//Id: Primary key
Id string `column:"id"`
//UserName: Name
UserName string `column:"userName"`
//Password: password
Password string `column:"password"`
//CreateTime: Creation time
CreateTime time.Time `column:"createTime"`
//Active: Is it valid (0 no, 1 yes)
//Active int `column:"active"`
//------------------The end of the database field, the custom field is written below---------------//
//If the query field is not found in the column tag, it will be mapped to the struct attribute based on the name (case insensitive, _ underscore to hump)
//Custom field Active
Active int
}
//GetTableName: Get the table name
func (entity *demoStruct) GetTableName() string {
return demoStructTableName
}
//GetPKColumnName: Get the primary key field name of the database table. Because it is compatible with Map, it can only be the field name of the database.
func (entity *demoStruct) GetPKColumnName() string {
return "id"
}
//newDemoStruct: Create a default object
func newDemoStruct() demoStruct {
demo := demoStruct{
// If Id=="",When saving, zorm will call zorm.Func Generate String ID(),
// the default UUID string, or you can define your own implementation,E.g: zorm.FuncGenerateStringID=funcmyId
Id: zorm.FuncGenerateStringID(),
UserName: "defaultUserName",
Password: "defaultPassword",
Active: 1,
CreateTime: time.Now(),
}
return demo
}
```
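As noted in the struct comments above, a result column without a matching column tag is mapped to a struct field by name, case insensitive, with underscores converted to camel case. A minimal sketch (hypothetical struct and column names):

```go
type demoView struct {
	zorm.EntityStruct
	//No column tag: a result column named user_name or USERNAME maps here by name
	UserName string
}
```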
## Test cases are documents
```go
// testzorm: uses native SQL statements, with no restrictions on SQL syntax. Statements use Finder as the carrier.
// Use "?" as the placeholder; zorm replaces placeholders based on the database type,
// for example "?" in a PostgreSQL database is replaced with $1, $2...
// zorm uses the ctx context.Context parameter to propagate the transaction; ctx is passed in from the web layer, such as gin's c.Request.Context().
// Transactions must be opened explicitly with zorm.Transaction(ctx, func(ctx context.Context) (interface{}, error) {...})
package testzorm
import (
"context"
"fmt"
"testing"
"time"
"gitee.com/chunanyong/zorm"
//00.Introduce database driver
_ "github.com/go-sql-driver/mysql"
)
//dbDao: Represents a database. If there are multiple databases, declare multiple DBDao accordingly
var dbDao *zorm.DBDao
// ctx should be passed in by the web layer by default, such as gin's c.Request.Context(). This is just a simulation.
var ctx = context.Background()
//01.Initialize DB Dao
func init() {
//Custom zorm log output
//zorm.LogCallDepth = 4 //Call depth of the log
//zorm.FuncLogError = myFuncLogError //Function to record error logs
//zorm.FuncLogPanic = myFuncLogPanic //Function to record panic logs, defaults to the error log function
//zorm.FuncPrintSQL = myFuncPrintSQL //Function that prints SQL
//Customize the log output format and reassign the FuncPrintSQL function.
//log.SetFlags(log.LstdFlags)
//zorm.FuncPrintSQL = zorm.FuncPrintSQL
//dbDaoConfig: Database configuration
dbDaoConfig := zorm.DataSourceConfig{
// DSN: Database connection string
DSN: "root:root@tcp(127.0.0.1:3306)/readygo?charset=utf8&parseTime=true",
// Database driver name: mysql, postgres, oci8, sqlserver, sqlite3, clickhouse,
// dm, kingbase. Corresponds to DBType; a database may have multiple drivers
DriverName: "mysql",
// Database type (the basis for dialect judgment): mysql, postgresql, oracle, mssql, sqlite, clickhouse,
// dm, kingbase. Corresponds to DriverName; a database may have multiple drivers
DBType: "mysql",
//MaxOpenConns: Maximum number of database connections Default 50
MaxOpenConns: 50,
//MaxIdleConns: The maximum number of idle database connections, default 50
MaxIdleConns: 50,
//ConnMaxLifetimeSecond: The connection survival time in seconds. The connection is destroyed and rebuilt after the default 600 (10 minutes).
//To prevent the database from actively disconnecting and causing dead connections. MySQL default wait_timeout 28800 seconds (8 hours)
ConnMaxLifetimeSecond: 600,
//PrintSQL: Print SQL. FuncPrintSQL will be used to record SQL
PrintSQL: true,
//DefaultTxOptions The default configuration of the transaction isolation level, the default is nil
//DefaultTxOptions: nil,
//If you use seata-golang distributed transactions, the default configuration below is recommended
//DefaultTxOptions: &sql.TxOptions{Isolation: sql.LevelDefault, ReadOnly: false},
//FuncSeataGlobalTransaction: the seata-golang distributed transaction adapter function; returns an implementation of the ISeataGlobalTransaction interface
//FuncSeataGlobalTransaction : MyFuncSeataGlobalTransaction,
}
// Create the dbDao from dbDaoConfig. Execute this once per database;
// the first database created becomes defaultDao, and subsequent zorm.xxx methods use defaultDao by default.
dbDao, _ = zorm.NewDBDao(&dbDaoConfig)
}
//TestInsert: 02.Test save Struct object
func TestInsert(t *testing.T) {
//You need to manually start the transaction.
//If the error returned by the anonymous function is not nil, the transaction will be rolled back.
//If the global DefaultTxOptions configuration does not meet the requirements, set the transaction isolation level before calling zorm.Transaction, e.g. ctx, _ := dbDao.BindContextTxOptions(ctx, &sql.TxOptions{Isolation: sql.LevelDefault, ReadOnly: false}); if txOptions is nil, the global DefaultTxOptions is used
_, err := zorm.Transaction(ctx, func(ctx context.Context) (interface{}, error) {
//Create a demo object
demo := newDemoStruct()
// Save the object, the parameter is the object pointer.
// If the primary key is incremented, it will be assigned to the primary key attribute of the object
_, err := zorm.Insert(ctx, &demo)
//If the returned err is not nil, the transaction will be rolled back.
return nil, err
})
//Mark test failed.
if err != nil {
t.Errorf("err:%v", err)
}
}
//TestInsertSlice 03.Test the Slice that saves Struct objects in batches.
//If it is an auto-increasing primary key, you cannot assign a value to the primary key attribute in the Struct object.
func TestInsertSlice(t *testing.T) {
// You need to manually start the transaction.
// If the error returned by the anonymous function is not nil, the transaction will be rolled back.
//If the global DefaultTxOptions configuration does not meet the requirements, set the transaction isolation level before calling zorm.Transaction, e.g. ctx, _ := dbDao.BindContextTxOptions(ctx, &sql.TxOptions{Isolation: sql.LevelDefault, ReadOnly: false}); if txOptions is nil, the global DefaultTxOptions is used
_, err := zorm.Transaction(ctx, func(ctx context.Context) (interface{}, error) {
//The slice stores the zorm.IEntityStruct type!!! golang currently does not have generics,
//so the IEntityStruct interface is used, which is compatible with Struct entity classes.
demoSlice := make([]zorm.IEntityStruct, 0)
//Create object 1
demo1 := newDemoStruct()
demo1.UserName = "demo1"
//Create object 2
demo2 := newDemoStruct()
demo2.UserName = "demo2"
demoSlice = append(demoSlice, &demo1, &demo2)
//Save objects in batches. If the primary key is auto-increment, the generated IDs cannot be written back into the objects.
_, err := zorm.InsertSlice(ctx, demoSlice)
//If the returned err is not nil, the transaction will be rolled back.
return nil, err
})
//Mark test failed.
if err != nil {
t.Errorf("错误:%v", err)
}
}
//TestInsertEntityMap 04.Test saving an EntityMap object, for scenarios where struct is not convenient; uses Map as the carrier
func TestInsertEntityMap(t *testing.T) {
// You need to manually start the transaction. If the error returned by the anonymous function is not nil, the transaction will be rolled back.
//If the global DefaultTxOptions configuration does not meet the requirements, set the transaction isolation level before calling zorm.Transaction, e.g. ctx, _ := dbDao.BindContextTxOptions(ctx, &sql.TxOptions{Isolation: sql.LevelDefault, ReadOnly: false}); if txOptions is nil, the global DefaultTxOptions is used
_, err := zorm.Transaction(ctx, func(ctx context.Context) (interface{}, error) {
//To create an EntityMap, pass in the table name.
entityMap := zorm.NewEntityMap(demoStructTableName)
//Set the primary key name.
entityMap.PkColumnName = "id"
//If it is an auto-increasing sequence, set the value of the sequence.
//entityMap.PkSequence = "mySequence"
//Set the field values of the database columns.
//If the primary key is auto-increment or a sequence, do not Set the primary key value.
entityMap.Set("id", zorm.FuncGenerateStringID())
entityMap.Set("userName", "entityMap-userName")
entityMap.Set("password", "entityMap-password")
entityMap.Set("createTime", time.Now())
entityMap.Set("active", 1)
//Execute
_, err := zorm.InsertEntityMap(ctx, entityMap)
//If the returned err is not nil, the transaction will be rolled back
return nil, err
})
//Mark test failed
if err != nil {
t.Errorf("error:%v", err)
}
}
//TestQueryRow 05.Test query a struct object
func TestQueryRow(t *testing.T) {
//Declare a pointer to an object to carry the returned data.
demo := &demoStruct{}
//Finder for constructing query.
finder := zorm.NewSelectFinder(demoStructTableName) // select * from t_demo
//finder = zorm.NewSelectFinder(demoStructTableName, "id,userName") // select id,userName from t_demo
//finder = zorm.NewFinder().Append("SELECT * FROM " + demoStructTableName) // select * from t_demo
// finder.Append: the first parameter is the statement, the following parameters are the corresponding values;
// the order of the values must be correct. Statements uniformly use "?"; zorm handles the database differences
finder.Append("WHERE id=? and active in(?)", "41b2aa4f-379a-4319-8af9-08472b6e514e", []int{0, 1})
//Execute query
has,err := zorm.QueryRow(ctx, finder, demo)
if err != nil { //Mark test failed
t.Errorf("error:%v", err)
}
if has { //The record exists in the database
//Print result
fmt.Println(demo)
}
}
//TestQueryRowMap 06.Test querying into a map, used in scenarios not suitable for struct; more flexible
func TestQueryRowMap(t *testing.T) {
//Finder for constructing query.
finder := zorm.NewSelectFinder(demoStructTableName) // select * from t_demo
//finder.Append: the first parameter is the statement, the following parameters are the corresponding values;
//the order of the values must be correct. Statements uniformly use "?"; zorm handles the database differences
finder.Append("WHERE id=? and active in(?)", "41b2aa4f-379a-4319-8af9-08472b6e514e", []int{0, 1})
//Execute query
resultMap, err := zorm.QueryRowMap(ctx, finder)
if err != nil { //Mark test failed
t.Errorf("error:%v", err)
}
//Print result
fmt.Println(resultMap)
}
//TestQuery 07.Test query object list
func TestQuery(t *testing.T) {
//Create a slice to receive the result
list := make([]*demoStruct, 0)
//Finder for constructing query
finder := zorm.NewSelectFinder(demoStructTableName) // select * from t_demo
//Create a paging object. After the query is completed, the page object can be directly used by the front-end paging component.
page := zorm.NewPage()
page.PageNo = 1 //Query page 1, default is 1
page.PageSize = 20 //20 items per page, the default is 20
//Execute the query. To query all data without paging, pass nil for page
err := zorm.Query(ctx, finder, &list, page)
if err != nil { //Mark test failed
t.Errorf("error:%v", err)
}
//Print result
fmt.Println("Total number:", page.TotalCount, " List:", list)
}
//TestQueryMap 08.Test querying a list of maps, used where struct is not convenient; each record is a map object.
func TestQueryMap(t *testing.T) {
//Finder for constructing query.
finder := zorm.NewSelectFinder(demoStructTableName) // select * from t_demo
//Create a paging object. After the query is completed, the page object can be directly used by the front-end paging component.
page := zorm.NewPage()
//Execute query
listMap, err := zorm.QueryMap(ctx, finder, page)
if err != nil { //Mark test failed
t.Errorf("error:%v", err)
}
//Print the result. To query all data without paging, pass nil for page
fmt.Println("Total number:", page.TotalCount, " List:", listMap)
}
//TestUpdateNotZeroValue 09.Update the struct object, only update fields that are not zero. The primary key must have a value.
func TestUpdateNotZeroValue(t *testing.T) {
// You need to manually start the transaction. If the error returned by the anonymous function is not nil,
// the transaction will be rolled back.
_, err := zorm.Transaction(ctx, func(ctx context.Context) (interface{}, error) {
//Declare a pointer to an object to update data
demo := &demoStruct{}
demo.Id = "41b2aa4f-379a-4319-8af9-08472b6e514e"
demo.UserName = "UpdateNotZeroValue"
//Update "sql":"UPDATE t_demo SET userName=? WHERE id=?","args":["UpdateNotZeroValue","41b2aa4f-379a-4319-8af9-08472b6e514e"]
_, err := zorm.UpdateNotZeroValue(ctx, demo)
//If the returned err is not nil, the transaction will be rolled back.
return nil, err
})
if err != nil {
//Mark test failed
t.Errorf("error:%v", err)
}
}
//TestUpdate 10.Update the struct object, update all fields. The primary key must have a value.
func TestUpdate(t *testing.T) {
// You need to manually start the transaction.
// If the error returned by the anonymous function is not nil, the transaction will be rolled back.
//If the global DefaultTxOptions configuration does not meet the requirements, set the transaction isolation level before calling zorm.Transaction, e.g. ctx, _ := dbDao.BindContextTxOptions(ctx, &sql.TxOptions{Isolation: sql.LevelDefault, ReadOnly: false}); if txOptions is nil, the global DefaultTxOptions is used
_, err := zorm.Transaction(ctx, func(ctx context.Context) (interface{}, error) {
//Declare a pointer to an object to update data.
demo := &demoStruct{}
demo.Id = "41b2aa4f-379a-4319-8af9-08472b6e514e"
demo.UserName = "TestUpdate"
_, err := zorm.Update(ctx, demo)
//If the returned err is not nil, the transaction will be rolled back.
return nil, err
})
if err != nil {
//Mark test failed
t.Errorf("error:%v", err)
}
}
//TestUpdateFinder 11.Update via Finder. This is zorm's most flexible way; you can write any update statement,
// even a manually written insert statement
func TestUpdateFinder(t *testing.T) {
//You need to manually start the transaction. If the error returned by the anonymous function is not nil, the transaction will be rolled back.
//If the global DefaultTxOptions configuration does not meet the requirements, set the transaction isolation level before calling zorm.Transaction, e.g. ctx, _ := dbDao.BindContextTxOptions(ctx, &sql.TxOptions{Isolation: sql.LevelDefault, ReadOnly: false}); if txOptions is nil, the global DefaultTxOptions is used
_, err := zorm.Transaction(ctx, func(ctx context.Context) (interface{}, error) {
finder := zorm.NewUpdateFinder(demoStructTableName) // UPDATE t_demo SET
//finder = zorm.NewDeleteFinder(demoStructTableName) // DELETE FROM t_demo
//finder = zorm.NewFinder().Append("UPDATE").Append(demoStructTableName).Append("SET") // UPDATE t_demo SET
finder.Append("userName=?,active=?", "TestUpdateFinder", 1).Append("WHERE id=?", "41b2aa4f-379a-4319-8af9-08472b6e514e")
//Update "sql":"UPDATE t_demo SET userName=?,active=? WHERE id=?","args":["TestUpdateFinder",1,"41b2aa4f-379a-4319-8af9-08472b6e514e"]
_, err := zorm.UpdateFinder(ctx, finder)
//If the returned err is not nil, the transaction will be rolled back.
return nil, err
})
if err != nil { //Mark test failed
t.Errorf("error:%v", err)
}
}
//TestUpdateEntityMap 12.Update an EntityMap; the primary key must have a value
func TestUpdateEntityMap(t *testing.T) {
//You need to manually start the transaction.
//If the error returned by the anonymous function is not nil, the transaction will be rolled back.
//If the global DefaultTxOptions configuration does not meet the requirements, set the transaction isolation level before calling zorm.Transaction, e.g. ctx, _ := dbDao.BindContextTxOptions(ctx, &sql.TxOptions{Isolation: sql.LevelDefault, ReadOnly: false}); if txOptions is nil, the global DefaultTxOptions is used
_, err := zorm.Transaction(ctx, func(ctx context.Context) (interface{}, error) {
//To create an EntityMap, pass in the table name.
entityMap := zorm.NewEntityMap(demoStructTableName)
//Set the primary key name.
entityMap.PkColumnName = "id"
//Set the field values of the database columns; the primary key must have a value.
entityMap.Set("id", "41b2aa4f-379a-4319-8af9-08472b6e514e")
entityMap.Set("userName", "TestUpdateEntityMap")
//Update "sql":"UPDATE t_demo SET userName=? WHERE id=?","args":["TestUpdateEntityMap","41b2aa4f-379a-4319-8af9-08472b6e514e"]
_, err := zorm.UpdateEntityMap(ctx, entityMap)
//If the returned err is not nil, the transaction will be rolled back.
return nil, err
})
if err != nil {
//Mark test failed
t.Errorf("error:%v", err)
}
}
//TestDelete 13.To delete a struct object, the primary key must have a value.
func TestDelete(t *testing.T) {
//You need to manually start the transaction. If the error returned by the anonymous function is not nil, the transaction will be rolled back.
//If the global DefaultTxOptions configuration does not meet the requirements, set the transaction isolation level before calling zorm.Transaction, e.g. ctx, _ := dbDao.BindContextTxOptions(ctx, &sql.TxOptions{Isolation: sql.LevelDefault, ReadOnly: false}); if txOptions is nil, the global DefaultTxOptions is used
_, err := zorm.Transaction(ctx, func(ctx context.Context) (interface{}, error) {
demo := &demoStruct{}
demo.Id = "ae9987ac-0467-4fe2-a260-516c89292684"
//delete "sql":"DELETE FROM t_demo WHERE id=?","args":["ae9987ac-0467-4fe2-a260-516c89292684"]
_, err := zorm.Delete(ctx, demo)
//If the returned err is not nil, the transaction will be rolled back.
return nil, err
})
if err != nil {
//Mark test failed
t.Errorf("error:%v", err)
}
}
//TestProc 14.Test call stored procedure
func TestProc(t *testing.T) {
demo := &demoStruct{}
finder := zorm.NewFinder().Append("call testproc(?) ", "u_10001")
zorm.QueryRow(ctx, finder, demo)
fmt.Println(demo)
}
//TestFunc 15.Test call custom function.
func TestFunc(t *testing.T) {
userName := ""
finder := zorm.NewFinder().Append("select testfunc(?) ", "u_10001")
zorm.QueryRow(ctx, finder, &userName)
fmt.Println(userName)
}
//TestOther 16.Some other instructions. Thank you very much for seeing this line.
func TestOther(t *testing.T) {
//Scenario 1. Multiple databases. Through the dbDao of the corresponding database, call the BindContextDBConnection function,
//bind the connection of this database to the returned ctx, and then pass the ctx to the zorm functions.
newCtx, err := dbDao.BindContextDBConnection(ctx)
if err != nil {
//Mark test failed
t.Errorf("error:%v", err)
}
finder := zorm.NewSelectFinder(demoStructTableName)
//Pass the newly generated newCtx to zorm's functions.
list, _ := zorm.QueryRowMap(newCtx, finder, nil)
fmt.Println(list)
//Scenario 2. Read-write separation of a single database.
//Set the strategy function for read-write separation.
zorm.FuncReadWriteStrategy = myReadWriteStrategy
//Scenario 3. If there are multiple databases,
//each database is also separated from reading and writing, and processed according to scenario 1.
}
//Strategy for read/write separation of a single database. rwType=0 read, rwType=1 write
func myReadWriteStrategy(rwType int) *zorm.DBDao {
//Return the required read or write dao according to your own business scenario; this function is called every time a database connection is needed
return dbDao
}
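//A sketch of scenario 2, assuming readDao and writeDao are two *zorm.DBDao initialized elsewhere (hypothetical names):
//func myReadWriteStrategy(rwType int) *zorm.DBDao {
//	if rwType == 0 { //read
//		return readDao
//	}
//	return writeDao //write
//}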
//---------------------------------//
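//Assumed additional imports for the snippet below: database/sql, database/sql/driver, errors, reflect, strconv, and gitee.com/chunanyong/dm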
//Implement the CustomDriverValueConver interface to extend custom types. For example, the text type of the dm database maps to the dm.DmClob type, which cannot be received directly with a string.
type CustomDMText struct{}
//GetDriverValue returns a driver.Value instance according to the database column type and the entity field type. If the return value is nil, no type replacement is performed and the default processing is used.
func (dmtext CustomDMText) GetDriverValue(columnType *sql.ColumnType, structFieldType *reflect.Type, finder *zorm.Finder) (driver.Value, error) {
return &dm.DmClob{}, nil
}
//ConverDriverValue receives the database column type, the entity field type, and the new driver.Value returned by GetDriverValue; return a pointer to the receiving type: pointer, pointer, pointer!!!
func (dmtext CustomDMText) ConverDriverValue(columnType *sql.ColumnType, structFieldType *reflect.Type, tempDriverValue driver.Value, finder *zorm.Finder) (interface{}, error) {
//Type conversion
dmClob, isok := tempDriverValue.(*dm.DmClob)
if !isok {
return tempDriverValue, errors.New("Conversion to *dm.DmClob type failed")
}
//Get the length
dmlen, errLength := dmClob.GetLength()
if errLength != nil {
return dmClob, errLength
}
//Convert int64 to int type
strInt64 := strconv.FormatInt(dmlen, 10)
dmlenInt, errAtoi := strconv.Atoi(strInt64)
if errAtoi != nil {
return dmClob, errAtoi
}
//Read string
str, errReadString := dmClob.ReadString(1, dmlenInt)
return &str, errReadString
}
//zorm.CustomDriverValueMap configures the mapping from a driver.Value to its handler; the key is the string form of the driver.Value type, e.g. *dm.DmClob
//It is usually registered in an init function:
func init() {
	zorm.CustomDriverValueMap["*dm.DmClob"] = CustomDMText{}
}
```
## Distributed transaction
Implement distributed transactions based on seata-golang.
### Proxy mode
```golang
//DataSourceConfig configuration DefaultTxOptions
//DefaultTxOptions: &sql.TxOptions{Isolation: sql.LevelDefault, ReadOnly: false},
// Import the dependency packages of the V1 version; refer to the official example for V2
import (
"github.com/opentrx/mysql"
"github.com/transaction-wg/seata-golang/pkg/client"
"github.com/transaction-wg/seata-golang/pkg/client/config"
"github.com/transaction-wg/seata-golang/pkg/client/rm"
"github.com/transaction-wg/seata-golang/pkg/client/tm"
seataContext "github.com/transaction-wg/seata-golang/pkg/client/context"
)
//Configuration file path
var configPath = "./conf/client.yml"
func main() {
//Initial configuration
conf := config.InitConf(configPath)
//Initialize the RPC client
client.NewRpcClient()
//Register mysql driver
mysql.InitDataResourceManager()
mysql.RegisterResource(config.GetATConfig().DSN)
//sqlDB, err := sql.Open("mysql", config.GetATConfig().DSN)
//Subsequent normal initialization of zorm must be placed after the initialization of seata mysql!!!
//................//
//tm registers the transaction service, refer to the official example. (Global hosting mainly removes this proxy, zero intrusion to the business)
tm.Implement(svc.ProxySvc)
//................//
//Get the rootContext of seata
rootContext := seataContext.NewRootContext(ctx)
//rootContext := ctx.(*seataContext.RootContext)
//Create seata transaction
seataTx := tm.GetCurrentOrCreate(rootContext)
//Start transaction
seataTx.BeginWithTimeoutAndName(int32(6000), "transaction name", rootContext)
//Get the XID after the transaction is opened. It can be passed through the header of gin, or passed in other ways
xid := rootContext.GetXID()
// Accept the passed XID and bind it to the local ctx
ctx = context.WithValue(ctx, mysql.XID, xid)
}
```
### Global hosting mode
```golang
//Global hosting mode: do not use the proxy, do not modify business code, zero-intrusion distributed transactions
//tm.Implement(svc.ProxySvc)
// It is recommended to put the following code in a separate file
//................//
// ZormSeataGlobalTransaction wraps *tm.DefaultGlobalTransaction of seata, and implements the zorm.ISeataGlobalTransaction interface
type ZormSeataGlobalTransaction struct {
*tm.DefaultGlobalTransaction
}
// MyFuncSeataGlobalTransaction is zorm's adapter function for seata distributed transactions; configure zorm.DataSourceConfig.FuncSeataGlobalTransaction=MyFuncSeataGlobalTransaction
func MyFuncSeataGlobalTransaction(ctx context.Context) (zorm.ISeataGlobalTransaction, context.Context, error) {
//Get the rootContext of seata
rootContext := seataContext.NewRootContext(ctx)
//Create seata transaction
seataTx := tm.GetCurrentOrCreate(rootContext)
//Use the zorm.ISeataGlobalTransaction interface object to wrap the seata transaction and isolate the seata-golang dependency
seataGlobalTransaction := ZormSeataGlobalTransaction{seataTx}
return seataGlobalTransaction, rootContext, nil
}
//Implement the zorm.ISeataGlobalTransaction interface
func (gtx ZormSeataGlobalTransaction) SeataBegin(ctx context.Context) error {
rootContext := ctx.(*seataContext.RootContext)
return gtx.BeginWithTimeout(int32(6000), rootContext)
}
func (gtx ZormSeataGlobalTransaction) SeataCommit(ctx context.Context) error {
rootContext := ctx.(*seataContext.RootContext)
return gtx.Commit(rootContext)
}
func (gtx ZormSeataGlobalTransaction) SeataRollback(ctx context.Context) error {
rootContext := ctx.(*seataContext.RootContext)
return gtx.Rollback(rootContext)
}
func (gtx ZormSeataGlobalTransaction) GetSeataXID(ctx context.Context) string {
rootContext := ctx.(*seataContext.RootContext)
return rootContext.GetXID()
}
//................//
```
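To enable global hosting, point the data source configuration at the adapter function, matching the DataSourceConfig comments shown earlier:

```golang
dbDaoConfig := zorm.DataSourceConfig{
	//................//
	DefaultTxOptions:           &sql.TxOptions{Isolation: sql.LevelDefault, ReadOnly: false},
	FuncSeataGlobalTransaction: MyFuncSeataGlobalTransaction,
}
```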
## Performance stress test
Test code:https://github.com/alphayan/goormbenchmark
Metrics:
total time, average nanoseconds per operation, average bytes allocated per operation, and average allocations per operation.
The update performance of zorm, gorm, and xorm is equivalent. The read performance of zorm is twice as fast as that of gorm and xorm.
```
2000 times - Insert
zorm: 9.05s 4524909 ns/op 2146 B/op 33 allocs/op
gorm: 9.60s 4800617 ns/op 5407 B/op 119 allocs/op
xorm: 12.63s 6315205 ns/op 2365 B/op 56 allocs/op
2000 times - BulkInsert 100 row
xorm: 23.89s 11945333 ns/op 253812 B/op 4250 allocs/op
gorm: Don't support bulk insert - https://github.com/jinzhu/gorm/issues/255
zorm: Don't support bulk insert
2000 times - Update
xorm: 0.39s 195846 ns/op 2529 B/op 87 allocs/op
zorm: 0.51s 253577 ns/op 2232 B/op 32 allocs/op
gorm: 0.73s 366905 ns/op 9157 B/op 226 allocs/op
2000 times - Read
zorm: 0.28s 141890 ns/op 1616 B/op 43 allocs/op
gorm: 0.45s 223720 ns/op 5931 B/op 138 allocs/op
xorm: 0.55s 276055 ns/op 8648 B/op 227 allocs/op
2000 times - MultiRead limit 1000
zorm: 13.93s 6967146 ns/op 694286 B/op 23054 allocs/op
gorm: 26.40s 13201878 ns/op 2392826 B/op 57031 allocs/op
xorm: 30.77s 15382967 ns/op 1637098 B/op 72088 allocs/op
```

@ -0,0 +1,266 @@
package zorm
import (
"context"
"database/sql"
"errors"
"fmt"
"time"
)
// dataSource对象,隔离sql原生对象
// dataSource isolates the native sql objects
type dataSource struct {
*sql.DB
//config *DataSourceConfig
}
// DataSourceConfig 数据库连接池的配置
// DataSourceConfig Database connection pool configuration
type DataSourceConfig struct {
//DSN dataSourceName 连接字符串
//DSN DataSourceName Database connection string
DSN string
//数据库驱动名称:mysql,postgres,oci8,sqlserver,sqlite3,clickhouse,dm,kingbase 和DBType对应,处理数据库有多个驱动
//Database driver name: mysql,postgres,oci8,sqlserver,sqlite3,clickhouse,dm,kingbase. Corresponds to DBType; a database may have multiple drivers
DriverName string
//数据库类型(方言判断依据):mysql,postgresql,oracle,mssql,sqlite,clickhouse,dm,kingbase 和 DriverName 对应,处理数据库有多个驱动
//Database type: mysql,postgresql,oracle,mssql,sqlite,clickhouse,dm,kingbase. Corresponds to DriverName; a database may have multiple drivers
DBType string
//PrintSQL 是否打印SQL语句.使用zorm.PrintSQL记录SQL
//PrintSQL Whether to print SQL, use zorm.PrintSQL record sql
PrintSQL bool
//MaxOpenConns 数据库最大连接数,默认50
//MaxOpenConns Maximum number of database connections, Default 50
MaxOpenConns int
//MaxIdleConns 数据库最大空闲连接数,默认50
//MaxIdleConns The maximum number of idle database connections, default 50
MaxIdleConns int
//ConnMaxLifetimeSecond 连接存活秒时间. 默认600(10分钟)后连接被销毁重建.避免数据库主动断开连接,造成死连接.MySQL默认wait_timeout 28800秒(8小时)
//ConnMaxLifetimeSecond: (Connection survival time in seconds)Destroy and rebuild the connection after the default 600 seconds (10 minutes)
//Prevent the database from actively disconnecting and causing dead connections. MySQL Default wait_timeout 28800 seconds
ConnMaxLifetimeSecond int
//事务隔离级别的默认配置,默认为nil
//DefaultTxOptions The default configuration of the transaction isolation level, default is nil
DefaultTxOptions *sql.TxOptions
//全局禁用事务,默认false.为了处理某些数据库不支持事务,比如clickhouse
//Globally disable transactions, default false. For databases that do not support transactions, such as clickhouse
//禁用事务应该有驱动实现,不应该由orm实现
//Disabling transactions should be implemented by the driver, not by the orm
//DisableTransaction bool
//MockSQLDB 用于mock测试的入口,如果MockSQLDB不为nil,则不使用DSN,直接使用MockSQLDB
//MockSQLDB The entry point for mock tests; if MockSQLDB is not nil, the DSN is ignored and MockSQLDB is used directly
//db, mock, err := sqlmock.New()
//MockSQLDB *sql.DB
//FuncSeataGlobalTransaction seata-golang分布式的适配函数,返回ISeataGlobalTransaction接口的实现
//FuncSeataGlobalTransaction The seata-golang distributed transaction adapter function; returns an implementation of the ISeataGlobalTransaction interface
FuncSeataGlobalTransaction func(ctx context.Context) (ISeataGlobalTransaction, context.Context, error)
}
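// A minimal usage sketch (illustrative values; NewDBDao is the public constructor):
//   config := DataSourceConfig{DSN: "root:root@tcp(127.0.0.1:3306)/readygo?charset=utf8&parseTime=true", DriverName: "mysql", DBType: "mysql"}
//   dbDao, err := NewDBDao(&config)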
// newDataSource 创建一个新的datasource,内部调用,避免外部直接使用datasource
// newDataSource Create a new datasource, called internally, to avoid direct external use of the datasource
func newDataSource(config *DataSourceConfig) (*dataSource, error) {
if config.DSN == "" {
return nil, errors.New("DSN cannot be empty")
}
if config.DriverName == "" {
return nil, errors.New("DriverName cannot be empty")
}
if config.DBType == "" {
return nil, errors.New("DBType cannot be empty")
}
var db *sql.DB
var errSQLOpen error
//if config.MockSQLDB == nil {
db, errSQLOpen = sql.Open(config.DriverName, config.DSN)
if errSQLOpen != nil {
errSQLOpen = fmt.Errorf("newDataSource-->open数据库打开失败:%w", errSQLOpen)
FuncLogError(errSQLOpen)
return nil, errSQLOpen
}
// } else {
// db = config.MockSQLDB
// }
if config.MaxOpenConns == 0 {
config.MaxOpenConns = 50
}
if config.MaxIdleConns == 0 {
config.MaxIdleConns = 50
}
if config.ConnMaxLifetimeSecond == 0 {
config.ConnMaxLifetimeSecond = 600
}
//设置数据库最大连接数
//Set the maximum number of database connections
db.SetMaxOpenConns(config.MaxOpenConns)
//设置数据库最大空闲连接数
//Set the maximum number of free connections to the database
db.SetMaxIdleConns(config.MaxIdleConns)
//连接存活秒时间. 默认600(10分钟)后连接被销毁重建.避免数据库主动断开连接,造成死连接.MySQL默认wait_timeout 28800秒(8小时)
//(Connection survival time in seconds) Destroy and rebuild the connection after the default 600 seconds (10 minutes)
//Prevent the database from actively disconnecting and causing dead connections. MySQL Default wait_timeout 28800 seconds
db.SetConnMaxLifetime(time.Second * time.Duration(config.ConnMaxLifetimeSecond))
//验证连接
//Verify the connection
if pingerr := db.Ping(); pingerr != nil {
pingerr = fmt.Errorf("newDataSource-->ping数据库失败:%w", pingerr)
FuncLogError(pingerr)
db.Close()
return nil, pingerr
}
return &dataSource{db}, nil
}
// 事务参照:https://www.jianshu.com/p/2a144332c3db
//Transaction reference: https://www.jianshu.com/p/2a144332c3db
// const beginStatus = 1
// dataBaseConnection 数据库dbConnection会话,可以原生查询或者事务
// dataBaseConnection Database session, native query or transaction.
type dataBaseConnection struct {
// 原生db
// native db
db *sql.DB
// 原生事务
// native Transaction
tx *sql.Tx
// 数据库配置
// database configuration
config *DataSourceConfig
//commitSign int8 // 提交标记,控制是否提交事务
//rollbackSign bool // 回滚标记,控制是否回滚事务
}
// beginTx 开启事务
// beginTx Open transaction
func (dbConnection *dataBaseConnection) beginTx(ctx context.Context) error {
//s.rollbackSign = true
if dbConnection.tx == nil {
//设置事务配置,主要是隔离级别
//Set the transaction options, mainly the isolation level
var txOptions *sql.TxOptions
contextTxOptions := ctx.Value(contextTxOptionsKey)
if contextTxOptions != nil {
txOptions, _ = contextTxOptions.(*sql.TxOptions)
} else {
txOptions = dbConnection.config.DefaultTxOptions
}
tx, err := dbConnection.db.BeginTx(ctx, txOptions)
if err != nil {
err = fmt.Errorf("beginTx事务开启失败:%w", err)
return err
}
dbConnection.tx = tx
//s.commitSign = beginStatus
return nil
}
//s.commitSign++
return nil
}
// rollback 回滚事务
// rollback Rollback transaction
func (dbConnection *dataBaseConnection) rollback() error {
//if s.tx != nil && s.rollbackSign == true {
if dbConnection.tx != nil {
err := dbConnection.tx.Rollback()
if err != nil {
err = fmt.Errorf("rollback事务回滚失败:%w", err)
return err
}
dbConnection.tx = nil
return nil
}
return nil
}
// commit 提交事务
// commit Commit transaction
func (dbConnection *dataBaseConnection) commit() error {
//s.rollbackSign = false
if dbConnection.tx == nil {
return errors.New("commit事务为空")
}
err := dbConnection.tx.Commit()
if err != nil {
err = fmt.Errorf("commit事务提交失败:%w", err)
return err
}
dbConnection.tx = nil
return nil
}
// execContext 执行sql语句,如果已经开启事务,就以事务方式执行,如果没有开启事务,就以非事务方式执行
// execContext Execute sql statement,If the transaction has been opened,it will be executed in transaction mode, if the transaction is not opened,it will be executed in non-transactional mode
func (dbConnection *dataBaseConnection) execContext(ctx context.Context, execsql *string, args []interface{}) (*sql.Result, error) {
//打印SQL
//print SQL
if dbConnection.config.PrintSQL {
//logger.Info("printSQL", logger.String("sql", execsql), logger.Any("args", args))
FuncPrintSQL(*execsql, args)
}
if dbConnection.tx != nil {
res, reserr := dbConnection.tx.ExecContext(ctx, *execsql, args...)
return &res, reserr
}
res, reserr := dbConnection.db.ExecContext(ctx, *execsql, args...)
return &res, reserr
}
// queryRowContext 如果已经开启事务,就以事务方式执行,如果没有开启事务,就以非事务方式执行
// queryRowContext Execute a single-row query; in transaction mode if a transaction has been opened, otherwise in non-transactional mode
func (dbConnection *dataBaseConnection) queryRowContext(ctx context.Context, query *string, args []interface{}) *sql.Row {
//打印SQL
//print SQL
if dbConnection.config.PrintSQL {
//logger.Info("printSQL", logger.String("sql", query), logger.Any("args", args))
FuncPrintSQL(*query, args)
}
if dbConnection.tx != nil {
return dbConnection.tx.QueryRowContext(ctx, *query, args...)
}
return dbConnection.db.QueryRowContext(ctx, *query, args...)
}
// queryContext 查询数据,如果已经开启事务,就以事务方式执行,如果没有开启事务,就以非事务方式执行
// queryContext Execute a query; in transaction mode if a transaction has been opened, otherwise in non-transactional mode
func (dbConnection *dataBaseConnection) queryContext(ctx context.Context, query *string, args []interface{}) (*sql.Rows, error) {
//打印SQL
//print SQL
if dbConnection.config.PrintSQL {
//logger.Info("printSQL", logger.String("sql", query), logger.Any("args", args))
FuncPrintSQL(*query, args)
}
if dbConnection.tx != nil {
return dbConnection.tx.QueryContext(ctx, *query, args...)
}
return dbConnection.db.QueryContext(ctx, *query, args...)
}
/*
// prepareContext 预执行,如果已经开启事务,就以事务方式执行,如果没有开启事务,就以非事务方式执行
// prepareContext Pre-execution,If the transaction has been opened,it will be executed in transaction mode,if the transaction is not opened,it will be executed in non-transactional mode
func (dbConnection *dataBaseConnection) prepareContext(ctx context.Context, query *string) (*sql.Stmt, error) {
//打印SQL
//print SQL
if dbConnection.config.PrintSQL {
//logger.Info("printSQL", logger.String("sql", query))
FuncPrintSQL(*query, nil)
}
if dbConnection.tx != nil {
return dbConnection.tx.PrepareContext(ctx, *query)
}
return dbConnection.db.PrepareContext(ctx, *query)
}
*/

@ -0,0 +1,415 @@
// Copyright 2009 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
// Multiprecision decimal numbers.
// For floating-point formatting only; not general purpose.
// Only operations are assign and (binary) left/right shift.
// Can do binary floating point in multiprecision decimal precisely
// because 2 divides 10; cannot do decimal floating point
// in multiprecision binary precisely.
package decimal
type decimal struct {
d [800]byte // digits, big-endian representation
nd int // number of digits used
dp int // decimal point
neg bool // negative flag
trunc bool // discarded nonzero digits beyond d[:nd]
}
func (a *decimal) String() string {
n := 10 + a.nd
if a.dp > 0 {
n += a.dp
}
if a.dp < 0 {
n += -a.dp
}
buf := make([]byte, n)
w := 0
switch {
case a.nd == 0:
return "0"
case a.dp <= 0:
// zeros fill space between decimal point and digits
buf[w] = '0'
w++
buf[w] = '.'
w++
w += digitZero(buf[w : w+-a.dp])
w += copy(buf[w:], a.d[0:a.nd])
case a.dp < a.nd:
// decimal point in middle of digits
w += copy(buf[w:], a.d[0:a.dp])
buf[w] = '.'
w++
w += copy(buf[w:], a.d[a.dp:a.nd])
default:
// zeros fill space between digits and decimal point
w += copy(buf[w:], a.d[0:a.nd])
w += digitZero(buf[w : w+a.dp-a.nd])
}
return string(buf[0:w])
}
func digitZero(dst []byte) int {
for i := range dst {
dst[i] = '0'
}
return len(dst)
}
// trim trailing zeros from number.
// (They are meaningless; the decimal point is tracked
// independent of the number of digits.)
func trim(a *decimal) {
for a.nd > 0 && a.d[a.nd-1] == '0' {
a.nd--
}
if a.nd == 0 {
a.dp = 0
}
}
// Assign v to a.
func (a *decimal) Assign(v uint64) {
var buf [24]byte
// Write reversed decimal in buf.
n := 0
for v > 0 {
v1 := v / 10
v -= 10 * v1
buf[n] = byte(v + '0')
n++
v = v1
}
// Reverse again to produce forward decimal in a.d.
a.nd = 0
for n--; n >= 0; n-- {
a.d[a.nd] = buf[n]
a.nd++
}
a.dp = a.nd
trim(a)
}
// Maximum shift that we can do in one pass without overflow.
// A uint has 32 or 64 bits, and we have to be able to accommodate 9<<k.
const uintSize = 32 << (^uint(0) >> 63)
const maxShift = uintSize - 4
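// (^uint(0) >> 63) evaluates to 1 on 64-bit platforms and 0 on 32-bit ones, so uintSize is 64 or 32.
// maxShift reserves 4 bits so that the intermediate value 9<<k cannot overflow a uint.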
// Binary shift right (/ 2) by k bits. k <= maxShift to avoid overflow.
func rightShift(a *decimal, k uint) {
r := 0 // read pointer
w := 0 // write pointer
// Pick up enough leading digits to cover first shift.
var n uint
for ; n>>k == 0; r++ {
if r >= a.nd {
if n == 0 {
// a == 0; shouldn't get here, but handle anyway.
a.nd = 0
return
}
for n>>k == 0 {
n = n * 10
r++
}
break
}
c := uint(a.d[r])
n = n*10 + c - '0'
}
a.dp -= r - 1
var mask uint = (1 << k) - 1
// Pick up a digit, put down a digit.
for ; r < a.nd; r++ {
c := uint(a.d[r])
dig := n >> k
n &= mask
a.d[w] = byte(dig + '0')
w++
n = n*10 + c - '0'
}
// Put down extra digits.
for n > 0 {
dig := n >> k
n &= mask
if w < len(a.d) {
a.d[w] = byte(dig + '0')
w++
} else if dig > 0 {
a.trunc = true
}
n = n * 10
}
a.nd = w
trim(a)
}
// Cheat sheet for left shift: table indexed by shift count giving
// number of new digits that will be introduced by that shift.
//
// For example, leftcheats[4] = {2, "625"}. That means that
// if we are shifting by 4 (multiplying by 16), it will add 2 digits
// when the string prefix is "625" through "999", and one fewer digit
// if the string prefix is "000" through "624".
//
// Credit for this trick goes to Ken.
type leftCheat struct {
delta int // number of new digits
cutoff string // minus one digit if original < a.
}
var leftcheats = []leftCheat{
// Leading digits of 1/2^i = 5^i.
// 5^23 is not an exact 64-bit floating point number,
// so have to use bc for the math.
// Go up to 60 to be large enough for 32bit and 64bit platforms.
/*
seq 60 | sed 's/^/5^/' | bc |
awk 'BEGIN{ print "\t{ 0, \"\" }," }
{
log2 = log(2)/log(10)
printf("\t{ %d, \"%s\" },\t// * %d\n",
int(log2*NR+1), $0, 2**NR)
}'
*/
{0, ""},
{1, "5"}, // * 2
{1, "25"}, // * 4
{1, "125"}, // * 8
{2, "625"}, // * 16
{2, "3125"}, // * 32
{2, "15625"}, // * 64
{3, "78125"}, // * 128
{3, "390625"}, // * 256
{3, "1953125"}, // * 512
{4, "9765625"}, // * 1024
{4, "48828125"}, // * 2048
{4, "244140625"}, // * 4096
{4, "1220703125"}, // * 8192
{5, "6103515625"}, // * 16384
{5, "30517578125"}, // * 32768
{5, "152587890625"}, // * 65536
{6, "762939453125"}, // * 131072
{6, "3814697265625"}, // * 262144
{6, "19073486328125"}, // * 524288
{7, "95367431640625"}, // * 1048576
{7, "476837158203125"}, // * 2097152
{7, "2384185791015625"}, // * 4194304
{7, "11920928955078125"}, // * 8388608
{8, "59604644775390625"}, // * 16777216
{8, "298023223876953125"}, // * 33554432
{8, "1490116119384765625"}, // * 67108864
{9, "7450580596923828125"}, // * 134217728
{9, "37252902984619140625"}, // * 268435456
{9, "186264514923095703125"}, // * 536870912
{10, "931322574615478515625"}, // * 1073741824
{10, "4656612873077392578125"}, // * 2147483648
{10, "23283064365386962890625"}, // * 4294967296
{10, "116415321826934814453125"}, // * 8589934592
{11, "582076609134674072265625"}, // * 17179869184
{11, "2910383045673370361328125"}, // * 34359738368
{11, "14551915228366851806640625"}, // * 68719476736
{12, "72759576141834259033203125"}, // * 137438953472
{12, "363797880709171295166015625"}, // * 274877906944
{12, "1818989403545856475830078125"}, // * 549755813888
{13, "9094947017729282379150390625"}, // * 1099511627776
{13, "45474735088646411895751953125"}, // * 2199023255552
{13, "227373675443232059478759765625"}, // * 4398046511104
{13, "1136868377216160297393798828125"}, // * 8796093022208
{14, "5684341886080801486968994140625"}, // * 17592186044416
{14, "28421709430404007434844970703125"}, // * 35184372088832
{14, "142108547152020037174224853515625"}, // * 70368744177664
{15, "710542735760100185871124267578125"}, // * 140737488355328
{15, "3552713678800500929355621337890625"}, // * 281474976710656
{15, "17763568394002504646778106689453125"}, // * 562949953421312
{16, "88817841970012523233890533447265625"}, // * 1125899906842624
{16, "444089209850062616169452667236328125"}, // * 2251799813685248
{16, "2220446049250313080847263336181640625"}, // * 4503599627370496
{16, "11102230246251565404236316680908203125"}, // * 9007199254740992
{17, "55511151231257827021181583404541015625"}, // * 18014398509481984
{17, "277555756156289135105907917022705078125"}, // * 36028797018963968
{17, "1387778780781445675529539585113525390625"}, // * 72057594037927936
{18, "6938893903907228377647697925567626953125"}, // * 144115188075855872
{18, "34694469519536141888238489627838134765625"}, // * 288230376151711744
{18, "173472347597680709441192448139190673828125"}, // * 576460752303423488
{19, "867361737988403547205962240695953369140625"}, // * 1152921504606846976
}
// Is the leading prefix of b lexicographically less than s?
func prefixIsLessThan(b []byte, s string) bool {
for i := 0; i < len(s); i++ {
if i >= len(b) {
return true
}
if b[i] != s[i] {
return b[i] < s[i]
}
}
return false
}
// Binary shift left (* 2) by k bits. k <= maxShift to avoid overflow.
func leftShift(a *decimal, k uint) {
delta := leftcheats[k].delta
if prefixIsLessThan(a.d[0:a.nd], leftcheats[k].cutoff) {
delta--
}
r := a.nd // read index
w := a.nd + delta // write index
// Pick up a digit, put down a digit.
var n uint
for r--; r >= 0; r-- {
n += (uint(a.d[r]) - '0') << k
quo := n / 10
rem := n - 10*quo
w--
if w < len(a.d) {
a.d[w] = byte(rem + '0')
} else if rem != 0 {
a.trunc = true
}
n = quo
}
// Put down extra digits.
for n > 0 {
quo := n / 10
rem := n - 10*quo
w--
if w < len(a.d) {
a.d[w] = byte(rem + '0')
} else if rem != 0 {
a.trunc = true
}
n = quo
}
a.nd += delta
if a.nd >= len(a.d) {
a.nd = len(a.d)
}
a.dp += delta
trim(a)
}
// Binary shift left (k > 0) or right (k < 0).
func (a *decimal) Shift(k int) {
switch {
case a.nd == 0:
// nothing to do: a == 0
case k > 0:
for k > maxShift {
leftShift(a, maxShift)
k -= maxShift
}
leftShift(a, uint(k))
case k < 0:
for k < -maxShift {
rightShift(a, maxShift)
k += maxShift
}
rightShift(a, uint(-k))
}
}
// If we chop a at nd digits, should we round up?
func shouldRoundUp(a *decimal, nd int) bool {
if nd < 0 || nd >= a.nd {
return false
}
if a.d[nd] == '5' && nd+1 == a.nd { // exactly halfway - round to even
// if we truncated, a little higher than what's recorded - always round up
if a.trunc {
return true
}
return nd > 0 && (a.d[nd-1]-'0')%2 != 0
}
// not halfway - digit tells all
return a.d[nd] >= '5'
}
// Round a to nd digits (or fewer).
// If nd is zero, it means we're rounding
// just to the left of the digits, as in
// 0.09 -> 0.1.
func (a *decimal) Round(nd int) {
if nd < 0 || nd >= a.nd {
return
}
if shouldRoundUp(a, nd) {
a.RoundUp(nd)
} else {
a.RoundDown(nd)
}
}
// Round a down to nd digits (or fewer).
func (a *decimal) RoundDown(nd int) {
if nd < 0 || nd >= a.nd {
return
}
a.nd = nd
trim(a)
}
// Round a up to nd digits (or fewer).
func (a *decimal) RoundUp(nd int) {
if nd < 0 || nd >= a.nd {
return
}
// round up
for i := nd - 1; i >= 0; i-- {
c := a.d[i]
if c < '9' { // can stop after this digit
a.d[i]++
a.nd = i + 1
return
}
}
// Number is all 9s.
// Change to single 1 with adjusted decimal point.
a.d[0] = '1'
a.nd = 1
a.dp++
}
// Extract integer part, rounded appropriately.
// No guarantees about overflow.
func (a *decimal) RoundedInteger() uint64 {
if a.dp > 20 {
return 0xFFFFFFFFFFFFFFFF
}
var i int
n := uint64(0)
for i = 0; i < a.dp && i < a.nd; i++ {
n = n*10 + uint64(a.d[i]-'0')
}
for ; i < a.dp; i++ {
n *= 10
}
if shouldRoundUp(a, a.dp) {
n++
}
return n
}

File diff suppressed because it is too large

@ -0,0 +1,160 @@
// Copyright 2009 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
// Multiprecision decimal numbers.
// For floating-point formatting only; not general purpose.
// Only operations are assign and (binary) left/right shift.
// Can do binary floating point in multiprecision decimal precisely
// because 2 divides 10; cannot do decimal floating point
// in multiprecision binary precisely.
package decimal
type floatInfo struct {
mantbits uint
expbits uint
bias int
}
var float32info = floatInfo{23, 8, -127}
var float64info = floatInfo{52, 11, -1023}
// roundShortest rounds d (= mant * 2^exp) to the shortest number of digits
// that will let the original floating point value be precisely reconstructed.
func roundShortest(d *decimal, mant uint64, exp int, flt *floatInfo) {
// If mantissa is zero, the number is zero; stop now.
if mant == 0 {
d.nd = 0
return
}
// Compute upper and lower such that any decimal number
// between upper and lower (possibly inclusive)
// will round to the original floating point number.
// We may see at once that the number is already shortest.
//
// Suppose d is not denormal, so that 2^exp <= d < 10^dp.
// The closest shorter number is at least 10^(dp-nd) away.
// The lower/upper bounds computed below are at distance
// at most 2^(exp-mantbits).
//
// So the number is already shortest if 10^(dp-nd) > 2^(exp-mantbits),
// or equivalently log2(10)*(dp-nd) > exp-mantbits.
// It is true if 332/100*(dp-nd) >= exp-mantbits (log2(10) > 3.32).
minexp := flt.bias + 1 // minimum possible exponent
if exp > minexp && 332*(d.dp-d.nd) >= 100*(exp-int(flt.mantbits)) {
// The number is already shortest.
return
}
// d = mant << (exp - mantbits)
// Next highest floating point number is mant+1 << exp-mantbits.
// Our upper bound is halfway between, mant*2+1 << exp-mantbits-1.
upper := new(decimal)
upper.Assign(mant*2 + 1)
upper.Shift(exp - int(flt.mantbits) - 1)
// d = mant << (exp - mantbits)
// Next lowest floating point number is mant-1 << exp-mantbits,
// unless mant-1 drops the significant bit and exp is not the minimum exp,
// in which case the next lowest is mant*2-1 << exp-mantbits-1.
// Either way, call it mantlo << explo-mantbits.
// Our lower bound is halfway between, mantlo*2+1 << explo-mantbits-1.
var mantlo uint64
var explo int
if mant > 1<<flt.mantbits || exp == minexp {
mantlo = mant - 1
explo = exp
} else {
mantlo = mant*2 - 1
explo = exp - 1
}
lower := new(decimal)
lower.Assign(mantlo*2 + 1)
lower.Shift(explo - int(flt.mantbits) - 1)
// The upper and lower bounds are possible outputs only if
// the original mantissa is even, so that IEEE round-to-even
// would round to the original mantissa and not the neighbors.
inclusive := mant%2 == 0
// As we walk the digits we want to know whether rounding up would fall
// within the upper bound. This is tracked by upperdelta:
//
// If upperdelta == 0, the digits of d and upper are the same so far.
//
// If upperdelta == 1, we saw a difference of 1 between d and upper on a
// previous digit and subsequently only 9s for d and 0s for upper.
// (Thus rounding up may fall outside the bound, if it is exclusive.)
//
// If upperdelta == 2, then the difference is greater than 1
// and we know that rounding up falls within the bound.
var upperdelta uint8
// Now we can figure out the minimum number of digits required.
// Walk along until d has distinguished itself from upper and lower.
for ui := 0; ; ui++ {
// lower, d, and upper may have the decimal points at different
// places. In this case upper is the longest, so we iterate from
// ui==0 and start li and mi at (possibly) -1.
mi := ui - upper.dp + d.dp
if mi >= d.nd {
break
}
li := ui - upper.dp + lower.dp
l := byte('0') // lower digit
if li >= 0 && li < lower.nd {
l = lower.d[li]
}
m := byte('0') // middle digit
if mi >= 0 {
m = d.d[mi]
}
u := byte('0') // upper digit
if ui < upper.nd {
u = upper.d[ui]
}
// Okay to round down (truncate) if lower has a different digit
// or if lower is inclusive and is exactly the result of rounding
// down (i.e., and we have reached the final digit of lower).
okdown := l != m || inclusive && li+1 == lower.nd
switch {
case upperdelta == 0 && m+1 < u:
// Example:
// m = 12345xxx
// u = 12347xxx
upperdelta = 2
case upperdelta == 0 && m != u:
// Example:
// m = 12345xxx
// u = 12346xxx
upperdelta = 1
case upperdelta == 1 && (m != '9' || u != '0'):
// Example:
// m = 1234598x
// u = 1234600x
upperdelta = 2
}
// Okay to round up if upper has a different digit and either upper
// is inclusive or upper is bigger than the result of rounding up.
okup := upperdelta > 0 && (inclusive || upperdelta > 1 || ui+1 < upper.nd)
// If it's okay to do either, then round to the nearest one.
// If it's okay to do only one, do it.
switch {
case okdown && okup:
d.Round(mi + 1)
return
case okdown:
d.RoundDown(mi + 1)
return
case okup:
d.RoundUp(mi + 1)
return
}
}
}

@ -0,0 +1,784 @@
package zorm
import (
"crypto/rand"
"database/sql"
"errors"
"fmt"
"math/big"
"reflect"
"regexp"
"strconv"
"strings"
"time"
)
//wrapPageSQL 包装分页的SQL语句
//wrapPageSQL SQL statement for wrapping paging
func wrapPageSQL(dbType string, sqlstr string, page *Page) (string, error) {
//新的分页方法都已经不需要order by了,不再强制检查
//The new paging method does not require 'order by' anymore, no longer mandatory check.
//
/*
locOrderBy := findOrderByIndex(sqlstr)
if len(locOrderBy) <= 0 { //如果没有 order by
return "", errors.New("分页语句必须有 order by")
}
*/
var sqlbuilder strings.Builder
sqlbuilder.WriteString(sqlstr)
if dbType == "mysql" || dbType == "sqlite" || dbType == "dm" || dbType == "gbase" || dbType == "clickhouse" { //MySQL,sqlite3,dm数据库,南大通用,clickhouse
sqlbuilder.WriteString(" LIMIT ")
sqlbuilder.WriteString(strconv.Itoa(page.PageSize * (page.PageNo - 1)))
sqlbuilder.WriteString(",")
sqlbuilder.WriteString(strconv.Itoa(page.PageSize))
} else if dbType == "postgresql" || dbType == "kingbase" || dbType == "shentong" { //postgresql,kingbase,神通数据库
sqlbuilder.WriteString(" LIMIT ")
sqlbuilder.WriteString(strconv.Itoa(page.PageSize))
sqlbuilder.WriteString(" OFFSET ")
sqlbuilder.WriteString(strconv.Itoa(page.PageSize * (page.PageNo - 1)))
} else if dbType == "mssql" || dbType == "oracle" { //sqlserver 2012+,oracle 12c+
sqlbuilder.WriteString(" OFFSET ")
sqlbuilder.WriteString(strconv.Itoa(page.PageSize * (page.PageNo - 1)))
sqlbuilder.WriteString(" ROWS FETCH NEXT ")
sqlbuilder.WriteString(strconv.Itoa(page.PageSize))
sqlbuilder.WriteString(" ROWS ONLY ")
} else {
return "", errors.New("wrapPageSQL()-->不支持的数据库类型:" + dbType)
}
sqlstr = sqlbuilder.String()
return reBindSQL(dbType, sqlstr)
}
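// For example (illustrative), with page.PageNo=2 and page.PageSize=20 on "SELECT * FROM t_demo":
//   mysql:  SELECT * FROM t_demo LIMIT 20,20
//   oracle: SELECT * FROM t_demo OFFSET 20 ROWS FETCH NEXT 20 ROWS ONLY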
//wrapInsertSQL 包装保存Struct语句.返回语句,是否自增,错误信息
//数组传递,如果外部方法有调用append的逻辑append会破坏指针引用所以传递指针
//wrapInsertSQL Wrap the insert statement for a 'Struct'. Returns the SQL statement, the auto-increment type, the primary key type, and an error
//Arrays are passed as pointers: if the external method has logic that calls append, append would break the reference, so pointers are passed
func wrapInsertSQL(dbType string, typeOf *reflect.Type, entity IEntityStruct, columns *[]reflect.StructField, values *[]interface{}) (string, int, string, error) {
sqlstr, autoIncrement, pktype, err := wrapInsertSQLNOreBuild(dbType, typeOf, entity, columns, values)
if err != nil {
return sqlstr, autoIncrement, pktype, err
}
savesql, err := reBindSQL(dbType, sqlstr)
return savesql, autoIncrement, pktype, err
}
//wrapInsertSQLNOreBuild 包装保存Struct语句.返回语句,没有rebuild,返回原始的SQL,是否自增,错误信息
//数组传递,如果外部方法有调用append的逻辑,传递指针,因为append会破坏指针引用
//wrapInsertSQLNOreBuild Wrap the insert statement for a Struct without rebinding; returns the original SQL, the auto-increment type, the primary key type, and an error
//Arrays are passed as pointers: if the external method has logic that calls append, append would break the reference, so pointers are passed
func wrapInsertSQLNOreBuild(dbType string, typeOf *reflect.Type, entity IEntityStruct, columns *[]reflect.StructField, values *[]interface{}) (string, int, string, error) {
//自增类型 0(不自增),1(普通自增),2(序列自增),3(触发器自增)
//Auto-increment type: 0 (no auto-increment), 1 (ordinary auto-increment), 2 (sequence auto-increment), 3 (trigger auto-increment)
autoIncrement := 0
//主键类型
//Primary key type
pktype := ""
//SQL语句的构造器
//SQL statement constructor
var sqlBuilder strings.Builder
sqlBuilder.WriteString("INSERT INTO ")
sqlBuilder.WriteString(entity.GetTableName())
sqlBuilder.WriteString("(")
//SQL语句中,VALUES(?,?,...)语句的构造器
//In the SQL statement, the constructor of the VALUES(?,?,...) statement
var valueSQLBuilder strings.Builder
valueSQLBuilder.WriteString(" VALUES (")
//主键的名称
//The name of the primary key.
pkFieldName, e := entityPKFieldName(entity, typeOf)
if e != nil {
return "", autoIncrement, pktype, e
}
var sequence string
var sequenceOK bool
if entity.GetPkSequence() != nil {
sequence, sequenceOK = entity.GetPkSequence()[dbType]
if sequenceOK { //存在序列 Existence sequence
if sequence == "" { //触发器自增,也兼容自增关键字 Auto-increment by trigger, also compatible with auto-increment keywords.
autoIncrement = 3
} else { //序列自增 Sequence increment
autoIncrement = 2
}
}
}
for i := 0; i < len(*columns); i++ {
field := (*columns)[i]
if field.Name == pkFieldName { //如果是主键 | If it is the primary key
//获取主键类型 | Get the primary key type.
pkKind := field.Type.Kind()
if pkKind == reflect.String {
pktype = "string"
} else if pkKind == reflect.Int || pkKind == reflect.Int32 || pkKind == reflect.Int16 || pkKind == reflect.Int8 {
pktype = "int"
} else if pkKind == reflect.Int64 {
pktype = "int64"
} else {
return "", autoIncrement, pktype, errors.New("wrapInsertSQLNOreBuild不支持的主键类型")
}
if autoIncrement == 3 {
//如果是后台触发器生成的主键值,sql语句中不再体现
//去掉这一列,后续不再处理
// If it is the primary key value generated by the background trigger, it is no longer reflected in the sql statement
//Remove this column and will not process it later.
*columns = append((*columns)[:i], (*columns)[i+1:]...)
*values = append((*values)[:i], (*values)[i+1:]...)
i = i - 1
continue
}
//主键的值
//The value of the primary key
pkValue := (*values)[i]
if autoIncrement == 2 { //如果是序列自增 | If it is a sequence increment
//拼接字符串 | Concatenated string
//sqlBuilder.WriteString(getStructFieldTagColumnValue(typeOf, field.Name))
//sqlBuilder.WriteString(field.Tag.Get(tagColumnName))
colName := getFieldTagName(dbType, &field)
sqlBuilder.WriteString(colName)
sqlBuilder.WriteString(",")
valueSQLBuilder.WriteString(sequence)
valueSQLBuilder.WriteString(",")
//去掉这一列,后续不再处理
//Remove this column and will not process it later.
*columns = append((*columns)[:i], (*columns)[i+1:]...)
*values = append((*values)[:i], (*values)[i+1:]...)
i = i - 1
continue
} else if (pktype == "string") && (pkValue.(string) == "") { //主键是字符串类型,并且值为"",赋值id
//生成主键字符串
//Generate primary key string
id := FuncGenerateStringID()
(*values)[i] = id
//给对象主键赋值
//Assign a value to the primary key of the object
v := reflect.ValueOf(entity).Elem()
v.FieldByName(field.Name).Set(reflect.ValueOf(id))
//如果是数字类型,并且值为0,认为是数据库自增,从数组中删除掉主键的信息,让数据库自己生成
//If it is a number type and the value is 0,
//it is considered to be a database self-increment,
//delete the primary key information from the array, and let the database generate itself.
} else if (pktype == "int" && pkValue.(int) == 0) || (pktype == "int64" && pkValue.(int64) == 0) {
//标记是自增主键
//Mark is auto-incrementing primary key
autoIncrement = 1
//去掉这一列,后续不再处理
//Remove this column and will not process it later.
*columns = append((*columns)[:i], (*columns)[i+1:]...)
*values = append((*values)[:i], (*values)[i+1:]...)
i = i - 1
continue
}
}
//拼接字符串
//Concatenated string.
//sqlBuilder.WriteString(getStructFieldTagColumnValue(typeOf, field.Name))
// sqlBuilder.WriteString(field.Tag.Get(tagColumnName))
colName := getFieldTagName(dbType, &field)
sqlBuilder.WriteString(colName)
sqlBuilder.WriteString(",")
valueSQLBuilder.WriteString("?,")
}
//去掉字符串最后的 ','
//Remove the',' at the end of the string
sqlstr := sqlBuilder.String()
if len(sqlstr) > 0 {
sqlstr = sqlstr[:len(sqlstr)-1]
}
valuestr := valueSQLBuilder.String()
if len(valuestr) > 0 {
valuestr = valuestr[:len(valuestr)-1]
}
sqlstr = sqlstr + ")" + valuestr + ")"
//savesql, err := wrapSQL(dbType, sqlstr)
return sqlstr, autoIncrement, pktype, nil
}
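//示例(仅示意):假设一个虚构的实体类,column tag为id和name,主键id为字符串类型,wrapInsertSQLNOreBuild生成的原始语句形如:
//Example (illustrative sketch): assuming a hypothetical entity whose column tags are id and name, with a string primary key id, wrapInsertSQLNOreBuild produces a raw statement like:
/*
INSERT INTO t_demo(id,name) VALUES (?,?)
// pktype == "string", autoIncrement == 0; values holds the generated 32-char ID and the name value
*/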
//wrapInsertSliceSQL 包装批量保存StructSlice语句.返回语句,自增类型,错误信息
//数组传递:如果外部方法有调用append的逻辑,append会破坏指针引用,所以传递指针
//wrapInsertSliceSQL wraps the batch insert statement for a slice of Structs. It returns the SQL statement, the auto-increment type, and an error
//Arrays are passed as pointers: if a calling method appends to them, append would otherwise break the reference
func wrapInsertSliceSQL(dbType string, typeOf *reflect.Type, entityStructSlice []IEntityStruct, columns *[]reflect.StructField, values *[]interface{}) (string, int, error) {
sliceLen := len(entityStructSlice)
if entityStructSlice == nil || sliceLen < 1 {
return "", 0, errors.New("wrapInsertSliceSQL对象数组不能为空")
}
//第一个对象,获取第一个Struct对象,用于获取数据库字段,也获取了值
//The first object, get the first Struct object, used to get the database field, and also get the value
entity := entityStructSlice[0]
//先生成一条语句
//Generate a statement first
sqlstr, autoIncrement, _, firstErr := wrapInsertSQLNOreBuild(dbType, typeOf, entity, columns, values)
if firstErr != nil {
return "", autoIncrement, firstErr
}
//如果只有一个Struct对象
//If there is only one Struct object
if sliceLen == 1 {
sqlstr, firstErr = reBindSQL(dbType, sqlstr)
return sqlstr, autoIncrement, firstErr
}
//主键的名称
//The name of the primary key
pkFieldName, e := entityPKFieldName(entity, typeOf)
if e != nil {
return "", autoIncrement, e
}
//截取生成的SQL语句中 VALUES 后面的字符串值
//Intercept the string value after VALUES in the generated SQL statement
valueIndex := strings.Index(sqlstr, " VALUES (")
if valueIndex < 1 { //生成的语句异常
return "", autoIncrement, errors.New("wrapInsertSliceSQL生成的语句异常")
}
//VALUES后面的字符串,例如 (?,?,?),用于循环拼接
//The string after VALUES, e.g. (?,?,?), used for repeated splicing
//偏移量8等于len(" VALUES "),保留开头的'(' | the offset 8 equals len(" VALUES "), keeping the leading '('
valuestr := sqlstr[valueIndex+8:]
//SQL语句的构造器
//SQL statement constructor
var insertSliceSQLBuilder strings.Builder
insertSliceSQLBuilder.WriteString(sqlstr)
for i := 1; i < sliceLen; i++ {
//拼接字符串
//Splicing string
insertSliceSQLBuilder.WriteString(",")
insertSliceSQLBuilder.WriteString(valuestr)
entityStruct := entityStructSlice[i]
for j := 0; j < len(*columns); j++ {
// 获取实体类的反射,指针下的struct
// Get the reflection of the entity class, the struct under the pointer
valueOf := reflect.ValueOf(entityStruct).Elem()
field := (*columns)[j]
if field.Name == pkFieldName { //如果是主键 If it is the primary key
pkKind := field.Type.Kind()
//主键的值
//The value of the primary key
pkValue := valueOf.FieldByName(field.Name).Interface()
//只处理字符串类型的主键,其他类型,columns中并不包含
//Only handle primary keys of string type, other types, not included in columns
if (pkKind == reflect.String) && (pkValue.(string) == "") {
//主键是字符串类型,并且值为"",赋值'id'
//生成主键字符串
//The primary key is a string type, and the value is "", assigned the value'id'
//Generate primary key string
id := FuncGenerateStringID()
*values = append(*values, id)
//给对象主键赋值
//Assign a value to the primary key of the object
valueOf.FieldByName(field.Name).Set(reflect.ValueOf(id))
continue
}
}
//给字段赋值
//Assign a value to the field.
*values = append(*values, valueOf.FieldByName(field.Name).Interface())
}
}
//包装sql
//Wrap sql
savesql, err := reBindSQL(dbType, insertSliceSQLBuilder.String())
return savesql, autoIncrement, err
}
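//示例(仅示意):对同一虚构实体类的3个对象,mysql下wrapInsertSliceSQL生成的批量语句形如:
//Example (illustrative sketch): for three objects of the same hypothetical entity, wrapInsertSliceSQL under mysql generates a batch statement like:
/*
INSERT INTO t_demo(id,name) VALUES (?,?),(?,?),(?,?)
// values holds all six parameters in object order
*/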
//wrapUpdateSQL 包装更新Struct语句
//数组传递:如果外部方法有调用append的逻辑,append会破坏指针引用,所以传递指针
//wrapUpdateSQL wraps the update statement for a Struct
//Arrays are passed as pointers: if a calling method appends to them, append would otherwise break the reference
func wrapUpdateSQL(dbType string, typeOf *reflect.Type, entity IEntityStruct, columns *[]reflect.StructField, values *[]interface{}, onlyUpdateNotZero bool) (string, error) {
//SQL语句的构造器
//SQL statement constructor
var sqlBuilder strings.Builder
sqlBuilder.WriteString("UPDATE ")
sqlBuilder.WriteString(entity.GetTableName())
sqlBuilder.WriteString(" SET ")
//主键的值
//The value of the primary key
var pkValue interface{}
//主键的名称
//The name of the primary key
pkFieldName, e := entityPKFieldName(entity, typeOf)
if e != nil {
return "", e
}
for i := 0; i < len(*columns); i++ {
field := (*columns)[i]
if field.Name == pkFieldName {
//如果是主键
//If it is the primary key.
pkValue = (*values)[i]
//去掉这一列,最后处理主键
//Remove this column, and finally process the primary key
*columns = append((*columns)[:i], (*columns)[i+1:]...)
*values = append((*values)[:i], (*values)[i+1:]...)
i = i - 1
continue
}
//如果是默认值字段,删除掉,不更新
//If it is the default value field, delete it and do not update
if onlyUpdateNotZero && (reflect.ValueOf((*values)[i]).IsZero()) {
//去掉这一列,不再处理
//Remove this column and no longer process
*columns = append((*columns)[:i], (*columns)[i+1:]...)
*values = append((*values)[:i], (*values)[i+1:]...)
i = i - 1
continue
}
//sqlBuilder.WriteString(getStructFieldTagColumnValue(typeOf, field.Name))
// sqlBuilder.WriteString(field.Tag.Get(tagColumnName))
colName := getFieldTagName(dbType, &field)
sqlBuilder.WriteString(colName)
sqlBuilder.WriteString("=?,")
}
//主键的值是最后一个
//The value of the primary key is the last
*values = append(*values, pkValue)
//去掉字符串最后的 ','
//Remove the',' at the end of the string
sqlstr := sqlBuilder.String()
sqlstr = sqlstr[:len(sqlstr)-1]
sqlstr = sqlstr + " WHERE " + entity.GetPKColumnName() + "=?"
return reBindSQL(dbType, sqlstr)
}
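//示例(仅示意):假设虚构实体类的列为id(主键),name,age,mysql下wrapUpdateSQL生成的语句形如下,主键值被移动到values数组的最后;onlyUpdateNotZero为true时,零值字段会被跳过:
//Example (illustrative sketch): assuming hypothetical columns id (PK), name and age, wrapUpdateSQL under mysql generates the statement below; the PK value is moved to the end of the values array, and zero-value fields are skipped when onlyUpdateNotZero is true:
/*
UPDATE t_demo SET name=?,age=? WHERE id=?
*/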
//wrapDeleteSQL 包装删除Struct语句
//wrapDeleteSQL wraps the delete statement for a Struct
func wrapDeleteSQL(dbType string, entity IEntityStruct) (string, error) {
//SQL语句的构造器
//SQL statement constructor
var sqlBuilder strings.Builder
sqlBuilder.WriteString("DELETE FROM ")
sqlBuilder.WriteString(entity.GetTableName())
sqlBuilder.WriteString(" WHERE ")
sqlBuilder.WriteString(entity.GetPKColumnName())
sqlBuilder.WriteString("=?")
sqlstr := sqlBuilder.String()
return reBindSQL(dbType, sqlstr)
}
//wrapInsertEntityMapSQL 包装保存Map语句.Map没有字段属性,无法完成Id的类型判断和赋值,需要确保Map的值是完整的
//wrapInsertEntityMapSQL wraps the insert statement for a Map. Because a Map has no field metadata,
//the Id type cannot be inferred or assigned automatically, so the Map values must be complete
func wrapInsertEntityMapSQL(dbType string, entity IEntityMap) (string, []interface{}, bool, error) {
//是否自增,默认false
autoIncrement := false
dbFieldMap := entity.GetDBFieldMap()
if len(dbFieldMap) < 1 {
return "", nil, autoIncrement, errors.New("wrapInsertEntityMapSQL-->GetDBFieldMap返回值不能为空")
}
//SQL对应的参数
//SQL corresponding parameters
values := []interface{}{}
//SQL语句的构造器
//SQL statement constructor
var sqlBuilder strings.Builder
sqlBuilder.WriteString("INSERT INTO ")
sqlBuilder.WriteString(entity.GetTableName())
sqlBuilder.WriteString("(")
//SQL语句中,VALUES(?,?,...)语句的构造器
//In the SQL statement, the constructor of the VALUES(?,?,...) statement.
var valueSQLBuilder strings.Builder
valueSQLBuilder.WriteString(" VALUES (")
//是否Set了主键
//Whether the primary key is set.
_, hasPK := dbFieldMap[entity.GetPKColumnName()]
if !hasPK { //如果没有设置主键,认为是自增或者序列 | If the primary key is not set, it is considered to be auto-increment or sequence
autoIncrement = true
if sequence, ok := entity.GetPkSequence()[dbType]; ok { //如果是序列 | If it is a sequence.
sqlBuilder.WriteString(entity.GetPKColumnName())
sqlBuilder.WriteString(",")
valueSQLBuilder.WriteString(sequence)
valueSQLBuilder.WriteString(",")
}
}
for k, v := range dbFieldMap {
//拼接字符串
//Concatenated string
sqlBuilder.WriteString(k)
sqlBuilder.WriteString(",")
valueSQLBuilder.WriteString("?,")
values = append(values, v)
}
//去掉字符串最后的 ','
//Remove the',' at the end of the string
sqlstr := sqlBuilder.String()
if len(sqlstr) > 0 {
sqlstr = sqlstr[:len(sqlstr)-1]
}
valuestr := valueSQLBuilder.String()
if len(valuestr) > 0 {
valuestr = valuestr[:len(valuestr)-1]
}
sqlstr = sqlstr + ")" + valuestr + ")"
var e error
sqlstr, e = reBindSQL(dbType, sqlstr)
if e != nil {
return "", nil, autoIncrement, e
}
return sqlstr, values, autoIncrement, nil
}
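//示例(仅示意):假设GetDBFieldMap返回 {"id":"u1","name":"zorm"} 且主键列id已设置,mysql下生成的语句形如:
//Example (illustrative sketch): assuming GetDBFieldMap returns {"id":"u1","name":"zorm"} and the PK column id is set, the generated statement under mysql looks like:
/*
INSERT INTO t_demo(id,name) VALUES (?,?)
// values == ["u1","zorm"], autoIncrement == false; map iteration order is random, so the column order may vary
*/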
//wrapUpdateEntityMapSQL 包装Map更新语句.Map没有字段属性,无法完成Id的类型判断和赋值,需要确保Map的值是完整的
//wrapUpdateEntityMapSQL wraps the update statement for a Map. Because a Map has no field metadata,
//the Id type cannot be inferred or assigned automatically, so the Map values must be complete
func wrapUpdateEntityMapSQL(dbType string, entity IEntityMap) (string, []interface{}, error) {
dbFieldMap := entity.GetDBFieldMap()
if len(dbFieldMap) < 1 {
return "", nil, errors.New("wrapUpdateEntityMapSQL-->GetDBFieldMap返回值不能为空")
}
//SQL语句的构造器
//SQL statement constructor
var sqlBuilder strings.Builder
sqlBuilder.WriteString("UPDATE ")
sqlBuilder.WriteString(entity.GetTableName())
sqlBuilder.WriteString(" SET ")
//SQL对应的参数
//SQL corresponding parameters
values := []interface{}{}
//主键的值
//The value of the primary key
var pkValue interface{}
for k, v := range dbFieldMap {
if k == entity.GetPKColumnName() { //如果是主键 | If it is the primary key
pkValue = v
continue
}
//拼接字符串 | Splicing string.
sqlBuilder.WriteString(k)
sqlBuilder.WriteString("=?,")
values = append(values, v)
}
//主键的值是最后一个
//The value of the primary key is the last
values = append(values, pkValue)
//去掉字符串最后的 ','
//Remove the',' at the end of the string
sqlstr := sqlBuilder.String()
sqlstr = sqlstr[:len(sqlstr)-1]
sqlstr = sqlstr + " WHERE " + entity.GetPKColumnName() + "=?"
var e error
sqlstr, e = reBindSQL(dbType, sqlstr)
if e != nil {
return "", nil, e
}
return sqlstr, values, nil
}
//wrapQuerySQL 封装查询语句
//wrapQuerySQL wraps the query statement
func wrapQuerySQL(dbType string, finder *Finder, page *Page) (string, error) {
//获取到没有page的sql的语句
//Get the SQL statement without page.
sqlstr, err := finder.GetSQL()
if err != nil {
return "", err
}
if page == nil {
sqlstr, err = reBindSQL(dbType, sqlstr)
} else {
sqlstr, err = wrapPageSQL(dbType, sqlstr, page)
}
if err != nil {
return "", err
}
return sqlstr, err
}
//reBindSQL 包装基础的SQL语句,根据数据库类型调整SQL占位符,例如 ?,? 或 $1,$2
//reBindSQL wraps a basic SQL statement, adjusting the placeholders for the database type, e.g. ?,? or $1,$2
func reBindSQL(dbType string, sqlstr string) (string, error) {
if dbType == "mysql" || dbType == "sqlite" || dbType == "dm" || dbType == "gbase" || dbType == "clickhouse" {
return sqlstr, nil
}
strs := strings.Split(sqlstr, "?")
if len(strs) < 2 { //没有占位符,直接返回 | no placeholders, return the SQL as-is
return sqlstr, nil
}
var sqlBuilder strings.Builder
sqlBuilder.WriteString(strs[0])
for i := 1; i < len(strs); i++ {
if dbType == "postgresql" || dbType == "kingbase" { //postgresql,kingbase
sqlBuilder.WriteString("$")
sqlBuilder.WriteString(strconv.Itoa(i))
} else if dbType == "mssql" { //mssql
sqlBuilder.WriteString("@p")
sqlBuilder.WriteString(strconv.Itoa(i))
} else if dbType == "oracle" || dbType == "shentong" { //oracle,神州通用
sqlBuilder.WriteString(":")
sqlBuilder.WriteString(strconv.Itoa(i))
} else { //其他情况,还是使用 '?' | In other cases, or use'?'
sqlBuilder.WriteString("?")
}
sqlBuilder.WriteString(strs[i])
}
return sqlBuilder.String(), nil
}
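//示例(仅示意):同一条输入语句在不同数据库下的reBindSQL输出:
//Example (illustrative sketch): reBindSQL output for the same input statement under different databases:
/*
输入 | input:                     SELECT * FROM t_demo WHERE id=? AND name=?
mysql/sqlite/dm/gbase/clickhouse: SELECT * FROM t_demo WHERE id=? AND name=?
postgresql/kingbase:              SELECT * FROM t_demo WHERE id=$1 AND name=$2
mssql:                            SELECT * FROM t_demo WHERE id=@p1 AND name=@p2
oracle/shentong:                  SELECT * FROM t_demo WHERE id=:1 AND name=:2
*/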
//reUpdateFinderSQL 根据数据库类型改写手动编写的UpdateFinder语句,用于处理数据库兼容,例如clickhouse的UPDATE和DELETE
//reUpdateFinderSQL rewrites a hand-written UpdateFinder statement according to the database type, for compatibility, e.g. clickhouse UPDATE and DELETE
func reUpdateFinderSQL(dbType string, sqlstr *string) (*string, error) {
//处理clickhouse的特殊更新语法
if dbType == "clickhouse" {
//SQL语句的构造器
//SQL statement constructor
var sqlBuilder strings.Builder
sqlBuilder.WriteString("ALTER TABLE ")
sqls := findUpdateTableName(sqlstr)
if len(sqls) >= 2 { //如果是更新语句
sqlBuilder.WriteString(sqls[1])
sqlBuilder.WriteString(" UPDATE ")
} else { //如果不是更新语句
sqls = findDeleteTableName(sqlstr)
if len(sqls) < 2 { //如果也不是删除语句
return sqlstr, nil
}
sqlBuilder.WriteString(sqls[1])
sqlBuilder.WriteString(" DELETE WHERE ")
}
//截取字符串
content := (*sqlstr)[len(sqls[0]):]
sqlBuilder.WriteString(content)
sql := sqlBuilder.String()
return &sql, nil
}
return sqlstr, nil
}
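//示例(仅示意):clickhouse下reUpdateFinderSQL的改写效果:
//Example (illustrative sketch): how reUpdateFinderSQL rewrites statements for clickhouse:
/*
UPDATE t_demo SET name=? WHERE id=?  -->  ALTER TABLE t_demo UPDATE name=? WHERE id=?
DELETE FROM t_demo WHERE id=?        -->  ALTER TABLE t_demo DELETE WHERE id=?
*/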
//查询'order by'在sql中出现的开始位置和结束位置
//Query the start and end positions of 'order by' in the SQL
var orderByExpr = "(?i)\\s(order)\\s+by\\s"
var orderByRegexp, _ = regexp.Compile(orderByExpr)
//findOrderByIndex 查询order by在sql中出现的开始位置和结束位置
//findOrderByIndex queries the start and end positions of 'order by' in the SQL
func findOrderByIndex(strsql string) []int {
loc := orderByRegexp.FindStringIndex(strsql)
return loc
}
//查询'group by'在sql中出现的开始位置和结束位置
//Query the start and end positions of 'group by' in the SQL
var groupByExpr = "(?i)\\s(group)\\s+by\\s"
var groupByRegexp, _ = regexp.Compile(groupByExpr)
//findGroupByIndex 查询group by在sql中出现的开始位置和结束位置
//findGroupByIndex queries the start and end positions of 'group by' in the SQL
func findGroupByIndex(strsql string) []int {
loc := groupByRegexp.FindStringIndex(strsql)
return loc
}
//查询 from 在sql中出现的开始位置和结束位置
//Query the start position and end position of 'from' in sql
//var fromExpr = "(?i)(^\\s*select)(.+?\\(.+?\\))*.*?(from)"
//感谢奔跑(@zeqjone)提供的正则,排除不在括号内的from,已经满足绝大部分场景
//Thanks to 奔跑(@zeqjone) for the regexp that excludes 'from' inside parentheses; it covers the vast majority of cases
//例如 select id1,(select (id2) from t1 where id=2) _s FROM table: 只有子查询的字段(如id2)本身还带括号时才会出问题,建议此类分页语句使用CountFinder处理
//e.g. select id1,(select (id2) from t1 where id=2) _s FROM table: it only fails when a subquery column (such as id2) is itself parenthesized; use CountFinder for such paging statements
var fromExpr = "(?i)(^\\s*select)(\\(.*?\\)|[^()]+)*?(from)"
var fromRegexp, _ = regexp.Compile(fromExpr)
//findSelectFromIndex 查询from在sql中出现的开始位置和结束位置
//findSelectFromIndex queries the start and end positions of 'from' in the SQL
func findSelectFromIndex(strsql string) []int {
//匹配出来的是完整的字符串,用最后的FROM即可
loc := fromRegexp.FindStringIndex(strsql)
if len(loc) < 2 {
return loc
}
//最后的FROM前推4位字符串
loc[0] = loc[1] - 4
return loc
}
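//示例(仅示意):子查询中的from会被排除,只定位最外层的FROM:
//Example (illustrative sketch): 'from' inside a subquery is excluded and only the outermost FROM is located:
/*
loc := findSelectFromIndex("select id1,(select id2 from t1 where id=2) _s FROM t_demo")
// loc 只覆盖最外层的"FROM"(4个字符) | loc covers only the outer "FROM" (4 characters)
*/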
/*
var fromExpr = `\(([\s\S]+?)\)`
var fromRegexp, _ = regexp.Compile(fromExpr)
//查询 from 在sql中出现的开始位置
//Query the start position of 'from' in sql
func findSelectFromIndex(strsql string) int {
sql := strings.ToLower(strsql)
m := fromRegexp.FindAllString(sql, -1)
for i := 0; i < len(m); i++ {
str := m[i]
strnofrom := strings.ReplaceAll(str, " from ", " zorm ")
sql = strings.ReplaceAll(sql, str, strnofrom)
}
fromIndex := strings.LastIndex(sql, " from ")
if fromIndex < 0 {
return fromIndex
}
//补上一个空格
fromIndex = fromIndex + 1
return fromIndex
}
*/
// 从更新语句中获取表名 | Get the table name from an update statement
//update\\s(.+)set\\s.*
var updateExpr = "(?i)^\\s*update\\s+(\\w+)\\s+set\\s"
var updateRegexp, _ = regexp.Compile(updateExpr)
// findUpdateTableName 获取语句中表名 | gets the table name from the statement
// 第一个元素是匹配的完整字符串,第二个是表名 | the first element is the full match, the second is the table name
func findUpdateTableName(strsql *string) []string {
matchs := updateRegexp.FindStringSubmatch(*strsql)
return matchs
}
// 从删除语句中获取表名 | Get the table name from a delete statement
//delete\\sfrom\\s(.+)where\\s(.*)
var deleteExpr = "(?i)^\\s*delete\\s+from\\s+(\\w+)\\s+where\\s"
var deleteRegexp, _ = regexp.Compile(deleteExpr)
// findDeleteTableName 获取语句中表名 | gets the table name from the statement
// 第一个元素是匹配的完整字符串,第二个是表名 | the first element is the full match, the second is the table name
func findDeleteTableName(strsql *string) []string {
matchs := deleteRegexp.FindStringSubmatch(*strsql)
return matchs
}
//converValueColumnType 根据数据库的字段类型,转化成golang的类型,不处理sql.Nullxxx类型
//converValueColumnType converts a database value to the corresponding golang type according to the column type; sql.Nullxxx types are not handled
func converValueColumnType(v interface{}, columnType *sql.ColumnType) interface{} {
if v == nil {
return nil
}
//如果是字节数组
//If it is a byte array
value, ok := v.([]byte)
if !ok { //转化失败,不是字节数组,例如:string,直接返回值
return v
}
if len(value) < 1 { //空字节数组,原样返回 | empty byte slice, return it as-is
return value
}
//获取数据库类型,自己对应golang的基础类型值,不处理sql.Nullxxx类型
//Get the database type, corresponding to the basic type value of golang, and do not process the sql.Nullxxx type.
databaseTypeName := strings.ToUpper(columnType.DatabaseTypeName())
switch databaseTypeName {
case "CHAR", "NCHAR", "VARCHAR", "NVARCHAR", "VARCHAR2", "NVARCHAR2", "TINYTEXT", "MEDIUMTEXT", "TEXT", "NTEXT", "LONGTEXT", "LONG":
return typeConvertString(v)
case "INT", "INT4", "INTEGER", "SERIAL", "TINYINT", "BIT", "SMALLINT", "SMALLSERIAL", "INT2":
return typeConvertInt(v)
case "BIGINT", "BIGSERIAL", "INT8":
return typeConvertInt64(v)
case "FLOAT", "REAL":
return typeConvertFloat32(v)
case "DOUBLE":
return typeConvertFloat64(v)
case "DECIMAL", "NUMBER", "NUMERIC", "DEC":
return typeConvertDecimal(v)
case "DATE":
return typeConvertTime(v, "2006-01-02", time.Local)
case "TIME":
return typeConvertTime(v, "15:04:05", time.Local)
case "DATETIME":
return typeConvertTime(v, "2006-01-02 15:04:05", time.Local)
case "TIMESTAMP":
return typeConvertTime(v, "2006-01-02 15:04:05.000", time.Local)
case "BOOLEAN", "BOOL":
return typeConvertBool(v)
}
//其他类型以后再写.....
//Other types will be written later...
return v
}
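//示例(仅示意):VARCHAR列返回的[]byte转为string,BIGINT转为int64,TIMESTAMP按 "2006-01-02 15:04:05.000" 在time.Local解析为time.Time
//Example (illustrative sketch): a []byte from a VARCHAR column becomes string, BIGINT becomes int64, and TIMESTAMP is parsed into time.Time with layout "2006-01-02 15:04:05.000" in time.Local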
//FuncGenerateStringID 默认生成字符串ID的函数.方便自定义扩展
//FuncGenerateStringID Function to generate string ID by default. Convenient for custom extension
var FuncGenerateStringID func() string = generateStringID
//generateStringID 生成主键字符串
//generateStringID Generate primary key string
func generateStringID() string {
// 使用 crypto/rand 真随机9位数
randNum, randErr := rand.Int(rand.Reader, big.NewInt(1000000000))
if randErr != nil {
return ""
}
//获取9位数,前置补0,确保9位数
rand9 := fmt.Sprintf("%09d", randNum)
//获取纳秒 按照 年月日时分秒毫秒微秒纳秒 拼接为长度23位的字符串
pk := time.Now().Format("2006.01.02.15.04.05.000000000")
pk = strings.ReplaceAll(pk, ".", "")
//23位字符串+9位随机数=32位字符串,这样的好处就是可以使用ID进行排序
pk = pk + rand9
return pk
}
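//示例(仅示意):生成的ID是23位时间串+9位随机数,共32位,可按创建时间排序:
//Example (illustrative sketch): a generated ID is a 23-char timestamp plus 9 random digits, 32 chars in total, sortable by creation time:
/*
id := FuncGenerateStringID()
// 例如 | e.g. "20231201123059123456789012345678"
*/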
//generateStringID 生成主键字符串
//generateStringID Generate primary key string
/*
func generateStringID() string {
//pk := strconv.FormatInt(time.Now().UnixNano(), 10)
pk, errUUID := gouuid.NewV4()
if errUUID != nil {
return ""
}
return pk.String()
}
*/
// getFieldTagName 获取模型中定义的数据库的 column tag
func getFieldTagName(dbType string, field *reflect.StructField) string {
colName := field.Tag.Get(tagColumnName)
if dbType == "kingbase" {
// kingbase R3 驱动大小写敏感,通常是大写.将全部列名换成双引号括住的大写字符串,避免列名与数据库内置关键词冲突时报错
// The kingbase R3 driver is case-sensitive (usually uppercase); wrap every column name in double quotes and uppercase it, to avoid errors when a column name conflicts with a built-in database keyword
colName = strings.ReplaceAll(colName, "\"", "")
colName = fmt.Sprintf(`"%s"`, strings.ToUpper(colName))
}
return colName
}
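//示例(仅示意):column tag为userName时,大多数数据库返回userName;kingbase返回双引号大写的"USERNAME"
//Example (illustrative sketch): for the column tag userName, most databases get userName back, while kingbase gets the double-quoted uppercase "USERNAME"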

@ -0,0 +1,762 @@
package zorm
import (
"database/sql"
"database/sql/driver"
"errors"
"fmt"
"go/ast"
"reflect"
"strings"
"sync"
)
//allowBaseTypeMap 允许基础类型查询,用于查询单个基础类型字段,例如 select id from t_user 查询返回的是字符串类型
/*
var allowBaseTypeMap = map[reflect.Kind]bool{
reflect.String: true,
reflect.Int: true,
reflect.Int8: true,
reflect.Int16: true,
reflect.Int32: true,
reflect.Int64: true,
reflect.Uint: true,
reflect.Uint8: true,
reflect.Uint16: true,
reflect.Uint32: true,
reflect.Uint64: true,
reflect.Float32: true,
reflect.Float64: true,
}
*/
const (
//tag标签的名称
tagColumnName = "column"
//输出字段 缓存的前缀
exportPrefix = "_exportStructFields_"
//私有字段 缓存的前缀
privatePrefix = "_privateStructFields_"
//数据库列名 缓存的前缀
dbColumnNamePrefix = "_dbColumnName_"
//field对应的column的tag值 缓存的前缀
//structFieldTagPrefix = "_structFieldTag_"
//数据库主键 缓存的前缀
//dbPKNamePrefix = "_dbPKName_"
)
//cacheStructFieldInfoMap 用于缓存反射的信息.使用普通map代替sync.Map以提高性能
//cacheStructFieldInfoMap caches the reflection info; a plain map replaces sync.Map for better performance
//var cacheStructFieldInfoMap *sync.Map = &sync.Map{}
var cacheStructFieldInfoMap = make(map[string]map[string]reflect.StructField)
//用于缓存field对应的column的tag值
//var cacheStructFieldTagInfoMap = make(map[string]map[string]string)
//structFieldInfo 获取并缓存StructField的信息.只对struct或者*struct判断,如果是指针,使用指针下实际的struct类型
//缓存内容包括:可输出的字段(首字母大写),私有字段(首字母小写),以及按数据库列名索引的字段
//structFieldInfo collects and caches the StructField info. Only struct or *struct is handled; for a pointer, the underlying struct type is used
//The cache holds the exported fields (capitalized), the private fields (lowercase), and the fields keyed by database column name
func structFieldInfo(typeOf *reflect.Type) error {
if typeOf == nil {
return errors.New("structFieldInfo数据为空")
}
entityName := (*typeOf).String()
//缓存的key
//所有输出的属性,包含数据库字段,key是struct属性的名称,不区分大小写
exportCacheKey := exportPrefix + entityName
//所有私有变量的属性,key是struct属性的名称,不区分大小写
privateCacheKey := privatePrefix + entityName
//所有数据库的属性,key是数据库的字段名称,不区分大小写
dbColumnCacheKey := dbColumnNamePrefix + entityName
//structFieldTagCacheKey := structFieldTagPrefix + entityName
//dbPKNameCacheKey := dbPKNamePrefix + entityName
//缓存的数据库主键值
//_, exportOk := cacheStructFieldInfoMap.Load(exportCacheKey)
_, exportOk := cacheStructFieldInfoMap[exportCacheKey]
//如果存在值,认为缓存中有所有的信息,不再处理
if exportOk {
return nil
}
//获取字段长度
fieldNum := (*typeOf).NumField()
//如果没有字段
if fieldNum < 1 {
return errors.New("structFieldInfo-->NumField entity没有属性")
}
// 声明所有字段的载体
var allFieldMap *sync.Map = &sync.Map{}
anonymous := make([]reflect.StructField, 0)
//遍历所有字段,记录匿名属性
for i := 0; i < fieldNum; i++ {
field := (*typeOf).Field(i)
if _, ok := allFieldMap.Load(field.Name); !ok {
allFieldMap.Store(field.Name, field)
}
if field.Anonymous { //如果是匿名的
anonymous = append(anonymous, field)
}
}
//调用匿名struct的递归方法
recursiveAnonymousStruct(allFieldMap, anonymous)
//缓存的数据
exportStructFieldMap := make(map[string]reflect.StructField)
privateStructFieldMap := make(map[string]reflect.StructField)
dbColumnFieldMap := make(map[string]reflect.StructField)
//structFieldTagMap := make(map[string]string)
//遍历sync.Map,要求输入一个func作为参数
//这个函数的入参、出参的类型都已经固定,不能修改
//可以在函数体内编写自己的代码,调用map中的k,v
f := func(k, v interface{}) bool {
// fmt.Println(k, ":", v)
field := v.(reflect.StructField)
fieldName := field.Name
if ast.IsExported(fieldName) { //如果是可以输出的,不区分大小写
exportStructFieldMap[strings.ToLower(fieldName)] = field
//如果是数据库字段
tagColumnValue := field.Tag.Get(tagColumnName)
if len(tagColumnValue) > 0 {
//dbColumnFieldMap[tagColumnValue] = field
//使用数据库字段的小写,处理oracle和达梦数据库的sql返回值大写
dbColumnFieldMap[strings.ToLower(tagColumnValue)] = field
//structFieldTagMap[fieldName] = tagColumnValue
}
} else { //私有属性
privateStructFieldMap[strings.ToLower(fieldName)] = field
}
return true
}
allFieldMap.Range(f)
//加入缓存
//cacheStructFieldInfoMap.Store(exportCacheKey, exportStructFieldMap)
//cacheStructFieldInfoMap.Store(privateCacheKey, privateStructFieldMap)
//cacheStructFieldInfoMap.Store(dbColumnCacheKey, dbColumnFieldMap)
cacheStructFieldInfoMap[exportCacheKey] = exportStructFieldMap
cacheStructFieldInfoMap[privateCacheKey] = privateStructFieldMap
cacheStructFieldInfoMap[dbColumnCacheKey] = dbColumnFieldMap
//cacheStructFieldTagInfoMap[structFieldTagCacheKey] = structFieldTagMap
return nil
}
//recursiveAnonymousStruct 递归处理struct的匿名属性,就近覆盖属性
//recursiveAnonymousStruct recursively processes the anonymous fields of a struct; the nearest field wins
func recursiveAnonymousStruct(allFieldMap *sync.Map, anonymous []reflect.StructField) {
for i := 0; i < len(anonymous); i++ {
field := anonymous[i]
typeOf := field.Type
if typeOf.Kind() == reflect.Ptr {
//获取指针下的Struct类型
typeOf = typeOf.Elem()
}
//只处理Struct类型
if typeOf.Kind() != reflect.Struct {
continue
}
//获取字段长度
fieldNum := typeOf.NumField()
//如果没有字段
if fieldNum < 1 {
continue
}
// 匿名struct里自身又有匿名struct
anonymousField := make([]reflect.StructField, 0)
//遍历所有字段
for i := 0; i < fieldNum; i++ {
field := typeOf.Field(i)
if _, ok := allFieldMap.Load(field.Name); ok { //如果存在属性名
continue
} else { //不存在属性名,加入到allFieldMap
allFieldMap.Store(field.Name, field)
}
if field.Anonymous { //匿名struct里自身又有匿名struct
anonymousField = append(anonymousField, field)
}
}
//递归调用匿名struct
recursiveAnonymousStruct(allFieldMap, anonymousField)
}
}
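//示例(仅示意):匿名struct的属性会被递归合并,外层同名属性优先("就近覆盖"):
//Example (illustrative sketch): anonymous struct fields are merged recursively, and an outer field with the same name wins ("nearest overrides"):
/*
type baseEntity struct {
Id string `column:"id"`
}
type demoEntity struct {
baseEntity //匿名属性,其Id映射被合并 | anonymous field, its Id mapping is merged in
Name string `column:"name"`
}
// demoEntity解析出列id和name | demoEntity resolves to the columns id and name
*/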
//setFieldValueByColumnName 根据数据库的字段名,找到struct映射的字段,并赋值
//setFieldValueByColumnName finds the struct field mapped to the database column name and assigns the value
func setFieldValueByColumnName(entity interface{}, columnName string, value interface{}) error {
//先从本地缓存中查找
typeOf := reflect.TypeOf(entity)
valueOf := reflect.ValueOf(entity)
if typeOf.Kind() == reflect.Ptr { //如果是指针
typeOf = typeOf.Elem()
valueOf = valueOf.Elem()
}
dbMap, err := getDBColumnFieldMap(&typeOf)
if err != nil {
return err
}
f, ok := dbMap[strings.ToLower(columnName)]
if ok { //给主键赋值
valueOf.FieldByName(f.Name).Set(reflect.ValueOf(value))
}
return nil
}
//structFieldValue 获取指定字段的值
//structFieldValue gets the value of the specified field
func structFieldValue(s interface{}, fieldName string) (interface{}, error) {
if s == nil || len(fieldName) < 1 {
return nil, errors.New("structFieldValue数据为空")
}
//entity的s类型
valueOf := reflect.ValueOf(s)
kind := valueOf.Kind()
if !(kind == reflect.Ptr || kind == reflect.Struct) {
return nil, errors.New("structFieldValue必须是Struct或者*Struct类型")
}
if kind == reflect.Ptr {
//获取指针下的Struct类型
valueOf = valueOf.Elem()
if valueOf.Kind() != reflect.Struct {
return nil, errors.New("structFieldValue必须是Struct或者*Struct类型")
}
}
//FieldByName方法返回的是reflect.Value类型,调用Interface()方法,返回原始类型的数据值
value := valueOf.FieldByName(fieldName).Interface()
return value, nil
}
/*
//deepCopy 深度拷贝对象.golang没有构造函数,反射复制对象时,对象中struct类型的属性无法初始化,指针属性也会收到影响.使用深度对象拷贝
func deepCopy(dst, src interface{}) error {
var buf bytes.Buffer
if err := gob.NewEncoder(&buf).Encode(src); err != nil {
return err
}
return gob.NewDecoder(bytes.NewBuffer(buf.Bytes())).Decode(dst)
}
*/
//getDBColumnExportFieldMap 获取实体类的数据库字段,key是数据库的字段名称.同时返回所有输出字段属性的map,key是实体类的属性.不区分大小写
//getDBColumnExportFieldMap gets the entity's database fields keyed by column name, plus the map of all exported fields keyed by attribute name. Case-insensitive
func getDBColumnExportFieldMap(typeOf *reflect.Type) (map[string]reflect.StructField, map[string]reflect.StructField, error) {
dbColumnFieldMap, err := getCacheStructFieldInfoMap(typeOf, dbColumnNamePrefix)
if err != nil {
return nil, nil, err
}
exportFieldMap, err := getCacheStructFieldInfoMap(typeOf, exportPrefix)
return dbColumnFieldMap, exportFieldMap, err
}
//getDBColumnFieldMap 获取实体类的数据库字段,key是数据库的字段名称.不区分大小写
//getDBColumnFieldMap gets the entity's database fields, keyed by the database column name. Case-insensitive
func getDBColumnFieldMap(typeOf *reflect.Type) (map[string]reflect.StructField, error) {
return getCacheStructFieldInfoMap(typeOf, dbColumnNamePrefix)
}
//getCacheStructFieldInfoMap 根据类型和key前缀,获取缓存的字段信息
//getCacheStructFieldInfoMap gets the cached field info by type and key prefix
func getCacheStructFieldInfoMap(typeOf *reflect.Type, keyPrefix string) (map[string]reflect.StructField, error) {
if typeOf == nil {
return nil, errors.New("getCacheStructFieldInfoMap-->typeOf不能为空")
}
key := keyPrefix + (*typeOf).String()
//dbColumnFieldMap, dbOk := cacheStructFieldInfoMap.Load(key)
dbColumnFieldMap, dbOk := cacheStructFieldInfoMap[key]
if !dbOk { //缓存不存在
//获取实体类的输出字段和私有 字段
err := structFieldInfo(typeOf)
if err != nil {
return nil, err
}
//dbColumnFieldMap, dbOk = cacheStructFieldInfoMap.Load(key)
dbColumnFieldMap, dbOk = cacheStructFieldInfoMap[key]
if !dbOk {
return dbColumnFieldMap, errors.New("getCacheStructFieldInfoMap()-->获取数据库字段dbColumnFieldMap异常")
}
}
/*
dbMap, efOK := dbColumnFieldMap.(map[string]reflect.StructField)
if !efOK {
return nil, errors.New("缓存数据库字段异常")
}
return dbMap, nil
*/
return dbColumnFieldMap, nil
}
/*
//获取 fileName 属性 中 tag column的值
func getStructFieldTagColumnValue(typeOf reflect.Type, fieldName string) string {
entityName := typeOf.String()
structFieldTagMap, dbOk := cacheStructFieldTagInfoMap[structFieldTagPrefix+entityName]
if !dbOk { //缓存不存在
//获取实体类的输出字段和私有 字段
err := structFieldInfo(typeOf)
if err != nil {
return ""
}
structFieldTagMap, dbOk = cacheStructFieldTagInfoMap[structFieldTagPrefix+entityName]
}
return structFieldTagMap[fieldName]
}
*/
//columnAndValue 根据保存的对象,返回实体类型,需要处理的数据库字段,以及字段的值
//columnAndValue returns, for the given entity, the struct type, the database fields to process, and their values
func columnAndValue(entity interface{}) (reflect.Type, []reflect.StructField, []interface{}, error) {
typeOf, checkerr := checkEntityKind(entity)
if checkerr != nil {
return typeOf, nil, nil, checkerr
}
// 获取实体类的反射,指针下的struct
valueOf := reflect.ValueOf(entity).Elem()
//reflect.Indirect
//先从本地缓存中查找
//typeOf := reflect.TypeOf(entity).Elem()
dbMap, err := getDBColumnFieldMap(&typeOf)
if err != nil {
return typeOf, nil, nil, err
}
//实体类公开字段的长度
fLen := len(dbMap)
//接收列的数组,这里是做一个副本,避免外部更改掉原始的列信息
columns := make([]reflect.StructField, 0, fLen)
//接收值的数组
values := make([]interface{}, 0, fLen)
//遍历所有数据库属性
for _, field := range dbMap {
//获取字段类型的Kind
// fieldKind := field.Type.Kind()
//if !allowTypeMap[fieldKind] { //不允许的类型
// continue
//}
columns = append(columns, field)
//FieldByName方法返回的是reflect.Value类型,调用Interface()方法,返回原始类型的数据值.字段不会重名,不使用FieldByIndex()函数
value := valueOf.FieldByName(field.Name).Interface()
/*
if value != nil { //如果不是nil
timeValue, ok := value.(time.Time)
if ok && timeValue.IsZero() { //如果是日期零时,需要设置一个初始值1970-01-01 00:00:01,兼容数据库
value = defaultZeroTime
}
}
*/
//添加到记录值的数组
values = append(values, value)
}
//缓存数据库的列
return typeOf, columns, values, nil
}
//entityPKFieldName 获取实体类主键属性名称
//entityPKFieldName gets the name of the entity's primary key attribute
func entityPKFieldName(entity IEntityStruct, typeOf *reflect.Type) (string, error) {
//检查是否是指针对象
//typeOf, checkerr := checkEntityKind(entity)
//if checkerr != nil {
// return "", checkerr
//}
//缓存的key,TypeOf和ValueOf的String()方法,返回值不一样
//typeOf := reflect.TypeOf(entity).Elem()
dbMap, err := getDBColumnFieldMap(typeOf)
if err != nil {
return "", err
}
field := dbMap[strings.ToLower(entity.GetPKColumnName())]
return field.Name, nil
}
//checkEntityKind 检查entity类型,必须是*struct类型或者基础类型的指针
//checkEntityKind checks that the entity is a *struct or a pointer to a basic type
func checkEntityKind(entity interface{}) (reflect.Type, error) {
if entity == nil {
return nil, errors.New("checkEntityKind参数不能为空,必须是*struct类型或者基础类型的指针")
}
typeOf := reflect.TypeOf(entity)
if typeOf.Kind() != reflect.Ptr { //如果不是指针
return nil, errors.New("checkEntityKind必须是*struct类型或者基础类型的指针")
}
typeOf = typeOf.Elem()
//if !(typeOf.Kind() == reflect.Struct || allowBaseTypeMap[typeOf.Kind()]) { //如果不是指针
// return nil, errors.New("checkEntityKind必须是*struct类型或者基础类型的指针")
//}
return typeOf, nil
}
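//示例(仅示意):
//Example (illustrative sketch):
/*
typeOf, err := checkEntityKind(&demoEntity{}) //返回demoEntity的struct类型 | returns the demoEntity struct type
typeOf, err = checkEntityKind(demoEntity{})   //返回错误,必须传指针 | returns an error, a pointer is required
*/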
// sqlRowsValues 包装接收sqlRows的Values数组,通过反射屏蔽数据库的null值
// sqlRowsValues wraps the Values array that receives sql.Rows, using reflection to shield database NULL values
// fix: converting NULL to int is unsupported
// 当读取数据库的值为NULL时,由于基本类型不支持为NULL,通过反射将未知driver.Value改为interface{},不再映射到struct实体类
// When a database value is NULL, since basic types cannot be NULL, reflection receives the unknown driver.Value as interface{} instead of mapping it onto the struct
// 感谢@fastabler提交的pr | Thanks to @fastabler for the pr
func sqlRowsValues(rows *sql.Rows, driverValue *reflect.Value, columnTypes []*sql.ColumnType, dbColumnFieldMap map[string]reflect.StructField, exportFieldMap map[string]reflect.StructField, valueOf *reflect.Value, finder *Finder, cdvMapHasBool bool) error {
//声明载体数组,用于存放struct的属性指针
//Declare a carrier array to store the attribute pointer of the struct
values := make([]interface{}, len(columnTypes))
//记录需要类型转换的字段信息
var fieldTempDriverValueMap map[reflect.Value]*driverValueInfo
if cdvMapHasBool {
fieldTempDriverValueMap = make(map[reflect.Value]*driverValueInfo)
}
//反射获取 []driver.Value的值
//driverValue := reflect.Indirect(reflect.ValueOf(rows))
//driverValue = driverValue.FieldByName("lastcols")
//遍历数据库的列名
//Traverse the database column names
for i, columnType := range columnTypes {
columnName := strings.ToLower(columnType.Name())
//从缓存中获取列名的field字段
//Get the field field of the column name from the cache
field, fok := dbColumnFieldMap[columnName]
//如果列名不存在,就尝试去获取实体类属性,兼容处理临时字段;如果还找不到,就初始化为空值
//If the column name does not exist, try the entity attributes to support temporary fields; if still not found, initialize an empty value
if !fok {
field, fok = exportFieldMap[columnName]
if !fok {
//尝试驼峰
cname := strings.ReplaceAll(columnName, "_", "")
field, fok = exportFieldMap[cname]
if !fok {
values[i] = new(interface{})
continue
}
}
}
dv := driverValue.Index(i)
if dv.IsValid() && dv.InterfaceData()[0] == 0 { // 该字段的数据库值是null,取默认值
values[i] = new(interface{})
} else {
//字段的反射值
fieldValue := valueOf.FieldByName(field.Name)
//根据接收的类型,获取到类型转换的接口实现
var converFunc CustomDriverValueConver
var converOK = false
if cdvMapHasBool {
converFunc, converOK = CustomDriverValueMap[dv.Elem().Type().String()]
}
//类型转换的临时值
var tempDriverValue driver.Value
var errGetDriverValue error
//如果需要类型转换
if converOK {
//获取需要转换的临时值
typeOf := fieldValue.Type()
tempDriverValue, errGetDriverValue = converFunc.GetDriverValue(columnType, &typeOf, finder)
if errGetDriverValue != nil {
errGetDriverValue = fmt.Errorf("sqlRowsValues-->conver.GetDriverValue异常:%w", errGetDriverValue)
FuncLogError(errGetDriverValue)
return errGetDriverValue
}
//如果需要类型转换
if tempDriverValue != nil {
values[i] = tempDriverValue
dvinfo := driverValueInfo{}
dvinfo.converFunc = converFunc
dvinfo.columnType = columnType
dvinfo.tempDriverValue = tempDriverValue
fieldTempDriverValueMap[fieldValue] = &dvinfo
continue
}
}
//不需要类型转换,正常逻辑
//获取struct的属性值的指针地址,字段不会重名,不使用FieldByIndex()函数
//Get the pointer address of the attribute value of the struct,the field will not have the same name, and the Field By Index() function is not used
value := fieldValue.Addr().Interface()
//把指针地址放到数组
//Put the pointer address into the array
values[i] = value
}
}
//scan赋值.是一个指针数组,已经根据struct的属性类型初始化了,sql驱动能感知到参数类型,所以可以直接赋值给struct的指针.这样struct的属性就有值了
//Scan assignment. It is an array of pointers that has been initialized according to the attribute type of the struct.The sql driver can perceive the parameter type,so it can be directly assigned to the pointer of the struct. In this way, the attributes of the struct have values
scanerr := rows.Scan(values...)
if scanerr != nil {
return scanerr
}
//循环需要替换的值
for fieldValue, driverValueInfo := range fieldTempDriverValueMap {
//根据列名,字段类型,新值 返回符合接收类型值的指针,返回值是个指针,指针,指针!!!!
typeOf := fieldValue.Type()
rightValue, errConverDriverValue := driverValueInfo.converFunc.ConverDriverValue(driverValueInfo.columnType, &typeOf, driverValueInfo.tempDriverValue, finder)
if errConverDriverValue != nil {
errConverDriverValue = fmt.Errorf("sqlRowsValues-->conver.ConverDriverValue异常:%w", errConverDriverValue)
FuncLogError(errConverDriverValue)
return errConverDriverValue
}
//给字段赋值
fieldValue.Set(reflect.ValueOf(rightValue).Elem())
}
return scanerr
}
/*
// sqlRowsValuesFast 包装接收sqlRows的Values数组,快速模式,数据库表不能有null值
// Deprecated: 暂时不用
func sqlRowsValuesFast(rows *sql.Rows, columns []string, dbColumnFieldMap map[string]reflect.StructField, valueOf reflect.Value) error {
//声明载体数组,用于存放struct的属性指针
values := make([]interface{}, len(columns))
//遍历数据库的列名
for i, column := range columns {
//从缓存中获取列名的field字段
field, fok := dbColumnFieldMap[column]
if !fok { //如果列名不存在,就初始化一个空值
values[i] = new(interface{})
continue
}
//获取struct的属性值的指针地址,字段不会重名,不使用FieldByIndex()函数
value := valueOf.FieldByName(field.Name).Addr().Interface()
//把指针地址放到数组
values[i] = value
}
//scan赋值.是一个指针数组,已经根据struct的属性类型初始化了,sql驱动能感知到参数类型,所以可以直接赋值给struct的指针.这样struct的属性就有值了
scanerr := rows.Scan(values...)
if scanerr != nil {
scanerr = fmt.Errorf("rows.Scan异常:%w", scanerr)
FuncLogError(scanerr)
return scanerr
}
return nil
}
// sqlRowsValues 包装接收sqlRows的Values数组
// 基础类型使用sql.Nullxxx替换,放到sqlNullMap[field.name]*sql.Nullxxx,用于接受数据库的值,用于处理数据库为null的情况,然后再重新替换回去
// Deprecated: 暂时不用
func sqlRowsValues2(rows *sql.Rows, columns []string, dbColumnFieldMap map[string]reflect.StructField, valueOf reflect.Value) error {
//声明载体数组,用于存放struct的属性指针
values := make([]interface{}, len(columns))
//基础类型使用sql.Nullxxx替换,放到sqlNullMap[field.name]*sql.Nullxxx,用于接受数据库的值,用于处理数据库为null的情况,然后再重新替换回去
sqlNullMap := make(map[string]interface{})
//遍历数据库的列名
for i, column := range columns {
//从缓存中获取列名的field字段
field, fok := dbColumnFieldMap[column]
if !fok { //如果列名不存在,就初始化一个空值
values[i] = new(interface{})
continue
}
//values 中的值
var value interface{}
//struct中的字段属性名称
//fieldName := field.Name
//fmt.Println(fieldName, "----", field.Type, "--------", field.Type.Kind(), "++++", field.Type.String())
ftypeString := field.Type.String()
fkind := field.Type.Kind()
//判断字段类型
switch fkind {
case reflect.String:
value = &sql.NullString{}
sqlNullMap[column] = value
case reflect.Int8, reflect.Int16, reflect.Int, reflect.Int32:
value = &sql.NullInt32{}
sqlNullMap[column] = value
case reflect.Int64:
value = &sql.NullInt64{}
sqlNullMap[column] = value
case reflect.Float32, reflect.Float64:
value = &sql.NullFloat64{}
sqlNullMap[column] = value
case reflect.Bool:
value = &sql.NullBool{}
sqlNullMap[column] = value
case reflect.Struct:
if ftypeString == "time.Time" {
value = &sql.NullTime{}
sqlNullMap[column] = value
} else if ftypeString == "decimal.Decimal" {
value = &decimal.NullDecimal{}
sqlNullMap[column] = value
} else {
//获取struct的属性值的指针地址,字段不会重名,不使用FieldByIndex()函数
value = valueOf.FieldByName(field.Name).Addr().Interface()
}
sqlNullMap[column] = value
default:
//获取struct的属性值的指针地址,字段不会重名,不使用FieldByIndex()函数
value = valueOf.FieldByName(field.Name).Addr().Interface()
}
//把指针地址放到数组
values[i] = value
}
//scan赋值.是一个指针数组,已经根据struct的属性类型初始化了,sql驱动能感知到参数类型,所以可以直接赋值给struct的指针.这样struct的属性就有值了
scanerr := rows.Scan(values...)
if scanerr != nil {
return scanerr
}
//循环需要处理的字段
for column, value := range sqlNullMap {
//从缓存中获取列名的field字段
field, fok := dbColumnFieldMap[column]
if !fok { //如果列名不存在,就初始化一个空值
continue
}
ftypeString := field.Type.String()
fkind := field.Type.Kind()
fieldName := field.Name
//判断字段类型
switch fkind {
case reflect.String:
vptr, ok := value.(*sql.NullString)
if vptr.Valid && ok {
valueOf.FieldByName(fieldName).SetString(vptr.String)
}
case reflect.Int8, reflect.Int16, reflect.Int, reflect.Int32:
vptr, ok := value.(*sql.NullInt32)
if vptr.Valid && ok {
valueOf.FieldByName(fieldName).Set(reflect.ValueOf(int(vptr.Int32)))
}
case reflect.Int64:
vptr, ok := value.(*sql.NullInt64)
if vptr.Valid && ok {
valueOf.FieldByName(fieldName).SetInt(vptr.Int64)
}
case reflect.Float32:
vptr, ok := value.(*sql.NullFloat64)
if vptr.Valid && ok {
valueOf.FieldByName(fieldName).Set(reflect.ValueOf(float32(vptr.Float64)))
}
case reflect.Float64:
vptr, ok := value.(*sql.NullFloat64)
if vptr.Valid && ok {
valueOf.FieldByName(fieldName).SetFloat(vptr.Float64)
}
case reflect.Bool:
vptr, ok := value.(*sql.NullBool)
if vptr.Valid && ok {
valueOf.FieldByName(fieldName).SetBool(vptr.Bool)
}
case reflect.Struct:
if ftypeString == "time.Time" {
vptr, ok := value.(*sql.NullTime)
if vptr.Valid && ok {
valueOf.FieldByName(fieldName).Set(reflect.ValueOf(vptr.Time))
}
} else if ftypeString == "decimal.Decimal" {
vptr, ok := value.(*decimal.NullDecimal)
if vptr.Valid && ok {
valueOf.FieldByName(fieldName).Set(reflect.ValueOf(vptr.Decimal))
}
}
}
}
return scanerr
}
*/
//CustomDriverValueMap 用于配置driver.Value和对应的处理关系,key是driver.Value的字符串,例如 *dm.DmClob
//一般是放到init方法里进行添加
var CustomDriverValueMap = make(map[string]CustomDriverValueConver)
//CustomDriverValueConver 自定义类型转化接口,用于解决 类似达梦 text --> dm.DmClob --> string类型接收的问题
type CustomDriverValueConver interface {
//GetDriverValue 根据数据库列类型,实体类属性类型,Finder对象,返回driver.Value的实例
//如果无法获取到structFieldType,例如Map查询,会传入nil
//如果返回值为nil,接口扩展逻辑无效,使用原生的方式接收数据库字段值
GetDriverValue(columnType *sql.ColumnType, structFieldType *reflect.Type, finder *Finder) (driver.Value, error)
//ConverDriverValue 数据库列类型,实体类属性类型,GetDriverValue返回的driver.Value的临时接收值,Finder对象
//如果无法获取到structFieldType,例如Map查询,会传入nil
//返回符合接收类型值的指针,指针,指针!!!!
ConverDriverValue(columnType *sql.ColumnType, structFieldType *reflect.Type, tempDriverValue driver.Value, finder *Finder) (interface{}, error)
}
type driverValueInfo struct {
converFunc CustomDriverValueConver
columnType *sql.ColumnType
tempDriverValue interface{}
}
/**
//实现CustomDriverValueConver接口,扩展自定义类型,例如 达梦数据库text类型,映射出来的是dm.DmClob类型,无法使用string类型直接接收
type CustomDMText struct{}
//GetDriverValue 根据数据库列类型,实体类属性类型,Finder对象,返回driver.Value的实例
//如果无法获取到structFieldType,例如Map查询,会传入nil
//如果返回值为nil,接口扩展逻辑无效,使用原生的方式接收数据库字段值
func (dmtext CustomDMText) GetDriverValue(columnType *sql.ColumnType, structFieldType reflect.Type, finder *zorm.Finder) (driver.Value, error) {
return &dm.DmClob{}, nil
}
//ConverDriverValue 数据库列类型,实体类属性类型,GetDriverValue返回的driver.Value的临时接收值,Finder对象
//如果无法获取到structFieldType,例如Map查询,会传入nil
//返回符合接收类型值的指针,指针,指针!!!!
func (dmtext CustomDMText) ConverDriverValue(columnType *sql.ColumnType, structFieldType reflect.Type, tempDriverValue driver.Value, finder *zorm.Finder) (interface{}, error) {
//类型转换
dmClob, isok := tempDriverValue.(*dm.DmClob)
if !isok {
return tempDriverValue, errors.New("转换至*dm.DmClob类型失败")
}
//获取长度
dmlen, errLength := dmClob.GetLength()
if errLength != nil {
return dmClob, errLength
}
//int64转成int类型
strInt64 := strconv.FormatInt(dmlen, 10)
dmlenInt, errAtoi := strconv.Atoi(strInt64)
if errAtoi != nil {
return dmClob, errAtoi
}
//读取字符串
str, errReadString := dmClob.ReadString(1, dmlenInt)
return &str, errReadString
}
//CustomDriverValueMap 用于配置driver.Value和对应的处理关系,key是driver.Value的字符串,例如 *dm.DmClob
//一般是放到init方法里进行添加
zorm.CustomDriverValueMap["*dm.DmClob"] = CustomDMText{}
**/

@ -0,0 +1,471 @@
package zorm
import (
"fmt"
"strconv"
"time"
"gitee.com/chunanyong/zorm/decimal"
)
func typeConvertFloat32(i interface{}) float32 {
if i == nil {
return 0
}
if v, ok := i.(float32); ok {
return v
}
v, _ := strconv.ParseFloat(typeConvertString(i), 32)
return float32(v)
}
func typeConvertFloat64(i interface{}) float64 {
if i == nil {
return 0
}
if v, ok := i.(float64); ok {
return v
}
v, _ := strconv.ParseFloat(typeConvertString(i), 64)
return v
}
func typeConvertDecimal(i interface{}) decimal.Decimal {
if i == nil {
return decimal.Zero
}
if v, ok := i.(decimal.Decimal); ok {
return v
}
v, _ := decimal.NewFromString(typeConvertString(i))
return v
}
func typeConvertInt64toInt(from int64) (int, error) {
//int64 转 int
strInt64 := strconv.FormatInt(from, 10)
to, err := strconv.Atoi(strInt64)
if err != nil {
return -1, err
}
return to, err
}
func typeConvertTime(i interface{}, format string, TZLocation ...*time.Location) time.Time {
s := typeConvertString(i)
t, _ := typeConvertStrToTime(s, format, TZLocation...)
return t
}
func typeConvertStrToTime(str string, format string, TZLocation ...*time.Location) (time.Time, error) {
if len(TZLocation) > 0 {
t, err := time.ParseInLocation(format, str, TZLocation[0])
if err == nil {
return t, nil
}
return time.Time{}, err
}
t, err := time.ParseInLocation(format, str, time.Local)
if err == nil {
return t, nil
}
return time.Time{}, err
}
func typeConvertInt64(i interface{}) int64 {
if i == nil {
return 0
}
if v, ok := i.(int64); ok {
return v
}
return int64(typeConvertInt(i))
}
func typeConvertString(i interface{}) string {
if i == nil {
return ""
}
switch value := i.(type) {
case int:
return strconv.Itoa(value)
case int8:
return strconv.Itoa(int(value))
case int16:
return strconv.Itoa(int(value))
case int32:
return strconv.Itoa(int(value))
case int64:
return strconv.FormatInt(value, 10)
case uint:
return strconv.FormatUint(uint64(value), 10)
case uint8:
return strconv.FormatUint(uint64(value), 10)
case uint16:
return strconv.FormatUint(uint64(value), 10)
case uint32:
return strconv.FormatUint(uint64(value), 10)
case uint64:
return strconv.FormatUint(uint64(value), 10)
case float32:
return strconv.FormatFloat(float64(value), 'f', -1, 32)
case float64:
return strconv.FormatFloat(value, 'f', -1, 64)
case bool:
return strconv.FormatBool(value)
case string:
return value
case []byte:
return string(value)
default:
return fmt.Sprintf("%v", value)
}
}
//false: "", 0, false, off
func typeConvertBool(i interface{}) bool {
if i == nil {
return false
}
if v, ok := i.(bool); ok {
return v
}
if s := typeConvertString(i); s != "" && s != "0" && s != "false" && s != "off" {
return true
}
return false
}
func typeConvertInt(i interface{}) int {
if i == nil {
return 0
}
switch value := i.(type) {
case int:
return value
case int8:
return int(value)
case int16:
return int(value)
case int32:
return int(value)
case int64:
return int(value)
case uint:
return int(value)
case uint8:
return int(value)
case uint16:
return int(value)
case uint32:
return int(value)
case uint64:
return int(value)
case float32:
return int(value)
case float64:
return int(value)
case bool:
if value {
return 1
}
return 0
default:
v, _ := strconv.Atoi(typeConvertString(value))
return v
}
}
/*
func encodeString(s string) []byte {
return []byte(s)
}
func decodeToString(b []byte) string {
return string(b)
}
func encodeBool(b bool) []byte {
if b {
return []byte{1}
}
return []byte{0}
}
func encodeInt(i int) []byte {
if i <= math.MaxInt8 {
return encodeInt8(int8(i))
} else if i <= math.MaxInt16 {
return encodeInt16(int16(i))
} else if i <= math.MaxInt32 {
return encodeInt32(int32(i))
} else {
return encodeInt64(int64(i))
}
}
func encodeUint(i uint) []byte {
if i <= math.MaxUint8 {
return encodeUint8(uint8(i))
} else if i <= math.MaxUint16 {
return encodeUint16(uint16(i))
} else if i <= math.MaxUint32 {
return encodeUint32(uint32(i))
} else {
return encodeUint64(uint64(i))
}
}
func encodeInt8(i int8) []byte {
return []byte{byte(i)}
}
func encodeUint8(i uint8) []byte {
return []byte{byte(i)}
}
func encodeInt16(i int16) []byte {
bytes := make([]byte, 2)
binary.LittleEndian.PutUint16(bytes, uint16(i))
return bytes
}
func encodeUint16(i uint16) []byte {
bytes := make([]byte, 2)
binary.LittleEndian.PutUint16(bytes, i)
return bytes
}
func encodeInt32(i int32) []byte {
bytes := make([]byte, 4)
binary.LittleEndian.PutUint32(bytes, uint32(i))
return bytes
}
func encodeUint32(i uint32) []byte {
bytes := make([]byte, 4)
binary.LittleEndian.PutUint32(bytes, i)
return bytes
}
func encodeInt64(i int64) []byte {
bytes := make([]byte, 8)
binary.LittleEndian.PutUint64(bytes, uint64(i))
return bytes
}
func encodeUint64(i uint64) []byte {
bytes := make([]byte, 8)
binary.LittleEndian.PutUint64(bytes, i)
return bytes
}
func encodeFloat32(f float32) []byte {
bits := math.Float32bits(f)
bytes := make([]byte, 4)
binary.LittleEndian.PutUint32(bytes, bits)
return bytes
}
func encodeFloat64(f float64) []byte {
bits := math.Float64bits(f)
bytes := make([]byte, 8)
binary.LittleEndian.PutUint64(bytes, bits)
return bytes
}
func encode(vs ...interface{}) []byte {
buf := new(bytes.Buffer)
for i := 0; i < len(vs); i++ {
switch value := vs[i].(type) {
case int:
buf.Write(encodeInt(value))
case int8:
buf.Write(encodeInt8(value))
case int16:
buf.Write(encodeInt16(value))
case int32:
buf.Write(encodeInt32(value))
case int64:
buf.Write(encodeInt64(value))
case uint:
buf.Write(encodeUint(value))
case uint8:
buf.Write(encodeUint8(value))
case uint16:
buf.Write(encodeUint16(value))
case uint32:
buf.Write(encodeUint32(value))
case uint64:
buf.Write(encodeUint64(value))
case bool:
buf.Write(encodeBool(value))
case string:
buf.Write(encodeString(value))
case []byte:
buf.Write(value)
case float32:
buf.Write(encodeFloat32(value))
case float64:
buf.Write(encodeFloat64(value))
default:
if err := binary.Write(buf, binary.LittleEndian, value); err != nil {
buf.Write(encodeString(fmt.Sprintf("%v", value)))
}
}
}
return buf.Bytes()
}
func isNumeric(s string) bool {
for i := 0; i < len(s); i++ {
if s[i] < byte('0') || s[i] > byte('9') {
return false
}
}
return true
}
func typeConvertTimeDuration(i interface{}) time.Duration {
return time.Duration(typeConvertInt64(i))
}
func typeConvertBytes(i interface{}) []byte {
if i == nil {
return nil
}
if r, ok := i.([]byte); ok {
return r
}
return encode(i)
}
func typeConvertStrings(i interface{}) []string {
if i == nil {
return nil
}
if r, ok := i.([]string); ok {
return r
} else if r, ok := i.([]interface{}); ok {
strs := make([]string, len(r))
for k, v := range r {
strs[k] = typeConvertString(v)
}
return strs
}
return []string{fmt.Sprintf("%v", i)}
}
func typeConvertInt8(i interface{}) int8 {
if i == nil {
return 0
}
if v, ok := i.(int8); ok {
return v
}
return int8(typeConvertInt(i))
}
func typeConvertInt16(i interface{}) int16 {
if i == nil {
return 0
}
if v, ok := i.(int16); ok {
return v
}
return int16(typeConvertInt(i))
}
func typeConvertInt32(i interface{}) int32 {
if i == nil {
return 0
}
if v, ok := i.(int32); ok {
return v
}
return int32(typeConvertInt(i))
}
func typeConvertUint(i interface{}) uint {
if i == nil {
return 0
}
switch value := i.(type) {
case int:
return uint(value)
case int8:
return uint(value)
case int16:
return uint(value)
case int32:
return uint(value)
case int64:
return uint(value)
case uint:
return value
case uint8:
return uint(value)
case uint16:
return uint(value)
case uint32:
return uint(value)
case uint64:
return uint(value)
case float32:
return uint(value)
case float64:
return uint(value)
case bool:
if value {
return 1
}
return 0
default:
v, _ := strconv.ParseUint(typeConvertString(value), 10, 64)
return uint(v)
}
}
func typeConvertUint8(i interface{}) uint8 {
if i == nil {
return 0
}
if v, ok := i.(uint8); ok {
return v
}
return uint8(typeConvertUint(i))
}
func typeConvertUint16(i interface{}) uint16 {
if i == nil {
return 0
}
if v, ok := i.(uint16); ok {
return v
}
return uint16(typeConvertUint(i))
}
func typeConvertUint32(i interface{}) uint32 {
if i == nil {
return 0
}
if v, ok := i.(uint32); ok {
return v
}
return uint32(typeConvertUint(i))
}
func typeConvertUint64(i interface{}) uint64 {
if i == nil {
return 0
}
if v, ok := i.(uint64); ok {
return v
}
return uint64(typeConvertUint(i))
}
*/

@ -0,0 +1,14 @@
Copyright (c) 2015 aliyun.com
Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated
documentation files (the "Software"), to deal in the Software without restriction, including without limitation the
rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to
permit persons to whom the Software is furnished to do so, subject to the following conditions:
The above copyright notice and this permission notice shall be included in all copies or substantial portions of the
Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE
WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR
COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR
OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.

@ -0,0 +1,345 @@
package oss
import (
"bytes"
"crypto/hmac"
"crypto/sha1"
"crypto/sha256"
"encoding/base64"
"encoding/hex"
"fmt"
"hash"
"io"
"net/http"
"sort"
"strconv"
"strings"
"time"
)
// headerSorter defines the key-value structure for storing the sorted data in signHeader.
type headerSorter struct {
Keys []string
Vals []string
}
// getAdditionalHeaderKeys gets the keys configured in AdditionalHeaders that exist in the http header
func (conn Conn) getAdditionalHeaderKeys(req *http.Request) ([]string, map[string]string) {
var keysList []string
keysMap := make(map[string]string)
srcKeys := make(map[string]string)
for k := range req.Header {
srcKeys[strings.ToLower(k)] = ""
}
for _, v := range conn.config.AdditionalHeaders {
if _, ok := srcKeys[strings.ToLower(v)]; ok {
keysMap[strings.ToLower(v)] = ""
}
}
for k := range keysMap {
keysList = append(keysList, k)
}
sort.Strings(keysList)
return keysList, keysMap
}
// getAdditionalHeaderKeysV4 gets the keys configured in AdditionalHeaders that exist in the http header, excluding Content-MD5 and Content-Type
func (conn Conn) getAdditionalHeaderKeysV4(req *http.Request) ([]string, map[string]string) {
var keysList []string
keysMap := make(map[string]string)
srcKeys := make(map[string]string)
for k := range req.Header {
srcKeys[strings.ToLower(k)] = ""
}
for _, v := range conn.config.AdditionalHeaders {
if _, ok := srcKeys[strings.ToLower(v)]; ok {
if !strings.EqualFold(v, HTTPHeaderContentMD5) && !strings.EqualFold(v, HTTPHeaderContentType) {
keysMap[strings.ToLower(v)] = ""
}
}
}
for k := range keysMap {
keysList = append(keysList, k)
}
sort.Strings(keysList)
return keysList, keysMap
}
// signHeader signs the header and sets it as the authorization header.
func (conn Conn) signHeader(req *http.Request, canonicalizedResource string) {
akIf := conn.config.GetCredentials()
authorizationStr := ""
if conn.config.AuthVersion == AuthV4 {
strDay := ""
strDate := req.Header.Get(HttpHeaderOssDate)
if strDate == "" {
strDate = req.Header.Get(HTTPHeaderDate)
t, _ := time.Parse(http.TimeFormat, strDate)
strDay = t.Format("20060102")
} else {
t, _ := time.Parse(iso8601DateFormatSecond, strDate)
strDay = t.Format("20060102")
}
signHeaderProduct := conn.config.GetSignProduct()
signHeaderRegion := conn.config.GetSignRegion()
additionalList, _ := conn.getAdditionalHeaderKeysV4(req)
if len(additionalList) > 0 {
authorizationFmt := "OSS4-HMAC-SHA256 Credential=%v/%v/%v/" + signHeaderProduct + "/aliyun_v4_request,AdditionalHeaders=%v,Signature=%v"
additionalHeadersStr := strings.Join(additionalList, ";")
authorizationStr = fmt.Sprintf(authorizationFmt, akIf.GetAccessKeyID(), strDay, signHeaderRegion, additionalHeadersStr, conn.getSignedStrV4(req, canonicalizedResource, akIf.GetAccessKeySecret()))
} else {
authorizationFmt := "OSS4-HMAC-SHA256 Credential=%v/%v/%v/" + signHeaderProduct + "/aliyun_v4_request,Signature=%v"
authorizationStr = fmt.Sprintf(authorizationFmt, akIf.GetAccessKeyID(), strDay, signHeaderRegion, conn.getSignedStrV4(req, canonicalizedResource, akIf.GetAccessKeySecret()))
}
} else if conn.config.AuthVersion == AuthV2 {
additionalList, _ := conn.getAdditionalHeaderKeys(req)
if len(additionalList) > 0 {
authorizationFmt := "OSS2 AccessKeyId:%v,AdditionalHeaders:%v,Signature:%v"
additionalHeadersStr := strings.Join(additionalList, ";")
authorizationStr = fmt.Sprintf(authorizationFmt, akIf.GetAccessKeyID(), additionalHeadersStr, conn.getSignedStr(req, canonicalizedResource, akIf.GetAccessKeySecret()))
} else {
authorizationFmt := "OSS2 AccessKeyId:%v,Signature:%v"
authorizationStr = fmt.Sprintf(authorizationFmt, akIf.GetAccessKeyID(), conn.getSignedStr(req, canonicalizedResource, akIf.GetAccessKeySecret()))
}
} else {
// Get the final authorization string
authorizationStr = "OSS " + akIf.GetAccessKeyID() + ":" + conn.getSignedStr(req, canonicalizedResource, akIf.GetAccessKeySecret())
}
// Give the parameter "Authorization" value
req.Header.Set(HTTPHeaderAuthorization, authorizationStr)
}
func (conn Conn) getSignedStr(req *http.Request, canonicalizedResource string, keySecret string) string {
// Collect the "x-oss-"-prefixed headers of the request
ossHeadersMap := make(map[string]string)
additionalList, additionalMap := conn.getAdditionalHeaderKeys(req)
for k, v := range req.Header {
if strings.HasPrefix(strings.ToLower(k), "x-oss-") {
ossHeadersMap[strings.ToLower(k)] = v[0]
} else if conn.config.AuthVersion == AuthV2 {
if _, ok := additionalMap[strings.ToLower(k)]; ok {
ossHeadersMap[strings.ToLower(k)] = v[0]
}
}
}
hs := newHeaderSorter(ossHeadersMap)
// Sort the ossHeadersMap by the ascending order
hs.Sort()
// Get the canonicalizedOSSHeaders
canonicalizedOSSHeaders := ""
for i := range hs.Keys {
canonicalizedOSSHeaders += hs.Keys[i] + ":" + hs.Vals[i] + "\n"
}
// Give other parameters values
// when sign URL, date is expires
date := req.Header.Get(HTTPHeaderDate)
contentType := req.Header.Get(HTTPHeaderContentType)
contentMd5 := req.Header.Get(HTTPHeaderContentMD5)
// default is v1 signature
signStr := req.Method + "\n" + contentMd5 + "\n" + contentType + "\n" + date + "\n" + canonicalizedOSSHeaders + canonicalizedResource
h := hmac.New(func() hash.Hash { return sha1.New() }, []byte(keySecret))
// v2 signature
if conn.config.AuthVersion == AuthV2 {
signStr = req.Method + "\n" + contentMd5 + "\n" + contentType + "\n" + date + "\n" + canonicalizedOSSHeaders + strings.Join(additionalList, ";") + "\n" + canonicalizedResource
h = hmac.New(func() hash.Hash { return sha256.New() }, []byte(keySecret))
}
if conn.config.LogLevel >= Debug {
conn.config.WriteLog(Debug, "[Req:%p]signStr:%s\n", req, EscapeLFString(signStr))
}
io.WriteString(h, signStr)
signedStr := base64.StdEncoding.EncodeToString(h.Sum(nil))
return signedStr
}
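// Illustrative summary (sketch derived from the code above): the v1 string-to-sign is
// VERB\nContent-MD5\nContent-Type\nDate\n{canonicalizedOSSHeaders}{canonicalizedResource},
// signed with HMAC-SHA1 and base64-encoded; v2 appends the sorted additional header list and uses HMAC-SHA256.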
func (conn Conn) getSignedStrV4(req *http.Request, canonicalizedResource string, keySecret string) string {
// Collect the "x-oss-"-prefixed headers of the request
ossHeadersMap := make(map[string]string)
additionalList, additionalMap := conn.getAdditionalHeaderKeysV4(req)
for k, v := range req.Header {
if strings.HasPrefix(strings.ToLower(k), "x-oss-") {
ossHeadersMap[strings.ToLower(k)] = strings.Trim(v[0], " ")
} else {
if _, ok := additionalMap[strings.ToLower(k)]; ok {
ossHeadersMap[strings.ToLower(k)] = strings.Trim(v[0], " ")
}
}
}
// Required parameters
signDate := ""
dateFormat := ""
date := req.Header.Get(HTTPHeaderDate)
if date != "" {
signDate = date
dateFormat = http.TimeFormat
}
ossDate := req.Header.Get(HttpHeaderOssDate)
_, ok := ossHeadersMap[strings.ToLower(HttpHeaderOssDate)]
if ossDate != "" {
signDate = ossDate
dateFormat = iso8601DateFormatSecond
if !ok {
ossHeadersMap[strings.ToLower(HttpHeaderOssDate)] = strings.Trim(ossDate, " ")
}
}
contentType := req.Header.Get(HTTPHeaderContentType)
_, ok = ossHeadersMap[strings.ToLower(HTTPHeaderContentType)]
if contentType != "" && !ok {
ossHeadersMap[strings.ToLower(HTTPHeaderContentType)] = strings.Trim(contentType, " ")
}
contentMd5 := req.Header.Get(HTTPHeaderContentMD5)
_, ok = ossHeadersMap[strings.ToLower(HTTPHeaderContentMD5)]
if contentMd5 != "" && !ok {
ossHeadersMap[strings.ToLower(HTTPHeaderContentMD5)] = strings.Trim(contentMd5, " ")
}
hs := newHeaderSorter(ossHeadersMap)
// Sort the ossHeadersMap by the ascending order
hs.Sort()
// Get the canonicalizedOSSHeaders
canonicalizedOSSHeaders := ""
for i := range hs.Keys {
canonicalizedOSSHeaders += hs.Keys[i] + ":" + hs.Vals[i] + "\n"
}
signStr := ""
// v4 signature
hashedPayload := req.Header.Get(HttpHeaderOssContentSha256)
// subResource
resource := canonicalizedResource
subResource := ""
subPos := strings.LastIndex(canonicalizedResource, "?")
if subPos != -1 {
subResource = canonicalizedResource[subPos+1:]
resource = canonicalizedResource[0:subPos]
}
// get canonical request
canonicalRequest := req.Method + "\n" + resource + "\n" + subResource + "\n" + canonicalizedOSSHeaders + "\n" + strings.Join(additionalList, ";") + "\n" + hashedPayload
rh := sha256.New()
io.WriteString(rh, canonicalRequest)
hashedRequest := hex.EncodeToString(rh.Sum(nil))
if conn.config.LogLevel >= Debug {
conn.config.WriteLog(Debug, "[Req:%p]signStr:%s\n", req, EscapeLFString(canonicalRequest))
}
}
// get the day, e.g. 20210914
t, _ := time.Parse(dateFormat, signDate)
strDay := t.Format("20060102")
signedStrV4Product := conn.config.GetSignProduct()
signedStrV4Region := conn.config.GetSignRegion()
signStr = "OSS4-HMAC-SHA256" + "\n" + signDate + "\n" + strDay + "/" + signedStrV4Region + "/" + signedStrV4Product + "/aliyun_v4_request" + "\n" + hashedRequest
if conn.config.LogLevel >= Debug {
conn.config.WriteLog(Debug, "[Req:%p]signStr:%s\n", req, EscapeLFString(signStr))
}
h1 := hmac.New(sha256.New, []byte("aliyun_v4"+keySecret))
io.WriteString(h1, strDay)
h1Key := h1.Sum(nil)
h2 := hmac.New(sha256.New, h1Key)
io.WriteString(h2, signedStrV4Region)
h2Key := h2.Sum(nil)
h3 := hmac.New(sha256.New, h2Key)
io.WriteString(h3, signedStrV4Product)
h3Key := h3.Sum(nil)
h4 := hmac.New(sha256.New, h3Key)
io.WriteString(h4, "aliyun_v4_request")
h4Key := h4.Sum(nil)
h := hmac.New(sha256.New, h4Key)
io.WriteString(h, signStr)
return fmt.Sprintf("%x", h.Sum(nil))
}
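// Illustrative helper (a sketch, not part of the vendored SDK): the
// HMAC-SHA256 key-derivation chain performed inline by getSignedStrV4 above,
// factored into one function. It assumes the same imports as this file.
func v4SigningKeySketch(keySecret, day, region, product string) []byte {
	mac := func(key []byte, msg string) []byte {
		h := hmac.New(sha256.New, key)
		io.WriteString(h, msg)
		return h.Sum(nil)
	}
	dateKey := mac([]byte("aliyun_v4"+keySecret), day) // day such as "20210914"
	regionKey := mac(dateKey, region)                  // such as "cn-hangzhou"
	productKey := mac(regionKey, product)              // "oss" or "oss-cloudbox"
	return mac(productKey, "aliyun_v4_request")        // final signing key
}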
func (conn Conn) getRtmpSignedStr(bucketName, channelName, playlistName string, expiration int64, keySecret string, params map[string]interface{}) string {
if params[HTTPParamAccessKeyID] == nil {
return ""
}
canonResource := fmt.Sprintf("/%s/%s", bucketName, channelName)
canonParamsKeys := []string{}
for key := range params {
if key != HTTPParamAccessKeyID && key != HTTPParamSignature && key != HTTPParamExpires && key != HTTPParamSecurityToken {
canonParamsKeys = append(canonParamsKeys, key)
}
}
sort.Strings(canonParamsKeys)
canonParamsStr := ""
for _, key := range canonParamsKeys {
canonParamsStr = fmt.Sprintf("%s%s:%s\n", canonParamsStr, key, params[key].(string))
}
expireStr := strconv.FormatInt(expiration, 10)
signStr := expireStr + "\n" + canonParamsStr + canonResource
h := hmac.New(sha1.New, []byte(keySecret))
io.WriteString(h, signStr)
signedStr := base64.StdEncoding.EncodeToString(h.Sum(nil))
return signedStr
}
// newHeaderSorter is an additional function for function SignHeader.
func newHeaderSorter(m map[string]string) *headerSorter {
hs := &headerSorter{
Keys: make([]string, 0, len(m)),
Vals: make([]string, 0, len(m)),
}
for k, v := range m {
hs.Keys = append(hs.Keys, k)
hs.Vals = append(hs.Vals, v)
}
return hs
}
// Sort is an additional function for function SignHeader.
func (hs *headerSorter) Sort() {
sort.Sort(hs)
}
// Len is an additional function for function SignHeader.
func (hs *headerSorter) Len() int {
return len(hs.Vals)
}
// Less is an additional function for function SignHeader.
func (hs *headerSorter) Less(i, j int) bool {
return bytes.Compare([]byte(hs.Keys[i]), []byte(hs.Keys[j])) < 0
}
// Swap is an additional function for function SignHeader.
func (hs *headerSorter) Swap(i, j int) {
hs.Vals[i], hs.Vals[j] = hs.Vals[j], hs.Vals[i]
hs.Keys[i], hs.Keys[j] = hs.Keys[j], hs.Keys[i]
}

File diff suppressed because it is too large

File diff suppressed because it is too large

@ -0,0 +1,229 @@
package oss
import (
"bytes"
"fmt"
"log"
"net"
"os"
"time"
)
// Define the level of the output log
const (
LogOff = iota
Error
Warn
Info
Debug
)
// LogTag Tag for each level of log
var LogTag = []string{"[error]", "[warn]", "[info]", "[debug]"}
// HTTPTimeout defines HTTP timeout.
type HTTPTimeout struct {
ConnectTimeout time.Duration
ReadWriteTimeout time.Duration
HeaderTimeout time.Duration
LongTimeout time.Duration
IdleConnTimeout time.Duration
}
// HTTPMaxConns defines max idle connections and max idle connections per host
type HTTPMaxConns struct {
MaxIdleConns int
MaxIdleConnsPerHost int
MaxConnsPerHost int
}
// Credentials is the interface for getting AccessKeyID, AccessKeySecret, SecurityToken
type Credentials interface {
GetAccessKeyID() string
GetAccessKeySecret() string
GetSecurityToken() string
}
// CredentialsProvider is the interface for getting Credentials
type CredentialsProvider interface {
GetCredentials() Credentials
}
type defaultCredentials struct {
config *Config
}
func (defCre *defaultCredentials) GetAccessKeyID() string {
return defCre.config.AccessKeyID
}
func (defCre *defaultCredentials) GetAccessKeySecret() string {
return defCre.config.AccessKeySecret
}
func (defCre *defaultCredentials) GetSecurityToken() string {
return defCre.config.SecurityToken
}
type defaultCredentialsProvider struct {
config *Config
}
func (defBuild *defaultCredentialsProvider) GetCredentials() Credentials {
return &defaultCredentials{config: defBuild.config}
}
// Config defines oss configuration
type Config struct {
Endpoint string // OSS endpoint
AccessKeyID string // AccessId
AccessKeySecret string // AccessKey
RetryTimes uint // Retry count; by default it's 5.
UserAgent string // SDK name/version/system information
IsDebug bool // Enable debug mode. Default is false.
Timeout uint // Timeout in seconds. By default it's 60.
SecurityToken string // STS Token
IsCname bool // If cname is in the endpoint.
HTTPTimeout HTTPTimeout // HTTP timeout
HTTPMaxConns HTTPMaxConns // Http max connections
IsUseProxy bool // Flag of using proxy.
ProxyHost string // Proxy host address.
IsAuthProxy bool // Flag of needing authentication.
ProxyUser string // Proxy user
ProxyPassword string // Proxy password
IsEnableMD5 bool // Flag of enabling MD5 for upload.
MD5Threshold int64 // Memory footprint threshold for each MD5 computation (16MB is the default), in bytes. When the data is larger than that, a temp file is used.
IsEnableCRC bool // Flag of enabling CRC for upload.
LogLevel int // Log level
Logger *log.Logger // For write log
UploadLimitSpeed int // Upload limit speed:KB/s, 0 is unlimited
UploadLimiter *OssLimiter // Bandwidth limit reader for upload
DownloadLimitSpeed int // Download limit speed:KB/s, 0 is unlimited
DownloadLimiter *OssLimiter // Bandwidth limit reader for download
CredentialsProvider CredentialsProvider // User provides interface to get AccessKeyID, AccessKeySecret, SecurityToken
LocalAddr net.Addr // local client host info
UserSetUa bool // UserAgent is set by user or not
AuthVersion AuthVersionType // v1 or v2, v4 signature,default is v1
AdditionalHeaders []string // special http headers needed to be sign
RedirectEnabled bool // only effective from go1.7 onward, enable http redirect or not
InsecureSkipVerify bool // for https, Whether to skip verifying the server certificate file
Region string // such as cn-hangzhou
CloudBoxId string // Cloud box ID
Product string // oss or oss-cloudbox, default is oss
}
// LimitUploadSpeed uploadSpeed: KB/s, 0 is unlimited, default is 0
func (config *Config) LimitUploadSpeed(uploadSpeed int) error {
if uploadSpeed < 0 {
return fmt.Errorf("invalid argument, the value of uploadSpeed is less than 0")
} else if uploadSpeed == 0 {
config.UploadLimitSpeed = 0
config.UploadLimiter = nil
return nil
}
var err error
config.UploadLimiter, err = GetOssLimiter(uploadSpeed)
if err == nil {
config.UploadLimitSpeed = uploadSpeed
}
return err
}
// LimitDownloadSpeed downloadSpeed: KB/s, 0 is unlimited, default is 0
func (config *Config) LimitDownloadSpeed(downloadSpeed int) error {
if downloadSpeed < 0 {
return fmt.Errorf("invalid argument, the value of downloadSpeed is less than 0")
} else if downloadSpeed == 0 {
config.DownloadLimitSpeed = 0
config.DownloadLimiter = nil
return nil
}
var err error
config.DownloadLimiter, err = GetOssLimiter(downloadSpeed)
if err == nil {
config.DownloadLimitSpeed = downloadSpeed
}
return err
}
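// Illustrative usage (standalone sketch; the import path assumes the SDK's
// canonical module, github.com/aliyun/aliyun-oss-go-sdk/oss):
package main

import (
	"fmt"

	"github.com/aliyun/aliyun-oss-go-sdk/oss"
)

func main() {
	cfg := oss.Config{}
	if err := cfg.LimitUploadSpeed(256); err != nil { // cap uploads at 256 KB/s
		fmt.Println("limit upload:", err)
	}
	if err := cfg.LimitDownloadSpeed(0); err != nil { // 0 removes the download cap
		fmt.Println("limit download:", err)
	}
}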
// WriteLog writes a formatted log message at the given level
func (config *Config) WriteLog(LogLevel int, format string, a ...interface{}) {
if config.LogLevel < LogLevel || config.Logger == nil {
return
}
var logBuffer bytes.Buffer
logBuffer.WriteString(LogTag[LogLevel-1])
logBuffer.WriteString(fmt.Sprintf(format, a...))
config.Logger.Printf("%s", logBuffer.String())
}
// GetCredentials returns the configured Credentials
func (config *Config) GetCredentials() Credentials {
return config.CredentialsProvider.GetCredentials()
}
// GetSignProduct returns the product name used for v4 signing
func (config *Config) GetSignProduct() string {
if config.CloudBoxId != "" {
return "oss-cloudbox"
}
return "oss"
}
// GetSignRegion returns the region used for v4 signing
func (config *Config) GetSignRegion() string {
if config.CloudBoxId != "" {
return config.CloudBoxId
}
return config.Region
}
// getDefaultOssConfig gets the default configuration.
func getDefaultOssConfig() *Config {
config := Config{}
config.Endpoint = ""
config.AccessKeyID = ""
config.AccessKeySecret = ""
config.RetryTimes = 5
config.IsDebug = false
config.UserAgent = userAgent()
config.Timeout = 60 // Seconds
config.SecurityToken = ""
config.IsCname = false
config.HTTPTimeout.ConnectTimeout = time.Second * 30 // 30s
config.HTTPTimeout.ReadWriteTimeout = time.Second * 60 // 60s
config.HTTPTimeout.HeaderTimeout = time.Second * 60 // 60s
config.HTTPTimeout.LongTimeout = time.Second * 300 // 300s
config.HTTPTimeout.IdleConnTimeout = time.Second * 50 // 50s
config.HTTPMaxConns.MaxIdleConns = 100
config.HTTPMaxConns.MaxIdleConnsPerHost = 100
config.IsUseProxy = false
config.ProxyHost = ""
config.IsAuthProxy = false
config.ProxyUser = ""
config.ProxyPassword = ""
config.MD5Threshold = 16 * 1024 * 1024 // 16MB
config.IsEnableMD5 = false
config.IsEnableCRC = true
config.LogLevel = LogOff
config.Logger = log.New(os.Stdout, "", log.LstdFlags)
provider := &defaultCredentialsProvider{config: &config}
config.CredentialsProvider = provider
config.AuthVersion = AuthV1
config.RedirectEnabled = true
config.InsecureSkipVerify = false
config.Product = "oss"
return &config
}
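// Illustrative usage (standalone sketch; endpoint and keys are placeholders,
// and oss.Timeout is the SDK's ClientOption for overriding the timeout
// defaults set in getDefaultOssConfig above):
package main

import (
	"fmt"

	"github.com/aliyun/aliyun-oss-go-sdk/oss"
)

func main() {
	client, err := oss.New("https://oss-cn-hangzhou.aliyuncs.com",
		"<access-key-id>", "<access-key-secret>",
		oss.Timeout(30, 120)) // connect 30s, read/write 120s
	if err != nil {
		fmt.Println(err)
		return
	}
	fmt.Println("user agent:", client.Config.UserAgent)
}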

@ -0,0 +1,916 @@
package oss
import (
"bytes"
"crypto/md5"
"encoding/base64"
"encoding/json"
"encoding/xml"
"fmt"
"hash"
"io"
"io/ioutil"
"net"
"net/http"
"net/url"
"os"
"sort"
"strconv"
"strings"
"time"
)
// Conn defines OSS Conn
type Conn struct {
config *Config
url *urlMaker
client *http.Client
}
var signKeyList = []string{"acl", "uploads", "location", "cors",
"logging", "website", "referer", "lifecycle",
"delete", "append", "tagging", "objectMeta",
"uploadId", "partNumber", "security-token",
"position", "img", "style", "styleName",
"replication", "replicationProgress",
"replicationLocation", "cname", "bucketInfo",
"comp", "qos", "live", "status", "vod",
"startTime", "endTime", "symlink",
"x-oss-process", "response-content-type", "x-oss-traffic-limit",
"response-content-language", "response-expires",
"response-cache-control", "response-content-disposition",
"response-content-encoding", "udf", "udfName", "udfImage",
"udfId", "udfImageDesc", "udfApplication", "comp",
"udfApplicationLog", "restore", "callback", "callback-var", "qosInfo",
"policy", "stat", "encryption", "versions", "versioning", "versionId", "requestPayment",
"x-oss-request-payer", "sequential",
"inventory", "inventoryId", "continuation-token", "asyncFetch",
"worm", "wormId", "wormExtend", "withHashContext",
"x-oss-enable-md5", "x-oss-enable-sha1", "x-oss-enable-sha256",
"x-oss-hash-ctx", "x-oss-md5-ctx", "transferAcceleration",
"regionList", "cloudboxes",
}
// init initializes Conn
func (conn *Conn) init(config *Config, urlMaker *urlMaker, client *http.Client) error {
if client == nil {
// New transport
transport := newTransport(conn, config)
// Proxy
if conn.config.IsUseProxy {
proxyURL, err := url.Parse(config.ProxyHost)
if err != nil {
return err
}
if config.IsAuthProxy {
if config.ProxyPassword != "" {
proxyURL.User = url.UserPassword(config.ProxyUser, config.ProxyPassword)
} else {
proxyURL.User = url.User(config.ProxyUser)
}
}
transport.Proxy = http.ProxyURL(proxyURL)
}
client = &http.Client{Transport: transport}
if !config.RedirectEnabled {
disableHTTPRedirect(client)
}
}
conn.config = config
conn.url = urlMaker
conn.client = client
return nil
}
// Do sends request and returns the response
func (conn Conn) Do(method, bucketName, objectName string, params map[string]interface{}, headers map[string]string,
data io.Reader, initCRC uint64, listener ProgressListener) (*Response, error) {
urlParams := conn.getURLParams(params)
subResource := conn.getSubResource(params)
uri := conn.url.getURL(bucketName, objectName, urlParams)
resource := ""
if conn.config.AuthVersion != AuthV4 {
resource = conn.getResource(bucketName, objectName, subResource)
} else {
resource = conn.getResourceV4(bucketName, objectName, subResource)
}
return conn.doRequest(method, uri, resource, headers, data, initCRC, listener)
}
// DoURL sends the request with signed URL and returns the response result.
func (conn Conn) DoURL(method HTTPMethod, signedURL string, headers map[string]string,
data io.Reader, initCRC uint64, listener ProgressListener) (*Response, error) {
// Get URI from signedURL
uri, err := url.ParseRequestURI(signedURL)
if err != nil {
return nil, err
}
m := strings.ToUpper(string(method))
req := &http.Request{
Method: m,
URL: uri,
Proto: "HTTP/1.1",
ProtoMajor: 1,
ProtoMinor: 1,
Header: make(http.Header),
Host: uri.Host,
}
tracker := &readerTracker{completedBytes: 0}
fd, crc := conn.handleBody(req, data, initCRC, listener, tracker)
if fd != nil {
defer func() {
fd.Close()
os.Remove(fd.Name())
}()
}
if conn.config.IsAuthProxy {
auth := conn.config.ProxyUser + ":" + conn.config.ProxyPassword
basic := "Basic " + base64.StdEncoding.EncodeToString([]byte(auth))
req.Header.Set("Proxy-Authorization", basic)
}
req.Header.Set(HTTPHeaderHost, req.Host)
req.Header.Set(HTTPHeaderUserAgent, conn.config.UserAgent)
if headers != nil {
for k, v := range headers {
req.Header.Set(k, v)
}
}
// Transfer started
event := newProgressEvent(TransferStartedEvent, 0, req.ContentLength, 0)
publishProgress(listener, event)
if conn.config.LogLevel >= Debug {
conn.LoggerHTTPReq(req)
}
resp, err := conn.client.Do(req)
if err != nil {
// Transfer failed
event = newProgressEvent(TransferFailedEvent, tracker.completedBytes, req.ContentLength, 0)
publishProgress(listener, event)
conn.config.WriteLog(Debug, "[Resp:%p]http error:%s\n", req, err.Error())
return nil, err
}
if conn.config.LogLevel >= Debug {
// print out the HTTP response
conn.LoggerHTTPResp(req, resp)
}
// Transfer completed
event = newProgressEvent(TransferCompletedEvent, tracker.completedBytes, req.ContentLength, 0)
publishProgress(listener, event)
return conn.handleResponse(resp, crc)
}
func (conn Conn) getURLParams(params map[string]interface{}) string {
// Sort
keys := make([]string, 0, len(params))
for k := range params {
keys = append(keys, k)
}
sort.Strings(keys)
// Serialize
var buf bytes.Buffer
for _, k := range keys {
if buf.Len() > 0 {
buf.WriteByte('&')
}
buf.WriteString(url.QueryEscape(k))
if params[k] != nil && params[k].(string) != "" {
buf.WriteString("=" + strings.Replace(url.QueryEscape(params[k].(string)), "+", "%20", -1))
}
}
return buf.String()
}
func (conn Conn) getSubResource(params map[string]interface{}) string {
// Sort
keys := make([]string, 0, len(params))
signParams := make(map[string]string)
for k := range params {
if conn.config.AuthVersion == AuthV2 || conn.config.AuthVersion == AuthV4 {
encodedKey := url.QueryEscape(k)
keys = append(keys, encodedKey)
if params[k] != nil && params[k] != "" {
signParams[encodedKey] = strings.Replace(url.QueryEscape(params[k].(string)), "+", "%20", -1)
}
} else if conn.isParamSign(k) {
keys = append(keys, k)
if params[k] != nil {
signParams[k] = params[k].(string)
}
}
}
sort.Strings(keys)
// Serialize
var buf bytes.Buffer
for _, k := range keys {
if buf.Len() > 0 {
buf.WriteByte('&')
}
buf.WriteString(k)
if _, ok := signParams[k]; ok {
if signParams[k] != "" {
buf.WriteString("=" + signParams[k])
}
}
}
return buf.String()
}
func (conn Conn) isParamSign(paramKey string) bool {
for _, k := range signKeyList {
if paramKey == k {
return true
}
}
return false
}
// getResource gets canonicalized resource
func (conn Conn) getResource(bucketName, objectName, subResource string) string {
if subResource != "" {
subResource = "?" + subResource
}
if bucketName == "" {
if conn.config.AuthVersion == AuthV2 {
return url.QueryEscape("/") + subResource
}
return fmt.Sprintf("/%s%s", bucketName, subResource)
}
if conn.config.AuthVersion == AuthV2 {
return url.QueryEscape("/"+bucketName+"/") + strings.Replace(url.QueryEscape(objectName), "+", "%20", -1) + subResource
}
return fmt.Sprintf("/%s/%s%s", bucketName, objectName, subResource)
}
// getResource gets canonicalized resource
func (conn Conn) getResourceV4(bucketName, objectName, subResource string) string {
if subResource != "" {
subResource = "?" + subResource
}
if bucketName == "" {
return fmt.Sprintf("/%s", subResource)
}
if objectName != "" {
objectName = url.QueryEscape(objectName)
objectName = strings.Replace(objectName, "+", "%20", -1)
objectName = strings.Replace(objectName, "%2F", "/", -1)
return fmt.Sprintf("/%s/%s%s", bucketName, objectName, subResource)
}
return fmt.Sprintf("/%s/%s", bucketName, subResource)
}
func (conn Conn) doRequest(method string, uri *url.URL, canonicalizedResource string, headers map[string]string,
data io.Reader, initCRC uint64, listener ProgressListener) (*Response, error) {
method = strings.ToUpper(method)
req := &http.Request{
Method: method,
URL: uri,
Proto: "HTTP/1.1",
ProtoMajor: 1,
ProtoMinor: 1,
Header: make(http.Header),
Host: uri.Host,
}
tracker := &readerTracker{completedBytes: 0}
fd, crc := conn.handleBody(req, data, initCRC, listener, tracker)
if fd != nil {
defer func() {
fd.Close()
os.Remove(fd.Name())
}()
}
if conn.config.IsAuthProxy {
auth := conn.config.ProxyUser + ":" + conn.config.ProxyPassword
basic := "Basic " + base64.StdEncoding.EncodeToString([]byte(auth))
req.Header.Set("Proxy-Authorization", basic)
}
stNow := time.Now().UTC()
req.Header.Set(HTTPHeaderDate, stNow.Format(http.TimeFormat))
req.Header.Set(HTTPHeaderHost, req.Host)
req.Header.Set(HTTPHeaderUserAgent, conn.config.UserAgent)
if conn.config.AuthVersion == AuthV4 {
req.Header.Set(HttpHeaderOssContentSha256, DefaultContentSha256)
}
akIf := conn.config.GetCredentials()
if akIf.GetSecurityToken() != "" {
req.Header.Set(HTTPHeaderOssSecurityToken, akIf.GetSecurityToken())
}
if headers != nil {
for k, v := range headers {
req.Header.Set(k, v)
}
}
conn.signHeader(req, canonicalizedResource)
// Transfer started
event := newProgressEvent(TransferStartedEvent, 0, req.ContentLength, 0)
publishProgress(listener, event)
if conn.config.LogLevel >= Debug {
conn.LoggerHTTPReq(req)
}
resp, err := conn.client.Do(req)
if err != nil {
// Transfer failed
event = newProgressEvent(TransferFailedEvent, tracker.completedBytes, req.ContentLength, 0)
publishProgress(listener, event)
conn.config.WriteLog(Debug, "[Resp:%p]http error:%s\n", req, err.Error())
return nil, err
}
if conn.config.LogLevel >= Debug {
// print out the HTTP response
conn.LoggerHTTPResp(req, resp)
}
// Transfer completed
event = newProgressEvent(TransferCompletedEvent, tracker.completedBytes, req.ContentLength, 0)
publishProgress(listener, event)
return conn.handleResponse(resp, crc)
}
func (conn Conn) signURL(method HTTPMethod, bucketName, objectName string, expiration int64, params map[string]interface{}, headers map[string]string) string {
akIf := conn.config.GetCredentials()
if akIf.GetSecurityToken() != "" {
params[HTTPParamSecurityToken] = akIf.GetSecurityToken()
}
m := strings.ToUpper(string(method))
req := &http.Request{
Method: m,
Header: make(http.Header),
}
if conn.config.IsAuthProxy {
auth := conn.config.ProxyUser + ":" + conn.config.ProxyPassword
basic := "Basic " + base64.StdEncoding.EncodeToString([]byte(auth))
req.Header.Set("Proxy-Authorization", basic)
}
req.Header.Set(HTTPHeaderDate, strconv.FormatInt(expiration, 10))
req.Header.Set(HTTPHeaderUserAgent, conn.config.UserAgent)
if headers != nil {
for k, v := range headers {
req.Header.Set(k, v)
}
}
if conn.config.AuthVersion == AuthV2 {
params[HTTPParamSignatureVersion] = "OSS2"
params[HTTPParamExpiresV2] = strconv.FormatInt(expiration, 10)
params[HTTPParamAccessKeyIDV2] = conn.config.AccessKeyID
additionalList, _ := conn.getAdditionalHeaderKeys(req)
if len(additionalList) > 0 {
params[HTTPParamAdditionalHeadersV2] = strings.Join(additionalList, ";")
}
}
subResource := conn.getSubResource(params)
canonicalizedResource := conn.getResource(bucketName, objectName, subResource)
signedStr := conn.getSignedStr(req, canonicalizedResource, akIf.GetAccessKeySecret())
if conn.config.AuthVersion == AuthV1 {
params[HTTPParamExpires] = strconv.FormatInt(expiration, 10)
params[HTTPParamAccessKeyID] = akIf.GetAccessKeyID()
params[HTTPParamSignature] = signedStr
} else if conn.config.AuthVersion == AuthV2 {
params[HTTPParamSignatureV2] = signedStr
}
urlParams := conn.getURLParams(params)
return conn.url.getSignURL(bucketName, objectName, urlParams)
}
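// Illustrative usage (standalone sketch; bucket and object names are
// placeholders). The public Bucket.SignURL API drives the signURL logic above:
package main

import (
	"fmt"

	"github.com/aliyun/aliyun-oss-go-sdk/oss"
)

func main() {
	client, err := oss.New("https://oss-cn-hangzhou.aliyuncs.com",
		"<access-key-id>", "<access-key-secret>")
	if err != nil {
		fmt.Println(err)
		return
	}
	bucket, err := client.Bucket("mybucket")
	if err != nil {
		fmt.Println(err)
		return
	}
	signedURL, err := bucket.SignURL("myobject", oss.HTTPGet, 3600) // valid for 1 hour
	fmt.Println(signedURL, err)
}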
func (conn Conn) signRtmpURL(bucketName, channelName, playlistName string, expiration int64) string {
params := map[string]interface{}{}
if playlistName != "" {
params[HTTPParamPlaylistName] = playlistName
}
expireStr := strconv.FormatInt(expiration, 10)
params[HTTPParamExpires] = expireStr
akIf := conn.config.GetCredentials()
if akIf.GetAccessKeyID() != "" {
params[HTTPParamAccessKeyID] = akIf.GetAccessKeyID()
if akIf.GetSecurityToken() != "" {
params[HTTPParamSecurityToken] = akIf.GetSecurityToken()
}
signedStr := conn.getRtmpSignedStr(bucketName, channelName, playlistName, expiration, akIf.GetAccessKeySecret(), params)
params[HTTPParamSignature] = signedStr
}
urlParams := conn.getURLParams(params)
return conn.url.getSignRtmpURL(bucketName, channelName, urlParams)
}
// handleBody handles request body
func (conn Conn) handleBody(req *http.Request, body io.Reader, initCRC uint64,
listener ProgressListener, tracker *readerTracker) (*os.File, hash.Hash64) {
var file *os.File
var crc hash.Hash64
reader := body
readerLen, err := GetReaderLen(reader)
if err == nil {
req.ContentLength = readerLen
}
req.Header.Set(HTTPHeaderContentLength, strconv.FormatInt(req.ContentLength, 10))
// MD5
if body != nil && conn.config.IsEnableMD5 && req.Header.Get(HTTPHeaderContentMD5) == "" {
md5 := ""
reader, md5, file, _ = calcMD5(body, req.ContentLength, conn.config.MD5Threshold)
req.Header.Set(HTTPHeaderContentMD5, md5)
}
// CRC
if reader != nil && conn.config.IsEnableCRC {
crc = NewCRC(CrcTable(), initCRC)
reader = TeeReader(reader, crc, req.ContentLength, listener, tracker)
}
// HTTP body
rc, ok := reader.(io.ReadCloser)
if !ok && reader != nil {
rc = ioutil.NopCloser(reader)
}
if conn.isUploadLimitReq(req) {
limitReader := &LimitSpeedReader{
reader: rc,
ossLimiter: conn.config.UploadLimiter,
}
req.Body = limitReader
} else {
req.Body = rc
}
return file, crc
}
// isUploadLimitReq reports whether upload speed limiting applies to the request
func (conn Conn) isUploadLimitReq(req *http.Request) bool {
if conn.config.UploadLimitSpeed == 0 || conn.config.UploadLimiter == nil {
return false
}
if req.Method != "GET" && req.Method != "DELETE" && req.Method != "HEAD" {
if req.ContentLength > 0 {
return true
}
}
return false
}
func tryGetFileSize(f *os.File) int64 {
fInfo, _ := f.Stat()
return fInfo.Size()
}
// handleResponse handles response
func (conn Conn) handleResponse(resp *http.Response, crc hash.Hash64) (*Response, error) {
var cliCRC uint64
var srvCRC uint64
statusCode := resp.StatusCode
if statusCode/100 != 2 {
if statusCode >= 400 && statusCode <= 505 {
// 4xx and 5xx indicate that an error occurred during the operation
var respBody []byte
respBody, err := readResponseBody(resp)
if err != nil {
return nil, err
}
if len(respBody) == 0 {
err = ServiceError{
StatusCode: statusCode,
RequestID: resp.Header.Get(HTTPHeaderOssRequestID),
}
} else {
// Response contains storage service error object, unmarshal
srvErr, errIn := serviceErrFromXML(respBody, resp.StatusCode,
resp.Header.Get(HTTPHeaderOssRequestID))
if errIn != nil { // error unmarshaling the error response
err = fmt.Errorf("oss: service returned invalid response body, status = %s, RequestId = %s", resp.Status, resp.Header.Get(HTTPHeaderOssRequestID))
} else {
err = srvErr
}
}
return &Response{
StatusCode: resp.StatusCode,
Headers: resp.Header,
Body: ioutil.NopCloser(bytes.NewReader(respBody)), // restore the body
}, err
} else if statusCode >= 300 && statusCode <= 307 {
// OSS uses 3xx, but the response has no body
err := fmt.Errorf("oss: service returned %d,%s", resp.StatusCode, resp.Status)
return &Response{
StatusCode: resp.StatusCode,
Headers: resp.Header,
Body: resp.Body,
}, err
} else {
// (0,300) [308,400) [506,)
// Other extended http StatusCode
var respBody []byte
respBody, err := readResponseBody(resp)
if err != nil {
return &Response{StatusCode: resp.StatusCode, Headers: resp.Header, Body: ioutil.NopCloser(bytes.NewReader(respBody))}, err
}
if len(respBody) == 0 {
err = ServiceError{
StatusCode: statusCode,
RequestID: resp.Header.Get(HTTPHeaderOssRequestID),
}
} else {
// Response contains storage service error object, unmarshal
srvErr, errIn := serviceErrFromXML(respBody, resp.StatusCode,
resp.Header.Get(HTTPHeaderOssRequestID))
if errIn != nil { // error unmarshaling the error response
err = fmt.Errorf("unkown response body, status = %s, RequestId = %s", resp.Status, resp.Header.Get(HTTPHeaderOssRequestID))
} else {
err = srvErr
}
}
return &Response{
StatusCode: resp.StatusCode,
Headers: resp.Header,
Body: ioutil.NopCloser(bytes.NewReader(respBody)), // restore the body
}, err
}
} else {
if conn.config.IsEnableCRC && crc != nil {
cliCRC = crc.Sum64()
}
srvCRC, _ = strconv.ParseUint(resp.Header.Get(HTTPHeaderOssCRC64), 10, 64)
realBody := resp.Body
if conn.isDownloadLimitResponse(resp) {
limitReader := &LimitSpeedReader{
reader: realBody,
ossLimiter: conn.config.DownloadLimiter,
}
realBody = limitReader
}
// 2xx, successful
return &Response{
StatusCode: resp.StatusCode,
Headers: resp.Header,
Body: realBody,
ClientCRC: cliCRC,
ServerCRC: srvCRC,
}, nil
}
}
// isDownloadLimitResponse reports whether download speed limiting applies to the response
func (conn Conn) isDownloadLimitResponse(resp *http.Response) bool {
if resp == nil || conn.config.DownloadLimitSpeed == 0 || conn.config.DownloadLimiter == nil {
return false
}
if strings.EqualFold(resp.Request.Method, "GET") {
return true
}
return false
}
// LoggerHTTPReq prints the header information of the HTTP request
func (conn Conn) LoggerHTTPReq(req *http.Request) {
var logBuffer bytes.Buffer
logBuffer.WriteString(fmt.Sprintf("[Req:%p]Method:%s\t", req, req.Method))
logBuffer.WriteString(fmt.Sprintf("Host:%s\t", req.URL.Host))
logBuffer.WriteString(fmt.Sprintf("Path:%s\t", req.URL.Path))
logBuffer.WriteString(fmt.Sprintf("Query:%s\t", req.URL.RawQuery))
logBuffer.WriteString(fmt.Sprintf("Header info:"))
for k, v := range req.Header {
var valueBuffer bytes.Buffer
for j := 0; j < len(v); j++ {
if j > 0 {
valueBuffer.WriteString(" ")
}
valueBuffer.WriteString(v[j])
}
logBuffer.WriteString(fmt.Sprintf("\t%s:%s", k, valueBuffer.String()))
}
conn.config.WriteLog(Debug, "%s\n", logBuffer.String())
}
// LoggerHTTPResp prints the header information of the HTTP response
func (conn Conn) LoggerHTTPResp(req *http.Request, resp *http.Response) {
var logBuffer bytes.Buffer
logBuffer.WriteString(fmt.Sprintf("[Resp:%p]StatusCode:%d\t", req, resp.StatusCode))
logBuffer.WriteString(fmt.Sprintf("Header info:"))
for k, v := range resp.Header {
var valueBuffer bytes.Buffer
for j := 0; j < len(v); j++ {
if j > 0 {
valueBuffer.WriteString(" ")
}
valueBuffer.WriteString(v[j])
}
logBuffer.WriteString(fmt.Sprintf("\t%s:%s", k, valueBuffer.String()))
}
conn.config.WriteLog(Debug, "%s\n", logBuffer.String())
}
func calcMD5(body io.Reader, contentLen, md5Threshold int64) (reader io.Reader, b64 string, tempFile *os.File, err error) {
if contentLen == 0 || contentLen > md5Threshold {
// Unknown-length or huge body, use a temporary file
tempFile, err = ioutil.TempFile(os.TempDir(), TempFilePrefix)
if tempFile != nil {
io.Copy(tempFile, body)
tempFile.Seek(0, io.SeekStart)
md5 := md5.New()
io.Copy(md5, tempFile)
sum := md5.Sum(nil)
b64 = base64.StdEncoding.EncodeToString(sum[:])
tempFile.Seek(0, io.SeekStart)
reader = tempFile
}
} else {
// Small body, use memory
buf, _ := ioutil.ReadAll(body)
sum := md5.Sum(buf)
b64 = base64.StdEncoding.EncodeToString(sum[:])
reader = bytes.NewReader(buf)
}
return
}
func readResponseBody(resp *http.Response) ([]byte, error) {
defer resp.Body.Close()
out, err := ioutil.ReadAll(resp.Body)
if err == io.EOF {
err = nil
}
return out, err
}
func serviceErrFromXML(body []byte, statusCode int, requestID string) (ServiceError, error) {
var storageErr ServiceError
if err := xml.Unmarshal(body, &storageErr); err != nil {
return storageErr, err
}
storageErr.StatusCode = statusCode
storageErr.RequestID = requestID
storageErr.RawMessage = string(body)
return storageErr, nil
}
func xmlUnmarshal(body io.Reader, v interface{}) error {
data, err := ioutil.ReadAll(body)
if err != nil {
return err
}
return xml.Unmarshal(data, v)
}
func jsonUnmarshal(body io.Reader, v interface{}) error {
data, err := ioutil.ReadAll(body)
if err != nil {
return err
}
return json.Unmarshal(data, v)
}
// timeoutConn handles HTTP timeout
type timeoutConn struct {
conn net.Conn
timeout time.Duration
longTimeout time.Duration
}
func newTimeoutConn(conn net.Conn, timeout time.Duration, longTimeout time.Duration) *timeoutConn {
conn.SetReadDeadline(time.Now().Add(longTimeout))
return &timeoutConn{
conn: conn,
timeout: timeout,
longTimeout: longTimeout,
}
}
func (c *timeoutConn) Read(b []byte) (n int, err error) {
c.SetReadDeadline(time.Now().Add(c.timeout))
n, err = c.conn.Read(b)
c.SetReadDeadline(time.Now().Add(c.longTimeout))
return n, err
}
func (c *timeoutConn) Write(b []byte) (n int, err error) {
c.SetWriteDeadline(time.Now().Add(c.timeout))
n, err = c.conn.Write(b)
c.SetReadDeadline(time.Now().Add(c.longTimeout))
return n, err
}
func (c *timeoutConn) Close() error {
return c.conn.Close()
}
func (c *timeoutConn) LocalAddr() net.Addr {
return c.conn.LocalAddr()
}
func (c *timeoutConn) RemoteAddr() net.Addr {
return c.conn.RemoteAddr()
}
func (c *timeoutConn) SetDeadline(t time.Time) error {
return c.conn.SetDeadline(t)
}
func (c *timeoutConn) SetReadDeadline(t time.Time) error {
return c.conn.SetReadDeadline(t)
}
func (c *timeoutConn) SetWriteDeadline(t time.Time) error {
return c.conn.SetWriteDeadline(t)
}
// urlMaker builds URL and resource
const (
urlTypeCname = 1
urlTypeIP = 2
urlTypeAliyun = 3
)
type urlMaker struct {
Scheme string // HTTP or HTTPS
NetLoc string // Host or IP
Type int // 1 CNAME, 2 IP, 3 ALIYUN
IsProxy bool // Proxy
}
// Init parses endpoint
func (um *urlMaker) Init(endpoint string, isCname bool, isProxy bool) error {
if strings.HasPrefix(endpoint, "http://") {
um.Scheme = "http"
um.NetLoc = endpoint[len("http://"):]
} else if strings.HasPrefix(endpoint, "https://") {
um.Scheme = "https"
um.NetLoc = endpoint[len("https://"):]
} else {
um.Scheme = "http"
um.NetLoc = endpoint
}
// use url.Parse() to get the real host
strUrl := um.Scheme + "://" + um.NetLoc
url, err := url.Parse(strUrl)
if err != nil {
return err
}
um.NetLoc = url.Host
host, _, err := net.SplitHostPort(um.NetLoc)
if err != nil {
host = um.NetLoc
if host[0] == '[' && host[len(host)-1] == ']' {
host = host[1 : len(host)-1]
}
}
ip := net.ParseIP(host)
if ip != nil {
um.Type = urlTypeIP
} else if isCname {
um.Type = urlTypeCname
} else {
um.Type = urlTypeAliyun
}
um.IsProxy = isProxy
return nil
}
// getURL gets URL
func (um urlMaker) getURL(bucket, object, params string) *url.URL {
host, path := um.buildURL(bucket, object)
addr := ""
if params == "" {
addr = fmt.Sprintf("%s://%s%s", um.Scheme, host, path)
} else {
addr = fmt.Sprintf("%s://%s%s?%s", um.Scheme, host, path, params)
}
uri, _ := url.ParseRequestURI(addr)
return uri
}
// getSignURL gets sign URL
func (um urlMaker) getSignURL(bucket, object, params string) string {
host, path := um.buildURL(bucket, object)
return fmt.Sprintf("%s://%s%s?%s", um.Scheme, host, path, params)
}
// getSignRtmpURL builds the signed RTMP URL
func (um urlMaker) getSignRtmpURL(bucket, channelName, params string) string {
host, path := um.buildURL(bucket, "live")
channelName = url.QueryEscape(channelName)
channelName = strings.Replace(channelName, "+", "%20", -1)
return fmt.Sprintf("rtmp://%s%s/%s?%s", host, path, channelName, params)
}
// buildURL builds URL
func (um urlMaker) buildURL(bucket, object string) (string, string) {
var host = ""
var path = ""
object = url.QueryEscape(object)
object = strings.Replace(object, "+", "%20", -1)
if um.Type == urlTypeCname {
host = um.NetLoc
path = "/" + object
} else if um.Type == urlTypeIP {
if bucket == "" {
host = um.NetLoc
path = "/"
} else {
host = um.NetLoc
path = fmt.Sprintf("/%s/%s", bucket, object)
}
} else {
if bucket == "" {
host = um.NetLoc
path = "/"
} else {
host = bucket + "." + um.NetLoc
path = "/" + object
}
}
return host, path
}
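// Illustrative buildURL outcomes (endpoint and names below are assumptions):
//   urlTypeAliyun: bucket "b", object "o", NetLoc "oss-cn-hangzhou.aliyuncs.com"
//                  -> host "b.oss-cn-hangzhou.aliyuncs.com", path "/o"
//   urlTypeIP:     same bucket and object, NetLoc "10.0.0.1"
//                  -> host "10.0.0.1", path "/b/o"
//   urlTypeCname:  same bucket and object, NetLoc "img.example.com"
//                  -> host "img.example.com", path "/o"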
// buildURLV4 builds URL for v4 signing
func (um urlMaker) buildURLV4(bucket, object string) (string, string) {
var host = ""
var path = ""
object = url.QueryEscape(object)
object = strings.Replace(object, "+", "%20", -1)
// no escape /
object = strings.Replace(object, "%2F", "/", -1)
if um.Type == urlTypeCname {
host = um.NetLoc
path = "/" + object
} else if um.Type == urlTypeIP {
if bucket == "" {
host = um.NetLoc
path = "/"
} else {
host = um.NetLoc
path = fmt.Sprintf("/%s/%s", bucket, object)
}
} else {
if bucket == "" {
host = um.NetLoc
path = "/"
} else {
host = bucket + "." + um.NetLoc
path = fmt.Sprintf("/%s/%s", bucket, object)
}
}
return host, path
}

@ -0,0 +1,264 @@
package oss
import "os"
// ACLType bucket/object ACL
type ACLType string
const (
// ACLPrivate definition : private read and write
ACLPrivate ACLType = "private"
// ACLPublicRead definition : public read and private write
ACLPublicRead ACLType = "public-read"
// ACLPublicReadWrite definition : public read and public write
ACLPublicReadWrite ACLType = "public-read-write"
// ACLDefault Object. It's only applicable for object.
ACLDefault ACLType = "default"
)
// VersioningStatus bucket versioning status
type VersioningStatus string
const (
// Versioning Status definition: Enabled
VersionEnabled VersioningStatus = "Enabled"
// Versioning Status definition: Suspended
VersionSuspended VersioningStatus = "Suspended"
)
// MetadataDirectiveType specifies whether to use the metadata of the source object when copying an object.
type MetadataDirectiveType string
const (
// MetaCopy the target object's metadata is copied from the source one
MetaCopy MetadataDirectiveType = "COPY"
// MetaReplace the target object's metadata is created as part of the copy request (not same as the source one)
MetaReplace MetadataDirectiveType = "REPLACE"
)
// TaggingDirectiveType specifies whether to use the tagging of the source object when copying an object.
type TaggingDirectiveType string
const (
// TaggingCopy the target object's tagging is copied from the source one
TaggingCopy TaggingDirectiveType = "COPY"
// TaggingReplace the target object's tagging is created as part of the copy request (not same as the source one)
TaggingReplace TaggingDirectiveType = "REPLACE"
)
// AlgorithmType specifies the server-side encryption algorithm name
type AlgorithmType string
const (
KMSAlgorithm AlgorithmType = "KMS"
AESAlgorithm AlgorithmType = "AES256"
SM4Algorithm AlgorithmType = "SM4"
)
// StorageClassType bucket storage type
type StorageClassType string
const (
// StorageStandard standard
StorageStandard StorageClassType = "Standard"
// StorageIA infrequent access
StorageIA StorageClassType = "IA"
// StorageArchive archive
StorageArchive StorageClassType = "Archive"
// StorageColdArchive cold archive
StorageColdArchive StorageClassType = "ColdArchive"
)
// DataRedundancyType bucket data redundancy type
type DataRedundancyType string
const (
// RedundancyLRS Local redundancy, default value
RedundancyLRS DataRedundancyType = "LRS"
// RedundancyZRS Same city redundancy
RedundancyZRS DataRedundancyType = "ZRS"
)
// ObjecthashFuncType object hash function type
type ObjecthashFuncType string
const (
HashFuncSha1 ObjecthashFuncType = "SHA-1"
HashFuncSha256 ObjecthashFuncType = "SHA-256"
)
// PayerType the type of request payer
type PayerType string
const (
// Requester the requester pays for the request
Requester PayerType = "Requester"
// BucketOwner the bucket owner pays for the request
BucketOwner PayerType = "BucketOwner"
)
// RestoreMode the restore mode for ColdArchive objects
type RestoreMode string
const (
// RestoreExpedited object will be restored in 1 hour
RestoreExpedited RestoreMode = "Expedited"
// RestoreStandard object will be restored in 2-5 hours
RestoreStandard RestoreMode = "Standard"
// RestoreBulk object will be restored in 5-10 hours
RestoreBulk RestoreMode = "Bulk"
)
// HTTPMethod HTTP request method
type HTTPMethod string
const (
// HTTPGet HTTP GET
HTTPGet HTTPMethod = "GET"
// HTTPPut HTTP PUT
HTTPPut HTTPMethod = "PUT"
// HTTPHead HTTP HEAD
HTTPHead HTTPMethod = "HEAD"
// HTTPPost HTTP POST
HTTPPost HTTPMethod = "POST"
// HTTPDelete HTTP DELETE
HTTPDelete HTTPMethod = "DELETE"
)
// HTTP headers
const (
HTTPHeaderAcceptEncoding string = "Accept-Encoding"
HTTPHeaderAuthorization = "Authorization"
HTTPHeaderCacheControl = "Cache-Control"
HTTPHeaderContentDisposition = "Content-Disposition"
HTTPHeaderContentEncoding = "Content-Encoding"
HTTPHeaderContentLength = "Content-Length"
HTTPHeaderContentMD5 = "Content-MD5"
HTTPHeaderContentType = "Content-Type"
HTTPHeaderContentLanguage = "Content-Language"
HTTPHeaderDate = "Date"
HTTPHeaderEtag = "ETag"
HTTPHeaderExpires = "Expires"
HTTPHeaderHost = "Host"
HTTPHeaderLastModified = "Last-Modified"
HTTPHeaderRange = "Range"
HTTPHeaderLocation = "Location"
HTTPHeaderOrigin = "Origin"
HTTPHeaderServer = "Server"
HTTPHeaderUserAgent = "User-Agent"
HTTPHeaderIfModifiedSince = "If-Modified-Since"
HTTPHeaderIfUnmodifiedSince = "If-Unmodified-Since"
HTTPHeaderIfMatch = "If-Match"
HTTPHeaderIfNoneMatch = "If-None-Match"
HTTPHeaderACReqMethod = "Access-Control-Request-Method"
HTTPHeaderACReqHeaders = "Access-Control-Request-Headers"
HTTPHeaderOssACL = "X-Oss-Acl"
HTTPHeaderOssMetaPrefix = "X-Oss-Meta-"
HTTPHeaderOssObjectACL = "X-Oss-Object-Acl"
HTTPHeaderOssSecurityToken = "X-Oss-Security-Token"
HTTPHeaderOssServerSideEncryption = "X-Oss-Server-Side-Encryption"
HTTPHeaderOssServerSideEncryptionKeyID = "X-Oss-Server-Side-Encryption-Key-Id"
HTTPHeaderOssServerSideDataEncryption = "X-Oss-Server-Side-Data-Encryption"
HTTPHeaderSSECAlgorithm = "X-Oss-Server-Side-Encryption-Customer-Algorithm"
HTTPHeaderSSECKey = "X-Oss-Server-Side-Encryption-Customer-Key"
HTTPHeaderSSECKeyMd5 = "X-Oss-Server-Side-Encryption-Customer-Key-MD5"
HTTPHeaderOssCopySource = "X-Oss-Copy-Source"
HTTPHeaderOssCopySourceRange = "X-Oss-Copy-Source-Range"
HTTPHeaderOssCopySourceIfMatch = "X-Oss-Copy-Source-If-Match"
HTTPHeaderOssCopySourceIfNoneMatch = "X-Oss-Copy-Source-If-None-Match"
HTTPHeaderOssCopySourceIfModifiedSince = "X-Oss-Copy-Source-If-Modified-Since"
HTTPHeaderOssCopySourceIfUnmodifiedSince = "X-Oss-Copy-Source-If-Unmodified-Since"
HTTPHeaderOssMetadataDirective = "X-Oss-Metadata-Directive"
HTTPHeaderOssNextAppendPosition = "X-Oss-Next-Append-Position"
HTTPHeaderOssRequestID = "X-Oss-Request-Id"
HTTPHeaderOssCRC64 = "X-Oss-Hash-Crc64ecma"
HTTPHeaderOssSymlinkTarget = "X-Oss-Symlink-Target"
HTTPHeaderOssStorageClass = "X-Oss-Storage-Class"
HTTPHeaderOssCallback = "X-Oss-Callback"
HTTPHeaderOssCallbackVar = "X-Oss-Callback-Var"
HTTPHeaderOssRequester = "X-Oss-Request-Payer"
HTTPHeaderOssTagging = "X-Oss-Tagging"
HTTPHeaderOssTaggingDirective = "X-Oss-Tagging-Directive"
HTTPHeaderOssTrafficLimit = "X-Oss-Traffic-Limit"
HTTPHeaderOssForbidOverWrite = "X-Oss-Forbid-Overwrite"
HTTPHeaderOssRangeBehavior = "X-Oss-Range-Behavior"
HTTPHeaderOssTaskID = "X-Oss-Task-Id"
HTTPHeaderOssHashCtx = "X-Oss-Hash-Ctx"
HTTPHeaderOssMd5Ctx = "X-Oss-Md5-Ctx"
HTTPHeaderAllowSameActionOverLap = "X-Oss-Allow-Same-Action-Overlap"
HttpHeaderOssDate = "X-Oss-Date"
HttpHeaderOssContentSha256 = "X-Oss-Content-Sha256"
)
// HTTP Param
const (
HTTPParamExpires = "Expires"
HTTPParamAccessKeyID = "OSSAccessKeyId"
HTTPParamSignature = "Signature"
HTTPParamSecurityToken = "security-token"
HTTPParamPlaylistName = "playlistName"
HTTPParamSignatureVersion = "x-oss-signature-version"
HTTPParamExpiresV2 = "x-oss-expires"
HTTPParamAccessKeyIDV2 = "x-oss-access-key-id"
HTTPParamSignatureV2 = "x-oss-signature"
HTTPParamAdditionalHeadersV2 = "x-oss-additional-headers"
)
// Other constants
const (
MaxPartSize = 5 * 1024 * 1024 * 1024 // Max part size, 5GB
MinPartSize = 100 * 1024 // Min part size, 100KB
FilePermMode = os.FileMode(0664) // Default file permission
TempFilePrefix = "oss-go-temp-" // Temp file prefix
TempFileSuffix = ".temp" // Temp file suffix
CheckpointFileSuffix = ".cp" // Checkpoint file suffix
NullVersion = "null"
DefaultContentSha256 = "UNSIGNED-PAYLOAD" // for v4 signature
Version = "v2.2.4" // Go SDK version
)
// FrameType
const (
DataFrameType = 8388609
ContinuousFrameType = 8388612
EndFrameType = 8388613
MetaEndFrameCSVType = 8388614
MetaEndFrameJSONType = 8388615
)
// AuthVersion the version of auth
type AuthVersionType string
const (
// AuthV1 v1
AuthV1 AuthVersionType = "v1"
// AuthV2 v2
AuthV2 AuthVersionType = "v2"
// AuthV4 v4
AuthV4 AuthVersionType = "v4"
)

@ -0,0 +1,123 @@
package oss
import (
"hash"
"hash/crc64"
)
// digest represents the partial evaluation of a checksum.
type digest struct {
crc uint64
tab *crc64.Table
}
// NewCRC creates a new hash.Hash64 computing the CRC64 checksum
// using the polynomial represented by the Table.
func NewCRC(tab *crc64.Table, init uint64) hash.Hash64 { return &digest{init, tab} }
// Size returns the number of bytes Sum will return.
func (d *digest) Size() int { return crc64.Size }
// BlockSize returns the hash's underlying block size.
// The Write method must be able to accept any amount
// of data, but it may operate more efficiently if all writes
// are a multiple of the block size.
func (d *digest) BlockSize() int { return 1 }
// Reset resets the hash to its initial state.
func (d *digest) Reset() { d.crc = 0 }
// Write (via the embedded io.Writer interface) adds more data to the running hash.
// It never returns an error.
func (d *digest) Write(p []byte) (n int, err error) {
d.crc = crc64.Update(d.crc, d.tab, p)
return len(p), nil
}
// Sum64 returns CRC64 value.
func (d *digest) Sum64() uint64 { return d.crc }
// Sum returns hash value.
func (d *digest) Sum(in []byte) []byte {
s := d.Sum64()
return append(in, byte(s>>56), byte(s>>48), byte(s>>40), byte(s>>32), byte(s>>24), byte(s>>16), byte(s>>8), byte(s))
}
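// Illustrative usage (standalone sketch; the import path assumes the SDK's
// canonical module). NewCRC with CrcTable streams a CRC64-ECMA checksum:
package main

import (
	"fmt"

	"github.com/aliyun/aliyun-oss-go-sdk/oss"
)

func main() {
	crc := oss.NewCRC(oss.CrcTable(), 0)
	crc.Write([]byte("hello world"))
	fmt.Printf("crc64: %d\n", crc.Sum64())
}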
// gf2Dim dimension of GF(2) vectors (length of CRC)
const gf2Dim int = 64
func gf2MatrixTimes(mat []uint64, vec uint64) uint64 {
var sum uint64
for i := 0; vec != 0; i++ {
if vec&1 != 0 {
sum ^= mat[i]
}
vec >>= 1
}
return sum
}
func gf2MatrixSquare(square []uint64, mat []uint64) {
for n := 0; n < gf2Dim; n++ {
square[n] = gf2MatrixTimes(mat, mat[n])
}
}
// CRC64Combine combines CRC64
func CRC64Combine(crc1 uint64, crc2 uint64, len2 uint64) uint64 {
var even [gf2Dim]uint64 // Even-power-of-two zeros operator
var odd [gf2Dim]uint64 // Odd-power-of-two zeros operator
// Degenerate case
if len2 == 0 {
return crc1
}
// Put operator for one zero bit in odd
odd[0] = crc64.ECMA // CRC64 polynomial
var row uint64 = 1
for n := 1; n < gf2Dim; n++ {
odd[n] = row
row <<= 1
}
// Put operator for two zero bits in even
gf2MatrixSquare(even[:], odd[:])
// Put operator for four zero bits in odd
gf2MatrixSquare(odd[:], even[:])
// Apply len2 zeros to crc1, first square will put the operator for one zero byte, eight zero bits, in even
for {
// Apply zeros operator for this bit of len2
gf2MatrixSquare(even[:], odd[:])
if len2&1 != 0 {
crc1 = gf2MatrixTimes(even[:], crc1)
}
len2 >>= 1
// If no more bits set, then done
if len2 == 0 {
break
}
// Another iteration of the loop with odd and even swapped
gf2MatrixSquare(odd[:], even[:])
if len2&1 != 0 {
crc1 = gf2MatrixTimes(odd[:], crc1)
}
len2 >>= 1
// If no more bits set, then done
if len2 == 0 {
break
}
}
// Return combined CRC
crc1 ^= crc2
return crc1
}
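// Illustrative check (standalone sketch) of the combine property that makes
// multipart CRC verification possible:
// CRC(a+b) == CRC64Combine(CRC(a), CRC(b), len(b)).
package main

import (
	"fmt"
	"hash/crc64"

	"github.com/aliyun/aliyun-oss-go-sdk/oss"
)

func main() {
	tab := crc64.MakeTable(crc64.ECMA)
	a, b := []byte("hello "), []byte("world")
	whole := crc64.Checksum(append(append([]byte{}, a...), b...), tab)
	combined := oss.CRC64Combine(crc64.Checksum(a, tab), crc64.Checksum(b, tab), uint64(len(b)))
	fmt.Println(whole == combined) // prints true
}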

@ -0,0 +1,567 @@
package oss
import (
"crypto/md5"
"encoding/base64"
"encoding/json"
"errors"
"fmt"
"hash"
"hash/crc64"
"io"
"io/ioutil"
"net/http"
"os"
"path/filepath"
"strconv"
"time"
)
// DownloadFile downloads files with multipart download.
//
// objectKey the object key.
// filePath the local file to download from objectKey in OSS.
// partSize the part size in bytes.
// options object's constraints, check out GetObject for the reference.
//
// error it's nil when the call succeeds, otherwise it's an error object.
//
func (bucket Bucket) DownloadFile(objectKey, filePath string, partSize int64, options ...Option) error {
if partSize < 1 {
return errors.New("oss: part size smaller than 1")
}
uRange, err := GetRangeConfig(options)
if err != nil {
return err
}
cpConf := getCpConfig(options)
routines := getRoutines(options)
var strVersionId string
versionId, _ := FindOption(options, "versionId", nil)
if versionId != nil {
strVersionId = versionId.(string)
}
if cpConf != nil && cpConf.IsEnable {
cpFilePath := getDownloadCpFilePath(cpConf, bucket.BucketName, objectKey, strVersionId, filePath)
if cpFilePath != "" {
return bucket.downloadFileWithCp(objectKey, filePath, partSize, options, cpFilePath, routines, uRange)
}
}
return bucket.downloadFile(objectKey, filePath, partSize, options, routines, uRange)
}
func getDownloadCpFilePath(cpConf *cpConfig, srcBucket, srcObject, versionId, destFile string) string {
if cpConf.FilePath == "" && cpConf.DirPath != "" {
src := fmt.Sprintf("oss://%v/%v", srcBucket, srcObject)
absPath, _ := filepath.Abs(destFile)
cpFileName := getCpFileName(src, absPath, versionId)
cpConf.FilePath = cpConf.DirPath + string(os.PathSeparator) + cpFileName
}
return cpConf.FilePath
}
// downloadWorkerArg is download worker's parameters
type downloadWorkerArg struct {
bucket *Bucket
key string
filePath string
options []Option
hook downloadPartHook
enableCRC bool
}
// downloadPartHook is a hook for testing
type downloadPartHook func(part downloadPart) error
var downloadPartHooker downloadPartHook = defaultDownloadPartHook
func defaultDownloadPartHook(part downloadPart) error {
return nil
}
// defaultDownloadProgressListener defines the default ProgressListener; it shields the ProgressListener in GetObject's options.
type defaultDownloadProgressListener struct {
}
// ProgressChanged no-ops
func (listener *defaultDownloadProgressListener) ProgressChanged(event *ProgressEvent) {
}
// downloadWorker
func downloadWorker(id int, arg downloadWorkerArg, jobs <-chan downloadPart, results chan<- downloadPart, failed chan<- error, die <-chan bool) {
for part := range jobs {
if err := arg.hook(part); err != nil {
failed <- err
break
}
// Resolve options
r := Range(part.Start, part.End)
p := Progress(&defaultDownloadProgressListener{})
var respHeader http.Header
opts := make([]Option, len(arg.options)+3)
// Append in order; the order must not be reversed!
opts = append(opts, arg.options...)
opts = append(opts, r, p, GetResponseHeader(&respHeader))
rd, err := arg.bucket.GetObject(arg.key, opts...)
if err != nil {
failed <- err
break
}
defer rd.Close()
var crcCalc hash.Hash64
if arg.enableCRC {
crcCalc = crc64.New(CrcTable())
contentLen := part.End - part.Start + 1
rd = ioutil.NopCloser(TeeReader(rd, crcCalc, contentLen, nil, nil))
}
defer rd.Close()
select {
case <-die:
return
default:
}
fd, err := os.OpenFile(arg.filePath, os.O_WRONLY, FilePermMode)
if err != nil {
failed <- err
break
}
_, err = fd.Seek(part.Start-part.Offset, io.SeekStart)
if err != nil {
fd.Close()
failed <- err
break
}
startT := time.Now().UnixNano() / 1000 / 1000 / 1000
_, err = io.Copy(fd, rd)
endT := time.Now().UnixNano() / 1000 / 1000 / 1000
if err != nil {
arg.bucket.Client.Config.WriteLog(Debug, "download part error,cost:%d second,part number:%d,request id:%s,error:%s.\n", endT-startT, part.Index, GetRequestId(respHeader), err.Error())
fd.Close()
failed <- err
break
}
if arg.enableCRC {
part.CRC64 = crcCalc.Sum64()
}
fd.Close()
results <- part
}
}
// downloadScheduler
func downloadScheduler(jobs chan downloadPart, parts []downloadPart) {
for _, part := range parts {
jobs <- part
}
close(jobs)
}
// downloadPart defines download part
type downloadPart struct {
Index int // Part number, starting from 0
Start int64 // Start index
End int64 // End index
Offset int64 // Offset
CRC64 uint64 // CRC check value of part
}
// getDownloadParts gets download parts
func getDownloadParts(objectSize, partSize int64, uRange *UnpackedRange) []downloadPart {
parts := []downloadPart{}
part := downloadPart{}
i := 0
start, end := AdjustRange(uRange, objectSize)
for offset := start; offset < end; offset += partSize {
part.Index = i
part.Start = offset
part.End = GetPartEnd(offset, end, partSize)
part.Offset = start
part.CRC64 = 0
parts = append(parts, part)
i++
}
return parts
}
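// Illustrative sketch (standalone, with assumed sizes) of the partitioning
// arithmetic above: a 10 MiB object split into 4 MiB parts.
package main

import "fmt"

func main() {
	objectSize, partSize := int64(10<<20), int64(4<<20)
	start, end := int64(0), objectSize // whole-object download, no range
	for i, offset := 0, start; offset < end; i, offset = i+1, offset+partSize {
		partEnd := offset + partSize - 1
		if partEnd >= end {
			partEnd = end - 1
		}
		fmt.Printf("part %d: [%d, %d]\n", i, offset, partEnd)
	}
	// part 0: [0, 4194303]  part 1: [4194304, 8388607]  part 2: [8388608, 10485759]
}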
// getObjectBytes gets object bytes length
func getObjectBytes(parts []downloadPart) int64 {
var ob int64
for _, part := range parts {
ob += (part.End - part.Start + 1)
}
return ob
}
// combineCRCInParts calculates the total CRC of continuous parts
func combineCRCInParts(dps []downloadPart) uint64 {
if len(dps) == 0 {
return 0
}
crc := dps[0].CRC64
for i := 1; i < len(dps); i++ {
crc = CRC64Combine(crc, dps[i].CRC64, (uint64)(dps[i].End-dps[i].Start+1))
}
return crc
}
// downloadFile downloads file concurrently without checkpoint.
func (bucket Bucket) downloadFile(objectKey, filePath string, partSize int64, options []Option, routines int, uRange *UnpackedRange) error {
tempFilePath := filePath + TempFileSuffix
listener := GetProgressListener(options)
// If the file does not exist, create one. If it exists, the download will overwrite it.
fd, err := os.OpenFile(tempFilePath, os.O_WRONLY|os.O_CREATE, FilePermMode)
if err != nil {
return err
}
fd.Close()
// Get the object's detailed meta to learn the whole object size;
// the Range header must be removed to get the whole object size
skipOptions := DeleteOption(options, HTTPHeaderRange)
meta, err := bucket.GetObjectDetailedMeta(objectKey, skipOptions...)
if err != nil {
return err
}
objectSize, err := strconv.ParseInt(meta.Get(HTTPHeaderContentLength), 10, 64)
if err != nil {
return err
}
enableCRC := false
expectedCRC := (uint64)(0)
if bucket.GetConfig().IsEnableCRC && meta.Get(HTTPHeaderOssCRC64) != "" {
if uRange == nil || (!uRange.HasStart && !uRange.HasEnd) {
enableCRC = true
expectedCRC, _ = strconv.ParseUint(meta.Get(HTTPHeaderOssCRC64), 10, 64)
}
}
// Get the parts of the file
parts := getDownloadParts(objectSize, partSize, uRange)
jobs := make(chan downloadPart, len(parts))
results := make(chan downloadPart, len(parts))
failed := make(chan error)
die := make(chan bool)
var completedBytes int64
totalBytes := getObjectBytes(parts)
event := newProgressEvent(TransferStartedEvent, 0, totalBytes, 0)
publishProgress(listener, event)
// Start the download workers
arg := downloadWorkerArg{&bucket, objectKey, tempFilePath, options, downloadPartHooker, enableCRC}
for w := 1; w <= routines; w++ {
go downloadWorker(w, arg, jobs, results, failed, die)
}
// Download parts concurrently
go downloadScheduler(jobs, parts)
// Wait for the part downloads to finish
completed := 0
for completed < len(parts) {
select {
case part := <-results:
completed++
downBytes := (part.End - part.Start + 1)
completedBytes += downBytes
parts[part.Index].CRC64 = part.CRC64
event = newProgressEvent(TransferDataEvent, completedBytes, totalBytes, downBytes)
publishProgress(listener, event)
case err := <-failed:
close(die)
event = newProgressEvent(TransferFailedEvent, completedBytes, totalBytes, 0)
publishProgress(listener, event)
return err
}
if completed >= len(parts) {
break
}
}
event = newProgressEvent(TransferCompletedEvent, completedBytes, totalBytes, 0)
publishProgress(listener, event)
if enableCRC {
actualCRC := combineCRCInParts(parts)
err = CheckDownloadCRC(actualCRC, expectedCRC)
if err != nil {
return err
}
}
return os.Rename(tempFilePath, filePath)
}
// ----- Concurrent download with checkpoint -----
const downloadCpMagic = "92611BED-89E2-46B6-89E5-72F273D4B0A3"
type downloadCheckpoint struct {
Magic string // Magic
MD5 string // Checkpoint content MD5
FilePath string // Local file
Object string // Key
ObjStat objectStat // Object status
Parts []downloadPart // All download parts
PartStat []bool // Parts' download status
Start int64 // Start point of the file
End int64 // End point of the file
enableCRC bool // Whether has CRC check
CRC uint64 // CRC check value
}
type objectStat struct {
Size int64 // Object size
LastModified string // Last modified time
Etag string // Etag
}
// isValid reports whether the checkpoint data is valid: it returns true when the checkpoint itself is intact and the object has not been updated.
func (cp downloadCheckpoint) isValid(meta http.Header, uRange *UnpackedRange) (bool, error) {
// Compare the CP's Magic and the MD5
cpb := cp
cpb.MD5 = ""
js, _ := json.Marshal(cpb)
sum := md5.Sum(js)
b64 := base64.StdEncoding.EncodeToString(sum[:])
if cp.Magic != downloadCpMagic || b64 != cp.MD5 {
return false, nil
}
objectSize, err := strconv.ParseInt(meta.Get(HTTPHeaderContentLength), 10, 64)
if err != nil {
return false, err
}
// Compare the object size, last modified time and etag
if cp.ObjStat.Size != objectSize ||
cp.ObjStat.LastModified != meta.Get(HTTPHeaderLastModified) ||
cp.ObjStat.Etag != meta.Get(HTTPHeaderEtag) {
return false, nil
}
// Check the download range
if uRange != nil {
start, end := AdjustRange(uRange, objectSize)
if start != cp.Start || end != cp.End {
return false, nil
}
}
return true, nil
}
// load checkpoint from local file
func (cp *downloadCheckpoint) load(filePath string) error {
contents, err := ioutil.ReadFile(filePath)
if err != nil {
return err
}
err = json.Unmarshal(contents, cp)
return err
}
// dump serializes the checkpoint to a local file
func (cp *downloadCheckpoint) dump(filePath string) error {
bcp := *cp
// Calculate MD5
bcp.MD5 = ""
js, err := json.Marshal(bcp)
if err != nil {
return err
}
sum := md5.Sum(js)
b64 := base64.StdEncoding.EncodeToString(sum[:])
bcp.MD5 = b64
// Serialize
js, err = json.Marshal(bcp)
if err != nil {
return err
}
// Dump
return ioutil.WriteFile(filePath, js, FilePermMode)
}
// todoParts gets unfinished parts
func (cp downloadCheckpoint) todoParts() []downloadPart {
dps := []downloadPart{}
for i, ps := range cp.PartStat {
if !ps {
dps = append(dps, cp.Parts[i])
}
}
return dps
}
// getCompletedBytes gets completed size
func (cp downloadCheckpoint) getCompletedBytes() int64 {
var completedBytes int64
for i, part := range cp.Parts {
if cp.PartStat[i] {
completedBytes += (part.End - part.Start + 1)
}
}
return completedBytes
}
// prepare initiates download tasks
func (cp *downloadCheckpoint) prepare(meta http.Header, bucket *Bucket, objectKey, filePath string, partSize int64, uRange *UnpackedRange) error {
// CP
cp.Magic = downloadCpMagic
cp.FilePath = filePath
cp.Object = objectKey
objectSize, err := strconv.ParseInt(meta.Get(HTTPHeaderContentLength), 10, 64)
if err != nil {
return err
}
cp.ObjStat.Size = objectSize
cp.ObjStat.LastModified = meta.Get(HTTPHeaderLastModified)
cp.ObjStat.Etag = meta.Get(HTTPHeaderEtag)
if bucket.GetConfig().IsEnableCRC && meta.Get(HTTPHeaderOssCRC64) != "" {
if uRange == nil || (!uRange.HasStart && !uRange.HasEnd) {
cp.enableCRC = true
cp.CRC, _ = strconv.ParseUint(meta.Get(HTTPHeaderOssCRC64), 10, 64)
}
}
// Parts
cp.Parts = getDownloadParts(objectSize, partSize, uRange)
cp.PartStat = make([]bool, len(cp.Parts))
for i := range cp.PartStat {
cp.PartStat[i] = false
}
return nil
}
func (cp *downloadCheckpoint) complete(cpFilePath, downFilepath string) error {
err := os.Rename(downFilepath, cp.FilePath)
if err != nil {
return err
}
return os.Remove(cpFilePath)
}
// downloadFileWithCp downloads files with checkpoint.
func (bucket Bucket) downloadFileWithCp(objectKey, filePath string, partSize int64, options []Option, cpFilePath string, routines int, uRange *UnpackedRange) error {
tempFilePath := filePath + TempFileSuffix
listener := GetProgressListener(options)
// Load checkpoint data.
dcp := downloadCheckpoint{}
err := dcp.load(cpFilePath)
if err != nil {
os.Remove(cpFilePath)
}
// Get the object's detailed meta to learn the whole object size;
// the Range header must be removed to get the whole object size
skipOptions := DeleteOption(options, HTTPHeaderRange)
meta, err := bucket.GetObjectDetailedMeta(objectKey, skipOptions...)
if err != nil {
return err
}
// On load error or invalid data, re-initialize the download.
valid, err := dcp.isValid(meta, uRange)
if err != nil || !valid {
if err = dcp.prepare(meta, &bucket, objectKey, filePath, partSize, uRange); err != nil {
return err
}
os.Remove(cpFilePath)
}
// Create the file if it does not exist. Otherwise the part downloads will overwrite it.
fd, err := os.OpenFile(tempFilePath, os.O_WRONLY|os.O_CREATE, FilePermMode)
if err != nil {
return err
}
fd.Close()
// Unfinished parts
parts := dcp.todoParts()
jobs := make(chan downloadPart, len(parts))
results := make(chan downloadPart, len(parts))
failed := make(chan error)
die := make(chan bool)
completedBytes := dcp.getCompletedBytes()
event := newProgressEvent(TransferStartedEvent, completedBytes, dcp.ObjStat.Size, 0)
publishProgress(listener, event)
// Start the download worker routines
arg := downloadWorkerArg{&bucket, objectKey, tempFilePath, options, downloadPartHooker, dcp.enableCRC}
for w := 1; w <= routines; w++ {
go downloadWorker(w, arg, jobs, results, failed, die)
}
// Schedule parts for concurrent download
go downloadScheduler(jobs, parts)
// Wait until all parts have finished downloading
completed := 0
for completed < len(parts) {
select {
case part := <-results:
completed++
dcp.PartStat[part.Index] = true
dcp.Parts[part.Index].CRC64 = part.CRC64
dcp.dump(cpFilePath)
downBytes := (part.End - part.Start + 1)
completedBytes += downBytes
event = newProgressEvent(TransferDataEvent, completedBytes, dcp.ObjStat.Size, downBytes)
publishProgress(listener, event)
case err := <-failed:
close(die)
event = newProgressEvent(TransferFailedEvent, completedBytes, dcp.ObjStat.Size, 0)
publishProgress(listener, event)
return err
}
if completed >= len(parts) {
break
}
}
event = newProgressEvent(TransferCompletedEvent, completedBytes, dcp.ObjStat.Size, 0)
publishProgress(listener, event)
if dcp.enableCRC {
actualCRC := combineCRCInParts(dcp.Parts)
err = CheckDownloadCRC(actualCRC, dcp.CRC)
if err != nil {
return err
}
}
return dcp.complete(cpFilePath, tempFilePath)
}
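// Usage sketch (illustrative; not part of the vendored source): resumable
// download through the exported DownloadFile API, which routes here when the
// Checkpoint option is enabled. Endpoint, credentials and names are placeholders.
//
//   client, err := oss.New("https://oss-cn-hangzhou.aliyuncs.com", "<accessKeyID>", "<accessKeySecret>")
//   if err != nil {
//       // handle error
//   }
//   bucket, err := client.Bucket("my-bucket")
//   if err != nil {
//       // handle error
//   }
//   // 1MB parts, 3 concurrent workers, checkpoint persisted next to the file.
//   err = bucket.DownloadFile("my-object", "/tmp/my-object", 1024*1024,
//       oss.Routines(3), oss.Checkpoint(true, "/tmp/my-object.cp"))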

@ -0,0 +1,94 @@
package oss
import (
"encoding/xml"
"fmt"
"net/http"
"strings"
)
// ServiceError contains fields of the error response from Oss Service REST API.
type ServiceError struct {
XMLName xml.Name `xml:"Error"`
Code string `xml:"Code"` // The error code returned from OSS to the caller
Message string `xml:"Message"` // The detail error message from OSS
RequestID string `xml:"RequestId"` // The UUID used to uniquely identify the request
HostID string `xml:"HostId"` // The OSS server cluster's Id
Endpoint string `xml:"Endpoint"`
RawMessage string // The raw messages from OSS
StatusCode int // HTTP status code
}
// Error implements the error interface
func (e ServiceError) Error() string {
if e.Endpoint == "" {
return fmt.Sprintf("oss: service returned error: StatusCode=%d, ErrorCode=%s, ErrorMessage=\"%s\", RequestId=%s",
e.StatusCode, e.Code, e.Message, e.RequestID)
}
return fmt.Sprintf("oss: service returned error: StatusCode=%d, ErrorCode=%s, ErrorMessage=\"%s\", RequestId=%s, Endpoint=%s",
e.StatusCode, e.Code, e.Message, e.RequestID, e.Endpoint)
}
// UnexpectedStatusCodeError is returned when a storage service responds with neither an error
// nor with an HTTP status code indicating success.
type UnexpectedStatusCodeError struct {
allowed []int // The expected HTTP status codes returned from OSS
got int // The actual HTTP status code from OSS
}
// Error implements the error interface
func (e UnexpectedStatusCodeError) Error() string {
s := func(i int) string { return fmt.Sprintf("%d %s", i, http.StatusText(i)) }
got := s(e.got)
expected := []string{}
for _, v := range e.allowed {
expected = append(expected, s(v))
}
return fmt.Sprintf("oss: status code from service response is %s; was expecting %s",
got, strings.Join(expected, " or "))
}
// Got is the actual status code returned by oss.
func (e UnexpectedStatusCodeError) Got() int {
return e.got
}
// CheckRespCode returns UnexpectedStatusCodeError if the given response code is not
// one of the allowed status codes; otherwise nil.
func CheckRespCode(respCode int, allowed []int) error {
for _, v := range allowed {
if respCode == v {
return nil
}
}
return UnexpectedStatusCodeError{allowed, respCode}
}
// CRCCheckError is returned when the client-side and server-side CRC values are inconsistent
type CRCCheckError struct {
clientCRC uint64 // Calculated CRC64 in client
serverCRC uint64 // Calculated CRC64 in server
operation string // Upload operations such as PutObject/AppendObject/UploadPart, etc
requestID string // The request id of this operation
}
// Error implements the error interface
func (e CRCCheckError) Error() string {
return fmt.Sprintf("oss: the crc of %s is inconsistent, client %d but server %d; request id is %s",
e.operation, e.clientCRC, e.serverCRC, e.requestID)
}
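// CheckDownloadCRC checks whether the client-calculated CRC64 of a download matches the server-side value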
func CheckDownloadCRC(clientCRC, serverCRC uint64) error {
if clientCRC == serverCRC {
return nil
}
return CRCCheckError{clientCRC, serverCRC, "DownloadFile", ""}
}
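// CheckCRC compares the response's client CRC with the server CRC when the server returned a CRC64 header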
func CheckCRC(resp *Response, operation string) error {
if resp.Headers.Get(HTTPHeaderOssCRC64) == "" || resp.ClientCRC == resp.ServerCRC {
return nil
}
return CRCCheckError{resp.ClientCRC, resp.ServerCRC, operation, resp.Headers.Get(HTTPHeaderOssRequestID)}
}
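// Usage sketch (illustrative; not part of the vendored source): callers can
// distinguish OSS service errors from transport errors with a type assertion.
//
//   err := bucket.PutObject("my-object", strings.NewReader("hello"))
//   if err != nil {
//       if serr, ok := err.(oss.ServiceError); ok {
//           // HTTP status, OSS error code and request id are all available.
//           log.Printf("oss: status=%d code=%s requestID=%s", serr.StatusCode, serr.Code, serr.RequestID)
//       } else {
//           log.Printf("other error: %v", err)
//       }
//   }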

@ -0,0 +1,28 @@
// +build !go1.7
// "golang.org/x/time/rate" is depended on golang context package go1.7 onward
// this file is only for build,not supports limit upload speed
package oss
import (
"fmt"
"io"
)
const (
perTokenBandwidthSize int = 1024
)
type OssLimiter struct {
}
type LimitSpeedReader struct {
io.ReadCloser
reader io.Reader
ossLimiter *OssLimiter
}
func GetOssLimiter(uploadSpeed int) (ossLimiter *OssLimiter, err error) {
err = fmt.Errorf("rate.Limiter is not supported below version go1.7")
return nil, err
}

@ -0,0 +1,90 @@
// +build go1.7
package oss
import (
"fmt"
"io"
"math"
"time"
"golang.org/x/time/rate"
)
const (
perTokenBandwidthSize int = 1024
)
// OssLimiter wraps rate.Limiter
type OssLimiter struct {
limiter *rate.Limiter
}
// GetOssLimiter creates an OssLimiter
// uploadSpeed is in KB/s
func GetOssLimiter(uploadSpeed int) (ossLimiter *OssLimiter, err error) {
limiter := rate.NewLimiter(rate.Limit(uploadSpeed), uploadSpeed)
// consume the initial full burst of tokens first so that the limiter behaves more accurately
limiter.AllowN(time.Now(), uploadSpeed)
return &OssLimiter{
limiter: limiter,
}, nil
}
// LimitSpeedReader limits the bandwidth of uploads
type LimitSpeedReader struct {
io.ReadCloser
reader io.Reader
ossLimiter *OssLimiter
}
// Read reads up to len(p) bytes, pacing consumption through the rate limiter
func (r *LimitSpeedReader) Read(p []byte) (n int, err error) {
n = 0
err = nil
start := 0
burst := r.ossLimiter.limiter.Burst()
var end int
var tmpN int
var tc int
for start < len(p) {
if start+burst*perTokenBandwidthSize < len(p) {
end = start + burst*perTokenBandwidthSize
} else {
end = len(p)
}
tmpN, err = r.reader.Read(p[start:end])
if tmpN > 0 {
n += tmpN
start = n
}
if err != nil {
return
}
tc = int(math.Ceil(float64(tmpN) / float64(perTokenBandwidthSize)))
now := time.Now()
re := r.ossLimiter.limiter.ReserveN(now, tc)
if !re.OK() {
err = fmt.Errorf("LimitSpeedReader.Read() failure,ReserveN error,start:%d,end:%d,burst:%d,perTokenBandwidthSize:%d",
start, end, burst, perTokenBandwidthSize)
return
}
timeDelay := re.Delay()
time.Sleep(timeDelay)
}
return
}
// Close ...
func (r *LimitSpeedReader) Close() error {
rc, ok := r.reader.(io.ReadCloser)
if ok {
return rc.Close()
}
return nil
}
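// Usage sketch (illustrative; not part of the vendored source): the limiter is
// normally wired up through Client.LimitUploadSpeed rather than constructed
// directly; the speed argument is in KB/s and requires go1.7+.
//
//   client, err := oss.New(endpoint, accessKeyID, accessKeySecret)
//   if err != nil {
//       // handle error
//   }
//   // Cap uploads at roughly 1 MB/s (1024 KB/s).
//   if err := client.LimitUploadSpeed(1024); err != nil {
//       // handle error
//   }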

@ -0,0 +1,257 @@
package oss
import (
"bytes"
"encoding/xml"
"fmt"
"io"
"net/http"
"strconv"
"time"
)
//
// CreateLiveChannel creates a live-channel
//
// channelName the name of the channel
// config configuration of the channel
//
// CreateLiveChannelResult the result of creating the live-channel
// error nil if success, otherwise error
//
func (bucket Bucket) CreateLiveChannel(channelName string, config LiveChannelConfiguration) (CreateLiveChannelResult, error) {
var out CreateLiveChannelResult
bs, err := xml.Marshal(config)
if err != nil {
return out, err
}
buffer := new(bytes.Buffer)
buffer.Write(bs)
params := map[string]interface{}{}
params["live"] = nil
resp, err := bucket.do("PUT", channelName, params, nil, buffer, nil)
if err != nil {
return out, err
}
defer resp.Body.Close()
err = xmlUnmarshal(resp.Body, &out)
return out, err
}
//
// PutLiveChannelStatus sets the status of the live-channel: enabled/disabled
//
// channelName the name of the channel
// status enabled/disabled
//
// error nil if success, otherwise error
//
func (bucket Bucket) PutLiveChannelStatus(channelName, status string) error {
params := map[string]interface{}{}
params["live"] = nil
params["status"] = status
resp, err := bucket.do("PUT", channelName, params, nil, nil, nil)
if err != nil {
return err
}
defer resp.Body.Close()
return CheckRespCode(resp.StatusCode, []int{http.StatusOK})
}
// PostVodPlaylist creates a playlist based on the specified playlist name, startTime and endTime
//
// channelName the name of the channel
// playlistName the name of the playlist, must end with ".m3u8"
// startTime the start time of the playlist
// endTime the end time of the playlist
//
// error nil if success, otherwise error
//
func (bucket Bucket) PostVodPlaylist(channelName, playlistName string, startTime, endTime time.Time) error {
params := map[string]interface{}{}
params["vod"] = nil
params["startTime"] = strconv.FormatInt(startTime.Unix(), 10)
params["endTime"] = strconv.FormatInt(endTime.Unix(), 10)
key := fmt.Sprintf("%s/%s", channelName, playlistName)
resp, err := bucket.do("POST", key, params, nil, nil, nil)
if err != nil {
return err
}
defer resp.Body.Close()
return CheckRespCode(resp.StatusCode, []int{http.StatusOK})
}
// GetVodPlaylist gets the playlist based on the specified channelName, startTime and endTime
//
// channelName the name of the channel
// startTime the start time of the playlist
// endTime the end time of the playlist
//
// io.ReadCloser reader instance for reading data from the response. Close() must be called after use; it is only valid when error is nil.
// error nil if success, otherwise error
//
func (bucket Bucket) GetVodPlaylist(channelName string, startTime, endTime time.Time) (io.ReadCloser, error) {
params := map[string]interface{}{}
params["vod"] = nil
params["startTime"] = strconv.FormatInt(startTime.Unix(), 10)
params["endTime"] = strconv.FormatInt(endTime.Unix(), 10)
resp, err := bucket.do("GET", channelName, params, nil, nil, nil)
if err != nil {
return nil, err
}
return resp.Body, nil
}
//
// GetLiveChannelStat gets the state of the live-channel
//
// channelName the name of the channel
//
// LiveChannelStat the state of the live-channel
// error nil if success, otherwise error
//
func (bucket Bucket) GetLiveChannelStat(channelName string) (LiveChannelStat, error) {
var out LiveChannelStat
params := map[string]interface{}{}
params["live"] = nil
params["comp"] = "stat"
resp, err := bucket.do("GET", channelName, params, nil, nil, nil)
if err != nil {
return out, err
}
defer resp.Body.Close()
err = xmlUnmarshal(resp.Body, &out)
return out, err
}
//
// GetLiveChannelInfo gets the configuration info of the live-channel
//
// channelName the name of the channel
//
// LiveChannelConfiguration the configuration info of the live-channel
// error nil if success, otherwise error
//
func (bucket Bucket) GetLiveChannelInfo(channelName string) (LiveChannelConfiguration, error) {
var out LiveChannelConfiguration
params := map[string]interface{}{}
params["live"] = nil
resp, err := bucket.do("GET", channelName, params, nil, nil, nil)
if err != nil {
return out, err
}
defer resp.Body.Close()
err = xmlUnmarshal(resp.Body, &out)
return out, err
}
//
// GetLiveChannelHistory gets the push records of the live-channel
//
// channelName the name of the channel
//
// LiveChannelHistory push records
// error nil if success, otherwise error
//
func (bucket Bucket) GetLiveChannelHistory(channelName string) (LiveChannelHistory, error) {
var out LiveChannelHistory
params := map[string]interface{}{}
params["live"] = nil
params["comp"] = "history"
resp, err := bucket.do("GET", channelName, params, nil, nil, nil)
if err != nil {
return out, err
}
defer resp.Body.Close()
err = xmlUnmarshal(resp.Body, &out)
return out, err
}
//
// ListLiveChannel lists the live-channels
//
// options Prefix: filter by names that start with the value of "Prefix"
// MaxKeys: the maximum count returned
// Marker: the cursor from which to start listing
//
// ListLiveChannelResult live-channel list
// error nil if success, otherwise error
//
func (bucket Bucket) ListLiveChannel(options ...Option) (ListLiveChannelResult, error) {
var out ListLiveChannelResult
params, err := GetRawParams(options)
if err != nil {
return out, err
}
params["live"] = nil
resp, err := bucket.do("GET", "", params, nil, nil, nil)
if err != nil {
return out, err
}
defer resp.Body.Close()
err = xmlUnmarshal(resp.Body, &out)
return out, err
}
//
// DeleteLiveChannel deletes the live-channel. If a client is still streaming to the live-channel, the operation will fail. Only the live-channel itself is deleted; the objects generated by the live-channel are not.
//
// channelName the name of the channel
//
// error nil if success, otherwise error
//
func (bucket Bucket) DeleteLiveChannel(channelName string) error {
params := map[string]interface{}{}
params["live"] = nil
if channelName == "" {
return fmt.Errorf("invalid argument: channel name is empty")
}
resp, err := bucket.do("DELETE", channelName, params, nil, nil, nil)
if err != nil {
return err
}
defer resp.Body.Close()
return CheckRespCode(resp.StatusCode, []int{http.StatusNoContent})
}
//
// SignRtmpURL generates an RTMP push-stream signature URL for the trusted user to push the RTMP stream to the live-channel.
//
// channelName the name of the channel
// playlistName the name of the playlist, must end with ".m3u8"
// expires expiration (in seconds)
//
// string the signed RTMP push-stream URL
// error nil if success, otherwise error
//
func (bucket Bucket) SignRtmpURL(channelName, playlistName string, expires int64) (string, error) {
if expires <= 0 {
return "", fmt.Errorf("invalid argument: %d, expires must greater than 0", expires)
}
expiration := time.Now().Unix() + expires
return bucket.Client.Conn.signRtmpURL(bucket.BucketName, channelName, playlistName, expiration), nil
}
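// Usage sketch (illustrative; not part of the vendored source): create an HLS
// live-channel, then sign an RTMP push URL valid for one hour. All field
// values are placeholders.
//
//   config := oss.LiveChannelConfiguration{
//       Description: "demo channel",
//       Status:      "enabled",
//       Target: oss.LiveChannelTarget{
//           Type:         "HLS",
//           FragDuration: 10,
//           FragCount:    3,
//           PlaylistName: "playlist.m3u8",
//       },
//   }
//   result, err := bucket.CreateLiveChannel("demo-channel", config)
//   if err != nil {
//       // handle error
//   }
//   _ = result.PublishUrls // RTMP ingest endpoints returned by OSS
//   signedURL, err := bucket.SignRtmpURL("demo-channel", "playlist.m3u8", 3600)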

@ -0,0 +1,572 @@
package oss
import (
"mime"
"path"
"strings"
)
var extToMimeType = map[string]string{
".xlsx": "application/vnd.openxmlformats-officedocument.spreadsheetml.sheet",
".xltx": "application/vnd.openxmlformats-officedocument.spreadsheetml.template",
".potx": "application/vnd.openxmlformats-officedocument.presentationml.template",
".ppsx": "application/vnd.openxmlformats-officedocument.presentationml.slideshow",
".pptx": "application/vnd.openxmlformats-officedocument.presentationml.presentation",
".sldx": "application/vnd.openxmlformats-officedocument.presentationml.slide",
".docx": "application/vnd.openxmlformats-officedocument.wordprocessingml.document",
".dotx": "application/vnd.openxmlformats-officedocument.wordprocessingml.template",
".xlam": "application/vnd.ms-excel.addin.macroEnabled.12",
".xlsb": "application/vnd.ms-excel.sheet.binary.macroEnabled.12",
".apk": "application/vnd.android.package-archive",
".hqx": "application/mac-binhex40",
".cpt": "application/mac-compactpro",
".doc": "application/msword",
".ogg": "application/ogg",
".pdf": "application/pdf",
".rtf": "text/rtf",
".mif": "application/vnd.mif",
".xls": "application/vnd.ms-excel",
".ppt": "application/vnd.ms-powerpoint",
".odc": "application/vnd.oasis.opendocument.chart",
".odb": "application/vnd.oasis.opendocument.database",
".odf": "application/vnd.oasis.opendocument.formula",
".odg": "application/vnd.oasis.opendocument.graphics",
".otg": "application/vnd.oasis.opendocument.graphics-template",
".odi": "application/vnd.oasis.opendocument.image",
".odp": "application/vnd.oasis.opendocument.presentation",
".otp": "application/vnd.oasis.opendocument.presentation-template",
".ods": "application/vnd.oasis.opendocument.spreadsheet",
".ots": "application/vnd.oasis.opendocument.spreadsheet-template",
".odt": "application/vnd.oasis.opendocument.text",
".odm": "application/vnd.oasis.opendocument.text-master",
".ott": "application/vnd.oasis.opendocument.text-template",
".oth": "application/vnd.oasis.opendocument.text-web",
".sxw": "application/vnd.sun.xml.writer",
".stw": "application/vnd.sun.xml.writer.template",
".sxc": "application/vnd.sun.xml.calc",
".stc": "application/vnd.sun.xml.calc.template",
".sxd": "application/vnd.sun.xml.draw",
".std": "application/vnd.sun.xml.draw.template",
".sxi": "application/vnd.sun.xml.impress",
".sti": "application/vnd.sun.xml.impress.template",
".sxg": "application/vnd.sun.xml.writer.global",
".sxm": "application/vnd.sun.xml.math",
".sis": "application/vnd.symbian.install",
".wbxml": "application/vnd.wap.wbxml",
".wmlc": "application/vnd.wap.wmlc",
".wmlsc": "application/vnd.wap.wmlscriptc",
".bcpio": "application/x-bcpio",
".torrent": "application/x-bittorrent",
".bz2": "application/x-bzip2",
".vcd": "application/x-cdlink",
".pgn": "application/x-chess-pgn",
".cpio": "application/x-cpio",
".csh": "application/x-csh",
".dvi": "application/x-dvi",
".spl": "application/x-futuresplash",
".gtar": "application/x-gtar",
".hdf": "application/x-hdf",
".jar": "application/x-java-archive",
".jnlp": "application/x-java-jnlp-file",
".js": "application/x-javascript",
".ksp": "application/x-kspread",
".chrt": "application/x-kchart",
".kil": "application/x-killustrator",
".latex": "application/x-latex",
".rpm": "application/x-rpm",
".sh": "application/x-sh",
".shar": "application/x-shar",
".swf": "application/x-shockwave-flash",
".sit": "application/x-stuffit",
".sv4cpio": "application/x-sv4cpio",
".sv4crc": "application/x-sv4crc",
".tar": "application/x-tar",
".tcl": "application/x-tcl",
".tex": "application/x-tex",
".man": "application/x-troff-man",
".me": "application/x-troff-me",
".ms": "application/x-troff-ms",
".ustar": "application/x-ustar",
".src": "application/x-wais-source",
".zip": "application/zip",
".m3u": "audio/x-mpegurl",
".ra": "audio/x-pn-realaudio",
".wav": "audio/x-wav",
".wma": "audio/x-ms-wma",
".wax": "audio/x-ms-wax",
".pdb": "chemical/x-pdb",
".xyz": "chemical/x-xyz",
".bmp": "image/bmp",
".gif": "image/gif",
".ief": "image/ief",
".png": "image/png",
".wbmp": "image/vnd.wap.wbmp",
".ras": "image/x-cmu-raster",
".pnm": "image/x-portable-anymap",
".pbm": "image/x-portable-bitmap",
".pgm": "image/x-portable-graymap",
".ppm": "image/x-portable-pixmap",
".rgb": "image/x-rgb",
".xbm": "image/x-xbitmap",
".xpm": "image/x-xpixmap",
".xwd": "image/x-xwindowdump",
".css": "text/css",
".rtx": "text/richtext",
".tsv": "text/tab-separated-values",
".jad": "text/vnd.sun.j2me.app-descriptor",
".wml": "text/vnd.wap.wml",
".wmls": "text/vnd.wap.wmlscript",
".etx": "text/x-setext",
".mxu": "video/vnd.mpegurl",
".flv": "video/x-flv",
".wm": "video/x-ms-wm",
".wmv": "video/x-ms-wmv",
".wmx": "video/x-ms-wmx",
".wvx": "video/x-ms-wvx",
".avi": "video/x-msvideo",
".movie": "video/x-sgi-movie",
".ice": "x-conference/x-cooltalk",
".3gp": "video/3gpp",
".ai": "application/postscript",
".aif": "audio/x-aiff",
".aifc": "audio/x-aiff",
".aiff": "audio/x-aiff",
".asc": "text/plain",
".atom": "application/atom+xml",
".au": "audio/basic",
".bin": "application/octet-stream",
".cdf": "application/x-netcdf",
".cgm": "image/cgm",
".class": "application/octet-stream",
".dcr": "application/x-director",
".dif": "video/x-dv",
".dir": "application/x-director",
".djv": "image/vnd.djvu",
".djvu": "image/vnd.djvu",
".dll": "application/octet-stream",
".dmg": "application/octet-stream",
".dms": "application/octet-stream",
".dtd": "application/xml-dtd",
".dv": "video/x-dv",
".dxr": "application/x-director",
".eps": "application/postscript",
".exe": "application/octet-stream",
".ez": "application/andrew-inset",
".gram": "application/srgs",
".grxml": "application/srgs+xml",
".gz": "application/x-gzip",
".htm": "text/html",
".html": "text/html",
".ico": "image/x-icon",
".ics": "text/calendar",
".ifb": "text/calendar",
".iges": "model/iges",
".igs": "model/iges",
".jp2": "image/jp2",
".jpe": "image/jpeg",
".jpeg": "image/jpeg",
".jpg": "image/jpeg",
".kar": "audio/midi",
".lha": "application/octet-stream",
".lzh": "application/octet-stream",
".m4a": "audio/mp4a-latm",
".m4p": "audio/mp4a-latm",
".m4u": "video/vnd.mpegurl",
".m4v": "video/x-m4v",
".mac": "image/x-macpaint",
".mathml": "application/mathml+xml",
".mesh": "model/mesh",
".mid": "audio/midi",
".midi": "audio/midi",
".mov": "video/quicktime",
".mp2": "audio/mpeg",
".mp3": "audio/mpeg",
".mp4": "video/mp4",
".mpe": "video/mpeg",
".mpeg": "video/mpeg",
".mpg": "video/mpeg",
".mpga": "audio/mpeg",
".msh": "model/mesh",
".nc": "application/x-netcdf",
".oda": "application/oda",
".ogv": "video/ogv",
".pct": "image/pict",
".pic": "image/pict",
".pict": "image/pict",
".pnt": "image/x-macpaint",
".pntg": "image/x-macpaint",
".ps": "application/postscript",
".qt": "video/quicktime",
".qti": "image/x-quicktime",
".qtif": "image/x-quicktime",
".ram": "audio/x-pn-realaudio",
".rdf": "application/rdf+xml",
".rm": "application/vnd.rn-realmedia",
".roff": "application/x-troff",
".sgm": "text/sgml",
".sgml": "text/sgml",
".silo": "model/mesh",
".skd": "application/x-koan",
".skm": "application/x-koan",
".skp": "application/x-koan",
".skt": "application/x-koan",
".smi": "application/smil",
".smil": "application/smil",
".snd": "audio/basic",
".so": "application/octet-stream",
".svg": "image/svg+xml",
".t": "application/x-troff",
".texi": "application/x-texinfo",
".texinfo": "application/x-texinfo",
".tif": "image/tiff",
".tiff": "image/tiff",
".tr": "application/x-troff",
".txt": "text/plain",
".vrml": "model/vrml",
".vxml": "application/voicexml+xml",
".webm": "video/webm",
".wrl": "model/vrml",
".xht": "application/xhtml+xml",
".xhtml": "application/xhtml+xml",
".xml": "application/xml",
".xsl": "application/xml",
".xslt": "application/xslt+xml",
".xul": "application/vnd.mozilla.xul+xml",
".webp": "image/webp",
".323": "text/h323",
".aab": "application/x-authoware-bin",
".aam": "application/x-authoware-map",
".aas": "application/x-authoware-seg",
".acx": "application/internet-property-stream",
".als": "audio/X-Alpha5",
".amc": "application/x-mpeg",
".ani": "application/octet-stream",
".asd": "application/astound",
".asf": "video/x-ms-asf",
".asn": "application/astound",
".asp": "application/x-asap",
".asr": "video/x-ms-asf",
".asx": "video/x-ms-asf",
".avb": "application/octet-stream",
".awb": "audio/amr-wb",
".axs": "application/olescript",
".bas": "text/plain",
".bin ": "application/octet-stream",
".bld": "application/bld",
".bld2": "application/bld2",
".bpk": "application/octet-stream",
".c": "text/plain",
".cal": "image/x-cals",
".cat": "application/vnd.ms-pkiseccat",
".ccn": "application/x-cnc",
".cco": "application/x-cocoa",
".cer": "application/x-x509-ca-cert",
".cgi": "magnus-internal/cgi",
".chat": "application/x-chat",
".clp": "application/x-msclip",
".cmx": "image/x-cmx",
".co": "application/x-cult3d-object",
".cod": "image/cis-cod",
".conf": "text/plain",
".cpp": "text/plain",
".crd": "application/x-mscardfile",
".crl": "application/pkix-crl",
".crt": "application/x-x509-ca-cert",
".csm": "chemical/x-csml",
".csml": "chemical/x-csml",
".cur": "application/octet-stream",
".dcm": "x-lml/x-evm",
".dcx": "image/x-dcx",
".der": "application/x-x509-ca-cert",
".dhtml": "text/html",
".dot": "application/msword",
".dwf": "drawing/x-dwf",
".dwg": "application/x-autocad",
".dxf": "application/x-autocad",
".ebk": "application/x-expandedbook",
".emb": "chemical/x-embl-dl-nucleotide",
".embl": "chemical/x-embl-dl-nucleotide",
".epub": "application/epub+zip",
".eri": "image/x-eri",
".es": "audio/echospeech",
".esl": "audio/echospeech",
".etc": "application/x-earthtime",
".evm": "x-lml/x-evm",
".evy": "application/envoy",
".fh4": "image/x-freehand",
".fh5": "image/x-freehand",
".fhc": "image/x-freehand",
".fif": "application/fractals",
".flr": "x-world/x-vrml",
".fm": "application/x-maker",
".fpx": "image/x-fpx",
".fvi": "video/isivideo",
".gau": "chemical/x-gaussian-input",
".gca": "application/x-gca-compressed",
".gdb": "x-lml/x-gdb",
".gps": "application/x-gps",
".h": "text/plain",
".hdm": "text/x-hdml",
".hdml": "text/x-hdml",
".hlp": "application/winhlp",
".hta": "application/hta",
".htc": "text/x-component",
".hts": "text/html",
".htt": "text/webviewhtml",
".ifm": "image/gif",
".ifs": "image/ifs",
".iii": "application/x-iphone",
".imy": "audio/melody",
".ins": "application/x-internet-signup",
".ips": "application/x-ipscript",
".ipx": "application/x-ipix",
".isp": "application/x-internet-signup",
".it": "audio/x-mod",
".itz": "audio/x-mod",
".ivr": "i-world/i-vrml",
".j2k": "image/j2k",
".jam": "application/x-jam",
".java": "text/plain",
".jfif": "image/pipeg",
".jpz": "image/jpeg",
".jwc": "application/jwc",
".kjx": "application/x-kjx",
".lak": "x-lml/x-lak",
".lcc": "application/fastman",
".lcl": "application/x-digitalloca",
".lcr": "application/x-digitalloca",
".lgh": "application/lgh",
".lml": "x-lml/x-lml",
".lmlpack": "x-lml/x-lmlpack",
".log": "text/plain",
".lsf": "video/x-la-asf",
".lsx": "video/x-la-asf",
".m13": "application/x-msmediaview",
".m14": "application/x-msmediaview",
".m15": "audio/x-mod",
".m3url": "audio/x-mpegurl",
".m4b": "audio/mp4a-latm",
".ma1": "audio/ma1",
".ma2": "audio/ma2",
".ma3": "audio/ma3",
".ma5": "audio/ma5",
".map": "magnus-internal/imagemap",
".mbd": "application/mbedlet",
".mct": "application/x-mascot",
".mdb": "application/x-msaccess",
".mdz": "audio/x-mod",
".mel": "text/x-vmel",
".mht": "message/rfc822",
".mhtml": "message/rfc822",
".mi": "application/x-mif",
".mil": "image/x-cals",
".mio": "audio/x-mio",
".mmf": "application/x-skt-lbs",
".mng": "video/x-mng",
".mny": "application/x-msmoney",
".moc": "application/x-mocha",
".mocha": "application/x-mocha",
".mod": "audio/x-mod",
".mof": "application/x-yumekara",
".mol": "chemical/x-mdl-molfile",
".mop": "chemical/x-mopac-input",
".mpa": "video/mpeg",
".mpc": "application/vnd.mpohun.certificate",
".mpg4": "video/mp4",
".mpn": "application/vnd.mophun.application",
".mpp": "application/vnd.ms-project",
".mps": "application/x-mapserver",
".mpv2": "video/mpeg",
".mrl": "text/x-mrml",
".mrm": "application/x-mrm",
".msg": "application/vnd.ms-outlook",
".mts": "application/metastream",
".mtx": "application/metastream",
".mtz": "application/metastream",
".mvb": "application/x-msmediaview",
".mzv": "application/metastream",
".nar": "application/zip",
".nbmp": "image/nbmp",
".ndb": "x-lml/x-ndb",
".ndwn": "application/ndwn",
".nif": "application/x-nif",
".nmz": "application/x-scream",
".nokia-op-logo": "image/vnd.nok-oplogo-color",
".npx": "application/x-netfpx",
".nsnd": "audio/nsnd",
".nva": "application/x-neva1",
".nws": "message/rfc822",
".oom": "application/x-AtlasMate-Plugin",
".p10": "application/pkcs10",
".p12": "application/x-pkcs12",
".p7b": "application/x-pkcs7-certificates",
".p7c": "application/x-pkcs7-mime",
".p7m": "application/x-pkcs7-mime",
".p7r": "application/x-pkcs7-certreqresp",
".p7s": "application/x-pkcs7-signature",
".pac": "audio/x-pac",
".pae": "audio/x-epac",
".pan": "application/x-pan",
".pcx": "image/x-pcx",
".pda": "image/x-pda",
".pfr": "application/font-tdpfr",
".pfx": "application/x-pkcs12",
".pko": "application/ynd.ms-pkipko",
".pm": "application/x-perl",
".pma": "application/x-perfmon",
".pmc": "application/x-perfmon",
".pmd": "application/x-pmd",
".pml": "application/x-perfmon",
".pmr": "application/x-perfmon",
".pmw": "application/x-perfmon",
".pnz": "image/png",
".pot,": "application/vnd.ms-powerpoint",
".pps": "application/vnd.ms-powerpoint",
".pqf": "application/x-cprplayer",
".pqi": "application/cprplayer",
".prc": "application/x-prc",
".prf": "application/pics-rules",
".prop": "text/plain",
".proxy": "application/x-ns-proxy-autoconfig",
".ptlk": "application/listenup",
".pub": "application/x-mspublisher",
".pvx": "video/x-pv-pvx",
".qcp": "audio/vnd.qcelp",
".r3t": "text/vnd.rn-realtext3d",
".rar": "application/octet-stream",
".rc": "text/plain",
".rf": "image/vnd.rn-realflash",
".rlf": "application/x-richlink",
".rmf": "audio/x-rmf",
".rmi": "audio/mid",
".rmm": "audio/x-pn-realaudio",
".rmvb": "audio/x-pn-realaudio",
".rnx": "application/vnd.rn-realplayer",
".rp": "image/vnd.rn-realpix",
".rt": "text/vnd.rn-realtext",
".rte": "x-lml/x-gps",
".rtg": "application/metastream",
".rv": "video/vnd.rn-realvideo",
".rwc": "application/x-rogerwilco",
".s3m": "audio/x-mod",
".s3z": "audio/x-mod",
".sca": "application/x-supercard",
".scd": "application/x-msschedule",
".sct": "text/scriptlet",
".sdf": "application/e-score",
".sea": "application/x-stuffit",
".setpay": "application/set-payment-initiation",
".setreg": "application/set-registration-initiation",
".shtml": "text/html",
".shtm": "text/html",
".shw": "application/presentations",
".si6": "image/si6",
".si7": "image/vnd.stiwap.sis",
".si9": "image/vnd.lgtwap.sis",
".slc": "application/x-salsa",
".smd": "audio/x-smd",
".smp": "application/studiom",
".smz": "audio/x-smd",
".spc": "application/x-pkcs7-certificates",
".spr": "application/x-sprite",
".sprite": "application/x-sprite",
".sdp": "application/sdp",
".spt": "application/x-spt",
".sst": "application/vnd.ms-pkicertstore",
".stk": "application/hyperstudio",
".stl": "application/vnd.ms-pkistl",
".stm": "text/html",
".svf": "image/vnd",
".svh": "image/svh",
".svr": "x-world/x-svr",
".swfl": "application/x-shockwave-flash",
".tad": "application/octet-stream",
".talk": "text/x-speech",
".taz": "application/x-tar",
".tbp": "application/x-timbuktu",
".tbt": "application/x-timbuktu",
".tgz": "application/x-compressed",
".thm": "application/vnd.eri.thm",
".tki": "application/x-tkined",
".tkined": "application/x-tkined",
".toc": "application/toc",
".toy": "image/toy",
".trk": "x-lml/x-gps",
".trm": "application/x-msterminal",
".tsi": "audio/tsplayer",
".tsp": "application/dsptype",
".ttf": "application/octet-stream",
".ttz": "application/t-time",
".uls": "text/iuls",
".ult": "audio/x-mod",
".uu": "application/x-uuencode",
".uue": "application/x-uuencode",
".vcf": "text/x-vcard",
".vdo": "video/vdo",
".vib": "audio/vib",
".viv": "video/vivo",
".vivo": "video/vivo",
".vmd": "application/vocaltec-media-desc",
".vmf": "application/vocaltec-media-file",
".vmi": "application/x-dreamcast-vms-info",
".vms": "application/x-dreamcast-vms",
".vox": "audio/voxware",
".vqe": "audio/x-twinvq-plugin",
".vqf": "audio/x-twinvq",
".vql": "audio/x-twinvq",
".vre": "x-world/x-vream",
".vrt": "x-world/x-vrt",
".vrw": "x-world/x-vream",
".vts": "workbook/formulaone",
".wcm": "application/vnd.ms-works",
".wdb": "application/vnd.ms-works",
".web": "application/vnd.xara",
".wi": "image/wavelet",
".wis": "application/x-InstallShield",
".wks": "application/vnd.ms-works",
".wmd": "application/x-ms-wmd",
".wmf": "application/x-msmetafile",
".wmlscript": "text/vnd.wap.wmlscript",
".wmz": "application/x-ms-wmz",
".wpng": "image/x-up-wpng",
".wps": "application/vnd.ms-works",
".wpt": "x-lml/x-gps",
".wri": "application/x-mswrite",
".wrz": "x-world/x-vrml",
".ws": "text/vnd.wap.wmlscript",
".wsc": "application/vnd.wap.wmlscriptc",
".wv": "video/wavelet",
".wxl": "application/x-wxl",
".x-gzip": "application/x-gzip",
".xaf": "x-world/x-vrml",
".xar": "application/vnd.xara",
".xdm": "application/x-xdma",
".xdma": "application/x-xdma",
".xdw": "application/vnd.fujixerox.docuworks",
".xhtm": "application/xhtml+xml",
".xla": "application/vnd.ms-excel",
".xlc": "application/vnd.ms-excel",
".xll": "application/x-excel",
".xlm": "application/vnd.ms-excel",
".xlt": "application/vnd.ms-excel",
".xlw": "application/vnd.ms-excel",
".xm": "audio/x-mod",
".xmz": "audio/x-mod",
".xof": "x-world/x-vrml",
".xpi": "application/x-xpinstall",
".xsit": "text/xml",
".yz1": "application/x-yz1",
".z": "application/x-compress",
".zac": "application/x-zaurus-zac",
".json": "application/json",
}
// TypeByExtension returns the MIME type associated with the file extension of filePath,
// used as the value of the HTTP Content-Type header.
func TypeByExtension(filePath string) string {
typ := mime.TypeByExtension(path.Ext(filePath))
if typ == "" {
typ = extToMimeType[strings.ToLower(path.Ext(filePath))]
}
return typ
}
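// Usage sketch (illustrative): the stdlib mime registry is consulted first and
// the table above is the fallback, so common types resolve either way.
//
//   TypeByExtension("report.pdf")   // "application/pdf"
//   TypeByExtension("data.json")    // "application/json" (from the fallback table)
//   TypeByExtension("file.unknown") // "" when neither source knows the extension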

@ -0,0 +1,69 @@
package oss
import (
"hash"
"io"
"net/http"
)
// Response defines HTTP response from OSS
type Response struct {
StatusCode int
Headers http.Header
Body io.ReadCloser
ClientCRC uint64
ServerCRC uint64
}
func (r *Response) Read(p []byte) (n int, err error) {
return r.Body.Read(p)
}
// Close closes the http response body
func (r *Response) Close() error {
return r.Body.Close()
}
// PutObjectRequest is the request of DoPutObject
type PutObjectRequest struct {
ObjectKey string
Reader io.Reader
}
// GetObjectRequest is the request of DoGetObject
type GetObjectRequest struct {
ObjectKey string
}
// GetObjectResult is the result of DoGetObject
type GetObjectResult struct {
Response *Response
ClientCRC hash.Hash64
ServerCRC uint64
}
// AppendObjectRequest is the request of DoAppendObject
type AppendObjectRequest struct {
ObjectKey string
Reader io.Reader
Position int64
}
// AppendObjectResult is the result of DoAppendObject
type AppendObjectResult struct {
NextPosition int64
CRC uint64
}
// UploadPartRequest is the request of DoUploadPart
type UploadPartRequest struct {
InitResult *InitiateMultipartUploadResult
Reader io.Reader
PartSize int64
PartNumber int
}
// UploadPartResult is the result of DoUploadPart
type UploadPartResult struct {
Part UploadPart
}
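// Usage sketch (illustrative; not part of the vendored source): Response
// satisfies io.ReadCloser through the Read/Close methods above, so the body of
// a DoGetObject call can be consumed like any other reader.
//
//   request := &oss.GetObjectRequest{ObjectKey: "my-object"}
//   result, err := bucket.DoGetObject(request, nil)
//   if err != nil {
//       // handle error
//   }
//   defer result.Response.Close()
//   data, err := ioutil.ReadAll(result.Response)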

@ -0,0 +1,474 @@
package oss
import (
"crypto/md5"
"encoding/base64"
"encoding/json"
"errors"
"fmt"
"io/ioutil"
"net/http"
"os"
"strconv"
)
// CopyFile copies an object using multipart upload
//
// srcBucketName source bucket name
// srcObjectKey source object name
// destObjectKey target object name in the form of bucketname.objectkey
// partSize the part size in byte.
// options object's constraints. Check out the function InitiateMultipartUpload.
//
// error it's nil if the operation succeeds, otherwise it's an error object.
//
func (bucket Bucket) CopyFile(srcBucketName, srcObjectKey, destObjectKey string, partSize int64, options ...Option) error {
destBucketName := bucket.BucketName
if partSize < MinPartSize || partSize > MaxPartSize {
return errors.New("oss: part size invalid range (1024KB, 5GB]")
}
cpConf := getCpConfig(options)
routines := getRoutines(options)
var strVersionId string
versionId, _ := FindOption(options, "versionId", nil)
if versionId != nil {
strVersionId = versionId.(string)
}
if cpConf != nil && cpConf.IsEnable {
cpFilePath := getCopyCpFilePath(cpConf, srcBucketName, srcObjectKey, destBucketName, destObjectKey, strVersionId)
if cpFilePath != "" {
return bucket.copyFileWithCp(srcBucketName, srcObjectKey, destBucketName, destObjectKey, partSize, options, cpFilePath, routines)
}
}
return bucket.copyFile(srcBucketName, srcObjectKey, destBucketName, destObjectKey,
partSize, options, routines)
}
func getCopyCpFilePath(cpConf *cpConfig, srcBucket, srcObject, destBucket, destObject, versionId string) string {
if cpConf.FilePath == "" && cpConf.DirPath != "" {
dest := fmt.Sprintf("oss://%v/%v", destBucket, destObject)
src := fmt.Sprintf("oss://%v/%v", srcBucket, srcObject)
cpFileName := getCpFileName(src, dest, versionId)
cpConf.FilePath = cpConf.DirPath + string(os.PathSeparator) + cpFileName
}
return cpConf.FilePath
}
// ----- Concurrent copy without checkpoint ---------
// copyWorkerArg defines the copy worker arguments
type copyWorkerArg struct {
bucket *Bucket
imur InitiateMultipartUploadResult
srcBucketName string
srcObjectKey string
options []Option
hook copyPartHook
}
// copyPartHook is the hook for testing purposes
type copyPartHook func(part copyPart) error
var copyPartHooker copyPartHook = defaultCopyPartHook
func defaultCopyPartHook(part copyPart) error {
return nil
}
// copyWorker is the worker routine that copies parts from the jobs channel
func copyWorker(id int, arg copyWorkerArg, jobs <-chan copyPart, results chan<- UploadPart, failed chan<- error, die <-chan bool) {
for chunk := range jobs {
if err := arg.hook(chunk); err != nil {
failed <- err
break
}
chunkSize := chunk.End - chunk.Start + 1
part, err := arg.bucket.UploadPartCopy(arg.imur, arg.srcBucketName, arg.srcObjectKey,
chunk.Start, chunkSize, chunk.Number, arg.options...)
if err != nil {
failed <- err
break
}
select {
case <-die:
return
default:
}
results <- part
}
}
// copyScheduler schedules copy parts onto the jobs channel
func copyScheduler(jobs chan copyPart, parts []copyPart) {
for _, part := range parts {
jobs <- part
}
close(jobs)
}
// copyPart structure
type copyPart struct {
Number int // Part number (from 1 to 10,000)
Start int64 // The start index in the source file.
End int64 // The end index in the source file
}
// getCopyParts calculates copy parts
func getCopyParts(objectSize, partSize int64) []copyPart {
parts := []copyPart{}
part := copyPart{}
i := 0
for offset := int64(0); offset < objectSize; offset += partSize {
part.Number = i + 1
part.Start = offset
part.End = GetPartEnd(offset, objectSize, partSize)
parts = append(parts, part)
i++
}
return parts
}
// getSrcObjectBytes gets the source file size
func getSrcObjectBytes(parts []copyPart) int64 {
var ob int64
for _, part := range parts {
ob += (part.End - part.Start + 1)
}
return ob
}
// copyFile copies a file concurrently without checkpoint
func (bucket Bucket) copyFile(srcBucketName, srcObjectKey, destBucketName, destObjectKey string,
partSize int64, options []Option, routines int) error {
descBucket, err := bucket.Client.Bucket(destBucketName)
if err != nil {
return err
}
srcBucket, err := bucket.Client.Bucket(srcBucketName)
if err != nil {
return err
}
listener := GetProgressListener(options)
// choose the valid options
headerOptions := ChoiceHeadObjectOption(options)
partOptions := ChoiceTransferPartOption(options)
completeOptions := ChoiceCompletePartOption(options)
abortOptions := ChoiceAbortPartOption(options)
meta, err := srcBucket.GetObjectDetailedMeta(srcObjectKey, headerOptions...)
if err != nil {
return err
}
objectSize, err := strconv.ParseInt(meta.Get(HTTPHeaderContentLength), 10, 0)
if err != nil {
return err
}
// Get copy parts
parts := getCopyParts(objectSize, partSize)
// Initialize the multipart upload
imur, err := descBucket.InitiateMultipartUpload(destObjectKey, options...)
if err != nil {
return err
}
jobs := make(chan copyPart, len(parts))
results := make(chan UploadPart, len(parts))
failed := make(chan error)
die := make(chan bool)
var completedBytes int64
totalBytes := getSrcObjectBytes(parts)
event := newProgressEvent(TransferStartedEvent, 0, totalBytes, 0)
publishProgress(listener, event)
// Start to copy workers
arg := copyWorkerArg{descBucket, imur, srcBucketName, srcObjectKey, partOptions, copyPartHooker}
for w := 1; w <= routines; w++ {
go copyWorker(w, arg, jobs, results, failed, die)
}
// Start the scheduler
go copyScheduler(jobs, parts)
// Wait for the parts finished.
completed := 0
ups := make([]UploadPart, len(parts))
for completed < len(parts) {
select {
case part := <-results:
completed++
ups[part.PartNumber-1] = part
copyBytes := (parts[part.PartNumber-1].End - parts[part.PartNumber-1].Start + 1)
completedBytes += copyBytes
event = newProgressEvent(TransferDataEvent, completedBytes, totalBytes, copyBytes)
publishProgress(listener, event)
case err := <-failed:
close(die)
descBucket.AbortMultipartUpload(imur, abortOptions...)
event = newProgressEvent(TransferFailedEvent, completedBytes, totalBytes, 0)
publishProgress(listener, event)
return err
}
if completed >= len(parts) {
break
}
}
event = newProgressEvent(TransferCompletedEvent, completedBytes, totalBytes, 0)
publishProgress(listener, event)
// Complete the multipart upload
_, err = descBucket.CompleteMultipartUpload(imur, ups, completeOptions...)
if err != nil {
bucket.AbortMultipartUpload(imur, abortOptions...)
return err
}
return nil
}
// ----- Concurrent copy with checkpoint -----
const copyCpMagic = "84F1F18C-FF1D-403B-A1D8-9DEB5F65910A"
type copyCheckpoint struct {
Magic string // Magic
MD5 string // CP content MD5
SrcBucketName string // Source bucket
SrcObjectKey string // Source object
DestBucketName string // Target bucket
DestObjectKey string // Target object
CopyID string // Copy ID
ObjStat objectStat // Object stat
Parts []copyPart // Copy parts
CopyParts []UploadPart // The uploaded parts
PartStat []bool // The part status
}
// isValid checks if the data is valid which means CP is valid and object is not updated.
func (cp copyCheckpoint) isValid(meta http.Header) (bool, error) {
// Compare CP's magic number and the MD5.
cpb := cp
cpb.MD5 = ""
js, _ := json.Marshal(cpb)
sum := md5.Sum(js)
b64 := base64.StdEncoding.EncodeToString(sum[:])
if cp.Magic != copyCpMagic || b64 != cp.MD5 {
return false, nil
}
objectSize, err := strconv.ParseInt(meta.Get(HTTPHeaderContentLength), 10, 64)
if err != nil {
return false, err
}
// Compare the object size and last modified time and etag.
if cp.ObjStat.Size != objectSize ||
cp.ObjStat.LastModified != meta.Get(HTTPHeaderLastModified) ||
cp.ObjStat.Etag != meta.Get(HTTPHeaderEtag) {
return false, nil
}
return true, nil
}
// load loads from the checkpoint file
func (cp *copyCheckpoint) load(filePath string) error {
contents, err := ioutil.ReadFile(filePath)
if err != nil {
return err
}
err = json.Unmarshal(contents, cp)
return err
}
// update updates the parts status
func (cp *copyCheckpoint) update(part UploadPart) {
cp.CopyParts[part.PartNumber-1] = part
cp.PartStat[part.PartNumber-1] = true
}
// dump dumps the CP to the file
func (cp *copyCheckpoint) dump(filePath string) error {
bcp := *cp
// Calculate MD5
bcp.MD5 = ""
js, err := json.Marshal(bcp)
if err != nil {
return err
}
sum := md5.Sum(js)
b64 := base64.StdEncoding.EncodeToString(sum[:])
bcp.MD5 = b64
// Serialization
js, err = json.Marshal(bcp)
if err != nil {
return err
}
// Dump
return ioutil.WriteFile(filePath, js, FilePermMode)
}
// todoParts returns unfinished parts
func (cp copyCheckpoint) todoParts() []copyPart {
dps := []copyPart{}
for i, ps := range cp.PartStat {
if !ps {
dps = append(dps, cp.Parts[i])
}
}
return dps
}
// getCompletedBytes returns finished bytes count
func (cp copyCheckpoint) getCompletedBytes() int64 {
var completedBytes int64
for i, part := range cp.Parts {
if cp.PartStat[i] {
completedBytes += (part.End - part.Start + 1)
}
}
return completedBytes
}
// prepare initializes the multipart upload
func (cp *copyCheckpoint) prepare(meta http.Header, srcBucket *Bucket, srcObjectKey string, destBucket *Bucket, destObjectKey string,
partSize int64, options []Option) error {
// CP
cp.Magic = copyCpMagic
cp.SrcBucketName = srcBucket.BucketName
cp.SrcObjectKey = srcObjectKey
cp.DestBucketName = destBucket.BucketName
cp.DestObjectKey = destObjectKey
objectSize, err := strconv.ParseInt(meta.Get(HTTPHeaderContentLength), 10, 64)
if err != nil {
return err
}
cp.ObjStat.Size = objectSize
cp.ObjStat.LastModified = meta.Get(HTTPHeaderLastModified)
cp.ObjStat.Etag = meta.Get(HTTPHeaderEtag)
// Parts
cp.Parts = getCopyParts(objectSize, partSize)
cp.PartStat = make([]bool, len(cp.Parts))
for i := range cp.PartStat {
cp.PartStat[i] = false
}
cp.CopyParts = make([]UploadPart, len(cp.Parts))
// Init copy
imur, err := destBucket.InitiateMultipartUpload(destObjectKey, options...)
if err != nil {
return err
}
cp.CopyID = imur.UploadID
return nil
}
func (cp *copyCheckpoint) complete(bucket *Bucket, parts []UploadPart, cpFilePath string, options []Option) error {
imur := InitiateMultipartUploadResult{Bucket: cp.DestBucketName,
Key: cp.DestObjectKey, UploadID: cp.CopyID}
_, err := bucket.CompleteMultipartUpload(imur, parts, options...)
if err != nil {
return err
}
os.Remove(cpFilePath)
return err
}
// copyFileWithCp copies a file concurrently with checkpoint
func (bucket Bucket) copyFileWithCp(srcBucketName, srcObjectKey, destBucketName, destObjectKey string,
partSize int64, options []Option, cpFilePath string, routines int) error {
descBucket, err := bucket.Client.Bucket(destBucketName)
if err != nil {
return err
}
srcBucket, err := bucket.Client.Bucket(srcBucketName)
if err != nil {
return err
}
listener := GetProgressListener(options)
// Load CP data
ccp := copyCheckpoint{}
err = ccp.load(cpFilePath)
if err != nil {
os.Remove(cpFilePath)
}
// choose the valid options
headerOptions := ChoiceHeadObjectOption(options)
partOptions := ChoiceTransferPartOption(options)
completeOptions := ChoiceCompletePartOption(options)
meta, err := srcBucket.GetObjectDetailedMeta(srcObjectKey, headerOptions...)
if err != nil {
return err
}
// Load failed or the CP data is invalid; reinitialize
valid, err := ccp.isValid(meta)
if err != nil || !valid {
if err = ccp.prepare(meta, srcBucket, srcObjectKey, descBucket, destObjectKey, partSize, options); err != nil {
return err
}
os.Remove(cpFilePath)
}
// Unfinished parts
parts := ccp.todoParts()
imur := InitiateMultipartUploadResult{
Bucket: destBucketName,
Key: destObjectKey,
UploadID: ccp.CopyID}
jobs := make(chan copyPart, len(parts))
results := make(chan UploadPart, len(parts))
failed := make(chan error)
die := make(chan bool)
completedBytes := ccp.getCompletedBytes()
event := newProgressEvent(TransferStartedEvent, completedBytes, ccp.ObjStat.Size, 0)
publishProgress(listener, event)
// Start the worker coroutines
arg := copyWorkerArg{descBucket, imur, srcBucketName, srcObjectKey, partOptions, copyPartHooker}
for w := 1; w <= routines; w++ {
go copyWorker(w, arg, jobs, results, failed, die)
}
// Start the scheduler
go copyScheduler(jobs, parts)
// Wait for the parts completed.
completed := 0
for completed < len(parts) {
select {
case part := <-results:
completed++
ccp.update(part)
ccp.dump(cpFilePath)
copyBytes := (parts[part.PartNumber-1].End - parts[part.PartNumber-1].Start + 1)
completedBytes += copyBytes
event = newProgressEvent(TransferDataEvent, completedBytes, ccp.ObjStat.Size, copyBytes)
publishProgress(listener, event)
case err := <-failed:
close(die)
event = newProgressEvent(TransferFailedEvent, completedBytes, ccp.ObjStat.Size, 0)
publishProgress(listener, event)
return err
}
if completed >= len(parts) {
break
}
}
event = newProgressEvent(TransferCompletedEvent, completedBytes, ccp.ObjStat.Size, 0)
publishProgress(listener, event)
return ccp.complete(descBucket, ccp.CopyParts, cpFilePath, completeOptions)
}
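// Usage sketch (illustrative; not part of the vendored source): a resumable
// cross-bucket copy through the exported CopyFile API, which routes here when
// the Checkpoint option is enabled. Names and paths are placeholders.
//
//   // "bucket" is the destination; the source bucket/object are named explicitly.
//   err := bucket.CopyFile("src-bucket", "src-object", "dest-object", 5*1024*1024,
//       oss.Routines(3), oss.Checkpoint(true, "/tmp/copy.cp"))
//   if err != nil {
//       // handle error
//   }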

@ -0,0 +1,305 @@
package oss
import (
"bytes"
"encoding/xml"
"io"
"net/http"
"net/url"
"os"
"sort"
"strconv"
)
// InitiateMultipartUpload initializes multipart upload
//
// objectKey object name
// options the object constraints for the upload. The valid options are CacheControl, ContentDisposition, ContentEncoding, Expires,
// ServerSideEncryption, Meta, check out the following link:
// https://help.aliyun.com/document_detail/oss/api-reference/multipart-upload/InitiateMultipartUpload.html
//
// InitiateMultipartUploadResult the return value of the InitiateMultipartUpload, which is used for calls later on such as UploadPartFromFile, UploadPartCopy.
// error it's nil if the operation succeeds, otherwise it's an error object.
//
func (bucket Bucket) InitiateMultipartUpload(objectKey string, options ...Option) (InitiateMultipartUploadResult, error) {
var imur InitiateMultipartUploadResult
opts := AddContentType(options, objectKey)
params, _ := GetRawParams(options)
paramKeys := []string{"sequential", "withHashContext", "x-oss-enable-md5", "x-oss-enable-sha1", "x-oss-enable-sha256"}
ConvertEmptyValueToNil(params, paramKeys)
params["uploads"] = nil
resp, err := bucket.do("POST", objectKey, params, opts, nil, nil)
if err != nil {
return imur, err
}
defer resp.Body.Close()
err = xmlUnmarshal(resp.Body, &imur)
return imur, err
}
// UploadPart uploads parts
//
// After initializing a Multipart Upload, the upload Id and object key could be used for uploading the parts.
// Each part has its part number (ranges from 1 to 10,000). And for each upload Id, the part number identifies the position of the part in the whole file.
// And thus with the same part number and upload Id, another part upload will overwrite the data.
// Except for the last part, the minimum part size is 100KB. There's no limit on the last part's size.
//
// imur the returned value of InitiateMultipartUpload.
// reader io.Reader the reader for the part's data.
// size the part size.
// partNumber the part number (ranges from 1 to 10,000). Invalid part number will lead to InvalidArgument error.
//
// UploadPart the return value of the upload part. It consists of PartNumber and ETag. It's valid when error is nil.
// error it's nil if the operation succeeds, otherwise it's an error object.
//
func (bucket Bucket) UploadPart(imur InitiateMultipartUploadResult, reader io.Reader,
partSize int64, partNumber int, options ...Option) (UploadPart, error) {
request := &UploadPartRequest{
InitResult: &imur,
Reader: reader,
PartSize: partSize,
PartNumber: partNumber,
}
result, err := bucket.DoUploadPart(request, options)
return result.Part, err
}
// UploadPartFromFile uploads part from the file.
//
// imur the return value of a successful InitiateMultipartUpload.
// filePath the local file path to upload.
// startPosition the start position in the local file.
// partSize the part size.
// partNumber the part number (from 1 to 10,000)
//
// UploadPart the return value consists of PartNumber and ETag.
// error it's nil if the operation succeeds, otherwise it's an error object.
//
func (bucket Bucket) UploadPartFromFile(imur InitiateMultipartUploadResult, filePath string,
startPosition, partSize int64, partNumber int, options ...Option) (UploadPart, error) {
var part = UploadPart{}
fd, err := os.Open(filePath)
if err != nil {
return part, err
}
defer fd.Close()
fd.Seek(startPosition, io.SeekStart)
request := &UploadPartRequest{
InitResult: &imur,
Reader: fd,
PartSize: partSize,
PartNumber: partNumber,
}
result, err := bucket.DoUploadPart(request, options)
return result.Part, err
}
// DoUploadPart does the actual part upload.
//
// request part upload request
//
// UploadPartResult the result of uploading part.
// error it's nil if the operation succeeds, otherwise it's an error object.
//
func (bucket Bucket) DoUploadPart(request *UploadPartRequest, options []Option) (*UploadPartResult, error) {
listener := GetProgressListener(options)
options = append(options, ContentLength(request.PartSize))
params := map[string]interface{}{}
params["partNumber"] = strconv.Itoa(request.PartNumber)
params["uploadId"] = request.InitResult.UploadID
resp, err := bucket.do("PUT", request.InitResult.Key, params, options,
&io.LimitedReader{R: request.Reader, N: request.PartSize}, listener)
if err != nil {
return &UploadPartResult{}, err
}
defer resp.Body.Close()
part := UploadPart{
ETag: resp.Headers.Get(HTTPHeaderEtag),
PartNumber: request.PartNumber,
}
if bucket.GetConfig().IsEnableCRC {
err = CheckCRC(resp, "DoUploadPart")
if err != nil {
return &UploadPartResult{part}, err
}
}
return &UploadPartResult{part}, nil
}
// UploadPartCopy uploads part copy
//
// imur the return value of InitiateMultipartUpload
// copySrc source Object name
// startPosition the part's start index in the source file
// partSize the part size
// partNumber the part number, ranges from 1 to 10,000. If it exceeds the range OSS returns InvalidArgument error.
// options the constraints of the source object for the copy. The copy happens only when these constraints are met. Otherwise it returns an error.
// CopySourceIfNoneMatch, CopySourceIfModifiedSince CopySourceIfUnmodifiedSince, check out the following link for the detail
// https://help.aliyun.com/document_detail/oss/api-reference/multipart-upload/UploadPartCopy.html
//
// UploadPart the return value consists of PartNumber and ETag.
// error it's nil if the operation succeeds, otherwise it's an error object.
//
func (bucket Bucket) UploadPartCopy(imur InitiateMultipartUploadResult, srcBucketName, srcObjectKey string,
startPosition, partSize int64, partNumber int, options ...Option) (UploadPart, error) {
var out UploadPartCopyResult
var part UploadPart
var opts []Option
// first find the version id
versionIdKey := "versionId"
versionId, _ := FindOption(options, versionIdKey, nil)
if versionId == nil {
opts = []Option{CopySource(srcBucketName, url.QueryEscape(srcObjectKey)),
CopySourceRange(startPosition, partSize)}
} else {
opts = []Option{CopySourceVersion(srcBucketName, url.QueryEscape(srcObjectKey), versionId.(string)),
CopySourceRange(startPosition, partSize)}
options = DeleteOption(options, versionIdKey)
}
opts = append(opts, options...)
params := map[string]interface{}{}
params["partNumber"] = strconv.Itoa(partNumber)
params["uploadId"] = imur.UploadID
resp, err := bucket.do("PUT", imur.Key, params, opts, nil, nil)
if err != nil {
return part, err
}
defer resp.Body.Close()
err = xmlUnmarshal(resp.Body, &out)
if err != nil {
return part, err
}
part.ETag = out.ETag
part.PartNumber = partNumber
return part, nil
}
// CompleteMultipartUpload completes the multipart upload.
//
// imur the return value of InitiateMultipartUpload.
// parts the array of return value of UploadPart/UploadPartFromFile/UploadPartCopy.
//
// CompleteMultipartUploadResponse the return value when the call succeeds. Only valid when the error is nil.
// error it's nil if the operation succeeds, otherwise it's an error object.
//
func (bucket Bucket) CompleteMultipartUpload(imur InitiateMultipartUploadResult,
parts []UploadPart, options ...Option) (CompleteMultipartUploadResult, error) {
var out CompleteMultipartUploadResult
sort.Sort(UploadParts(parts))
cxml := completeMultipartUploadXML{}
cxml.Part = parts
bs, err := xml.Marshal(cxml)
if err != nil {
return out, err
}
buffer := new(bytes.Buffer)
buffer.Write(bs)
params := map[string]interface{}{}
params["uploadId"] = imur.UploadID
resp, err := bucket.do("POST", imur.Key, params, options, buffer, nil)
if err != nil {
return out, err
}
defer resp.Body.Close()
err = xmlUnmarshal(resp.Body, &out)
return out, err
}
// AbortMultipartUpload aborts the multipart upload.
//
// imur the return value of InitiateMultipartUpload.
//
// error it's nil if the operation succeeds, otherwise it's an error object.
//
func (bucket Bucket) AbortMultipartUpload(imur InitiateMultipartUploadResult, options ...Option) error {
params := map[string]interface{}{}
params["uploadId"] = imur.UploadID
resp, err := bucket.do("DELETE", imur.Key, params, options, nil, nil)
if err != nil {
return err
}
defer resp.Body.Close()
return CheckRespCode(resp.StatusCode, []int{http.StatusNoContent})
}
// ListUploadedParts lists the uploaded parts.
//
// imur the return value of InitiateMultipartUpload.
//
// ListUploadedPartsResponse the return value if it succeeds, only valid when error is nil.
// error it's nil if the operation succeeds, otherwise it's an error object.
//
func (bucket Bucket) ListUploadedParts(imur InitiateMultipartUploadResult, options ...Option) (ListUploadedPartsResult, error) {
var out ListUploadedPartsResult
options = append(options, EncodingType("url"))
params, err := GetRawParams(options)
if err != nil {
return out, err
}
params["uploadId"] = imur.UploadID
resp, err := bucket.do("GET", imur.Key, params, options, nil, nil)
if err != nil {
return out, err
}
defer resp.Body.Close()
err = xmlUnmarshal(resp.Body, &out)
if err != nil {
return out, err
}
err = decodeListUploadedPartsResult(&out)
return out, err
}
// ListMultipartUploads lists all ongoing multipart upload tasks
//
// options listObject's filter. Prefix specifies the returned object's prefix; KeyMarker specifies the returned object's start point in lexicographic order;
// MaxKeys specifies the max entries to return; Delimiter is the character for grouping object keys.
//
// ListMultipartUploadResponse the return value if it succeeds, only valid when error is nil.
// error it's nil if the operation succeeds, otherwise it's an error object.
//
func (bucket Bucket) ListMultipartUploads(options ...Option) (ListMultipartUploadResult, error) {
var out ListMultipartUploadResult
options = append(options, EncodingType("url"))
params, err := GetRawParams(options)
if err != nil {
return out, err
}
params["uploads"] = nil
resp, err := bucket.do("GET", "", params, options, nil, nil)
if err != nil {
return out, err
}
defer resp.Body.Close()
err = xmlUnmarshal(resp.Body, &out)
if err != nil {
return out, err
}
err = decodeListMultipartUploadResult(&out)
return out, err
}
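// Usage sketch (illustrative; not part of the vendored source): the manual
// multipart flow built from the primitives above. SplitFileByPartNum is the
// SDK helper that computes part offsets; aborting on failure avoids leaking
// already-uploaded parts.
//
//   chunks, err := oss.SplitFileByPartNum("/tmp/big-file", 3)
//   if err != nil {
//       // handle error
//   }
//   imur, err := bucket.InitiateMultipartUpload("big-object")
//   if err != nil {
//       // handle error
//   }
//   var parts []oss.UploadPart
//   for _, chunk := range chunks {
//       part, err := bucket.UploadPartFromFile(imur, "/tmp/big-file", chunk.Offset, chunk.Size, chunk.Number)
//       if err != nil {
//           bucket.AbortMultipartUpload(imur)
//           // handle error
//       }
//       parts = append(parts, part)
//   }
//   _, err = bucket.CompleteMultipartUpload(imur, parts)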

@ -0,0 +1,689 @@
package oss
import (
"fmt"
"net/http"
"net/url"
"strconv"
"strings"
"time"
)
type optionType string
const (
optionParam optionType = "HTTPParameter" // URL parameter
optionHTTP optionType = "HTTPHeader" // HTTP header
optionArg optionType = "FuncArgument" // Function argument
)
const (
deleteObjectsQuiet = "delete-objects-quiet"
routineNum = "x-routine-num"
checkpointConfig = "x-cp-config"
initCRC64 = "init-crc64"
progressListener = "x-progress-listener"
storageClass = "storage-class"
responseHeader = "x-response-header"
redundancyType = "redundancy-type"
objectHashFunc = "object-hash-func"
)
type (
optionValue struct {
Value interface{}
Type optionType
}
// Option HTTP option
Option func(map[string]optionValue) error
)
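// Usage sketch (illustrative): the option constructors below return plain
// functions that the variadic API methods collect and apply to the request's
// headers or parameters, so they compose freely.
//
//   err := bucket.PutObject("my-object", strings.NewReader("hello"),
//       oss.ContentType("text/plain"),
//       oss.Meta("owner", "alice"),
//       oss.ACL(oss.ACLPublicRead),
//   )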
// ACL is an option to set X-Oss-Acl header
func ACL(acl ACLType) Option {
return setHeader(HTTPHeaderOssACL, string(acl))
}
// ContentType is an option to set Content-Type header
func ContentType(value string) Option {
return setHeader(HTTPHeaderContentType, value)
}
// ContentLength is an option to set Content-Length header
func ContentLength(length int64) Option {
return setHeader(HTTPHeaderContentLength, strconv.FormatInt(length, 10))
}
// CacheControl is an option to set Cache-Control header
func CacheControl(value string) Option {
return setHeader(HTTPHeaderCacheControl, value)
}
// ContentDisposition is an option to set Content-Disposition header
func ContentDisposition(value string) Option {
return setHeader(HTTPHeaderContentDisposition, value)
}
// ContentEncoding is an option to set Content-Encoding header
func ContentEncoding(value string) Option {
return setHeader(HTTPHeaderContentEncoding, value)
}
// ContentLanguage is an option to set Content-Language header
func ContentLanguage(value string) Option {
return setHeader(HTTPHeaderContentLanguage, value)
}
// ContentMD5 is an option to set Content-MD5 header
func ContentMD5(value string) Option {
return setHeader(HTTPHeaderContentMD5, value)
}
// Expires is an option to set Expires header
func Expires(t time.Time) Option {
return setHeader(HTTPHeaderExpires, t.Format(http.TimeFormat))
}
// Meta is an option to set Meta header
func Meta(key, value string) Option {
return setHeader(HTTPHeaderOssMetaPrefix+key, value)
}
// Range is an option to set Range header, [start, end]
func Range(start, end int64) Option {
return setHeader(HTTPHeaderRange, fmt.Sprintf("bytes=%d-%d", start, end))
}
// NormalizedRange is an option to set Range header, such as 1024-2048 or 1024- or -2048
func NormalizedRange(nr string) Option {
return setHeader(HTTPHeaderRange, fmt.Sprintf("bytes=%s", strings.TrimSpace(nr)))
}
// AcceptEncoding is an option to set Accept-Encoding header
func AcceptEncoding(value string) Option {
return setHeader(HTTPHeaderAcceptEncoding, value)
}
// IfModifiedSince is an option to set If-Modified-Since header
func IfModifiedSince(t time.Time) Option {
return setHeader(HTTPHeaderIfModifiedSince, t.Format(http.TimeFormat))
}
// IfUnmodifiedSince is an option to set If-Unmodified-Since header
func IfUnmodifiedSince(t time.Time) Option {
return setHeader(HTTPHeaderIfUnmodifiedSince, t.Format(http.TimeFormat))
}
// IfMatch is an option to set If-Match header
func IfMatch(value string) Option {
return setHeader(HTTPHeaderIfMatch, value)
}
// IfNoneMatch is an option to set IfNoneMatch header
func IfNoneMatch(value string) Option {
return setHeader(HTTPHeaderIfNoneMatch, value)
}
// CopySource is an option to set X-Oss-Copy-Source header
func CopySource(sourceBucket, sourceObject string) Option {
return setHeader(HTTPHeaderOssCopySource, "/"+sourceBucket+"/"+sourceObject)
}
// CopySourceVersion is an option to set X-Oss-Copy-Source header, including the versionId
func CopySourceVersion(sourceBucket, sourceObject string, versionId string) Option {
return setHeader(HTTPHeaderOssCopySource, "/"+sourceBucket+"/"+sourceObject+"?"+"versionId="+versionId)
}
// CopySourceRange is an option to set X-Oss-Copy-Source header
func CopySourceRange(startPosition, partSize int64) Option {
val := "bytes=" + strconv.FormatInt(startPosition, 10) + "-" +
strconv.FormatInt((startPosition+partSize-1), 10)
return setHeader(HTTPHeaderOssCopySourceRange, val)
}
// CopySourceIfMatch is an option to set X-Oss-Copy-Source-If-Match header
func CopySourceIfMatch(value string) Option {
return setHeader(HTTPHeaderOssCopySourceIfMatch, value)
}
// CopySourceIfNoneMatch is an option to set X-Oss-Copy-Source-If-None-Match header
func CopySourceIfNoneMatch(value string) Option {
return setHeader(HTTPHeaderOssCopySourceIfNoneMatch, value)
}
// CopySourceIfModifiedSince is an option to set X-Oss-CopySource-If-Modified-Since header
func CopySourceIfModifiedSince(t time.Time) Option {
return setHeader(HTTPHeaderOssCopySourceIfModifiedSince, t.Format(http.TimeFormat))
}
// CopySourceIfUnmodifiedSince is an option to set X-Oss-Copy-Source-If-Unmodified-Since header
func CopySourceIfUnmodifiedSince(t time.Time) Option {
return setHeader(HTTPHeaderOssCopySourceIfUnmodifiedSince, t.Format(http.TimeFormat))
}
// MetadataDirective is an option to set X-Oss-Metadata-Directive header
func MetadataDirective(directive MetadataDirectiveType) Option {
return setHeader(HTTPHeaderOssMetadataDirective, string(directive))
}
// ServerSideEncryption is an option to set X-Oss-Server-Side-Encryption header
func ServerSideEncryption(value string) Option {
return setHeader(HTTPHeaderOssServerSideEncryption, value)
}
// ServerSideEncryptionKeyID is an option to set X-Oss-Server-Side-Encryption-Key-Id header
func ServerSideEncryptionKeyID(value string) Option {
return setHeader(HTTPHeaderOssServerSideEncryptionKeyID, value)
}
// ServerSideDataEncryption is an option to set X-Oss-Server-Side-Data-Encryption header
func ServerSideDataEncryption(value string) Option {
return setHeader(HTTPHeaderOssServerSideDataEncryption, value)
}
// SSECAlgorithm is an option to set X-Oss-Server-Side-Encryption-Customer-Algorithm header
func SSECAlgorithm(value string) Option {
return setHeader(HTTPHeaderSSECAlgorithm, value)
}
// SSECKey is an option to set X-Oss-Server-Side-Encryption-Customer-Key header
func SSECKey(value string) Option {
return setHeader(HTTPHeaderSSECKey, value)
}
// SSECKeyMd5 is an option to set X-Oss-Server-Side-Encryption-Customer-Key-Md5 header
func SSECKeyMd5(value string) Option {
return setHeader(HTTPHeaderSSECKeyMd5, value)
}
// ObjectACL is an option to set X-Oss-Object-Acl header
func ObjectACL(acl ACLType) Option {
return setHeader(HTTPHeaderOssObjectACL, string(acl))
}
// symlinkTarget is an option to set X-Oss-Symlink-Target
func symlinkTarget(targetObjectKey string) Option {
return setHeader(HTTPHeaderOssSymlinkTarget, targetObjectKey)
}
// Origin is an option to set Origin header
func Origin(value string) Option {
return setHeader(HTTPHeaderOrigin, value)
}
// ObjectStorageClass is an option to set the storage class of object
func ObjectStorageClass(storageClass StorageClassType) Option {
return setHeader(HTTPHeaderOssStorageClass, string(storageClass))
}
// Callback is an option to set callback values
func Callback(callback string) Option {
return setHeader(HTTPHeaderOssCallback, callback)
}
// CallbackVar is an option to set callback user defined values
func CallbackVar(callbackVar string) Option {
return setHeader(HTTPHeaderOssCallbackVar, callbackVar)
}
// RequestPayer is an option to set payer who pay for the request
func RequestPayer(payerType PayerType) Option {
return setHeader(HTTPHeaderOssRequester, strings.ToLower(string(payerType)))
}
// RequestPayerParam is an option to set payer who pay for the request
func RequestPayerParam(payerType PayerType) Option {
return addParam(strings.ToLower(HTTPHeaderOssRequester), strings.ToLower(string(payerType)))
}
// SetTagging is an option to set object tagging
func SetTagging(tagging Tagging) Option {
if len(tagging.Tags) == 0 {
return nil
}
taggingValue := ""
for index, tag := range tagging.Tags {
if index != 0 {
taggingValue += "&"
}
taggingValue += url.QueryEscape(tag.Key) + "=" + url.QueryEscape(tag.Value)
}
return setHeader(HTTPHeaderOssTagging, taggingValue)
}
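// Worked example (illustrative): tag keys and values are URL-encoded and
// joined with '&', so the tags {"k 1","v1"} and {"k2","v2"} produce the
// X-Oss-Tagging header value "k+1=v1&k2=v2".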
// TaggingDirective is an option to set X-Oss-Metadata-Directive header
func TaggingDirective(directive TaggingDirectiveType) Option {
return setHeader(HTTPHeaderOssTaggingDirective, string(directive))
}
// ACReqMethod is an option to set Access-Control-Request-Method header
func ACReqMethod(value string) Option {
return setHeader(HTTPHeaderACReqMethod, value)
}
// ACReqHeaders is an option to set Access-Control-Request-Headers header
func ACReqHeaders(value string) Option {
return setHeader(HTTPHeaderACReqHeaders, value)
}
// TrafficLimitHeader is an option to set X-Oss-Traffic-Limit
func TrafficLimitHeader(value int64) Option {
return setHeader(HTTPHeaderOssTrafficLimit, strconv.FormatInt(value, 10))
}
// UserAgentHeader is an option to set HTTPHeaderUserAgent
func UserAgentHeader(ua string) Option {
return setHeader(HTTPHeaderUserAgent, ua)
}
// ForbidOverWrite is an option to set X-Oss-Forbid-Overwrite
func ForbidOverWrite(forbidWrite bool) Option {
if forbidWrite {
return setHeader(HTTPHeaderOssForbidOverWrite, "true")
} else {
return setHeader(HTTPHeaderOssForbidOverWrite, "false")
}
}
// RangeBehavior is an option to set Range value, such as "standard"
func RangeBehavior(value string) Option {
return setHeader(HTTPHeaderOssRangeBehavior, value)
}
// PartHashCtxHeader is an option to set X-Oss-Hash-Ctx header
func PartHashCtxHeader(value string) Option {
return setHeader(HTTPHeaderOssHashCtx, value)
}
// PartMd5CtxHeader is an option to set X-Oss-Md5-Ctx header
func PartMd5CtxHeader(value string) Option {
return setHeader(HTTPHeaderOssMd5Ctx, value)
}
// PartHashCtxParam is an option to set x-oss-hash-ctx parameter
func PartHashCtxParam(value string) Option {
return addParam("x-oss-hash-ctx", value)
}
// PartMd5CtxParam is an option to set x-oss-md5-ctx parameter
func PartMd5CtxParam(value string) Option {
return addParam("x-oss-md5-ctx", value)
}
// Delimiter is an option to set delimiter parameter
func Delimiter(value string) Option {
return addParam("delimiter", value)
}
// Marker is an option to set marker parameter
func Marker(value string) Option {
return addParam("marker", value)
}
// MaxKeys is an option to set maxkeys parameter
func MaxKeys(value int) Option {
return addParam("max-keys", strconv.Itoa(value))
}
// Prefix is an option to set prefix parameter
func Prefix(value string) Option {
return addParam("prefix", value)
}
// EncodingType is an option to set encoding-type parameter
func EncodingType(value string) Option {
return addParam("encoding-type", value)
}
// MaxUploads is an option to set max-uploads parameter
func MaxUploads(value int) Option {
return addParam("max-uploads", strconv.Itoa(value))
}
// KeyMarker is an option to set key-marker parameter
func KeyMarker(value string) Option {
return addParam("key-marker", value)
}
// VersionIdMarker is an option to set version-id-marker parameter
func VersionIdMarker(value string) Option {
return addParam("version-id-marker", value)
}
// VersionId is an option to set versionId parameter
func VersionId(value string) Option {
return addParam("versionId", value)
}
// TagKey is an option to set tag key parameter
func TagKey(value string) Option {
return addParam("tag-key", value)
}
// TagValue is an option to set tag value parameter
func TagValue(value string) Option {
return addParam("tag-value", value)
}
// UploadIDMarker is an option to set upload-id-marker parameter
func UploadIDMarker(value string) Option {
return addParam("upload-id-marker", value)
}
// MaxParts is an option to set max-parts parameter
func MaxParts(value int) Option {
return addParam("max-parts", strconv.Itoa(value))
}
// PartNumberMarker is an option to set part-number-marker parameter
func PartNumberMarker(value int) Option {
return addParam("part-number-marker", strconv.Itoa(value))
}
// Sequential is an option to set sequential parameter for InitiateMultipartUpload
func Sequential() Option {
return addParam("sequential", "")
}
// WithHashContext is an option to set withHashContext parameter for InitiateMultipartUpload
func WithHashContext() Option {
return addParam("withHashContext", "")
}
// EnableMd5 is an option to set x-oss-enable-md5 parameter for InitiateMultipartUpload
func EnableMd5() Option {
return addParam("x-oss-enable-md5", "")
}
// EnableSha1 is an option to set x-oss-enable-sha1 parameter for InitiateMultipartUpload
func EnableSha1() Option {
return addParam("x-oss-enable-sha1", "")
}
// EnableSha256 is an option to set x-oss-enable-sha256 parameter for InitiateMultipartUpload
func EnableSha256() Option {
return addParam("x-oss-enable-sha256", "")
}
// ListType is an option to set List-type parameter for ListObjectsV2
func ListType(value int) Option {
return addParam("list-type", strconv.Itoa(value))
}
// StartAfter is an option to set start-after parameter for ListObjectsV2
func StartAfter(value string) Option {
return addParam("start-after", value)
}
// ContinuationToken is an option to set Continuation-token parameter for ListObjectsV2
func ContinuationToken(value string) Option {
if value == "" {
return addParam("continuation-token", nil)
}
return addParam("continuation-token", value)
}
// FetchOwner is an option to set Fetch-owner parameter for ListObjectsV2
func FetchOwner(value bool) Option {
if value {
return addParam("fetch-owner", "true")
}
return addParam("fetch-owner", "false")
}
// DeleteObjectsQuiet false: DeleteObjects runs in verbose mode; true: DeleteObjects runs in quiet mode. Default is false.
func DeleteObjectsQuiet(isQuiet bool) Option {
return addArg(deleteObjectsQuiet, isQuiet)
}
// StorageClass bucket storage class
func StorageClass(value StorageClassType) Option {
return addArg(storageClass, value)
}
// RedundancyType bucket data redundancy type
func RedundancyType(value DataRedundancyType) Option {
return addArg(redundancyType, value)
}
// ObjectHashFunc sets the hash function used to compute the object hash
func ObjectHashFunc(value ObjecthashFuncType) Option {
return addArg(objectHashFunc, value)
}
// Checkpoint configuration
type cpConfig struct {
IsEnable bool
FilePath string
DirPath string
}
// Checkpoint sets the isEnable flag and checkpoint file path for DownloadFile/UploadFile.
func Checkpoint(isEnable bool, filePath string) Option {
return addArg(checkpointConfig, &cpConfig{IsEnable: isEnable, FilePath: filePath})
}
// CheckpointDir sets the isEnable flag and checkpoint dir path for DownloadFile/UploadFile.
func CheckpointDir(isEnable bool, dirPath string) Option {
return addArg(checkpointConfig, &cpConfig{IsEnable: isEnable, DirPath: dirPath})
}
// Routines DownloadFile/UploadFile routine count
func Routines(n int) Option {
return addArg(routineNum, n)
}
// InitCRC sets the initial CRC for AppendObject
func InitCRC(initCRC uint64) Option {
return addArg(initCRC64, initCRC)
}
// Progress sets the progress listener
func Progress(listener ProgressListener) Option {
return addArg(progressListener, listener)
}
// GetResponseHeader is an option to get the response http header
func GetResponseHeader(respHeader *http.Header) Option {
return addArg(responseHeader, respHeader)
}
// ResponseContentType is an option to set response-content-type param
func ResponseContentType(value string) Option {
return addParam("response-content-type", value)
}
// ResponseContentLanguage is an option to set response-content-language param
func ResponseContentLanguage(value string) Option {
return addParam("response-content-language", value)
}
// ResponseExpires is an option to set response-expires param
func ResponseExpires(value string) Option {
return addParam("response-expires", value)
}
// ResponseCacheControl is an option to set response-cache-control param
func ResponseCacheControl(value string) Option {
return addParam("response-cache-control", value)
}
// ResponseContentDisposition is an option to set response-content-disposition param
func ResponseContentDisposition(value string) Option {
return addParam("response-content-disposition", value)
}
// ResponseContentEncoding is an option to set response-content-encoding param
func ResponseContentEncoding(value string) Option {
return addParam("response-content-encoding", value)
}
// Process is an option to set x-oss-process param
func Process(value string) Option {
return addParam("x-oss-process", value)
}
// TrafficLimitParam is an option to set x-oss-traffic-limit
func TrafficLimitParam(value int64) Option {
return addParam("x-oss-traffic-limit", strconv.FormatInt(value, 10))
}
// SetHeader allows users to set personalized http headers
func SetHeader(key string, value interface{}) Option {
return setHeader(key, value)
}
// AddParam allows users to set personalized http params
func AddParam(key string, value interface{}) Option {
return addParam(key, value)
}
func setHeader(key string, value interface{}) Option {
return func(params map[string]optionValue) error {
if value == nil {
return nil
}
params[key] = optionValue{value, optionHTTP}
return nil
}
}
func addParam(key string, value interface{}) Option {
return func(params map[string]optionValue) error {
if value == nil {
return nil
}
params[key] = optionValue{value, optionParam}
return nil
}
}
func addArg(key string, value interface{}) Option {
return func(params map[string]optionValue) error {
if value == nil {
return nil
}
params[key] = optionValue{value, optionArg}
return nil
}
}
func handleOptions(headers map[string]string, options []Option) error {
params := map[string]optionValue{}
for _, option := range options {
if option != nil {
if err := option(params); err != nil {
return err
}
}
}
for k, v := range params {
if v.Type == optionHTTP {
headers[k] = v.Value.(string)
}
}
return nil
}
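// Sketch of the flow (illustrative): options are replayed into a map, then
// only the header-typed entries are copied into the outgoing HTTP headers.
//
//	headers := map[string]string{}
//	_ = handleOptions(headers, []Option{ContentType("text/plain"), Prefix("a/")})
//	// headers == {"Content-Type": "text/plain"}; "prefix" is a URL parameter
//	// and is therefore ignored here (it is extracted via GetRawParams instead).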
func GetRawParams(options []Option) (map[string]interface{}, error) {
// Option
params := map[string]optionValue{}
for _, option := range options {
if option != nil {
if err := option(params); err != nil {
return nil, err
}
}
}
paramsm := map[string]interface{}{}
// Serialize
for k, v := range params {
if v.Type == optionParam {
vs := params[k]
paramsm[k] = vs.Value.(string)
}
}
return paramsm, nil
}
func FindOption(options []Option, param string, defaultVal interface{}) (interface{}, error) {
params := map[string]optionValue{}
for _, option := range options {
if option != nil {
if err := option(params); err != nil {
return nil, err
}
}
}
if val, ok := params[param]; ok {
return val.Value, nil
}
return defaultVal, nil
}
func IsOptionSet(options []Option, option string) (bool, interface{}, error) {
params := map[string]optionValue{}
for _, option := range options {
if option != nil {
if err := option(params); err != nil {
return false, nil, err
}
}
}
if val, ok := params[option]; ok {
return true, val.Value, nil
}
return false, nil, nil
}
func DeleteOption(options []Option, strKey string) []Option {
var outOption []Option
params := map[string]optionValue{}
for _, option := range options {
if option != nil {
option(params)
_, exist := params[strKey]
if !exist {
outOption = append(outOption, option)
} else {
delete(params, strKey)
}
}
}
return outOption
}
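// Illustrative usage of the option inspection helpers defined above:
//
//	opts := []Option{Prefix("a/"), MaxKeys(100)}
//	v, _ := FindOption(opts, "max-keys", "1000") // "100"
//	ok, p, _ := IsOptionSet(opts, "prefix")      // true, "a/"
//	opts = DeleteOption(opts, "prefix")          // only the MaxKeys option remains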
func GetRequestId(header http.Header) string {
return header.Get("x-oss-request-id")
}
func GetVersionId(header http.Header) string {
return header.Get("x-oss-version-id")
}
func GetCopySrcVersionId(header http.Header) string {
return header.Get("x-oss-copy-source-version-id")
}
func GetDeleteMark(header http.Header) bool {
value := header.Get("x-oss-delete-marker")
if strings.ToUpper(value) == "TRUE" {
return true
}
return false
}
func GetQosDelayTime(header http.Header) string {
return header.Get("x-oss-qos-delay-time")
}
// AllowSameActionOverLap is an option to set X-Oss-Allow-Same-Action-Overlap header
func AllowSameActionOverLap(enabled bool) Option {
if enabled {
return setHeader(HTTPHeaderAllowSameActionOverLap, "true")
} else {
return setHeader(HTTPHeaderAllowSameActionOverLap, "false")
}
}

@ -0,0 +1,116 @@
package oss
import (
"io"
)
// ProgressEventType defines transfer progress event type
type ProgressEventType int
const (
// TransferStartedEvent transfer started, set TotalBytes
TransferStartedEvent ProgressEventType = 1 + iota
// TransferDataEvent transfer data, set ConsumedBytes and TotalBytes
TransferDataEvent
// TransferCompletedEvent transfer completed
TransferCompletedEvent
// TransferFailedEvent transfer encounters an error
TransferFailedEvent
)
// ProgressEvent defines progress event
type ProgressEvent struct {
ConsumedBytes int64
TotalBytes int64
RwBytes int64
EventType ProgressEventType
}
// ProgressListener listens progress change
type ProgressListener interface {
ProgressChanged(event *ProgressEvent)
}
// -------------------- Private --------------------
func newProgressEvent(eventType ProgressEventType, consumed, total int64, rwBytes int64) *ProgressEvent {
return &ProgressEvent{
ConsumedBytes: consumed,
TotalBytes: total,
RwBytes: rwBytes,
EventType: eventType}
}
// publishProgress publishes the progress event to the registered listener
func publishProgress(listener ProgressListener, event *ProgressEvent) {
if listener != nil && event != nil {
listener.ProgressChanged(event)
}
}
type readerTracker struct {
completedBytes int64
}
type teeReader struct {
reader io.Reader
writer io.Writer
listener ProgressListener
consumedBytes int64
totalBytes int64
tracker *readerTracker
}
// TeeReader returns a Reader that writes to w what it reads from r.
// All reads from r performed through it are matched with
// corresponding writes to w. There is no internal buffering -
// the write must complete before the read completes.
// Any error encountered while writing is reported as a read error.
func TeeReader(reader io.Reader, writer io.Writer, totalBytes int64, listener ProgressListener, tracker *readerTracker) io.ReadCloser {
return &teeReader{
reader: reader,
writer: writer,
listener: listener,
consumedBytes: 0,
totalBytes: totalBytes,
tracker: tracker,
}
}
func (t *teeReader) Read(p []byte) (n int, err error) {
n, err = t.reader.Read(p)
// Read encountered error
if err != nil && err != io.EOF {
event := newProgressEvent(TransferFailedEvent, t.consumedBytes, t.totalBytes, 0)
publishProgress(t.listener, event)
}
if n > 0 {
t.consumedBytes += int64(n)
// CRC
if t.writer != nil {
if n, err := t.writer.Write(p[:n]); err != nil {
return n, err
}
}
// Progress
if t.listener != nil {
event := newProgressEvent(TransferDataEvent, t.consumedBytes, t.totalBytes, int64(n))
publishProgress(t.listener, event)
}
// Track
if t.tracker != nil {
t.tracker.completedBytes = t.consumedBytes
}
}
return
}
func (t *teeReader) Close() error {
if rc, ok := t.reader.(io.ReadCloser); ok {
return rc.Close()
}
return nil
}
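// Illustrative sketch of a custom ProgressListener (not part of the SDK): it
// logs byte counts as TeeReader publishes TransferDataEvent notifications.
//
//	type logListener struct{}
//
//	func (l *logListener) ProgressChanged(e *oss.ProgressEvent) {
//	    switch e.EventType {
//	    case oss.TransferDataEvent:
//	        fmt.Printf("transferred %d/%d bytes\n", e.ConsumedBytes, e.TotalBytes)
//	    case oss.TransferFailedEvent:
//	        fmt.Println("transfer failed")
//	    }
//	}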

@ -0,0 +1,11 @@
// +build !go1.7

package oss
import "net/http"
// http.ErrUseLastResponse is only defined from go1.7 onward
func disableHTTPRedirect(client *http.Client) {
}

@ -0,0 +1,12 @@
// +build go1.7

package oss
import "net/http"
// http.ErrUseLastResponse is only defined from go1.7 onward
func disableHTTPRedirect(client *http.Client) {
client.CheckRedirect = func(req *http.Request, via []*http.Request) error {
return http.ErrUseLastResponse
}
}

@ -0,0 +1,197 @@
package oss
import (
"bytes"
"encoding/xml"
"hash/crc32"
"io"
"io/ioutil"
"net/http"
"os"
"strings"
)
// CreateSelectCsvObjectMeta creates the csv object meta
//
// key the object key.
// csvMeta the csv file meta
// options the options for create csv Meta of the object.
//
// MetaEndFrameCSV the csv file meta info
// error it's nil if no error, otherwise it's an error object.
//
func (bucket Bucket) CreateSelectCsvObjectMeta(key string, csvMeta CsvMetaRequest, options ...Option) (MetaEndFrameCSV, error) {
var endFrame MetaEndFrameCSV
params := map[string]interface{}{}
params["x-oss-process"] = "csv/meta"
csvMeta.encodeBase64()
bs, err := xml.Marshal(csvMeta)
if err != nil {
return endFrame, err
}
buffer := new(bytes.Buffer)
buffer.Write(bs)
resp, err := bucket.DoPostSelectObject(key, params, buffer, options...)
if err != nil {
return endFrame, err
}
defer resp.Body.Close()
_, err = ioutil.ReadAll(resp)
return resp.Frame.MetaEndFrameCSV, err
}
// CreateSelectJsonObjectMeta creates the json object meta
//
// key the object key.
// jsonMeta the json file meta
// options the options for create json Meta of the object.
//
// MetaEndFrameJSON the json file meta info
// error it's nil if no error, otherwise it's an error object.
//
func (bucket Bucket) CreateSelectJsonObjectMeta(key string, jsonMeta JsonMetaRequest, options ...Option) (MetaEndFrameJSON, error) {
var endFrame MetaEndFrameJSON
params := map[string]interface{}{}
params["x-oss-process"] = "json/meta"
bs, err := xml.Marshal(jsonMeta)
if err != nil {
return endFrame, err
}
buffer := new(bytes.Buffer)
buffer.Write(bs)
resp, err := bucket.DoPostSelectObject(key, params, buffer, options...)
if err != nil {
return endFrame, err
}
defer resp.Body.Close()
_, err = ioutil.ReadAll(resp)
return resp.Frame.MetaEndFrameJSON, err
}
// SelectObject is the select object api; it supports csv and json files.
//
// key the object key.
// selectReq the request data for select object
// options the options for select file of the object.
//
// io.ReadCloser reader instance for reading data from the response. Close() must be called after use; it is only valid when error is nil.
// error it's nil if no error, otherwise it's an error object.
//
func (bucket Bucket) SelectObject(key string, selectReq SelectRequest, options ...Option) (io.ReadCloser, error) {
params := map[string]interface{}{}
if selectReq.InputSerializationSelect.JsonBodyInput.JsonIsEmpty() {
params["x-oss-process"] = "csv/select" // default select csv file
} else {
params["x-oss-process"] = "json/select"
}
selectReq.encodeBase64()
bs, err := xml.Marshal(selectReq)
if err != nil {
return nil, err
}
buffer := new(bytes.Buffer)
buffer.Write(bs)
resp, err := bucket.DoPostSelectObject(key, params, buffer, options...)
if err != nil {
return nil, err
}
if selectReq.OutputSerializationSelect.EnablePayloadCrc != nil && *selectReq.OutputSerializationSelect.EnablePayloadCrc == true {
resp.Frame.EnablePayloadCrc = true
}
resp.Frame.OutputRawData = strings.ToUpper(resp.Headers.Get("x-oss-select-output-raw")) == "TRUE"
return resp, err
}
// DoPostSelectObject is the SelectObject/CreateMeta api; it supports csv and json files.
//
// key the object key.
// params the resource of oss approve csv/meta, json/meta, csv/select, json/select.
// buf the request data trans to buffer.
// options the options for select file of the object.
//
// SelectObjectResponse the response of select object.
// error it's nil if no error, otherwise it's an error object.
//
func (bucket Bucket) DoPostSelectObject(key string, params map[string]interface{}, buf *bytes.Buffer, options ...Option) (*SelectObjectResponse, error) {
resp, err := bucket.do("POST", key, params, options, buf, nil)
if err != nil {
return nil, err
}
result := &SelectObjectResponse{
Body: resp.Body,
StatusCode: resp.StatusCode,
Frame: SelectObjectResult{},
}
result.Headers = resp.Headers
// result.Frame = SelectObjectResult{}
result.ReadTimeOut = bucket.GetConfig().Timeout
// Progress
listener := GetProgressListener(options)
// CRC32
crcCalc := crc32.NewIEEE()
result.WriterForCheckCrc32 = crcCalc
result.Body = TeeReader(resp.Body, nil, 0, listener, nil)
err = CheckRespCode(resp.StatusCode, []int{http.StatusPartialContent, http.StatusOK})
return result, err
}
// SelectObjectIntoFile is the selectObject to file api
//
// key the object key.
// fileName the name of the local file the object is saved to.
// selectReq the request data for select object
// options the options for select file of the object.
//
// error it's nil if no error, otherwise it's an error object.
//
func (bucket Bucket) SelectObjectIntoFile(key, fileName string, selectReq SelectRequest, options ...Option) error {
tempFilePath := fileName + TempFileSuffix
params := map[string]interface{}{}
if selectReq.InputSerializationSelect.JsonBodyInput.JsonIsEmpty() {
params["x-oss-process"] = "csv/select" // default select csv file
} else {
params["x-oss-process"] = "json/select"
}
selectReq.encodeBase64()
bs, err := xml.Marshal(selectReq)
if err != nil {
return err
}
buffer := new(bytes.Buffer)
buffer.Write(bs)
resp, err := bucket.DoPostSelectObject(key, params, buffer, options...)
if err != nil {
return err
}
defer resp.Close()
// If the local file does not exist, create a new one. If it exists, overwrite it.
fd, err := os.OpenFile(tempFilePath, os.O_CREATE|os.O_TRUNC|os.O_WRONLY, FilePermMode)
if err != nil {
return err
}
// Copy the data to the local file path.
_, err = io.Copy(fd, resp)
fd.Close()
if err != nil {
return err
}
return os.Rename(tempFilePath, fileName)
}
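// Illustrative usage sketch (assumes a prepared bucket and a CSV object):
// SelectRequest carries the SQL statement in its Expression field.
//
//	req := oss.SelectRequest{Expression: "select _1, _2 from ossobject"}
//	body, err := bucket.SelectObject("data.csv", req)
//	if err == nil {
//	    defer body.Close()
//	    data, _ := ioutil.ReadAll(body)
//	    _ = data
//	}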

@ -0,0 +1,364 @@
package oss
import (
"bytes"
"encoding/binary"
"fmt"
"hash"
"hash/crc32"
"io"
"net/http"
"time"
)
// The adapter type for the Select object's response.
// The response consists of frames. Each frame has the following format:
// Type | Payload Length | Header Checksum | Payload | Payload Checksum
// <-4 bytes-><---4 bytes----><---4 bytes-----><--n/a--><----4 bytes------>
// And there are three kinds of frames.
// Data Frame:
// Type: 8388609
// Payload: Offset | Data
//          <-8 bytes->
// Continuous Frame:
// Type: 8388612
// Payload: Offset (8 bytes)
// End Frame:
// Type: 8388613
// Payload: Offset | total scanned bytes | http status code | error message
//          <-8 bytes--><-----8 bytes------><----4 bytes-----><--variable-->
// SelectObjectResponse defines HTTP response from OSS SelectObject
type SelectObjectResponse struct {
StatusCode int
Headers http.Header
Body io.ReadCloser
Frame SelectObjectResult
ReadTimeOut uint
ClientCRC32 uint32
ServerCRC32 uint32
WriterForCheckCrc32 hash.Hash32
Finish bool
}
func (sr *SelectObjectResponse) Read(p []byte) (n int, err error) {
n, err = sr.readFrames(p)
return
}
// Close closes the http response body
func (sr *SelectObjectResponse) Close() error {
return sr.Body.Close()
}
// PostSelectResult wraps the response of SelectObject
type PostSelectResult struct {
Response *SelectObjectResponse
}
// readFrames reads the response frames
func (sr *SelectObjectResponse) readFrames(p []byte) (int, error) {
var nn int
var err error
var checkValid bool
if sr.Frame.OutputRawData == true {
nn, err = sr.Body.Read(p)
return nn, err
}
if sr.Finish {
return 0, io.EOF
}
for {
// if this frame's header has already been read, do not read it again
if !sr.Frame.OpenLine {
err = sr.analysisHeader()
if err != nil {
return nn, err
}
}
if sr.Frame.FrameType == DataFrameType {
n, err := sr.analysisData(p[nn:])
if err != nil {
return nn, err
}
nn += n
// if all data of this frame has been read, reset the frame so the next one can be read
if sr.Frame.ConsumedBytesLength == sr.Frame.PayloadLength-8 {
checkValid, err = sr.checkPayloadSum()
if err != nil || !checkValid {
return nn, fmt.Errorf("%s", err.Error())
}
sr.emptyFrame()
}
if nn == len(p) {
return nn, nil
}
} else if sr.Frame.FrameType == ContinuousFrameType {
checkValid, err = sr.checkPayloadSum()
if err != nil || !checkValid {
return nn, fmt.Errorf("%s", err.Error())
}
} else if sr.Frame.FrameType == EndFrameType {
err = sr.analysisEndFrame()
if err != nil {
return nn, err
}
checkValid, err = sr.checkPayloadSum()
if checkValid {
sr.Finish = true
}
return nn, err
} else if sr.Frame.FrameType == MetaEndFrameCSVType {
err = sr.analysisMetaEndFrameCSV()
if err != nil {
return nn, err
}
checkValid, err = sr.checkPayloadSum()
if checkValid {
sr.Finish = true
}
return nn, err
} else if sr.Frame.FrameType == MetaEndFrameJSONType {
err = sr.analysisMetaEndFrameJSON()
if err != nil {
return nn, err
}
checkValid, err = sr.checkPayloadSum()
if checkValid {
sr.Finish = true
}
return nn, err
}
}
return nn, nil
}
type chanReadIO struct {
readLen int
err error
}
func (sr *SelectObjectResponse) readLen(p []byte, timeOut time.Duration) (int, error) {
r := sr.Body
// The channel is buffered and deliberately left open: closing it on timeout
// would make the still-running reader goroutine panic when it sends its result.
ch := make(chan chanReadIO, 1)
go func(p []byte) {
var needReadLength int
readChan := chanReadIO{}
needReadLength = len(p)
for {
n, err := r.Read(p[readChan.readLen:needReadLength])
readChan.readLen += n
if err != nil {
readChan.err = err
ch <- readChan
return
}
if readChan.readLen == needReadLength {
break
}
}
ch <- readChan
}(p)
select {
case <-time.After(time.Second * timeOut):
return 0, fmt.Errorf("requestId: %s, readLen timeout, timeout is %d(second),need read:%d", sr.Headers.Get(HTTPHeaderOssRequestID), timeOut, len(p))
case result := <-ch:
return result.readLen, result.err
}
}
// analysisHeader reads the header of the selectObject response body
func (sr *SelectObjectResponse) analysisHeader() error {
headFrameByte := make([]byte, 20)
_, err := sr.readLen(headFrameByte, time.Duration(sr.ReadTimeOut))
if err != nil {
return fmt.Errorf("requestId: %s, Read response frame header failure,err:%s", sr.Headers.Get(HTTPHeaderOssRequestID), err.Error())
}
frameTypeByte := headFrameByte[0:4]
sr.Frame.Version = frameTypeByte[0]
frameTypeByte[0] = 0
bytesToInt(frameTypeByte, &sr.Frame.FrameType)
if sr.Frame.FrameType != DataFrameType && sr.Frame.FrameType != ContinuousFrameType &&
sr.Frame.FrameType != EndFrameType && sr.Frame.FrameType != MetaEndFrameCSVType && sr.Frame.FrameType != MetaEndFrameJSONType {
return fmt.Errorf("requestId: %s, Unexpected frame type: %d", sr.Headers.Get(HTTPHeaderOssRequestID), sr.Frame.FrameType)
}
payloadLengthByte := headFrameByte[4:8]
bytesToInt(payloadLengthByte, &sr.Frame.PayloadLength)
headCheckSumByte := headFrameByte[8:12]
bytesToInt(headCheckSumByte, &sr.Frame.HeaderCheckSum)
byteOffset := headFrameByte[12:20]
bytesToInt(byteOffset, &sr.Frame.Offset)
sr.Frame.OpenLine = true
err = sr.writerCheckCrc32(byteOffset)
return err
}
// analysisData reads the DataFrameType payload of the selectObject response body
func (sr *SelectObjectResponse) analysisData(p []byte) (int, error) {
var needReadLength int32
lenP := int32(len(p))
restByteLength := sr.Frame.PayloadLength - 8 - sr.Frame.ConsumedBytesLength
if lenP <= restByteLength {
needReadLength = lenP
} else {
needReadLength = restByteLength
}
n, err := sr.readLen(p[:needReadLength], time.Duration(sr.ReadTimeOut))
if err != nil {
return n, fmt.Errorf("read frame data error,%s", err.Error())
}
sr.Frame.ConsumedBytesLength += int32(n)
err = sr.writerCheckCrc32(p[:n])
return n, err
}
// analysisEndFrame reads the EndFrameType payload of the selectObject response body
func (sr *SelectObjectResponse) analysisEndFrame() error {
var eF EndFrame
payLoadBytes := make([]byte, sr.Frame.PayloadLength-8)
_, err := sr.readLen(payLoadBytes, time.Duration(sr.ReadTimeOut))
if err != nil {
return fmt.Errorf("read end frame error:%s", err.Error())
}
bytesToInt(payLoadBytes[0:8], &eF.TotalScanned)
bytesToInt(payLoadBytes[8:12], &eF.HTTPStatusCode)
errMsgLength := sr.Frame.PayloadLength - 20
eF.ErrorMsg = string(payLoadBytes[12 : errMsgLength+12])
sr.Frame.EndFrame.TotalScanned = eF.TotalScanned
sr.Frame.EndFrame.HTTPStatusCode = eF.HTTPStatusCode
sr.Frame.EndFrame.ErrorMsg = eF.ErrorMsg
err = sr.writerCheckCrc32(payLoadBytes)
return err
}
// analysisMetaEndFrameCSV reads the MetaEndFrameCSVType payload of the selectObject response body
func (sr *SelectObjectResponse) analysisMetaEndFrameCSV() error {
var mCF MetaEndFrameCSV
payLoadBytes := make([]byte, sr.Frame.PayloadLength-8)
_, err := sr.readLen(payLoadBytes, time.Duration(sr.ReadTimeOut))
if err != nil {
return fmt.Errorf("read meta end csv frame error:%s", err.Error())
}
bytesToInt(payLoadBytes[0:8], &mCF.TotalScanned)
bytesToInt(payLoadBytes[8:12], &mCF.Status)
bytesToInt(payLoadBytes[12:16], &mCF.SplitsCount)
bytesToInt(payLoadBytes[16:24], &mCF.RowsCount)
bytesToInt(payLoadBytes[24:28], &mCF.ColumnsCount)
errMsgLength := sr.Frame.PayloadLength - 36
mCF.ErrorMsg = string(payLoadBytes[28 : errMsgLength+28])
sr.Frame.MetaEndFrameCSV.ErrorMsg = mCF.ErrorMsg
sr.Frame.MetaEndFrameCSV.TotalScanned = mCF.TotalScanned
sr.Frame.MetaEndFrameCSV.Status = mCF.Status
sr.Frame.MetaEndFrameCSV.SplitsCount = mCF.SplitsCount
sr.Frame.MetaEndFrameCSV.RowsCount = mCF.RowsCount
sr.Frame.MetaEndFrameCSV.ColumnsCount = mCF.ColumnsCount
err = sr.writerCheckCrc32(payLoadBytes)
return err
}
// analysisMetaEndFrameJSON reads the MetaEndFrameJSONType payload of the selectObject response body
func (sr *SelectObjectResponse) analysisMetaEndFrameJSON() error {
var mJF MetaEndFrameJSON
payLoadBytes := make([]byte, sr.Frame.PayloadLength-8)
_, err := sr.readLen(payLoadBytes, time.Duration(sr.ReadTimeOut))
if err != nil {
return fmt.Errorf("read meta end json frame error:%s", err.Error())
}
bytesToInt(payLoadBytes[0:8], &mJF.TotalScanned)
bytesToInt(payLoadBytes[8:12], &mJF.Status)
bytesToInt(payLoadBytes[12:16], &mJF.SplitsCount)
bytesToInt(payLoadBytes[16:24], &mJF.RowsCount)
errMsgLength := sr.Frame.PayloadLength - 32
mJF.ErrorMsg = string(payLoadBytes[24 : errMsgLength+24])
sr.Frame.MetaEndFrameJSON.ErrorMsg = mJF.ErrorMsg
sr.Frame.MetaEndFrameJSON.TotalScanned = mJF.TotalScanned
sr.Frame.MetaEndFrameJSON.Status = mJF.Status
sr.Frame.MetaEndFrameJSON.SplitsCount = mJF.SplitsCount
sr.Frame.MetaEndFrameJSON.RowsCount = mJF.RowsCount
err = sr.writerCheckCrc32(payLoadBytes)
return err
}
func (sr *SelectObjectResponse) checkPayloadSum() (bool, error) {
payLoadChecksumByte := make([]byte, 4)
n, err := sr.readLen(payLoadChecksumByte, time.Duration(sr.ReadTimeOut))
if n == 4 {
bytesToInt(payLoadChecksumByte, &sr.Frame.PayloadChecksum)
sr.ServerCRC32 = sr.Frame.PayloadChecksum
sr.ClientCRC32 = sr.WriterForCheckCrc32.Sum32()
if sr.Frame.EnablePayloadCrc == true && sr.ServerCRC32 != 0 && sr.ServerCRC32 != sr.ClientCRC32 {
return false, fmt.Errorf("RequestId: %s, Unexpected frame type: %d, client %d but server %d",
sr.Headers.Get(HTTPHeaderOssRequestID), sr.Frame.FrameType, sr.ClientCRC32, sr.ServerCRC32)
}
return true, err
}
return false, fmt.Errorf("RequestId:%s, read checksum error:%s", sr.Headers.Get(HTTPHeaderOssRequestID), err.Error())
}
func (sr *SelectObjectResponse) writerCheckCrc32(p []byte) (err error) {
err = nil
if sr.Frame.EnablePayloadCrc == true {
_, err = sr.WriterForCheckCrc32.Write(p)
}
return err
}
// emptyFrame resets the SelectObjectResponse frame state
func (sr *SelectObjectResponse) emptyFrame() {
crcCalc := crc32.NewIEEE()
sr.WriterForCheckCrc32 = crcCalc
sr.Finish = false
sr.Frame.ConsumedBytesLength = 0
sr.Frame.OpenLine = false
sr.Frame.Version = byte(0)
sr.Frame.FrameType = 0
sr.Frame.PayloadLength = 0
sr.Frame.HeaderCheckSum = 0
sr.Frame.Offset = 0
sr.Frame.Data = ""
sr.Frame.EndFrame.TotalScanned = 0
sr.Frame.EndFrame.HTTPStatusCode = 0
sr.Frame.EndFrame.ErrorMsg = ""
sr.Frame.MetaEndFrameCSV.TotalScanned = 0
sr.Frame.MetaEndFrameCSV.Status = 0
sr.Frame.MetaEndFrameCSV.SplitsCount = 0
sr.Frame.MetaEndFrameCSV.RowsCount = 0
sr.Frame.MetaEndFrameCSV.ColumnsCount = 0
sr.Frame.MetaEndFrameCSV.ErrorMsg = ""
sr.Frame.MetaEndFrameJSON.TotalScanned = 0
sr.Frame.MetaEndFrameJSON.Status = 0
sr.Frame.MetaEndFrameJSON.SplitsCount = 0
sr.Frame.MetaEndFrameJSON.RowsCount = 0
sr.Frame.MetaEndFrameJSON.ErrorMsg = ""
sr.Frame.PayloadChecksum = 0
}
// bytesToInt converts a big-endian byte slice into the given integer type
func bytesToInt(b []byte, ret interface{}) {
binBuf := bytes.NewBuffer(b)
binary.Read(binBuf, binary.BigEndian, ret)
}
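// Worked example (illustrative): decoding a big-endian frame field with
// bytesToInt, as done for FrameType and PayloadLength above.
//
//	var v int32
//	bytesToInt([]byte{0x00, 0x00, 0x01, 0x00}, &v) // v == 256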

@ -0,0 +1,41 @@
// +build !go1.7

package oss
import (
"crypto/tls"
"net"
"net/http"
"time"
)
func newTransport(conn *Conn, config *Config) *http.Transport {
httpTimeOut := conn.config.HTTPTimeout
httpMaxConns := conn.config.HTTPMaxConns
// New Transport
transport := &http.Transport{
Dial: func(netw, addr string) (net.Conn, error) {
d := net.Dialer{
Timeout: httpTimeOut.ConnectTimeout,
KeepAlive: 30 * time.Second,
}
if config.LocalAddr != nil {
d.LocalAddr = config.LocalAddr
}
conn, err := d.Dial(netw, addr)
if err != nil {
return nil, err
}
return newTimeoutConn(conn, httpTimeOut.ReadWriteTimeout, httpTimeOut.LongTimeout), nil
},
MaxIdleConnsPerHost: httpMaxConns.MaxIdleConnsPerHost,
ResponseHeaderTimeout: httpTimeOut.HeaderTimeout,
}
if config.InsecureSkipVerify {
transport.TLSClientConfig = &tls.Config{
InsecureSkipVerify: true,
}
}
return transport
}

@ -0,0 +1,44 @@
// +build go1.7

package oss
import (
"crypto/tls"
"net"
"net/http"
"time"
)
func newTransport(conn *Conn, config *Config) *http.Transport {
httpTimeOut := conn.config.HTTPTimeout
httpMaxConns := conn.config.HTTPMaxConns
// New Transport
transport := &http.Transport{
Dial: func(netw, addr string) (net.Conn, error) {
d := net.Dialer{
Timeout: httpTimeOut.ConnectTimeout,
KeepAlive: 30 * time.Second,
}
if config.LocalAddr != nil {
d.LocalAddr = config.LocalAddr
}
conn, err := d.Dial(netw, addr)
if err != nil {
return nil, err
}
return newTimeoutConn(conn, httpTimeOut.ReadWriteTimeout, httpTimeOut.LongTimeout), nil
},
MaxIdleConns: httpMaxConns.MaxIdleConns,
MaxIdleConnsPerHost: httpMaxConns.MaxIdleConnsPerHost,
MaxConnsPerHost: httpMaxConns.MaxConnsPerHost,
IdleConnTimeout: httpTimeOut.IdleConnTimeout,
ResponseHeaderTimeout: httpTimeOut.HeaderTimeout,
}
if config.InsecureSkipVerify {
transport.TLSClientConfig = &tls.Config{
InsecureSkipVerify: true,
}
}
return transport
}

File diff suppressed because it is too large

@ -0,0 +1,552 @@
package oss
import (
"crypto/md5"
"encoding/base64"
"encoding/hex"
"encoding/json"
"errors"
"fmt"
"io/ioutil"
"net/http"
"os"
"path/filepath"
"time"
)
// UploadFile is multipart file upload.
//
// objectKey the object name.
// filePath the local file path to upload.
// partSize the part size in byte.
// options the options for uploading object.
//
// error it's nil if the operation succeeds, otherwise it's an error object.
//
func (bucket Bucket) UploadFile(objectKey, filePath string, partSize int64, options ...Option) error {
if partSize < MinPartSize || partSize > MaxPartSize {
return errors.New("oss: part size invalid range (100KB, 5GB]")
}
cpConf := getCpConfig(options)
routines := getRoutines(options)
if cpConf != nil && cpConf.IsEnable {
cpFilePath := getUploadCpFilePath(cpConf, filePath, bucket.BucketName, objectKey)
if cpFilePath != "" {
return bucket.uploadFileWithCp(objectKey, filePath, partSize, options, cpFilePath, routines)
}
}
return bucket.uploadFile(objectKey, filePath, partSize, options, routines)
}
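// Illustrative usage sketch (assumes a configured bucket; the paths are
// placeholders): upload with three concurrent routines and a checkpoint file,
// so an interrupted upload can resume from the last completed part.
//
//	err := bucket.UploadFile("object-key", "/tmp/big.file", 5*1024*1024,
//	    oss.Routines(3), oss.Checkpoint(true, "/tmp/big.file.cp"))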
func getUploadCpFilePath(cpConf *cpConfig, srcFile, destBucket, destObject string) string {
if cpConf.FilePath == "" && cpConf.DirPath != "" {
dest := fmt.Sprintf("oss://%v/%v", destBucket, destObject)
absPath, _ := filepath.Abs(srcFile)
cpFileName := getCpFileName(absPath, dest, "")
cpConf.FilePath = cpConf.DirPath + string(os.PathSeparator) + cpFileName
}
return cpConf.FilePath
}
// ----- concurrent upload without checkpoint -----
// getCpConfig gets checkpoint configuration
func getCpConfig(options []Option) *cpConfig {
cpcOpt, err := FindOption(options, checkpointConfig, nil)
if err != nil || cpcOpt == nil {
return nil
}
return cpcOpt.(*cpConfig)
}
// getCpFileName return the name of the checkpoint file
func getCpFileName(src, dest, versionId string) string {
md5Ctx := md5.New()
md5Ctx.Write([]byte(src))
srcCheckSum := hex.EncodeToString(md5Ctx.Sum(nil))
md5Ctx.Reset()
md5Ctx.Write([]byte(dest))
destCheckSum := hex.EncodeToString(md5Ctx.Sum(nil))
if versionId == "" {
return fmt.Sprintf("%v-%v.cp", srcCheckSum, destCheckSum)
}
md5Ctx.Reset()
md5Ctx.Write([]byte(versionId))
versionCheckSum := hex.EncodeToString(md5Ctx.Sum(nil))
return fmt.Sprintf("%v-%v-%v.cp", srcCheckSum, destCheckSum, versionCheckSum)
}
// getRoutines gets the routine count. By default it's 1.
func getRoutines(options []Option) int {
rtnOpt, err := FindOption(options, routineNum, nil)
if err != nil || rtnOpt == nil {
return 1
}
rs := rtnOpt.(int)
if rs < 1 {
rs = 1
} else if rs > 100 {
rs = 100
}
return rs
}
// getPayer returns the payer of the request
func getPayer(options []Option) string {
payerOpt, err := FindOption(options, HTTPHeaderOssRequester, nil)
if err != nil || payerOpt == nil {
return ""
}
return payerOpt.(string)
}
// GetProgressListener gets the progress callback
func GetProgressListener(options []Option) ProgressListener {
isSet, listener, _ := IsOptionSet(options, progressListener)
if !isSet {
return nil
}
return listener.(ProgressListener)
}
// uploadPartHook is for testing usage
type uploadPartHook func(id int, chunk FileChunk) error
var uploadPartHooker uploadPartHook = defaultUploadPart
func defaultUploadPart(id int, chunk FileChunk) error {
return nil
}
// workerArg defines worker argument structure
type workerArg struct {
bucket *Bucket
filePath string
imur InitiateMultipartUploadResult
options []Option
hook uploadPartHook
}
// defaultUploadProgressListener is a no-op listener used to mute per-part
// progress events inside the workers
type defaultUploadProgressListener struct {
}
// ProgressChanged no-ops
func (listener *defaultUploadProgressListener) ProgressChanged(event *ProgressEvent) {
}
// worker is the worker coroutine function
func worker(id int, arg workerArg, jobs <-chan FileChunk, results chan<- UploadPart, failed chan<- error, die <-chan bool) {
for chunk := range jobs {
if err := arg.hook(id, chunk); err != nil {
failed <- err
break
}
var respHeader http.Header
p := Progress(&defaultUploadProgressListener{})
opts := make([]Option, 0, len(arg.options)+2)
opts = append(opts, arg.options...)
// use defaultUploadProgressListener
opts = append(opts, p, GetResponseHeader(&respHeader))
startT := time.Now().UnixNano() / 1000 / 1000 / 1000
part, err := arg.bucket.UploadPartFromFile(arg.imur, arg.filePath, chunk.Offset, chunk.Size, chunk.Number, opts...)
endT := time.Now().UnixNano() / 1000 / 1000 / 1000
if err != nil {
arg.bucket.Client.Config.WriteLog(Debug, "upload part error,cost:%d second,part number:%d,request id:%s,error:%s\n", endT-startT, chunk.Number, GetRequestId(respHeader), err.Error())
failed <- err
break
}
select {
case <-die:
return
default:
}
results <- part
}
}
// scheduler function
func scheduler(jobs chan FileChunk, chunks []FileChunk) {
for _, chunk := range chunks {
jobs <- chunk
}
close(jobs)
}
func getTotalBytes(chunks []FileChunk) int64 {
var tb int64
for _, chunk := range chunks {
tb += chunk.Size
}
return tb
}
// uploadFile is a concurrent upload, without checkpoint
func (bucket Bucket) uploadFile(objectKey, filePath string, partSize int64, options []Option, routines int) error {
listener := GetProgressListener(options)
chunks, err := SplitFileByPartSize(filePath, partSize)
if err != nil {
return err
}
partOptions := ChoiceTransferPartOption(options)
completeOptions := ChoiceCompletePartOption(options)
abortOptions := ChoiceAbortPartOption(options)
// Initialize the multipart upload
imur, err := bucket.InitiateMultipartUpload(objectKey, options...)
if err != nil {
return err
}
jobs := make(chan FileChunk, len(chunks))
results := make(chan UploadPart, len(chunks))
failed := make(chan error)
die := make(chan bool)
var completedBytes int64
totalBytes := getTotalBytes(chunks)
event := newProgressEvent(TransferStartedEvent, 0, totalBytes, 0)
publishProgress(listener, event)
// Start the worker coroutine
arg := workerArg{&bucket, filePath, imur, partOptions, uploadPartHooker}
for w := 1; w <= routines; w++ {
go worker(w, arg, jobs, results, failed, die)
}
// Schedule the jobs
go scheduler(jobs, chunks)
// Wait for the upload to finish
completed := 0
parts := make([]UploadPart, len(chunks))
for completed < len(chunks) {
select {
case part := <-results:
completed++
parts[part.PartNumber-1] = part
completedBytes += chunks[part.PartNumber-1].Size
// RwBytes here carries the size of the part that just completed;
// byte-level read/write progress has already been published in teeReader.Read()
event = newProgressEvent(TransferDataEvent, completedBytes, totalBytes, chunks[part.PartNumber-1].Size)
publishProgress(listener, event)
case err := <-failed:
close(die)
event = newProgressEvent(TransferFailedEvent, completedBytes, totalBytes, 0)
publishProgress(listener, event)
bucket.AbortMultipartUpload(imur, abortOptions...)
return err
}
if completed >= len(chunks) {
break
}
}
event = newProgressEvent(TransferCompletedEvent, completedBytes, totalBytes, 0)
publishProgress(listener, event)
// Complete the multipart upload
_, err = bucket.CompleteMultipartUpload(imur, parts, completeOptions...)
if err != nil {
bucket.AbortMultipartUpload(imur, abortOptions...)
return err
}
return nil
}
// ----- concurrent upload with checkpoint -----
const uploadCpMagic = "FE8BB4EA-B593-4FAC-AD7A-2459A36E2E62"
type uploadCheckpoint struct {
Magic string // Magic
MD5 string // Checkpoint file content's MD5
FilePath string // Local file path
FileStat cpStat // File state
ObjectKey string // Key
UploadID string // Upload ID
Parts []cpPart // All parts of the local file
}
type cpStat struct {
Size int64 // File size
LastModified time.Time // File's last modified time
MD5 string // Local file's MD5
}
type cpPart struct {
Chunk FileChunk // File chunk
Part UploadPart // Uploaded part
IsCompleted bool // Upload complete flag
}
// isValid checks whether the uploaded data is valid: it is valid when the local file has not been modified and the checkpoint data is intact.
func (cp uploadCheckpoint) isValid(filePath string) (bool, error) {
// Compare the CP's magic number and MD5.
cpb := cp
cpb.MD5 = ""
js, _ := json.Marshal(cpb)
sum := md5.Sum(js)
b64 := base64.StdEncoding.EncodeToString(sum[:])
if cp.Magic != uploadCpMagic || b64 != cp.MD5 {
return false, nil
}
// Check whether the local file has been modified.
fd, err := os.Open(filePath)
if err != nil {
return false, err
}
defer fd.Close()
st, err := fd.Stat()
if err != nil {
return false, err
}
md, err := calcFileMD5(filePath)
if err != nil {
return false, err
}
// Compare the file size, file's last modified time and file's MD5
if cp.FileStat.Size != st.Size() ||
!cp.FileStat.LastModified.Equal(st.ModTime()) ||
cp.FileStat.MD5 != md {
return false, nil
}
return true, nil
}
// load loads from the file
func (cp *uploadCheckpoint) load(filePath string) error {
contents, err := ioutil.ReadFile(filePath)
if err != nil {
return err
}
err = json.Unmarshal(contents, cp)
return err
}
// dump dumps to the local file
func (cp *uploadCheckpoint) dump(filePath string) error {
bcp := *cp
// Calculate MD5
bcp.MD5 = ""
js, err := json.Marshal(bcp)
if err != nil {
return err
}
sum := md5.Sum(js)
b64 := base64.StdEncoding.EncodeToString(sum[:])
bcp.MD5 = b64
// Serialization
js, err = json.Marshal(bcp)
if err != nil {
return err
}
// Dump
return ioutil.WriteFile(filePath, js, FilePermMode)
}
// updatePart updates the part status
func (cp *uploadCheckpoint) updatePart(part UploadPart) {
cp.Parts[part.PartNumber-1].Part = part
cp.Parts[part.PartNumber-1].IsCompleted = true
}
// todoParts returns unfinished parts
func (cp *uploadCheckpoint) todoParts() []FileChunk {
fcs := []FileChunk{}
for _, part := range cp.Parts {
if !part.IsCompleted {
fcs = append(fcs, part.Chunk)
}
}
return fcs
}
// allParts returns all parts
func (cp *uploadCheckpoint) allParts() []UploadPart {
ps := []UploadPart{}
for _, part := range cp.Parts {
ps = append(ps, part.Part)
}
return ps
}
// getCompletedBytes returns completed bytes count
func (cp *uploadCheckpoint) getCompletedBytes() int64 {
var completedBytes int64
for _, part := range cp.Parts {
if part.IsCompleted {
completedBytes += part.Chunk.Size
}
}
return completedBytes
}
// calcFileMD5 calculates the MD5 for the specified local file.
// Note: the current implementation is a stub that always returns an empty
// checksum, so checkpoint validation effectively skips the MD5 comparison.
func calcFileMD5(filePath string) (string, error) {
return "", nil
}
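// A possible implementation sketch (an assumption, not upstream behavior; it
// would also require importing "io"): hash the file contents so checkpoint
// validation can detect content changes, not just size/mtime changes.
//
//	fd, err := os.Open(filePath)
//	if err != nil {
//	    return "", err
//	}
//	defer fd.Close()
//	h := md5.New()
//	if _, err := io.Copy(h, fd); err != nil {
//	    return "", err
//	}
//	return hex.EncodeToString(h.Sum(nil)), nil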
// prepare initializes the multipart upload
func prepare(cp *uploadCheckpoint, objectKey, filePath string, partSize int64, bucket *Bucket, options []Option) error {
// CP
cp.Magic = uploadCpMagic
cp.FilePath = filePath
cp.ObjectKey = objectKey
// Local file
fd, err := os.Open(filePath)
if err != nil {
return err
}
defer fd.Close()
st, err := fd.Stat()
if err != nil {
return err
}
cp.FileStat.Size = st.Size()
cp.FileStat.LastModified = st.ModTime()
md, err := calcFileMD5(filePath)
if err != nil {
return err
}
cp.FileStat.MD5 = md
// Chunks
parts, err := SplitFileByPartSize(filePath, partSize)
if err != nil {
return err
}
cp.Parts = make([]cpPart, len(parts))
for i, part := range parts {
cp.Parts[i].Chunk = part
cp.Parts[i].IsCompleted = false
}
// Initiate the multipart upload
imur, err := bucket.InitiateMultipartUpload(objectKey, options...)
if err != nil {
return err
}
cp.UploadID = imur.UploadID
return nil
}
// complete completes the multipart upload and deletes the local CP files
func complete(cp *uploadCheckpoint, bucket *Bucket, parts []UploadPart, cpFilePath string, options []Option) error {
imur := InitiateMultipartUploadResult{Bucket: bucket.BucketName,
Key: cp.ObjectKey, UploadID: cp.UploadID}
_, err := bucket.CompleteMultipartUpload(imur, parts, options...)
if err != nil {
return err
}
os.Remove(cpFilePath)
return err
}
// uploadFileWithCp handles concurrent upload with checkpoint
func (bucket Bucket) uploadFileWithCp(objectKey, filePath string, partSize int64, options []Option, cpFilePath string, routines int) error {
listener := GetProgressListener(options)
partOptions := ChoiceTransferPartOption(options)
completeOptions := ChoiceCompletePartOption(options)
// Load CP data
ucp := uploadCheckpoint{}
err := ucp.load(cpFilePath)
if err != nil {
os.Remove(cpFilePath)
}
// If loading failed or the CP data is invalid, reinitialize the upload.
valid, err := ucp.isValid(filePath)
if err != nil || !valid {
if err = prepare(&ucp, objectKey, filePath, partSize, &bucket, options); err != nil {
return err
}
os.Remove(cpFilePath)
}
chunks := ucp.todoParts()
imur := InitiateMultipartUploadResult{
Bucket: bucket.BucketName,
Key: objectKey,
UploadID: ucp.UploadID}
jobs := make(chan FileChunk, len(chunks))
results := make(chan UploadPart, len(chunks))
failed := make(chan error)
die := make(chan bool)
completedBytes := ucp.getCompletedBytes()
// RwBytes is 0 here because byte-level read/write progress is published in teeReader.Read()
event := newProgressEvent(TransferStartedEvent, completedBytes, ucp.FileStat.Size, 0)
publishProgress(listener, event)
// Start the workers
arg := workerArg{&bucket, filePath, imur, partOptions, uploadPartHooker}
for w := 1; w <= routines; w++ {
go worker(w, arg, jobs, results, failed, die)
}
// Schedule jobs
go scheduler(jobs, chunks)
// Wait for the upload to finish
completed := 0
for completed < len(chunks) {
select {
case part := <-results:
completed++
ucp.updatePart(part)
ucp.dump(cpFilePath)
completedBytes += ucp.Parts[part.PartNumber-1].Chunk.Size
event = newProgressEvent(TransferDataEvent, completedBytes, ucp.FileStat.Size, ucp.Parts[part.PartNumber-1].Chunk.Size)
publishProgress(listener, event)
case err := <-failed:
close(die)
event = newProgressEvent(TransferFailedEvent, completedBytes, ucp.FileStat.Size, 0)
publishProgress(listener, event)
return err
}
if completed >= len(chunks) {
break
}
}
event = newProgressEvent(TransferCompletedEvent, completedBytes, ucp.FileStat.Size, 0)
publishProgress(listener, event)
// Complete the multipart upload
err = complete(&ucp, &bucket, ucp.allParts(), cpFilePath, completeOptions)
return err
}

@ -0,0 +1,534 @@
package oss
import (
"bytes"
"errors"
"fmt"
"hash/crc32"
"hash/crc64"
"io"
"net/http"
"os"
"os/exec"
"runtime"
"strconv"
"strings"
"time"
)
var sys_name string
var sys_release string
var sys_machine string
func init() {
sys_name = runtime.GOOS
sys_release = "-"
sys_machine = runtime.GOARCH
if out, err := exec.Command("uname", "-s").CombinedOutput(); err == nil {
sys_name = string(bytes.TrimSpace(out))
}
if out, err := exec.Command("uname", "-r").CombinedOutput(); err == nil {
sys_release = string(bytes.TrimSpace(out))
}
if out, err := exec.Command("uname", "-m").CombinedOutput(); err == nil {
sys_machine = string(bytes.TrimSpace(out))
}
}
// userAgent gets user agent
// It has the SDK version information, OS information and GO version
func userAgent() string {
sys := getSysInfo()
return fmt.Sprintf("aliyun-sdk-go/%s (%s/%s/%s;%s)", Version, sys.name,
sys.release, sys.machine, runtime.Version())
}
type sysInfo struct {
name string // OS name such as windows/Linux
release string // OS version 2.6.32-220.23.2.ali1089.el5.x86_64 etc
machine string // CPU type amd64/x86_64
}
// getSysInfo gets system info
// gets the OS information and CPU type
func getSysInfo() sysInfo {
return sysInfo{name: sys_name, release: sys_release, machine: sys_machine}
}
// GetRangeConfig gets the download range from the options.
func GetRangeConfig(options []Option) (*UnpackedRange, error) {
rangeOpt, err := FindOption(options, HTTPHeaderRange, nil)
if err != nil || rangeOpt == nil {
return nil, err
}
return ParseRange(rangeOpt.(string))
}
// UnpackedRange describes an HTTP range parsed into start and end points
type UnpackedRange struct {
HasStart bool // Flag indicates if the start point is specified
HasEnd bool // Flag indicates if the end point is specified
Start int64 // Start point
End int64 // End point
}
// InvalidRangeError returns invalid range error
func InvalidRangeError(r string) error {
return fmt.Errorf("InvalidRange %s", r)
}
// GetRangeString returns the string form of an UnpackedRange, such as "0-99", "100-" or "-50"
func GetRangeString(unpackRange UnpackedRange) string {
var strRange string
if unpackRange.HasStart && unpackRange.HasEnd {
strRange = fmt.Sprintf("%d-%d", unpackRange.Start, unpackRange.End)
} else if unpackRange.HasStart {
strRange = fmt.Sprintf("%d-", unpackRange.Start)
} else if unpackRange.HasEnd {
strRange = fmt.Sprintf("-%d", unpackRange.End)
}
return strRange
}
// ParseRange parses various styles of range such as bytes=M-N
func ParseRange(normalizedRange string) (*UnpackedRange, error) {
var err error
hasStart := false
hasEnd := false
var start int64
var end int64
// bytes=M-N (the prefix must be "bytes")
nrSlice := strings.Split(normalizedRange, "=")
if len(nrSlice) != 2 || nrSlice[0] != "bytes" {
return nil, InvalidRangeError(normalizedRange)
}
// bytes=M-N,X-Y (only the first range is used)
rSlice := strings.Split(nrSlice[1], ",")
rStr := rSlice[0]
if strings.HasSuffix(rStr, "-") { // M-
startStr := rStr[:len(rStr)-1]
start, err = strconv.ParseInt(startStr, 10, 64)
if err != nil {
return nil, InvalidRangeError(normalizedRange)
}
hasStart = true
} else if strings.HasPrefix(rStr, "-") { // -N
endStr := rStr[1:]
end, err = strconv.ParseInt(endStr, 10, 64)
if err != nil {
return nil, InvalidRangeError(normalizedRange)
}
if end == 0 { // -0
return nil, InvalidRangeError(normalizedRange)
}
hasEnd = true
} else { // M-N
valSlice := strings.Split(rStr, "-")
if len(valSlice) != 2 {
return nil, InvalidRangeError(normalizedRange)
}
start, err = strconv.ParseInt(valSlice[0], 10, 64)
if err != nil {
return nil, InvalidRangeError(normalizedRange)
}
hasStart = true
end, err = strconv.ParseInt(valSlice[1], 10, 64)
if err != nil {
return nil, InvalidRangeError(normalizedRange)
}
hasEnd = true
}
return &UnpackedRange{hasStart, hasEnd, start, end}, nil
}
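// Illustrative parses (covering the formats handled above):
//
//	ParseRange("bytes=0-99")  // => {HasStart: true, HasEnd: true, Start: 0, End: 99}
//	ParseRange("bytes=100-")  // => {HasStart: true, Start: 100}
//	ParseRange("bytes=-50")   // => {HasEnd: true, End: 50} (the last 50 bytes)
//	ParseRange("range=0-99")  // => error: the prefix must be "bytes"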
// AdjustRange returns adjusted range, adjust the range according to the length of the file
func AdjustRange(ur *UnpackedRange, size int64) (start, end int64) {
if ur == nil {
return 0, size
}
if ur.HasStart && ur.HasEnd {
start = ur.Start
end = ur.End + 1
if ur.Start < 0 || ur.Start >= size || ur.End > size || ur.Start > ur.End {
start = 0
end = size
}
} else if ur.HasStart {
start = ur.Start
end = size
if ur.Start < 0 || ur.Start >= size {
start = 0
}
} else if ur.HasEnd {
start = size - ur.End
end = size
if ur.End < 0 || ur.End > size {
start = 0
end = size
}
}
return
}
// GetNowSec returns the current time as a Unix time, the number of seconds elapsed
// since January 1, 1970 UTC.
func GetNowSec() int64 {
return time.Now().Unix()
}
// GetNowNanoSec returns the current time as a Unix time, the number of nanoseconds
// elapsed since January 1, 1970 UTC. The result is undefined if the Unix time
// in nanoseconds cannot be represented by an int64.
func GetNowNanoSec() int64 {
return time.Now().UnixNano()
}
// GetNowGMT gets the current time in GMT format.
func GetNowGMT() string {
return time.Now().UTC().Format(http.TimeFormat)
}
// FileChunk is the file chunk definition
type FileChunk struct {
Number int // Chunk number
Offset int64 // Chunk offset
Size int64 // Chunk size.
}
// SplitFileByPartNum splits a big file into parts by the number of parts.
// It returns the split result when error is nil.
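// For example, a 10-byte file split into 3 parts yields chunks of sizes 3, 3 and 4
// at offsets 0, 3 and 6 (the last chunk absorbs the remainder).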
func SplitFileByPartNum(fileName string, chunkNum int) ([]FileChunk, error) {
if chunkNum <= 0 || chunkNum > 10000 {
return nil, errors.New("chunkNum invalid")
}
file, err := os.Open(fileName)
if err != nil {
return nil, err
}
defer file.Close()
stat, err := file.Stat()
if err != nil {
return nil, err
}
if int64(chunkNum) > stat.Size() {
return nil, errors.New("oss: chunkNum invalid")
}
var chunks []FileChunk
var chunk = FileChunk{}
var chunkN = (int64)(chunkNum)
for i := int64(0); i < chunkN; i++ {
chunk.Number = int(i + 1)
chunk.Offset = i * (stat.Size() / chunkN)
if i == chunkN-1 {
chunk.Size = stat.Size()/chunkN + stat.Size()%chunkN
} else {
chunk.Size = stat.Size() / chunkN
}
chunks = append(chunks, chunk)
}
return chunks, nil
}
// SplitFileByPartSize splits a big file into parts by the given part size.
// It returns the FileChunk slice when error is nil.
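// For example, a 10-byte file split with chunkSize 4 yields chunks of sizes 4, 4 and 2
// at offsets 0, 4 and 8.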
func SplitFileByPartSize(fileName string, chunkSize int64) ([]FileChunk, error) {
if chunkSize <= 0 {
return nil, errors.New("chunkSize invalid")
}
file, err := os.Open(fileName)
if err != nil {
return nil, err
}
defer file.Close()
stat, err := file.Stat()
if err != nil {
return nil, err
}
var chunkN = stat.Size() / chunkSize
if chunkN >= 10000 {
return nil, errors.New("Too many parts, please increase part size")
}
var chunks []FileChunk
var chunk = FileChunk{}
for i := int64(0); i < chunkN; i++ {
chunk.Number = int(i + 1)
chunk.Offset = i * chunkSize
chunk.Size = chunkSize
chunks = append(chunks, chunk)
}
if stat.Size()%chunkSize > 0 {
chunk.Number = len(chunks) + 1
chunk.Offset = int64(len(chunks)) * chunkSize
chunk.Size = stat.Size() % chunkSize
chunks = append(chunks, chunk)
}
return chunks, nil
}
// GetPartEnd calculates the inclusive end position of a part starting at begin, given the total size and the per-part size.
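// For example, GetPartEnd(0, 10, 4) == 3, while GetPartEnd(8, 10, 4) == 9 (clamped to the last byte).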
func GetPartEnd(begin int64, total int64, per int64) int64 {
if begin+per > total {
return total - 1
}
return begin + per - 1
}
// CrcTable returns the table constructed from the specified polynomial
var CrcTable = func() *crc64.Table {
return crc64.MakeTable(crc64.ECMA)
}
// crc32Table returns the table constructed from the IEEE polynomial
var crc32Table = func() *crc32.Table {
return crc32.MakeTable(crc32.IEEE)
}
// ChoiceTransferPartOption chooses the valid options supported by UploadPart or DownloadPart
func ChoiceTransferPartOption(options []Option) []Option {
var outOption []Option
listener, _ := FindOption(options, progressListener, nil)
if listener != nil {
outOption = append(outOption, Progress(listener.(ProgressListener)))
}
payer, _ := FindOption(options, HTTPHeaderOssRequester, nil)
if payer != nil {
outOption = append(outOption, RequestPayer(PayerType(payer.(string))))
}
versionId, _ := FindOption(options, "versionId", nil)
if versionId != nil {
outOption = append(outOption, VersionId(versionId.(string)))
}
trafficLimit, _ := FindOption(options, HTTPHeaderOssTrafficLimit, nil)
if trafficLimit != nil {
speed, _ := strconv.ParseInt(trafficLimit.(string), 10, 64)
outOption = append(outOption, TrafficLimitHeader(speed))
}
respHeader, _ := FindOption(options, responseHeader, nil)
if respHeader != nil {
outOption = append(outOption, GetResponseHeader(respHeader.(*http.Header)))
}
return outOption
}
// ChoiceCompletePartOption chooses the valid options supported by CompleteMultipartUpload
func ChoiceCompletePartOption(options []Option) []Option {
var outOption []Option
listener, _ := FindOption(options, progressListener, nil)
if listener != nil {
outOption = append(outOption, Progress(listener.(ProgressListener)))
}
payer, _ := FindOption(options, HTTPHeaderOssRequester, nil)
if payer != nil {
outOption = append(outOption, RequestPayer(PayerType(payer.(string))))
}
acl, _ := FindOption(options, HTTPHeaderOssObjectACL, nil)
if acl != nil {
outOption = append(outOption, ObjectACL(ACLType(acl.(string))))
}
callback, _ := FindOption(options, HTTPHeaderOssCallback, nil)
if callback != nil {
outOption = append(outOption, Callback(callback.(string)))
}
callbackVar, _ := FindOption(options, HTTPHeaderOssCallbackVar, nil)
if callbackVar != nil {
outOption = append(outOption, CallbackVar(callbackVar.(string)))
}
respHeader, _ := FindOption(options, responseHeader, nil)
if respHeader != nil {
outOption = append(outOption, GetResponseHeader(respHeader.(*http.Header)))
}
forbidOverWrite, _ := FindOption(options, HTTPHeaderOssForbidOverWrite, nil)
if forbidOverWrite != nil {
if forbidOverWrite.(string) == "true" {
outOption = append(outOption, ForbidOverWrite(true))
} else {
outOption = append(outOption, ForbidOverWrite(false))
}
}
return outOption
}
// ChoiceAbortPartOption chooses the valid options supported by AbortMultipartUpload
func ChoiceAbortPartOption(options []Option) []Option {
var outOption []Option
payer, _ := FindOption(options, HTTPHeaderOssRequester, nil)
if payer != nil {
outOption = append(outOption, RequestPayer(PayerType(payer.(string))))
}
respHeader, _ := FindOption(options, responseHeader, nil)
if respHeader != nil {
outOption = append(outOption, GetResponseHeader(respHeader.(*http.Header)))
}
return outOption
}
// ChoiceHeadObjectOption chooses the valid options supported by HeadObject
func ChoiceHeadObjectOption(options []Option) []Option {
var outOption []Option
// HTTPHeaderRange is intentionally not selected, so that the whole object length is returned
payer, _ := FindOption(options, HTTPHeaderOssRequester, nil)
if payer != nil {
outOption = append(outOption, RequestPayer(PayerType(payer.(string))))
}
versionId, _ := FindOption(options, "versionId", nil)
if versionId != nil {
outOption = append(outOption, VersionId(versionId.(string)))
}
respHeader, _ := FindOption(options, responseHeader, nil)
if respHeader != nil {
outOption = append(outOption, GetResponseHeader(respHeader.(*http.Header)))
}
return outOption
}
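// CheckBucketName validates an OSS bucket name: 3-63 characters, only lowercase
// letters, digits and hyphens, and it must not start or end with a hyphen.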
func CheckBucketName(bucketName string) error {
nameLen := len(bucketName)
if nameLen < 3 || nameLen > 63 {
return fmt.Errorf("bucket name %s len is between [3-63],now is %d", bucketName, nameLen)
}
for _, v := range bucketName {
if !(('a' <= v && v <= 'z') || ('0' <= v && v <= '9') || v == '-') {
return fmt.Errorf("bucket name %s can only include lowercase letters, numbers, and -", bucketName)
}
}
if bucketName[0] == '-' || bucketName[nameLen-1] == '-' {
return fmt.Errorf("bucket name %s must start and end with a lowercase letter or number", bucketName)
}
return nil
}
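// GetReaderLen returns the content length of the reader when it can be
// determined from the concrete type; otherwise it returns an error.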
func GetReaderLen(reader io.Reader) (int64, error) {
var contentLength int64
var err error
switch v := reader.(type) {
case *bytes.Buffer:
contentLength = int64(v.Len())
case *bytes.Reader:
contentLength = int64(v.Len())
case *strings.Reader:
contentLength = int64(v.Len())
case *os.File:
fInfo, fError := v.Stat()
if fError != nil {
err = fmt.Errorf("can't get reader content length,%s", fError.Error())
} else {
contentLength = fInfo.Size()
}
case *io.LimitedReader:
contentLength = int64(v.N)
case *LimitedReadCloser:
contentLength = int64(v.N)
default:
err = fmt.Errorf("can't get reader content length,unkown reader type")
}
return contentLength, err
}
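// LimitReadCloser wraps r in a LimitedReadCloser that reads at most n bytes
// and forwards Close to r when r is an io.ReadCloser.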
func LimitReadCloser(r io.Reader, n int64) io.Reader {
var lc LimitedReadCloser
lc.R = r
lc.N = n
return &lc
}
// LimitedReadCloser is an io.LimitedReader that also supports Close()
type LimitedReadCloser struct {
io.LimitedReader
}
func (lc *LimitedReadCloser) Close() error {
if closer, ok := lc.R.(io.ReadCloser); ok {
return closer.Close()
}
return nil
}
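// DiscardReadCloser wraps an io.ReadCloser and discards the first Discard bytes read through it.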
type DiscardReadCloser struct {
RC io.ReadCloser
Discard int
}
func (drc *DiscardReadCloser) Read(b []byte) (int, error) {
n, err := drc.RC.Read(b)
if drc.Discard == 0 || n <= 0 {
return n, err
}
if n <= drc.Discard {
drc.Discard -= n
return 0, err
}
realLen := n - drc.Discard
copy(b[0:realLen], b[drc.Discard:n])
drc.Discard = 0
return realLen, err
}
func (drc *DiscardReadCloser) Close() error {
closer, ok := drc.RC.(io.ReadCloser)
if ok {
return closer.Close()
}
return nil
}
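// ConvertEmptyValueToNil replaces empty string values in params with nil for the given keys.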
func ConvertEmptyValueToNil(params map[string]interface{}, keys []string) {
for _, key := range keys {
value, ok := params[key]
if ok && value == "" {
// convert "" to nil
params[key] = nil
}
}
}
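// EscapeLFString replaces every line feed in str with the literal sequence \n,
// so the value can be logged on a single line.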
func EscapeLFString(str string) string {
var buf bytes.Buffer
for i := 0; i < len(str); i++ {
if str[i] != '\n' {
buf.WriteByte(str[i])
} else {
buf.WriteString("\\n")
}
}
return buf.String()
}
@ -0,0 +1,28 @@
---
codecov:
require_ci_to_pass: true
comment:
behavior: default
layout: reach, diff, flags, files, footer
require_base: false
require_changes: false
require_head: true
coverage:
precision: 2
range:
- 70
- 100
round: down
status:
changes: false
patch: true
project: true
parsers:
gcov:
branch_detection:
conditional: true
loop: true
macro: false
method: false
javascript:
enable_partials: false
@ -0,0 +1,11 @@
.idea
.DS_Store
/server/server.exe
/server/server
/server/server_dar*
/server/server_fre*
/server/server_win*
/server/server_net*
/server/server_ope*
/server/server_lin*
CHANGELOG.md
@ -0,0 +1,201 @@
Apache License
Version 2.0, January 2004
http://www.apache.org/licenses/
TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION
1. Definitions.
"License" shall mean the terms and conditions for use, reproduction,
and distribution as defined by Sections 1 through 9 of this document.
"Licensor" shall mean the copyright owner or entity authorized by
the copyright owner that is granting the License.
"Legal Entity" shall mean the union of the acting entity and all
other entities that control, are controlled by, or are under common
control with that entity. For the purposes of this definition,
"control" means (i) the power, direct or indirect, to cause the
direction or management of such entity, whether by contract or
otherwise, or (ii) ownership of fifty percent (50%) or more of the
outstanding shares, or (iii) beneficial ownership of such entity.
"You" (or "Your") shall mean an individual or Legal Entity
exercising permissions granted by this License.
"Source" form shall mean the preferred form for making modifications,
including but not limited to software source code, documentation
source, and configuration files.
"Object" form shall mean any form resulting from mechanical
transformation or translation of a Source form, including but
not limited to compiled object code, generated documentation,
and conversions to other media types.
"Work" shall mean the work of authorship, whether in Source or
Object form, made available under the License, as indicated by a
copyright notice that is included in or attached to the work
(an example is provided in the Appendix below).
"Derivative Works" shall mean any work, whether in Source or Object
form, that is based on (or derived from) the Work and for which the
editorial revisions, annotations, elaborations, or other modifications
represent, as a whole, an original work of authorship. For the purposes
of this License, Derivative Works shall not include works that remain
separable from, or merely link (or bind by name) to the interfaces of,
the Work and Derivative Works thereof.
"Contribution" shall mean any work of authorship, including
the original version of the Work and any modifications or additions
to that Work or Derivative Works thereof, that is intentionally
submitted to Licensor for inclusion in the Work by the copyright owner
or by an individual or Legal Entity authorized to submit on behalf of
the copyright owner. For the purposes of this definition, "submitted"
means any form of electronic, verbal, or written communication sent
to the Licensor or its representatives, including but not limited to
communication on electronic mailing lists, source code control systems,
and issue tracking systems that are managed by, or on behalf of, the
Licensor for the purpose of discussing and improving the Work, but
excluding communication that is conspicuously marked or otherwise
designated in writing by the copyright owner as "Not a Contribution."
"Contributor" shall mean Licensor and any individual or Legal Entity
on behalf of whom a Contribution has been received by Licensor and
subsequently incorporated within the Work.
2. Grant of Copyright License. Subject to the terms and conditions of
this License, each Contributor hereby grants to You a perpetual,
worldwide, non-exclusive, no-charge, royalty-free, irrevocable
copyright license to reproduce, prepare Derivative Works of,
publicly display, publicly perform, sublicense, and distribute the
Work and such Derivative Works in Source or Object form.
3. Grant of Patent License. Subject to the terms and conditions of
this License, each Contributor hereby grants to You a perpetual,
worldwide, non-exclusive, no-charge, royalty-free, irrevocable
(except as stated in this section) patent license to make, have made,
use, offer to sell, sell, import, and otherwise transfer the Work,
where such license applies only to those patent claims licensable
by such Contributor that are necessarily infringed by their
Contribution(s) alone or by combination of their Contribution(s)
with the Work to which such Contribution(s) was submitted. If You
institute patent litigation against any entity (including a
cross-claim or counterclaim in a lawsuit) alleging that the Work
or a Contribution incorporated within the Work constitutes direct
or contributory patent infringement, then any patent licenses
granted to You under this License for that Work shall terminate
as of the date such litigation is filed.
4. Redistribution. You may reproduce and distribute copies of the
Work or Derivative Works thereof in any medium, with or without
modifications, and in Source or Object form, provided that You
meet the following conditions:
(a) You must give any other recipients of the Work or
Derivative Works a copy of this License; and
(b) You must cause any modified files to carry prominent notices
stating that You changed the files; and
(c) You must retain, in the Source form of any Derivative Works
that You distribute, all copyright, patent, trademark, and
attribution notices from the Source form of the Work,
excluding those notices that do not pertain to any part of
the Derivative Works; and
(d) If the Work includes a "NOTICE" text file as part of its
distribution, then any Derivative Works that You distribute must
include a readable copy of the attribution notices contained
within such NOTICE file, excluding those notices that do not
pertain to any part of the Derivative Works, in at least one
of the following places: within a NOTICE text file distributed
as part of the Derivative Works; within the Source form or
documentation, if provided along with the Derivative Works; or,
within a display generated by the Derivative Works, if and
wherever such third-party notices normally appear. The contents
of the NOTICE file are for informational purposes only and
do not modify the License. You may add Your own attribution
notices within Derivative Works that You distribute, alongside
or as an addendum to the NOTICE text from the Work, provided
that such additional attribution notices cannot be construed
as modifying the License.
You may add Your own copyright statement to Your modifications and
may provide additional or different license terms and conditions
for use, reproduction, or distribution of Your modifications, or
for any such Derivative Works as a whole, provided Your use,
reproduction, and distribution of the Work otherwise complies with
the conditions stated in this License.
5. Submission of Contributions. Unless You explicitly state otherwise,
any Contribution intentionally submitted for inclusion in the Work
by You to the Licensor shall be under the terms and conditions of
this License, without any additional terms or conditions.
Notwithstanding the above, nothing herein shall supersede or modify
the terms of any separate license agreement you may have executed
with Licensor regarding such Contributions.
6. Trademarks. This License does not grant permission to use the trade
names, trademarks, service marks, or product names of the Licensor,
except as required for reasonable and customary use in describing the
origin of the Work and reproducing the content of the NOTICE file.
7. Disclaimer of Warranty. Unless required by applicable law or
agreed to in writing, Licensor provides the Work (and each
Contributor provides its Contributions) on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
implied, including, without limitation, any warranties or conditions
of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A
PARTICULAR PURPOSE. You are solely responsible for determining the
appropriateness of using or redistributing the Work and assume any
risks associated with Your exercise of permissions under this License.
8. Limitation of Liability. In no event and under no legal theory,
whether in tort (including negligence), contract, or otherwise,
unless required by applicable law (such as deliberate and grossly
negligent acts) or agreed to in writing, shall any Contributor be
liable to You for damages, including any direct, indirect, special,
incidental, or consequential damages of any character arising as a
result of this License or out of the use or inability to use the
Work (including but not limited to damages for loss of goodwill,
work stoppage, computer failure or malfunction, or any and all
other commercial damages or losses), even if such Contributor
has been advised of the possibility of such damages.
9. Accepting Warranty or Additional Liability. While redistributing
the Work or Derivative Works thereof, You may choose to offer,
and charge a fee for, acceptance of support, warranty, indemnity,
or other liability obligations and/or rights consistent with this
License. However, in accepting such obligations, You may act only
on Your own behalf and on Your sole responsibility, not on behalf
of any other Contributor, and only if You agree to indemnify,
defend, and hold each Contributor harmless for any liability
incurred by, or claims asserted against, such Contributor by reason
of your accepting any such warranty or additional liability.
END OF TERMS AND CONDITIONS
APPENDIX: How to apply the Apache License to your work.
To apply the Apache License to your work, attach the following
boilerplate notice, with the fields enclosed by brackets "{}"
replaced with your own identifying information. (Don't include
the brackets!) The text should be enclosed in the appropriate
comment syntax for the file format. We also recommend that a
file or class name and description of purpose be included on the
same "printed page" as the copyright notice for easier
identification within third-party archives.
Copyright {yyyy} {name of copyright owner}
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
@ -0,0 +1,200 @@
# BigCache [![Build Status](https://github.com/allegro/bigcache/workflows/build/badge.svg)](https://github.com/allegro/bigcache/actions?query=workflow%3Abuild)&nbsp;[![Coverage Status](https://coveralls.io/repos/github/allegro/bigcache/badge.svg?branch=master)](https://coveralls.io/github/allegro/bigcache?branch=master)&nbsp;[![GoDoc](https://godoc.org/github.com/allegro/bigcache?status.svg)](https://godoc.org/github.com/allegro/bigcache)&nbsp;[![Go Report Card](https://goreportcard.com/badge/github.com/allegro/bigcache)](https://goreportcard.com/report/github.com/allegro/bigcache)
Fast, concurrent, evicting in-memory cache written to keep a big number of entries without impacting performance.
BigCache keeps entries on the heap but omits GC for them. To achieve that, operations take place on byte slices,
therefore entries will need to be (de)serialized in front of the cache in most use cases.
Requires Go 1.12 or newer.
## Usage
### Simple initialization
```go
import "github.com/allegro/bigcache/v3"
cache, _ := bigcache.NewBigCache(bigcache.DefaultConfig(10 * time.Minute))
cache.Set("my-unique-key", []byte("value"))
entry, _ := cache.Get("my-unique-key")
fmt.Println(string(entry))
```
### Custom initialization
When the cache load can be predicted in advance, it is better to use custom initialization,
because additional memory allocation can be avoided that way.
```go
import (
"log"
"time"
"github.com/allegro/bigcache/v3"
)
config := bigcache.Config {
// number of shards (must be a power of 2)
Shards: 1024,
// time after which entry can be evicted
LifeWindow: 10 * time.Minute,
// Interval between removing expired entries (clean up).
// If set to <= 0 then no action is performed.
// Setting it to < 1 second is counterproductive; bigcache has a one second resolution.
CleanWindow: 5 * time.Minute,
// rps * lifeWindow, used only in initial memory allocation
MaxEntriesInWindow: 1000 * 10 * 60,
// max entry size in bytes, used only in initial memory allocation
MaxEntrySize: 500,
// prints information about additional memory allocation
Verbose: true,
// cache will not allocate more memory than this limit, value in MB
// if value is reached then the oldest entries can be overridden for the new ones
// 0 value means no size limit
HardMaxCacheSize: 8192,
// callback fired when the oldest entry is removed because of its expiration time or no space left
// for the new entry, or because delete was called.
// Default value is nil which means no callback and it prevents from unwrapping the oldest entry.
OnRemove: nil,
// OnRemoveWithReason is a callback fired when the oldest entry is removed because of its expiration time or no space left
// for the new entry, or because delete was called. A constant representing the reason will be passed through.
// Default value is nil which means no callback and it prevents from unwrapping the oldest entry.
// Ignored if OnRemove is specified.
OnRemoveWithReason: nil,
}
cache, initErr := bigcache.NewBigCache(config)
if initErr != nil {
log.Fatal(initErr)
}
cache.Set("my-unique-key", []byte("value"))
if entry, err := cache.Get("my-unique-key"); err == nil {
fmt.Println(string(entry))
}
```
### `LifeWindow` & `CleanWindow`
1. `LifeWindow` is a duration. After that time, an entry is considered dead but is not yet deleted.
2. `CleanWindow` is an interval. Each time it elapses, all dead entries are deleted, while entries that still have life are kept (see the sketch below).
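The interplay is easiest to see with deliberately short windows. A minimal sketch (the durations and key are illustrative, not recommendations):
```go
package main

import (
	"fmt"
	"time"

	"github.com/allegro/bigcache/v3"
)

func main() {
	config := bigcache.DefaultConfig(2 * time.Second) // LifeWindow: entries are dead after 2s
	config.CleanWindow = 1 * time.Second              // dead entries are removed every second

	cache, _ := bigcache.NewBigCache(config)
	cache.Set("my-unique-key", []byte("value"))

	time.Sleep(3 * time.Second) // past LifeWindow, and at least one CleanWindow has elapsed

	if _, err := cache.Get("my-unique-key"); err != nil {
		fmt.Println(err) // the entry was evicted: "Entry not found"
	}
}
```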
## [Benchmarks](https://github.com/allegro/bigcache-bench)
Three caches were compared: bigcache, [freecache](https://github.com/coocood/freecache) and map.
Benchmark tests were run on an
i7-6700K CPU @ 4.00GHz with 32GB of RAM on Ubuntu 18.04 LTS (5.2.12-050212-generic).
The benchmark source code can be found [here](https://github.com/allegro/bigcache-bench).
### Writes and reads
```bash
go version
go version go1.13 linux/amd64
go test -bench=. -benchmem -benchtime=4s ./... -timeout 30m
goos: linux
goarch: amd64
pkg: github.com/allegro/bigcache/v3/caches_bench
BenchmarkMapSet-8 12999889 376 ns/op 199 B/op 3 allocs/op
BenchmarkConcurrentMapSet-8 4355726 1275 ns/op 337 B/op 8 allocs/op
BenchmarkFreeCacheSet-8 11068976 703 ns/op 328 B/op 2 allocs/op
BenchmarkBigCacheSet-8 10183717 478 ns/op 304 B/op 2 allocs/op
BenchmarkMapGet-8 16536015 324 ns/op 23 B/op 1 allocs/op
BenchmarkConcurrentMapGet-8 13165708 401 ns/op 24 B/op 2 allocs/op
BenchmarkFreeCacheGet-8 10137682 690 ns/op 136 B/op 2 allocs/op
BenchmarkBigCacheGet-8 11423854 450 ns/op 152 B/op 4 allocs/op
BenchmarkBigCacheSetParallel-8 34233472 148 ns/op 317 B/op 3 allocs/op
BenchmarkFreeCacheSetParallel-8 34222654 268 ns/op 350 B/op 3 allocs/op
BenchmarkConcurrentMapSetParallel-8 19635688 240 ns/op 200 B/op 6 allocs/op
BenchmarkBigCacheGetParallel-8 60547064 86.1 ns/op 152 B/op 4 allocs/op
BenchmarkFreeCacheGetParallel-8 50701280 147 ns/op 136 B/op 3 allocs/op
BenchmarkConcurrentMapGetParallel-8 27353288 175 ns/op 24 B/op 2 allocs/op
PASS
ok github.com/allegro/bigcache/v3/caches_bench 256.257s
```
Writes and reads in bigcache are faster than in freecache.
Writes to map are the slowest.
### GC pause time
```bash
go version
go version go1.13 linux/amd64
go run caches_gc_overhead_comparison.go
Number of entries: 20000000
GC pause for bigcache: 1.506077ms
GC pause for freecache: 5.594416ms
GC pause for map: 9.347015ms
```
```
go version
go version go1.13 linux/arm64
go run caches_gc_overhead_comparison.go
Number of entries: 20000000
GC pause for bigcache: 22.382827ms
GC pause for freecache: 41.264651ms
GC pause for map: 72.236853ms
```
The test shows how long the GC pauses are for caches filled with 20 million entries.
Bigcache and freecache have very similar GC pause times.
### Memory usage
You may encounter system memory reporting what appears to be an exponential increase; however, this is expected behaviour. The Go runtime allocates memory in chunks or 'spans' and informs the OS when they are no longer required by changing their state to 'idle'. The 'spans' will remain part of the process resource usage until the OS needs to repurpose the address. Further reading is available [here](https://utcc.utoronto.ca/~cks/space/blog/programming/GoNoMemoryFreeing).
## How it works
BigCache relies on an optimization introduced in Go 1.5 ([issue-9477](https://github.com/golang/go/issues/9477)).
This optimization means that if a map without pointers in its keys and values is used, the GC will omit its content.
Therefore BigCache uses a `map[uint64]uint32` where keys are hashed and values are offsets of entries.
Entries are kept in byte slices, again to keep them out of the GC's reach.
A byte slice can grow to gigabytes without impacting performance
because the GC only sees a single pointer to it.
### Collisions
BigCache does not handle collisions. When a new item is inserted and its hash collides with a previously stored item, the new item overwrites the previously stored value.
## Bigcache vs Freecache
Both caches provide the same core features but they reduce GC overhead in different ways.
Bigcache relies on `map[uint64]uint32`; freecache implements its own mapping built on
slices to reduce the number of pointers.
Results from benchmark tests are presented above.
One of the advantages of bigcache over freecache is that you don't need to know
the size of the cache in advance, because when bigcache is full,
it can allocate additional memory for new entries instead of
overwriting existing ones as freecache does currently.
However, a hard max size can also be set in bigcache; check [HardMaxCacheSize](https://godoc.org/github.com/allegro/bigcache#Config).
## HTTP Server
This package also includes an easily deployable HTTP implementation of BigCache, which can be found in the [server](/server) package.
## More
The genesis of Bigcache is described in the allegro.tech blog post: [writing a very fast cache service in Go](http://allegro.tech/2016/03/writing-fast-cache-service-in-go.html)
## License
BigCache is released under the Apache 2.0 license (see [LICENSE](LICENSE))
@ -0,0 +1,246 @@
package bigcache
import (
"fmt"
"time"
)
const (
minimumEntriesInShard = 10 // Minimum number of entries in single shard
)
// BigCache is a fast, concurrent, evicting cache created to keep a big number of entries without impacting performance.
// It keeps entries on the heap but omits GC for them. To achieve that, operations take place on byte arrays,
// therefore entries will need to be (de)serialized in front of the cache in most use cases.
type BigCache struct {
shards []*cacheShard
lifeWindow uint64
clock clock
hash Hasher
config Config
shardMask uint64
close chan struct{}
}
// Response will contain metadata about the entry for which GetWithInfo(key) was called
type Response struct {
EntryStatus RemoveReason
}
// RemoveReason is a value used to signal to the user why a particular key was removed in the OnRemove callback.
type RemoveReason uint32
const (
// Expired means the key is past its LifeWindow.
Expired = RemoveReason(1)
// NoSpace means the key is the oldest and the cache size was at its maximum when Set was called, or the
// entry exceeded the maximum shard size.
NoSpace = RemoveReason(2)
// Deleted means Delete was called and this key was removed as a result.
Deleted = RemoveReason(3)
)
// NewBigCache initializes a new instance of BigCache
func NewBigCache(config Config) (*BigCache, error) {
return newBigCache(config, &systemClock{})
}
func newBigCache(config Config, clock clock) (*BigCache, error) {
if !isPowerOfTwo(config.Shards) {
return nil, fmt.Errorf("Shards number must be power of two")
}
if config.MaxEntrySize < 0 {
return nil, fmt.Errorf("MaxEntrySize must be >= 0")
}
if config.MaxEntriesInWindow < 0 {
return nil, fmt.Errorf("MaxEntriesInWindow must be >= 0")
}
if config.HardMaxCacheSize < 0 {
return nil, fmt.Errorf("HardMaxCacheSize must be >= 0")
}
if config.Hasher == nil {
config.Hasher = newDefaultHasher()
}
cache := &BigCache{
shards: make([]*cacheShard, config.Shards),
lifeWindow: uint64(config.LifeWindow.Seconds()),
clock: clock,
hash: config.Hasher,
config: config,
shardMask: uint64(config.Shards - 1),
close: make(chan struct{}),
}
var onRemove func(wrappedEntry []byte, reason RemoveReason)
if config.OnRemoveWithMetadata != nil {
onRemove = cache.providedOnRemoveWithMetadata
} else if config.OnRemove != nil {
onRemove = cache.providedOnRemove
} else if config.OnRemoveWithReason != nil {
onRemove = cache.providedOnRemoveWithReason
} else {
onRemove = cache.notProvidedOnRemove
}
for i := 0; i < config.Shards; i++ {
cache.shards[i] = initNewShard(config, onRemove, clock)
}
if config.CleanWindow > 0 {
go func() {
ticker := time.NewTicker(config.CleanWindow)
defer ticker.Stop()
for {
select {
case t := <-ticker.C:
cache.cleanUp(uint64(t.Unix()))
case <-cache.close:
return
}
}
}()
}
return cache, nil
}
// Close is used to signal a shutdown of the cache when you are done with it.
// This allows the cleaning goroutines to exit and ensures references are not
// kept to the cache preventing GC of the entire cache.
func (c *BigCache) Close() error {
close(c.close)
return nil
}
// Get reads entry for the key.
// It returns an ErrEntryNotFound when
// no entry exists for the given key.
func (c *BigCache) Get(key string) ([]byte, error) {
hashedKey := c.hash.Sum64(key)
shard := c.getShard(hashedKey)
return shard.get(key, hashedKey)
}
// GetWithInfo reads entry for the key with Response info.
// It returns an ErrEntryNotFound when
// no entry exists for the given key.
func (c *BigCache) GetWithInfo(key string) ([]byte, Response, error) {
hashedKey := c.hash.Sum64(key)
shard := c.getShard(hashedKey)
return shard.getWithInfo(key, hashedKey)
}
// Set saves entry under the key
func (c *BigCache) Set(key string, entry []byte) error {
hashedKey := c.hash.Sum64(key)
shard := c.getShard(hashedKey)
return shard.set(key, hashedKey, entry)
}
// Append appends entry under the key if key exists, otherwise
// it will set the key (same behaviour as Set()). With Append() you can
// concatenate multiple entries under the same key in a lock-optimized way.
func (c *BigCache) Append(key string, entry []byte) error {
hashedKey := c.hash.Sum64(key)
shard := c.getShard(hashedKey)
return shard.append(key, hashedKey, entry)
}
// Delete removes the key
func (c *BigCache) Delete(key string) error {
hashedKey := c.hash.Sum64(key)
shard := c.getShard(hashedKey)
return shard.del(hashedKey)
}
// Reset empties all cache shards
func (c *BigCache) Reset() error {
for _, shard := range c.shards {
shard.reset(c.config)
}
return nil
}
// Len computes the number of entries in the cache
func (c *BigCache) Len() int {
var len int
for _, shard := range c.shards {
len += shard.len()
}
return len
}
// Capacity returns the number of bytes stored in the cache.
func (c *BigCache) Capacity() int {
var len int
for _, shard := range c.shards {
len += shard.capacity()
}
return len
}
// Stats returns cache's statistics
func (c *BigCache) Stats() Stats {
var s Stats
for _, shard := range c.shards {
tmp := shard.getStats()
s.Hits += tmp.Hits
s.Misses += tmp.Misses
s.DelHits += tmp.DelHits
s.DelMisses += tmp.DelMisses
s.Collisions += tmp.Collisions
}
return s
}
// KeyMetadata returns the number of times a cached resource was requested.
func (c *BigCache) KeyMetadata(key string) Metadata {
hashedKey := c.hash.Sum64(key)
shard := c.getShard(hashedKey)
return shard.getKeyMetadataWithLock(hashedKey)
}
// Iterator returns an iterator that allows iterating over the EntryInfo entries of the whole cache.
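//
// Illustrative usage:
//
//	it := cache.Iterator()
//	for it.SetNext() {
//		info, err := it.Value()
//		// use info.Key(), info.Value(), err
//	}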
func (c *BigCache) Iterator() *EntryInfoIterator {
return newIterator(c)
}
func (c *BigCache) onEvict(oldestEntry []byte, currentTimestamp uint64, evict func(reason RemoveReason) error) bool {
oldestTimestamp := readTimestampFromEntry(oldestEntry)
if currentTimestamp-oldestTimestamp > c.lifeWindow {
evict(Expired)
return true
}
return false
}
func (c *BigCache) cleanUp(currentTimestamp uint64) {
for _, shard := range c.shards {
shard.cleanUp(currentTimestamp)
}
}
func (c *BigCache) getShard(hashedKey uint64) (shard *cacheShard) {
return c.shards[hashedKey&c.shardMask]
}
func (c *BigCache) providedOnRemove(wrappedEntry []byte, reason RemoveReason) {
c.config.OnRemove(readKeyFromEntry(wrappedEntry), readEntry(wrappedEntry))
}
func (c *BigCache) providedOnRemoveWithReason(wrappedEntry []byte, reason RemoveReason) {
if c.config.onRemoveFilter == 0 || (1<<uint(reason))&c.config.onRemoveFilter > 0 {
c.config.OnRemoveWithReason(readKeyFromEntry(wrappedEntry), readEntry(wrappedEntry), reason)
}
}
func (c *BigCache) notProvidedOnRemove(wrappedEntry []byte, reason RemoveReason) {
}
func (c *BigCache) providedOnRemoveWithMetadata(wrappedEntry []byte, reason RemoveReason) {
hashedKey := c.hash.Sum64(readKeyFromEntry(wrappedEntry))
shard := c.getShard(hashedKey)
c.config.OnRemoveWithMetadata(readKeyFromEntry(wrappedEntry), readEntry(wrappedEntry), shard.getKeyMetadata(hashedKey))
}
@ -0,0 +1,11 @@
// +build !appengine
package bigcache
import (
"unsafe"
)
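// bytesToString converts a byte slice to a string without copying, via unsafe.
// The result shares memory with b, so b must not be modified afterwards.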
func bytesToString(b []byte) string {
return *(*string)(unsafe.Pointer(&b))
}
@ -0,0 +1,7 @@
// +build appengine
package bigcache
func bytesToString(b []byte) string {
return string(b)
}
@ -0,0 +1,14 @@
package bigcache
import "time"
type clock interface {
Epoch() int64
}
type systemClock struct {
}
func (c systemClock) Epoch() int64 {
return time.Now().Unix()
}
@ -0,0 +1,97 @@
package bigcache
import "time"
// Config for BigCache
type Config struct {
// Number of cache shards, value must be a power of two
Shards int
// Time after which entry can be evicted
LifeWindow time.Duration
// Interval between removing expired entries (clean up).
// If set to <= 0 then no action is performed. Setting to < 1 second is counterproductive — bigcache has a one second resolution.
CleanWindow time.Duration
// Max number of entries in life window. Used only to calculate initial size for cache shards.
// When a proper value is set, additional memory allocation does not occur.
MaxEntriesInWindow int
// Max size of entry in bytes. Used only to calculate initial size for cache shards.
MaxEntrySize int
// StatsEnabled, if true, calculates the number of times a cached resource is requested.
StatsEnabled bool
// Verbose mode prints information about new memory allocation
Verbose bool
// Hasher used to map between string keys and unsigned 64bit integers, by default fnv64 hashing is used.
Hasher Hasher
// HardMaxCacheSize is a limit for BytesQueue size in MB.
// It can protect the application from consuming all available memory on the machine, and therefore from triggering the OOM killer.
// Default value is 0 which means unlimited size. When the limit is higher than 0 and reached then
// the oldest entries are overridden for the new ones. The max memory consumption will be bigger than
// HardMaxCacheSize due to the shards' additional memory. Every shard consumes additional memory for its map of keys
// and statistics (map[uint64]uint32); the size of this map is proportional to the number of entries in the
// cache, ~ 2×(64+32)×n bits + the overhead of the map itself.
HardMaxCacheSize int
// OnRemove is a callback fired when the oldest entry is removed because of its expiration time or no space left
// for the new entry, or because delete was called.
// Default value is nil which means no callback and it prevents from unwrapping the oldest entry.
// Ignored if OnRemoveWithMetadata is specified.
OnRemove func(key string, entry []byte)
// OnRemoveWithMetadata is a callback fired when the oldest entry is removed because of its expiration time or no space left
// for the new entry, or because delete was called. A structure representing details about that specific entry is passed through.
// Default value is nil which means no callback and it prevents from unwrapping the oldest entry.
OnRemoveWithMetadata func(key string, entry []byte, keyMetadata Metadata)
// OnRemoveWithReason is a callback fired when the oldest entry is removed because of its expiration time or no space left
// for the new entry, or because delete was called. A constant representing the reason will be passed through.
// Default value is nil which means no callback and it prevents from unwrapping the oldest entry.
// Ignored if OnRemove is specified.
OnRemoveWithReason func(key string, entry []byte, reason RemoveReason)
onRemoveFilter int
// Logger is a logging interface and used in combination with `Verbose`
// Defaults to `DefaultLogger()`
Logger Logger
}
// DefaultConfig initializes config with default values.
// When the load for BigCache can be predicted in advance, it is better to use a custom config.
func DefaultConfig(eviction time.Duration) Config {
return Config{
Shards: 1024,
LifeWindow: eviction,
CleanWindow: 1 * time.Second,
MaxEntriesInWindow: 1000 * 10 * 60,
MaxEntrySize: 500,
StatsEnabled: false,
Verbose: true,
Hasher: newDefaultHasher(),
HardMaxCacheSize: 0,
Logger: DefaultLogger(),
}
}
// initialShardSize computes initial shard size
func (c Config) initialShardSize() int {
return max(c.MaxEntriesInWindow/c.Shards, minimumEntriesInShard)
}
// maximumShardSizeInBytes computes maximum shard size in bytes
func (c Config) maximumShardSizeInBytes() int {
maxShardSize := 0
if c.HardMaxCacheSize > 0 {
maxShardSize = convertMBToBytes(c.HardMaxCacheSize) / c.Shards
}
return maxShardSize
}
// OnRemoveFilterSet sets which remove reasons will trigger a call to OnRemoveWithReason.
// Filtering out reasons prevents bigcache from unwrapping them, which saves CPU.
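// For example, config = config.OnRemoveFilterSet(Expired, Deleted) fires
// OnRemoveWithReason only for expirations and explicit deletes.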
func (c Config) OnRemoveFilterSet(reasons ...RemoveReason) Config {
c.onRemoveFilter = 0
for i := range reasons {
c.onRemoveFilter |= 1 << uint(reasons[i])
}
return c
}
@ -0,0 +1,83 @@
package bigcache
import (
"encoding/binary"
)
const (
timestampSizeInBytes = 8 // Number of bytes used for timestamp
hashSizeInBytes = 8 // Number of bytes used for hash
keySizeInBytes = 2 // Number of bytes used for size of entry key
headersSizeInBytes = timestampSizeInBytes + hashSizeInBytes + keySizeInBytes // Number of bytes used for all headers
)
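// A wrapped entry is laid out as:
//
//	| timestamp (8 B, little-endian) | hash (8 B) | key length (2 B) | key | payload |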
func wrapEntry(timestamp uint64, hash uint64, key string, entry []byte, buffer *[]byte) []byte {
keyLength := len(key)
blobLength := len(entry) + headersSizeInBytes + keyLength
if blobLength > len(*buffer) {
*buffer = make([]byte, blobLength)
}
blob := *buffer
binary.LittleEndian.PutUint64(blob, timestamp)
binary.LittleEndian.PutUint64(blob[timestampSizeInBytes:], hash)
binary.LittleEndian.PutUint16(blob[timestampSizeInBytes+hashSizeInBytes:], uint16(keyLength))
copy(blob[headersSizeInBytes:], key)
copy(blob[headersSizeInBytes+keyLength:], entry)
return blob[:blobLength]
}
func appendToWrappedEntry(timestamp uint64, wrappedEntry []byte, entry []byte, buffer *[]byte) []byte {
blobLength := len(wrappedEntry) + len(entry)
if blobLength > len(*buffer) {
*buffer = make([]byte, blobLength)
}
blob := *buffer
binary.LittleEndian.PutUint64(blob, timestamp)
copy(blob[timestampSizeInBytes:], wrappedEntry[timestampSizeInBytes:])
copy(blob[len(wrappedEntry):], entry)
return blob[:blobLength]
}
func readEntry(data []byte) []byte {
length := binary.LittleEndian.Uint16(data[timestampSizeInBytes+hashSizeInBytes:])
// copy on read
dst := make([]byte, len(data)-int(headersSizeInBytes+length))
copy(dst, data[headersSizeInBytes+length:])
return dst
}
func readTimestampFromEntry(data []byte) uint64 {
return binary.LittleEndian.Uint64(data)
}
func readKeyFromEntry(data []byte) string {
length := binary.LittleEndian.Uint16(data[timestampSizeInBytes+hashSizeInBytes:])
// copy on read
dst := make([]byte, length)
copy(dst, data[headersSizeInBytes:headersSizeInBytes+length])
return bytesToString(dst)
}
func compareKeyFromEntry(data []byte, key string) bool {
length := binary.LittleEndian.Uint16(data[timestampSizeInBytes+hashSizeInBytes:])
return bytesToString(data[headersSizeInBytes:headersSizeInBytes+length]) == key
}
func readHashFromEntry(data []byte) uint64 {
return binary.LittleEndian.Uint64(data[timestampSizeInBytes:])
}
func resetKeyFromEntry(data []byte) {
binary.LittleEndian.PutUint64(data[timestampSizeInBytes:], 0)
}
@ -0,0 +1,8 @@
package bigcache
import "errors"
var (
// ErrEntryNotFound is returned when an entry is not found for the provided key
ErrEntryNotFound = errors.New("Entry not found")
)
@ -0,0 +1,28 @@
package bigcache
// newDefaultHasher returns a new 64-bit FNV-1a Hasher which makes no memory allocations.
// See https://en.wikipedia.org/wiki/FowlerNollVo_hash_function
func newDefaultHasher() Hasher {
return fnv64a{}
}
type fnv64a struct{}
const (
// offset64 is the FNV-1a offset basis. See https://en.wikipedia.org/wiki/FowlerNollVo_hash_function#FNV-1a_hash
offset64 = 14695981039346656037
// prime64 is the FNV-1a prime. See https://en.wikipedia.org/wiki/FowlerNollVo_hash_function#FNV-1a_hash
prime64 = 1099511628211
)
// Sum64 gets the string and returns its uint64 hash value.
func (f fnv64a) Sum64(key string) uint64 {
var hash uint64 = offset64
for i := 0; i < len(key); i++ {
hash ^= uint64(key[i])
hash *= prime64
}
return hash
}
@ -0,0 +1,8 @@
package bigcache
// Hasher is responsible for generating an unsigned 64-bit hash of a provided string. Hasher should minimize collisions
// (generating the same hash for different strings), and since performance is also important, fast functions are
// preferable (e.g. the FarmHash family).
type Hasher interface {
Sum64(string) uint64
}
@ -0,0 +1,146 @@
package bigcache
import (
"sync"
)
type iteratorError string
func (e iteratorError) Error() string {
return string(e)
}
// ErrInvalidIteratorState is reported when the iterator is in an invalid state
const ErrInvalidIteratorState = iteratorError("Iterator is in invalid state. Use SetNext() to move to next position")
// ErrCannotRetrieveEntry is reported when an entry cannot be retrieved from the underlying cache
const ErrCannotRetrieveEntry = iteratorError("Could not retrieve entry from cache")
var emptyEntryInfo = EntryInfo{}
// EntryInfo holds information about an entry in the cache
type EntryInfo struct {
timestamp uint64
hash uint64
key string
value []byte
err error
}
// Key returns entry's underlying key
func (e EntryInfo) Key() string {
return e.key
}
// Hash returns entry's hash value
func (e EntryInfo) Hash() uint64 {
return e.hash
}
// Timestamp returns entry's timestamp (time of insertion)
func (e EntryInfo) Timestamp() uint64 {
return e.timestamp
}
// Value returns entry's underlying value
func (e EntryInfo) Value() []byte {
return e.value
}
// EntryInfoIterator allows iterating over entries in the cache
type EntryInfoIterator struct {
mutex sync.Mutex
cache *BigCache
currentShard int
currentIndex int
currentEntryInfo EntryInfo
elements []uint64
elementsCount int
valid bool
}
// SetNext moves to next element and returns true if it exists.
func (it *EntryInfoIterator) SetNext() bool {
it.mutex.Lock()
it.valid = false
it.currentIndex++
if it.elementsCount > it.currentIndex {
it.valid = true
empty := it.setCurrentEntry()
it.mutex.Unlock()
if empty {
return it.SetNext()
}
return true
}
for i := it.currentShard + 1; i < it.cache.config.Shards; i++ {
it.elements, it.elementsCount = it.cache.shards[i].copyHashedKeys()
// Non empty shard - stick with it
if it.elementsCount > 0 {
it.currentIndex = 0
it.currentShard = i
it.valid = true
empty := it.setCurrentEntry()
it.mutex.Unlock()
if empty {
return it.SetNext()
}
return true
}
}
it.mutex.Unlock()
return false
}
func (it *EntryInfoIterator) setCurrentEntry() bool {
var entryNotFound = false
entry, err := it.cache.shards[it.currentShard].getEntry(it.elements[it.currentIndex])
if err == ErrEntryNotFound {
it.currentEntryInfo = emptyEntryInfo
entryNotFound = true
} else if err != nil {
it.currentEntryInfo = EntryInfo{
err: err,
}
} else {
it.currentEntryInfo = EntryInfo{
timestamp: readTimestampFromEntry(entry),
hash: readHashFromEntry(entry),
key: readKeyFromEntry(entry),
value: readEntry(entry),
err: err,
}
}
return entryNotFound
}
func newIterator(cache *BigCache) *EntryInfoIterator {
elements, count := cache.shards[0].copyHashedKeys()
return &EntryInfoIterator{
cache: cache,
currentShard: 0,
currentIndex: -1,
elements: elements,
elementsCount: count,
}
}
// Value returns current value from the iterator
func (it *EntryInfoIterator) Value() (EntryInfo, error) {
if !it.valid {
return emptyEntryInfo, ErrInvalidIteratorState
}
return it.currentEntryInfo, it.currentEntryInfo.err
}
@ -0,0 +1,30 @@
package bigcache
import (
"log"
"os"
)
// Logger is invoked when `Config.Verbose=true`
type Logger interface {
Printf(format string, v ...interface{})
}
// this is a safeguard, breaking on compile time in case
// `log.Logger` does not adhere to our `Logger` interface.
// see https://golang.org/doc/faq#guarantee_satisfies_interface
var _ Logger = &log.Logger{}
// DefaultLogger returns a `Logger` implementation
// backed by stdlib's log
func DefaultLogger() *log.Logger {
return log.New(os.Stdout, "", log.LstdFlags)
}
func newLogger(custom Logger) Logger {
if custom != nil {
return custom
}
return DefaultLogger()
}
@ -0,0 +1,269 @@
package queue
import (
"encoding/binary"
"log"
"time"
)
const (
// Minimum size in bytes of an entry header
minimumHeaderSize = 17 // 1 byte blobsize + timestampSizeInBytes + hashSizeInBytes
// Bytes before left margin are not used. Zero index means element does not exist in queue, useful while reading slice from index
leftMarginIndex = 1
)
var (
errEmptyQueue = &queueError{"Empty queue"}
errInvalidIndex = &queueError{"Index must be greater than zero. Invalid index."}
errIndexOutOfBounds = &queueError{"Index out of range"}
)
// BytesQueue is a non-thread-safe FIFO queue based on a byte array.
// For every push operation the index of the entry is returned. It can be used to read the entry later.
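//
// Illustrative usage:
//
//	q := NewBytesQueue(1024, 0, false)
//	index, _ := q.Push([]byte("hello"))
//	data, _ := q.Get(index) // data == []byte("hello")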
type BytesQueue struct {
full bool
array []byte
capacity int
maxCapacity int
head int
tail int
count int
rightMargin int
headerBuffer []byte
verbose bool
}
type queueError struct {
message string
}
// getNeededSize returns the number of bytes an entry of the given length needs in the queue
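// For example, a 100-byte entry needs 101 bytes (1 header byte) and a
// 20000-byte entry needs 20003 bytes (3 header bytes).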
func getNeededSize(length int) int {
var header int
switch {
case length < 127: // 1<<7-1
header = 1
case length < 16382: // 1<<14-2
header = 2
case length < 2097149: // 1<<21 -3
header = 3
case length < 268435452: // 1<<28 -4
header = 4
default:
header = 5
}
return length + header
}
// NewBytesQueue initializes a new bytes queue.
// capacity is used for the byte array allocation.
// When the verbose flag is set, information about memory allocations is printed.
func NewBytesQueue(capacity int, maxCapacity int, verbose bool) *BytesQueue {
return &BytesQueue{
array: make([]byte, capacity),
capacity: capacity,
maxCapacity: maxCapacity,
headerBuffer: make([]byte, binary.MaxVarintLen32),
tail: leftMarginIndex,
head: leftMarginIndex,
rightMargin: leftMarginIndex,
verbose: verbose,
}
}
// Reset removes all entries from queue
func (q *BytesQueue) Reset() {
// Just reset indexes
q.tail = leftMarginIndex
q.head = leftMarginIndex
q.rightMargin = leftMarginIndex
q.count = 0
q.full = false
}
// Push copies the entry to the end of the queue and moves the tail pointer. Allocates more space if needed.
// Returns the index of the pushed data, or an error if the maximum queue size limit is reached.
func (q *BytesQueue) Push(data []byte) (int, error) {
neededSize := getNeededSize(len(data))
if !q.canInsertAfterTail(neededSize) {
if q.canInsertBeforeHead(neededSize) {
q.tail = leftMarginIndex
} else if q.capacity+neededSize >= q.maxCapacity && q.maxCapacity > 0 {
return -1, &queueError{"Full queue. Maximum size limit reached."}
} else {
q.allocateAdditionalMemory(neededSize)
}
}
index := q.tail
q.push(data, neededSize)
return index, nil
}
func (q *BytesQueue) allocateAdditionalMemory(minimum int) {
start := time.Now()
if q.capacity < minimum {
q.capacity += minimum
}
q.capacity = q.capacity * 2
if q.capacity > q.maxCapacity && q.maxCapacity > 0 {
q.capacity = q.maxCapacity
}
oldArray := q.array
q.array = make([]byte, q.capacity)
if leftMarginIndex != q.rightMargin {
copy(q.array, oldArray[:q.rightMargin])
if q.tail <= q.head {
if q.tail != q.head {
// the created slice is slightly larger than needed, but this is fine since only the needed bytes are copied
q.push(make([]byte, q.head-q.tail), q.head-q.tail)
}
q.head = leftMarginIndex
q.tail = q.rightMargin
}
}
q.full = false
if q.verbose {
log.Printf("Allocated new queue in %s; Capacity: %d \n", time.Since(start), q.capacity)
}
}
func (q *BytesQueue) push(data []byte, len int) {
headerEntrySize := binary.PutUvarint(q.headerBuffer, uint64(len))
q.copy(q.headerBuffer, headerEntrySize)
q.copy(data, len-headerEntrySize)
if q.tail > q.head {
q.rightMargin = q.tail
}
if q.tail == q.head {
q.full = true
}
q.count++
}
func (q *BytesQueue) copy(data []byte, len int) {
q.tail += copy(q.array[q.tail:], data[:len])
}
// Pop reads the oldest entry from the queue and moves the head pointer to the next one
func (q *BytesQueue) Pop() ([]byte, error) {
data, blockSize, err := q.peek(q.head)
if err != nil {
return nil, err
}
q.head += blockSize
q.count--
if q.head == q.rightMargin {
q.head = leftMarginIndex
if q.tail == q.rightMargin {
q.tail = leftMarginIndex
}
q.rightMargin = q.tail
}
q.full = false
return data, nil
}
// Peek reads the oldest entry from the queue without moving the head pointer
func (q *BytesQueue) Peek() ([]byte, error) {
data, _, err := q.peek(q.head)
return data, err
}
// Get reads the entry at index
func (q *BytesQueue) Get(index int) ([]byte, error) {
data, _, err := q.peek(index)
return data, err
}
// CheckGet checks if an entry can be read from index
func (q *BytesQueue) CheckGet(index int) error {
return q.peekCheckErr(index)
}
// Capacity returns the number of bytes allocated for the queue
func (q *BytesQueue) Capacity() int {
return q.capacity
}
// Len returns the number of entries kept in the queue
func (q *BytesQueue) Len() int {
return q.count
}
// Error returns error message
func (e *queueError) Error() string {
return e.message
}
// peekCheckErr is identical to peek, but does not actually return any data
func (q *BytesQueue) peekCheckErr(index int) error {
if q.count == 0 {
return errEmptyQueue
}
if index <= 0 {
return errInvalidIndex
}
if index >= len(q.array) {
return errIndexOutOfBounds
}
return nil
}
// peek returns the data at index along with the total block size (header + data) in bytes
func (q *BytesQueue) peek(index int) ([]byte, int, error) {
err := q.peekCheckErr(index)
if err != nil {
return nil, 0, err
}
blockSize, n := binary.Uvarint(q.array[index:])
return q.array[index+n : index+int(blockSize)], int(blockSize), nil
}
// canInsertAfterTail returns true if it's possible to insert an entry of size need after the tail of the queue
func (q *BytesQueue) canInsertAfterTail(need int) bool {
if q.full {
return false
}
if q.tail >= q.head {
return q.capacity-q.tail >= need
}
// 1. there is exactly need bytes between head and tail, so we do not need
// to reserve extra space for a potential empty entry when realloc this queue
// 2. still have unused space between tail and head, then we must reserve
// at least headerEntrySize bytes so we can put an empty entry
return q.head-q.tail == need || q.head-q.tail >= need+minimumHeaderSize
}
// canInsertBeforeHead returns true if it's possible to insert an entry of size need before the head of the queue
func (q *BytesQueue) canInsertBeforeHead(need int) bool {
if q.full {
return false
}
if q.tail >= q.head {
return q.head-leftMarginIndex == need || q.head-leftMarginIndex >= need+minimumHeaderSize
}
return q.head-q.tail == need || q.head-q.tail >= need+minimumHeaderSize
}
@ -0,0 +1,434 @@
package bigcache
import (
"fmt"
"sync"
"sync/atomic"
"github.com/allegro/bigcache/v3/queue"
)
type onRemoveCallback func(wrappedEntry []byte, reason RemoveReason)
// Metadata contains information of a specific entry
type Metadata struct {
RequestCount uint32
}
type cacheShard struct {
hashmap map[uint64]uint32
entries queue.BytesQueue
lock sync.RWMutex
entryBuffer []byte
onRemove onRemoveCallback
isVerbose bool
statsEnabled bool
logger Logger
clock clock
lifeWindow uint64
hashmapStats map[uint64]uint32
stats Stats
}
func (s *cacheShard) getWithInfo(key string, hashedKey uint64) (entry []byte, resp Response, err error) {
currentTime := uint64(s.clock.Epoch())
s.lock.RLock()
wrappedEntry, err := s.getWrappedEntry(hashedKey)
if err != nil {
s.lock.RUnlock()
return nil, resp, err
}
if entryKey := readKeyFromEntry(wrappedEntry); key != entryKey {
s.lock.RUnlock()
s.collision()
if s.isVerbose {
s.logger.Printf("Collision detected. Both %q and %q have the same hash %x", key, entryKey, hashedKey)
}
return nil, resp, ErrEntryNotFound
}
entry = readEntry(wrappedEntry)
oldestTimeStamp := readTimestampFromEntry(wrappedEntry)
s.lock.RUnlock()
s.hit(hashedKey)
if currentTime-oldestTimeStamp >= s.lifeWindow {
resp.EntryStatus = Expired
}
return entry, resp, nil
}
func (s *cacheShard) get(key string, hashedKey uint64) ([]byte, error) {
s.lock.RLock()
wrappedEntry, err := s.getWrappedEntry(hashedKey)
if err != nil {
s.lock.RUnlock()
return nil, err
}
if entryKey := readKeyFromEntry(wrappedEntry); key != entryKey {
s.lock.RUnlock()
s.collision()
if s.isVerbose {
s.logger.Printf("Collision detected. Both %q and %q have the same hash %x", key, entryKey, hashedKey)
}
return nil, ErrEntryNotFound
}
entry := readEntry(wrappedEntry)
s.lock.RUnlock()
s.hit(hashedKey)
return entry, nil
}
func (s *cacheShard) getWrappedEntry(hashedKey uint64) ([]byte, error) {
itemIndex := s.hashmap[hashedKey]
if itemIndex == 0 {
s.miss()
return nil, ErrEntryNotFound
}
wrappedEntry, err := s.entries.Get(int(itemIndex))
if err != nil {
s.miss()
return nil, err
}
return wrappedEntry, err
}
func (s *cacheShard) getValidWrapEntry(key string, hashedKey uint64) ([]byte, error) {
wrappedEntry, err := s.getWrappedEntry(hashedKey)
if err != nil {
return nil, err
}
if !compareKeyFromEntry(wrappedEntry, key) {
s.collision()
if s.isVerbose {
s.logger.Printf("Collision detected. Both %q and %q have the same hash %x", key, readKeyFromEntry(wrappedEntry), hashedKey)
}
return nil, ErrEntryNotFound
}
s.hitWithoutLock(hashedKey)
return wrappedEntry, nil
}
func (s *cacheShard) set(key string, hashedKey uint64, entry []byte) error {
currentTimestamp := uint64(s.clock.Epoch())
s.lock.Lock()
if previousIndex := s.hashmap[hashedKey]; previousIndex != 0 {
if previousEntry, err := s.entries.Get(int(previousIndex)); err == nil {
resetKeyFromEntry(previousEntry)
// remove the hash key
delete(s.hashmap, hashedKey)
}
}
if oldestEntry, err := s.entries.Peek(); err == nil {
s.onEvict(oldestEntry, currentTimestamp, s.removeOldestEntry)
}
w := wrapEntry(currentTimestamp, hashedKey, key, entry, &s.entryBuffer)
for {
if index, err := s.entries.Push(w); err == nil {
s.hashmap[hashedKey] = uint32(index)
s.lock.Unlock()
return nil
}
if s.removeOldestEntry(NoSpace) != nil {
s.lock.Unlock()
return fmt.Errorf("entry is bigger than max shard size")
}
}
}
func (s *cacheShard) addNewWithoutLock(key string, hashedKey uint64, entry []byte) error {
currentTimestamp := uint64(s.clock.Epoch())
if oldestEntry, err := s.entries.Peek(); err == nil {
s.onEvict(oldestEntry, currentTimestamp, s.removeOldestEntry)
}
w := wrapEntry(currentTimestamp, hashedKey, key, entry, &s.entryBuffer)
for {
if index, err := s.entries.Push(w); err == nil {
s.hashmap[hashedKey] = uint32(index)
return nil
}
if s.removeOldestEntry(NoSpace) != nil {
return fmt.Errorf("entry is bigger than max shard size")
}
}
}
func (s *cacheShard) setWrappedEntryWithoutLock(currentTimestamp uint64, w []byte, hashedKey uint64) error {
if previousIndex := s.hashmap[hashedKey]; previousIndex != 0 {
if previousEntry, err := s.entries.Get(int(previousIndex)); err == nil {
resetKeyFromEntry(previousEntry)
}
}
if oldestEntry, err := s.entries.Peek(); err == nil {
s.onEvict(oldestEntry, currentTimestamp, s.removeOldestEntry)
}
for {
if index, err := s.entries.Push(w); err == nil {
s.hashmap[hashedKey] = uint32(index)
return nil
}
if s.removeOldestEntry(NoSpace) != nil {
return fmt.Errorf("entry is bigger than max shard size")
}
}
}
func (s *cacheShard) append(key string, hashedKey uint64, entry []byte) error {
s.lock.Lock()
wrappedEntry, err := s.getValidWrapEntry(key, hashedKey)
if err == ErrEntryNotFound {
err = s.addNewWithoutLock(key, hashedKey, entry)
s.lock.Unlock()
return err
}
if err != nil {
s.lock.Unlock()
return err
}
currentTimestamp := uint64(s.clock.Epoch())
w := appendToWrappedEntry(currentTimestamp, wrappedEntry, entry, &s.entryBuffer)
err = s.setWrappedEntryWithoutLock(currentTimestamp, w, hashedKey)
s.lock.Unlock()
return err
}
func (s *cacheShard) del(hashedKey uint64) error {
// Optimistic pre-check using only readlock
s.lock.RLock()
{
itemIndex := s.hashmap[hashedKey]
if itemIndex == 0 {
s.lock.RUnlock()
s.delmiss()
return ErrEntryNotFound
}
if err := s.entries.CheckGet(int(itemIndex)); err != nil {
s.lock.RUnlock()
s.delmiss()
return err
}
}
s.lock.RUnlock()
s.lock.Lock()
{
// After obtaining the writelock, we need to read the index again,
// since the data read earlier may be stale by now
itemIndex := s.hashmap[hashedKey]
if itemIndex == 0 {
s.lock.Unlock()
s.delmiss()
return ErrEntryNotFound
}
wrappedEntry, err := s.entries.Get(int(itemIndex))
if err != nil {
s.lock.Unlock()
s.delmiss()
return err
}
delete(s.hashmap, hashedKey)
s.onRemove(wrappedEntry, Deleted)
if s.statsEnabled {
delete(s.hashmapStats, hashedKey)
}
resetKeyFromEntry(wrappedEntry)
}
s.lock.Unlock()
s.delhit()
return nil
}
func (s *cacheShard) onEvict(oldestEntry []byte, currentTimestamp uint64, evict func(reason RemoveReason) error) bool {
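	// Note: timestamps here are epoch seconds (lifeWindow is derived from
	// config.LifeWindow.Seconds() in initNewShard below), and eviction is
	// lazy: it runs on the next set or cleanUp call, not on a per-entry timer.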
oldestTimestamp := readTimestampFromEntry(oldestEntry)
if currentTimestamp-oldestTimestamp > s.lifeWindow {
evict(Expired)
return true
}
return false
}
func (s *cacheShard) cleanUp(currentTimestamp uint64) {
s.lock.Lock()
for {
if oldestEntry, err := s.entries.Peek(); err != nil {
break
} else if evicted := s.onEvict(oldestEntry, currentTimestamp, s.removeOldestEntry); !evicted {
break
}
}
s.lock.Unlock()
}
func (s *cacheShard) getEntry(hashedKey uint64) ([]byte, error) {
s.lock.RLock()
entry, err := s.getWrappedEntry(hashedKey)
// copy entry
newEntry := make([]byte, len(entry))
copy(newEntry, entry)
s.lock.RUnlock()
return newEntry, err
}
func (s *cacheShard) copyHashedKeys() (keys []uint64, next int) {
s.lock.RLock()
keys = make([]uint64, len(s.hashmap))
for key := range s.hashmap {
keys[next] = key
next++
}
s.lock.RUnlock()
return keys, next
}
func (s *cacheShard) removeOldestEntry(reason RemoveReason) error {
oldest, err := s.entries.Pop()
if err == nil {
hash := readHashFromEntry(oldest)
if hash == 0 {
// entry has been explicitly deleted with resetKeyFromEntry, ignore
return nil
}
delete(s.hashmap, hash)
s.onRemove(oldest, reason)
if s.statsEnabled {
delete(s.hashmapStats, hash)
}
return nil
}
return err
}
func (s *cacheShard) reset(config Config) {
s.lock.Lock()
s.hashmap = make(map[uint64]uint32, config.initialShardSize())
s.entryBuffer = make([]byte, config.MaxEntrySize+headersSizeInBytes)
s.entries.Reset()
s.lock.Unlock()
}
func (s *cacheShard) len() int {
s.lock.RLock()
res := len(s.hashmap)
s.lock.RUnlock()
return res
}
func (s *cacheShard) capacity() int {
s.lock.RLock()
res := s.entries.Capacity()
s.lock.RUnlock()
return res
}
func (s *cacheShard) getStats() Stats {
var stats = Stats{
Hits: atomic.LoadInt64(&s.stats.Hits),
Misses: atomic.LoadInt64(&s.stats.Misses),
DelHits: atomic.LoadInt64(&s.stats.DelHits),
DelMisses: atomic.LoadInt64(&s.stats.DelMisses),
Collisions: atomic.LoadInt64(&s.stats.Collisions),
}
return stats
}
func (s *cacheShard) getKeyMetadataWithLock(key uint64) Metadata {
s.lock.RLock()
c := s.hashmapStats[key]
s.lock.RUnlock()
return Metadata{
RequestCount: c,
}
}
func (s *cacheShard) getKeyMetadata(key uint64) Metadata {
return Metadata{
RequestCount: s.hashmapStats[key],
}
}
func (s *cacheShard) hit(key uint64) {
atomic.AddInt64(&s.stats.Hits, 1)
if s.statsEnabled {
s.lock.Lock()
s.hashmapStats[key]++
s.lock.Unlock()
}
}
func (s *cacheShard) hitWithoutLock(key uint64) {
atomic.AddInt64(&s.stats.Hits, 1)
if s.statsEnabled {
s.hashmapStats[key]++
}
}
func (s *cacheShard) miss() {
atomic.AddInt64(&s.stats.Misses, 1)
}
func (s *cacheShard) delhit() {
atomic.AddInt64(&s.stats.DelHits, 1)
}
func (s *cacheShard) delmiss() {
atomic.AddInt64(&s.stats.DelMisses, 1)
}
func (s *cacheShard) collision() {
atomic.AddInt64(&s.stats.Collisions, 1)
}
func initNewShard(config Config, callback onRemoveCallback, clock clock) *cacheShard {
bytesQueueInitialCapacity := config.initialShardSize() * config.MaxEntrySize
maximumShardSizeInBytes := config.maximumShardSizeInBytes()
if maximumShardSizeInBytes > 0 && bytesQueueInitialCapacity > maximumShardSizeInBytes {
bytesQueueInitialCapacity = maximumShardSizeInBytes
}
return &cacheShard{
hashmap: make(map[uint64]uint32, config.initialShardSize()),
hashmapStats: make(map[uint64]uint32, config.initialShardSize()),
entries: *queue.NewBytesQueue(bytesQueueInitialCapacity, maximumShardSizeInBytes, config.Verbose),
entryBuffer: make([]byte, config.MaxEntrySize+headersSizeInBytes),
onRemove: callback,
isVerbose: config.Verbose,
logger: newLogger(config.Logger),
clock: clock,
lifeWindow: uint64(config.LifeWindow.Seconds()),
statsEnabled: config.StatsEnabled,
}
}
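// A minimal flow sketch, assuming the owning BigCache struct (not shown in
// this file) keeps a shards slice and a shardMask field, as the constructor
// above implies:
//
//	shard := c.shards[hashedKey&c.shardMask]
//	entry, err := shard.get(key, hashedKey)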

@ -0,0 +1,15 @@
package bigcache
// Stats stores cache statistics
type Stats struct {
// Hits is the number of successfully found keys
Hits int64 `json:"hits"`
// Misses is the number of not found keys
Misses int64 `json:"misses"`
// DelHits is the number of successfully deleted keys
DelHits int64 `json:"delete_hits"`
// DelMisses is the number of not deleted keys
DelMisses int64 `json:"delete_misses"`
// Collisions is the number of key collisions that occurred
Collisions int64 `json:"collisions"`
}
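// Given the json tags above, a marshaled Stats value renders as, for example:
//
//	{"hits":10,"misses":2,"delete_hits":1,"delete_misses":0,"collisions":0}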

@ -0,0 +1,23 @@
package bigcache
func max(a, b int) int {
if a > b {
return a
}
return b
}
func min(a, b int) int {
if a < b {
return a
}
return b
}
func convertMBToBytes(value int) int {
return value * 1024 * 1024
}
func isPowerOfTwo(number int) bool {
return (number != 0) && (number&(number-1)) == 0
}
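// A minimal sketch of how these helpers are typically used when sizing the
// cache (values are illustrative):
//
//	maxCacheSize := convertMBToBytes(512) // 536870912 bytes
//	ok := isPowerOfTwo(1024)              // true; bigcache requires a power-of-two shard count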

@ -0,0 +1,202 @@
Apache License
Version 2.0, January 2004
http://www.apache.org/licenses/
TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION
1. Definitions.
"License" shall mean the terms and conditions for use, reproduction,
and distribution as defined by Sections 1 through 9 of this document.
"Licensor" shall mean the copyright owner or entity authorized by
the copyright owner that is granting the License.
"Legal Entity" shall mean the union of the acting entity and all
other entities that control, are controlled by, or are under common
control with that entity. For the purposes of this definition,
"control" means (i) the power, direct or indirect, to cause the
direction or management of such entity, whether by contract or
otherwise, or (ii) ownership of fifty percent (50%) or more of the
outstanding shares, or (iii) beneficial ownership of such entity.
"You" (or "Your") shall mean an individual or Legal Entity
exercising permissions granted by this License.
"Source" form shall mean the preferred form for making modifications,
including but not limited to software source code, documentation
source, and configuration files.
"Object" form shall mean any form resulting from mechanical
transformation or translation of a Source form, including but
not limited to compiled object code, generated documentation,
and conversions to other media types.
"Work" shall mean the work of authorship, whether in Source or
Object form, made available under the License, as indicated by a
copyright notice that is included in or attached to the work
(an example is provided in the Appendix below).
"Derivative Works" shall mean any work, whether in Source or Object
form, that is based on (or derived from) the Work and for which the
editorial revisions, annotations, elaborations, or other modifications
represent, as a whole, an original work of authorship. For the purposes
of this License, Derivative Works shall not include works that remain
separable from, or merely link (or bind by name) to the interfaces of,
the Work and Derivative Works thereof.
"Contribution" shall mean any work of authorship, including
the original version of the Work and any modifications or additions
to that Work or Derivative Works thereof, that is intentionally
submitted to Licensor for inclusion in the Work by the copyright owner
or by an individual or Legal Entity authorized to submit on behalf of
the copyright owner. For the purposes of this definition, "submitted"
means any form of electronic, verbal, or written communication sent
to the Licensor or its representatives, including but not limited to
communication on electronic mailing lists, source code control systems,
and issue tracking systems that are managed by, or on behalf of, the
Licensor for the purpose of discussing and improving the Work, but
excluding communication that is conspicuously marked or otherwise
designated in writing by the copyright owner as "Not a Contribution."
"Contributor" shall mean Licensor and any individual or Legal Entity
on behalf of whom a Contribution has been received by Licensor and
subsequently incorporated within the Work.
2. Grant of Copyright License. Subject to the terms and conditions of
this License, each Contributor hereby grants to You a perpetual,
worldwide, non-exclusive, no-charge, royalty-free, irrevocable
copyright license to reproduce, prepare Derivative Works of,
publicly display, publicly perform, sublicense, and distribute the
Work and such Derivative Works in Source or Object form.
3. Grant of Patent License. Subject to the terms and conditions of
this License, each Contributor hereby grants to You a perpetual,
worldwide, non-exclusive, no-charge, royalty-free, irrevocable
(except as stated in this section) patent license to make, have made,
use, offer to sell, sell, import, and otherwise transfer the Work,
where such license applies only to those patent claims licensable
by such Contributor that are necessarily infringed by their
Contribution(s) alone or by combination of their Contribution(s)
with the Work to which such Contribution(s) was submitted. If You
institute patent litigation against any entity (including a
cross-claim or counterclaim in a lawsuit) alleging that the Work
or a Contribution incorporated within the Work constitutes direct
or contributory patent infringement, then any patent licenses
granted to You under this License for that Work shall terminate
as of the date such litigation is filed.
4. Redistribution. You may reproduce and distribute copies of the
Work or Derivative Works thereof in any medium, with or without
modifications, and in Source or Object form, provided that You
meet the following conditions:
(a) You must give any other recipients of the Work or
Derivative Works a copy of this License; and
(b) You must cause any modified files to carry prominent notices
stating that You changed the files; and
(c) You must retain, in the Source form of any Derivative Works
that You distribute, all copyright, patent, trademark, and
attribution notices from the Source form of the Work,
excluding those notices that do not pertain to any part of
the Derivative Works; and
(d) If the Work includes a "NOTICE" text file as part of its
distribution, then any Derivative Works that You distribute must
include a readable copy of the attribution notices contained
within such NOTICE file, excluding those notices that do not
pertain to any part of the Derivative Works, in at least one
of the following places: within a NOTICE text file distributed
as part of the Derivative Works; within the Source form or
documentation, if provided along with the Derivative Works; or,
within a display generated by the Derivative Works, if and
wherever such third-party notices normally appear. The contents
of the NOTICE file are for informational purposes only and
do not modify the License. You may add Your own attribution
notices within Derivative Works that You distribute, alongside
or as an addendum to the NOTICE text from the Work, provided
that such additional attribution notices cannot be construed
as modifying the License.
You may add Your own copyright statement to Your modifications and
may provide additional or different license terms and conditions
for use, reproduction, or distribution of Your modifications, or
for any such Derivative Works as a whole, provided Your use,
reproduction, and distribution of the Work otherwise complies with
the conditions stated in this License.
5. Submission of Contributions. Unless You explicitly state otherwise,
any Contribution intentionally submitted for inclusion in the Work
by You to the Licensor shall be under the terms and conditions of
this License, without any additional terms or conditions.
Notwithstanding the above, nothing herein shall supersede or modify
the terms of any separate license agreement you may have executed
with Licensor regarding such Contributions.
6. Trademarks. This License does not grant permission to use the trade
names, trademarks, service marks, or product names of the Licensor,
except as required for reasonable and customary use in describing the
origin of the Work and reproducing the content of the NOTICE file.
7. Disclaimer of Warranty. Unless required by applicable law or
agreed to in writing, Licensor provides the Work (and each
Contributor provides its Contributions) on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
implied, including, without limitation, any warranties or conditions
of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A
PARTICULAR PURPOSE. You are solely responsible for determining the
appropriateness of using or redistributing the Work and assume any
risks associated with Your exercise of permissions under this License.
8. Limitation of Liability. In no event and under no legal theory,
whether in tort (including negligence), contract, or otherwise,
unless required by applicable law (such as deliberate and grossly
negligent acts) or agreed to in writing, shall any Contributor be
liable to You for damages, including any direct, indirect, special,
incidental, or consequential damages of any character arising as a
result of this License or out of the use or inability to use the
Work (including but not limited to damages for loss of goodwill,
work stoppage, computer failure or malfunction, or any and all
other commercial damages or losses), even if such Contributor
has been advised of the possibility of such damages.
9. Accepting Warranty or Additional Liability. While redistributing
the Work or Derivative Works thereof, You may choose to offer,
and charge a fee for, acceptance of support, warranty, indemnity,
or other liability obligations and/or rights consistent with this
License. However, in accepting such obligations, You may act only
on Your own behalf and on Your sole responsibility, not on behalf
of any other Contributor, and only if You agree to indemnify,
defend, and hold each Contributor harmless for any liability
incurred by, or claims asserted against, such Contributor by reason
of your accepting any such warranty or additional liability.
END OF TERMS AND CONDITIONS
APPENDIX: How to apply the Apache License to your work.
To apply the Apache License to your work, attach the following
boilerplate notice, with the fields enclosed by brackets "[]"
replaced with your own identifying information. (Don't include
the brackets!) The text should be enclosed in the appropriate
comment syntax for the file format. We also recommend that a
file or class name and description of purpose be included on the
same "printed page" as the copyright notice for easier
identification within third-party archives.
Copyright [yyyy] [name of copyright owner]
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.

@ -0,0 +1,3 @@
AWS SDK for Go
Copyright 2015 Amazon.com, Inc. or its affiliates. All Rights Reserved.
Copyright 2014-2015 Stripe, Inc.

@ -0,0 +1,93 @@
// Package arn provides a parser for interacting with Amazon Resource Names.
package arn
import (
"errors"
"strings"
)
const (
arnDelimiter = ":"
arnSections = 6
arnPrefix = "arn:"
// zero-indexed
sectionPartition = 1
sectionService = 2
sectionRegion = 3
sectionAccountID = 4
sectionResource = 5
// errors
invalidPrefix = "arn: invalid prefix"
invalidSections = "arn: not enough sections"
)
// ARN captures the individual fields of an Amazon Resource Name.
// See http://docs.aws.amazon.com/general/latest/gr/aws-arns-and-namespaces.html for more information.
type ARN struct {
// The partition that the resource is in. For standard AWS regions, the partition is "aws". If you have resources in
// other partitions, the partition is "aws-partitionname". For example, the partition for resources in the China
// (Beijing) region is "aws-cn".
Partition string
// The service namespace that identifies the AWS product (for example, Amazon S3, IAM, or Amazon RDS). For a list of
// namespaces, see
// http://docs.aws.amazon.com/general/latest/gr/aws-arns-and-namespaces.html#genref-aws-service-namespaces.
Service string
// The region the resource resides in. Note that the ARNs for some resources do not require a region, so this
// component might be omitted.
Region string
// The ID of the AWS account that owns the resource, without the hyphens. For example, 123456789012. Note that the
// ARNs for some resources don't require an account number, so this component might be omitted.
AccountID string
// The content of this part of the ARN varies by service. It often includes an indicator of the type of resource,
// for example an IAM user or Amazon RDS database, followed by a slash (/) or a colon (:), followed by the
// resource name itself. Some services allow paths for resource names, as described in
// http://docs.aws.amazon.com/general/latest/gr/aws-arns-and-namespaces.html#arns-paths.
Resource string
}
// Parse parses an ARN into its constituent parts.
//
// Some example ARNs:
// arn:aws:elasticbeanstalk:us-east-1:123456789012:environment/My App/MyEnvironment
// arn:aws:iam::123456789012:user/David
// arn:aws:rds:eu-west-1:123456789012:db:mysql-db
// arn:aws:s3:::my_corporate_bucket/exampleobject.png
func Parse(arn string) (ARN, error) {
if !strings.HasPrefix(arn, arnPrefix) {
return ARN{}, errors.New(invalidPrefix)
}
sections := strings.SplitN(arn, arnDelimiter, arnSections)
if len(sections) != arnSections {
return ARN{}, errors.New(invalidSections)
}
return ARN{
Partition: sections[sectionPartition],
Service: sections[sectionService],
Region: sections[sectionRegion],
AccountID: sections[sectionAccountID],
Resource: sections[sectionResource],
}, nil
}
// IsARN returns whether the given string is an ARN, by checking that the
// string starts with "arn:" and contains the correct number of sections
// delimited by colons (:).
func IsARN(arn string) bool {
return strings.HasPrefix(arn, arnPrefix) && strings.Count(arn, ":") >= arnSections-1
}
// String returns the canonical representation of the ARN
func (arn ARN) String() string {
return arnPrefix +
arn.Partition + arnDelimiter +
arn.Service + arnDelimiter +
arn.Region + arnDelimiter +
arn.AccountID + arnDelimiter +
arn.Resource
}
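// exampleParseARN is a minimal usage sketch (the function itself is
// hypothetical): validate cheaply with IsARN, then Parse and read the fields.
func exampleParseARN() (ARN, error) {
	const s = "arn:aws:iam::123456789012:user/David"
	if !IsARN(s) {
		return ARN{}, errors.New("not an ARN")
	}
	// For s above: Partition is "aws", Service is "iam", Region is "" (IAM
	// ARNs omit it), AccountID is "123456789012", Resource is "user/David",
	// and String() round-trips back to s.
	return Parse(s)
}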

@ -0,0 +1,164 @@
// Package awserr represents API error interface accessors for the SDK.
package awserr
// An Error wraps lower level errors with code, message and an original error.
// The underlying concrete error type may also satisfy other interfaces which
// can be used to obtain more specific information about the error.
//
// Calling Error() or String() will always include the full information about
// an error based on its underlying type.
//
// Example:
//
// output, err := s3manage.Upload(svc, input, opts)
// if err != nil {
// if awsErr, ok := err.(awserr.Error); ok {
// // Get error details
// log.Println("Error:", awsErr.Code(), awsErr.Message())
//
// // Prints out full error message, including original error if there was one.
// log.Println("Error:", awsErr.Error())
//
// // Get original error
// if origErr := awsErr.OrigErr(); origErr != nil {
// // operate on original error.
// }
// } else {
// fmt.Println(err.Error())
// }
// }
//
type Error interface {
// Satisfy the generic error interface.
error
// Returns the short phrase depicting the classification of the error.
Code() string
// Returns the error details message.
Message() string
// Returns the original error if one was set. Nil is returned if not set.
OrigErr() error
}
// BatchError is a batch of errors which also wraps lower level errors with
// code, message, and original errors. Calling Error() will include all errors
// that occurred in the batch.
//
// Deprecated: Replaced with BatchedErrors. Only defined for backwards
// compatibility.
type BatchError interface {
// Satisfy the generic error interface.
error
// Returns the short phrase depicting the classification of the error.
Code() string
// Returns the error details message.
Message() string
// Returns the original error if one was set. Nil is returned if not set.
OrigErrs() []error
}
// BatchedErrors is a batch of errors which also wraps lower level errors with
// code, message, and original errors. Calling Error() will include all errors
// that occurred in the batch.
//
// Replaces BatchError
type BatchedErrors interface {
// Satisfy the base Error interface.
Error
// Returns the original error if one was set. Nil is returned if not set.
OrigErrs() []error
}
// New returns an Error object described by the code, message, and origErr.
//
// A non-nil origErr is wrapped and remains accessible through the returned
// Error's OrigErr method.
func New(code, message string, origErr error) Error {
var errs []error
if origErr != nil {
errs = append(errs, origErr)
}
return newBaseError(code, message, errs)
}
// NewBatchError returns a BatchedErrors wrapping the given collection of
// errors.
func NewBatchError(code, message string, errs []error) BatchedErrors {
return newBaseError(code, message, errs)
}
// A RequestFailure is an interface to extract request failure information from
// an Error, such as the request ID of the failed request returned by a service.
// RequestFailures may not always have a requestID value if the request failed
// prior to reaching the service, such as on a connection error.
//
// Example:
//
// output, err := s3manage.Upload(svc, input, opts)
// if err != nil {
// if reqerr, ok := err.(RequestFailure); ok {
// log.Println("Request failed", reqerr.Code(), reqerr.Message(), reqerr.RequestID())
// } else {
// log.Println("Error:", err.Error())
// }
// }
//
// Combined with awserr.Error:
//
// output, err := s3manage.Upload(svc, input, opts)
// if err != nil {
// if awsErr, ok := err.(awserr.Error); ok {
// // Generic AWS Error with Code, Message, and original error (if any)
// fmt.Println(awsErr.Code(), awsErr.Message(), awsErr.OrigErr())
//
// if reqErr, ok := err.(awserr.RequestFailure); ok {
// // A service error occurred
// fmt.Println(reqErr.StatusCode(), reqErr.RequestID())
// }
// } else {
// fmt.Println(err.Error())
// }
// }
//
type RequestFailure interface {
Error
// The status code of the HTTP response.
StatusCode() int
// The request ID returned by the service for a request failure. This will
// be empty if no request ID is available such as the request failed due
// to a connection error.
RequestID() string
}
// NewRequestFailure returns a wrapped error with additional information for
// request status code, and service requestID.
//
// Should be used to wrap all requests which involve service requests, even if
// the request failed without a service response but had an HTTP status code
// that may be meaningful.
func NewRequestFailure(err Error, statusCode int, reqID string) RequestFailure {
return newRequestError(err, statusCode, reqID)
}
// UnmarshalError provides the interface for errors returned when the SDK fails to unmarshal data.
type UnmarshalError interface {
awsError
Bytes() []byte
}
// NewUnmarshalError returns an initialized UnmarshalError error wrapper adding
// the bytes that fail to unmarshal to the error.
func NewUnmarshalError(err error, msg string, bytes []byte) UnmarshalError {
return &unmarshalError{
awsError: New("UnmarshalError", msg, err),
bytes: bytes,
}
}
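// exampleWrap is a minimal usage sketch (the function itself is
// hypothetical): wrap a lower-level error and read back its classification.
func exampleWrap(origErr error) string {
	err := New("AccessDenied", "caller lacks permission", origErr)
	// origErr, when non-nil, stays reachable through err.OrigErr().
	return err.Code() + ": " + err.Message()
}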

@ -0,0 +1,221 @@
package awserr
import (
"encoding/hex"
"fmt"
)
// SprintError returns a string of the formatted error code.
//
// Both extra and origErr are optional. If provided, their lines are appended;
// if not, they are omitted.
func SprintError(code, message, extra string, origErr error) string {
msg := fmt.Sprintf("%s: %s", code, message)
if extra != "" {
msg = fmt.Sprintf("%s\n\t%s", msg, extra)
}
if origErr != nil {
msg = fmt.Sprintf("%s\ncaused by: %s", msg, origErr.Error())
}
return msg
}
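// For example, SprintError("AccessDenied", "no credentials", "extra detail", origErr)
// yields:
//
//	AccessDenied: no credentials
//		extra detail
//	caused by: <origErr.Error()>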
// A baseError wraps the code and message which define an error. It can also
// be used to wrap an original error object.
//
// Should be used as the root for errors satisfying the awserr.Error. Also
// for any error which does not fit into a specific error wrapper type.
type baseError struct {
// Classification of error
code string
// Detailed information about error
message string
// Optional original error this error is based off of. Allows building
// chained errors.
errs []error
}
// newBaseError returns an error object for the code, message, and errors.
//
// code is a short, whitespace-free phrase depicting the classification of
// the error that is being created.
//
// message is the free flow string containing detailed information about the
// error.
//
// origErrs is the list of error objects to be nested under the new error
// returned.
func newBaseError(code, message string, origErrs []error) *baseError {
b := &baseError{
code: code,
message: message,
errs: origErrs,
}
return b
}
// Error returns the string representation of the error.
//
// See ErrorWithExtra for formatting.
//
// Satisfies the error interface.
func (b baseError) Error() string {
size := len(b.errs)
if size > 0 {
return SprintError(b.code, b.message, "", errorList(b.errs))
}
return SprintError(b.code, b.message, "", nil)
}
// String returns the string representation of the error.
// Alias for Error to satisfy the stringer interface.
func (b baseError) String() string {
return b.Error()
}
// Code returns the short phrase depicting the classification of the error.
func (b baseError) Code() string {
return b.code
}
// Message returns the error details message.
func (b baseError) Message() string {
return b.message
}
// OrigErr returns the original error if one was set. Nil is returned if no
// error was set. This only returns the first element in the list. If the full
// list is needed, use BatchedErrors.
func (b baseError) OrigErr() error {
switch len(b.errs) {
case 0:
return nil
case 1:
return b.errs[0]
default:
if err, ok := b.errs[0].(Error); ok {
return NewBatchError(err.Code(), err.Message(), b.errs[1:])
}
return NewBatchError("BatchedErrors",
"multiple errors occurred", b.errs)
}
}
// OrigErrs returns the original errors if any were set. An empty slice is
// returned if none were set.
func (b baseError) OrigErrs() []error {
return b.errs
}
// So that the Error interface type can be included as an anonymous field
// in the requestError struct and not conflict with the error.Error() method.
type awsError Error
// A requestError wraps a request or service error.
//
// Composed of baseError for code, message, and original error.
type requestError struct {
awsError
statusCode int
requestID string
bytes []byte
}
// newRequestError returns a wrapped error with additional information for
// request status code, and service requestID.
//
// Should be used to wrap all requests which involve service requests, even if
// the request failed without a service response but had an HTTP status code
// that may be meaningful.
//
// Also wraps original errors via the baseError.
func newRequestError(err Error, statusCode int, requestID string) *requestError {
return &requestError{
awsError: err,
statusCode: statusCode,
requestID: requestID,
}
}
// Error returns the string representation of the error.
// Satisfies the error interface.
func (r requestError) Error() string {
extra := fmt.Sprintf("status code: %d, request id: %s",
r.statusCode, r.requestID)
return SprintError(r.Code(), r.Message(), extra, r.OrigErr())
}
// String returns the string representation of the error.
// Alias for Error to satisfy the stringer interface.
func (r requestError) String() string {
return r.Error()
}
// StatusCode returns the wrapped status code for the error
func (r requestError) StatusCode() int {
return r.statusCode
}
// RequestID returns the wrapped requestID
func (r requestError) RequestID() string {
return r.requestID
}
// OrigErrs returns the original errors if any were set. An empty slice is
// returned if none were set.
func (r requestError) OrigErrs() []error {
if b, ok := r.awsError.(BatchedErrors); ok {
return b.OrigErrs()
}
return []error{r.OrigErr()}
}
type unmarshalError struct {
awsError
bytes []byte
}
// Error returns the string representation of the error.
// Satisfies the error interface.
func (e unmarshalError) Error() string {
extra := hex.Dump(e.bytes)
return SprintError(e.Code(), e.Message(), extra, e.OrigErr())
}
// String returns the string representation of the error.
// Alias for Error to satisfy the stringer interface.
func (e unmarshalError) String() string {
return e.Error()
}
// Bytes returns the bytes that failed to unmarshal.
func (e unmarshalError) Bytes() []byte {
return e.bytes
}
// An error list that satisfies the error interface
type errorList []error
// Error returns the string representation of the error.
//
// Satisfies the error interface.
func (e errorList) Error() string {
msg := ""
// An empty list renders as an empty string.
if size := len(e); size > 0 {
for i := 0; i < size; i++ {
msg += e[i].Error()
// Append a newline after every entry except the last, since a trailing
// '\n' would change the output that callers (and unit tests) compare
// against.
if i+1 < size {
msg += "\n"
}
}
}
return msg
}

@ -0,0 +1,108 @@
package awsutil
import (
"io"
"reflect"
"time"
)
// Copy deeply copies a src structure to dst. Useful for copying request and
// response structures.
//
// Can copy between structs of different types, but will only copy fields
// which are assignable and exist in both structs; all other fields are
// ignored.
func Copy(dst, src interface{}) {
dstval := reflect.ValueOf(dst)
if !dstval.IsValid() {
panic("Copy dst cannot be nil")
}
rcopy(dstval, reflect.ValueOf(src), true)
}
// CopyOf returns a copy of src while also allocating the memory for dst.
// src must be a pointer type or this operation panics.
func CopyOf(src interface{}) (dst interface{}) {
dsti := reflect.New(reflect.TypeOf(src).Elem())
dst = dsti.Interface()
rcopy(dsti, reflect.ValueOf(src), true)
return
}
// rcopy performs a recursive copy of values from the source to destination.
//
// root is used to skip certain aspects of the copy which are not valid
// for the root node of an object.
func rcopy(dst, src reflect.Value, root bool) {
if !src.IsValid() {
return
}
switch src.Kind() {
case reflect.Ptr:
if _, ok := src.Interface().(io.Reader); ok {
if dst.Kind() == reflect.Ptr && dst.Elem().CanSet() {
dst.Elem().Set(src)
} else if dst.CanSet() {
dst.Set(src)
}
} else {
e := src.Type().Elem()
if dst.CanSet() && !src.IsNil() {
if _, ok := src.Interface().(*time.Time); !ok {
dst.Set(reflect.New(e))
} else {
tempValue := reflect.New(e)
tempValue.Elem().Set(src.Elem())
// Sets time.Time's unexported values
dst.Set(tempValue)
}
}
if src.Elem().IsValid() {
// Keep the current root state since the depth hasn't changed
rcopy(dst.Elem(), src.Elem(), root)
}
}
case reflect.Struct:
t := dst.Type()
for i := 0; i < t.NumField(); i++ {
name := t.Field(i).Name
srcVal := src.FieldByName(name)
dstVal := dst.FieldByName(name)
if srcVal.IsValid() && dstVal.CanSet() {
rcopy(dstVal, srcVal, false)
}
}
case reflect.Slice:
if src.IsNil() {
break
}
s := reflect.MakeSlice(src.Type(), src.Len(), src.Cap())
dst.Set(s)
for i := 0; i < src.Len(); i++ {
rcopy(dst.Index(i), src.Index(i), false)
}
case reflect.Map:
if src.IsNil() {
break
}
s := reflect.MakeMap(src.Type())
dst.Set(s)
for _, k := range src.MapKeys() {
v := src.MapIndex(k)
v2 := reflect.New(v.Type()).Elem()
rcopy(v2, v, false)
dst.SetMapIndex(k, v2)
}
default:
// Assign the value if possible. If its not assignable, the value would
// need to be converted and the impact of that may be unexpected, or is
// not compatible with the dst type.
if src.Type().AssignableTo(dst.Type()) {
dst.Set(src)
}
}
}
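// A minimal usage sketch; exampleWidget and exampleCopy are hypothetical.
type exampleWidget struct {
	Name *string
	Tags map[string]string
}

func exampleCopy(src *exampleWidget) *exampleWidget {
	var dst exampleWidget
	// After Copy, dst owns its own *Name and its own Tags map rather than
	// aliasing src's.
	Copy(&dst, src)
	return &dst
}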

@ -0,0 +1,27 @@
package awsutil
import (
"reflect"
)
// DeepEqual returns whether the two values are deeply equal, like
// reflect.DeepEqual. In addition, this method dereferences the input values
// if possible, so the comparison does not fail merely because one parameter
// is a pointer and the other is not.
//
// DeepEqual will not perform indirection of nested values of the input parameters.
func DeepEqual(a, b interface{}) bool {
ra := reflect.Indirect(reflect.ValueOf(a))
rb := reflect.Indirect(reflect.ValueOf(b))
if raValid, rbValid := ra.IsValid(), rb.IsValid(); !raValid && !rbValid {
// If the elements are both nil, and of the same type they are equal
// If they are of different types they are not equal
return reflect.TypeOf(a) == reflect.TypeOf(b)
} else if raValid != rbValid {
// Both values must be valid to be equal
return false
}
return reflect.DeepEqual(ra.Interface(), rb.Interface())
}
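// exampleDeepEqual is a minimal usage sketch (the function itself is
// hypothetical): because both inputs are dereferenced first, a value and a
// pointer to an equal value compare as equal here, unlike with
// reflect.DeepEqual.
func exampleDeepEqual() bool {
	s := "hello"
	return DeepEqual(s, &s) // true
}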

@ -0,0 +1,221 @@
package awsutil
import (
"reflect"
"regexp"
"strconv"
"strings"
"github.com/jmespath/go-jmespath"
)
var indexRe = regexp.MustCompile(`(.+)\[(-?\d+)?\]$`)
// rValuesAtPath returns a slice of values found in value v. The values
// in v are explored recursively so all nested values are collected.
func rValuesAtPath(v interface{}, path string, createPath, caseSensitive, nilTerm bool) []reflect.Value {
pathparts := strings.Split(path, "||")
if len(pathparts) > 1 {
for _, pathpart := range pathparts {
vals := rValuesAtPath(v, pathpart, createPath, caseSensitive, nilTerm)
if len(vals) > 0 {
return vals
}
}
return nil
}
values := []reflect.Value{reflect.Indirect(reflect.ValueOf(v))}
components := strings.Split(path, ".")
for len(values) > 0 && len(components) > 0 {
var index *int64
var indexStar bool
c := strings.TrimSpace(components[0])
if c == "" { // no actual component, illegal syntax
return nil
} else if caseSensitive && c != "*" && strings.ToLower(c[0:1]) == c[0:1] {
// TODO normalize case for user
return nil // don't support unexported fields
}
// parse this component
if m := indexRe.FindStringSubmatch(c); m != nil {
c = m[1]
if m[2] == "" {
index = nil
indexStar = true
} else {
i, _ := strconv.ParseInt(m[2], 10, 32)
index = &i
indexStar = false
}
}
nextvals := []reflect.Value{}
for _, value := range values {
// pull component name out of struct member
if value.Kind() != reflect.Struct {
continue
}
if c == "*" { // pull all members
for i := 0; i < value.NumField(); i++ {
if f := reflect.Indirect(value.Field(i)); f.IsValid() {
nextvals = append(nextvals, f)
}
}
continue
}
value = value.FieldByNameFunc(func(name string) bool {
if c == name {
return true
} else if !caseSensitive && strings.EqualFold(name, c) {
return true
}
return false
})
if nilTerm && value.Kind() == reflect.Ptr && len(components[1:]) == 0 {
if !value.IsNil() {
value.Set(reflect.Zero(value.Type()))
}
return []reflect.Value{value}
}
if createPath && value.Kind() == reflect.Ptr && value.IsNil() {
// TODO if the value is the terminus it should not be created
// if the value to be set to its position is nil.
value.Set(reflect.New(value.Type().Elem()))
value = value.Elem()
} else {
value = reflect.Indirect(value)
}
if value.Kind() == reflect.Slice || value.Kind() == reflect.Map {
if !createPath && value.IsNil() {
value = reflect.ValueOf(nil)
}
}
if value.IsValid() {
nextvals = append(nextvals, value)
}
}
values = nextvals
if indexStar || index != nil {
nextvals = []reflect.Value{}
for _, valItem := range values {
value := reflect.Indirect(valItem)
if value.Kind() != reflect.Slice {
continue
}
if indexStar { // grab all indices
for i := 0; i < value.Len(); i++ {
idx := reflect.Indirect(value.Index(i))
if idx.IsValid() {
nextvals = append(nextvals, idx)
}
}
continue
}
// pull out index
i := int(*index)
if i >= value.Len() { // check out of bounds
if createPath {
// TODO resize slice
} else {
continue
}
} else if i < 0 { // support negative indexing
i = value.Len() + i
}
value = reflect.Indirect(value.Index(i))
if value.Kind() == reflect.Slice || value.Kind() == reflect.Map {
if !createPath && value.IsNil() {
value = reflect.ValueOf(nil)
}
}
if value.IsValid() {
nextvals = append(nextvals, value)
}
}
values = nextvals
}
components = components[1:]
}
return values
}
// ValuesAtPath returns a list of values at the case insensitive lexical
// path inside of a structure.
func ValuesAtPath(i interface{}, path string) ([]interface{}, error) {
result, err := jmespath.Search(path, i)
if err != nil {
return nil, err
}
v := reflect.ValueOf(result)
if !v.IsValid() || (v.Kind() == reflect.Ptr && v.IsNil()) {
return nil, nil
}
if s, ok := result.([]interface{}); ok {
return s, err
}
if v.Kind() == reflect.Map && v.Len() == 0 {
return nil, nil
}
if v.Kind() == reflect.Slice {
out := make([]interface{}, v.Len())
for i := 0; i < v.Len(); i++ {
out[i] = v.Index(i).Interface()
}
return out, nil
}
return []interface{}{result}, nil
}
// SetValueAtPath sets a value at the case insensitive lexical path inside
// of a structure.
func SetValueAtPath(i interface{}, path string, v interface{}) {
rvals := rValuesAtPath(i, path, true, false, v == nil)
for _, rval := range rvals {
if rval.Kind() == reflect.Ptr && rval.IsNil() {
continue
}
setValue(rval, v)
}
}
func setValue(dstVal reflect.Value, src interface{}) {
if dstVal.Kind() == reflect.Ptr {
dstVal = reflect.Indirect(dstVal)
}
srcVal := reflect.ValueOf(src)
if !srcVal.IsValid() { // src is literal nil
if dstVal.CanAddr() {
// Convert to pointer so that pointer's value can be nil'ed
// dstVal = dstVal.Addr()
}
dstVal.Set(reflect.Zero(dstVal.Type()))
} else if srcVal.Kind() == reflect.Ptr {
if srcVal.IsNil() {
srcVal = reflect.Zero(dstVal.Type())
} else {
srcVal = reflect.ValueOf(src).Elem()
}
dstVal.Set(srcVal)
} else {
dstVal.Set(srcVal)
}
}
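// A minimal usage sketch; exampleUser and exampleSetName are hypothetical.
type exampleUser struct {
	Name *string
}

func exampleSetName() *exampleUser {
	u := &exampleUser{}
	// The path is matched case-insensitively, and the nil *string is
	// allocated on the way down because SetValueAtPath passes createPath=true.
	SetValueAtPath(u, "name", "Jane")
	return u
}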

@ -0,0 +1,123 @@
package awsutil
import (
"bytes"
"fmt"
"io"
"reflect"
"strings"
)
// Prettify returns the string representation of a value.
func Prettify(i interface{}) string {
var buf bytes.Buffer
prettify(reflect.ValueOf(i), 0, &buf)
return buf.String()
}
// prettify will recursively walk value v to build a textual
// representation of the value.
func prettify(v reflect.Value, indent int, buf *bytes.Buffer) {
for v.Kind() == reflect.Ptr {
v = v.Elem()
}
switch v.Kind() {
case reflect.Struct:
strtype := v.Type().String()
if strtype == "time.Time" {
fmt.Fprintf(buf, "%s", v.Interface())
break
} else if strings.HasPrefix(strtype, "io.") {
buf.WriteString("<buffer>")
break
}
buf.WriteString("{\n")
names := []string{}
for i := 0; i < v.Type().NumField(); i++ {
name := v.Type().Field(i).Name
f := v.Field(i)
if name[0:1] == strings.ToLower(name[0:1]) {
continue // ignore unexported fields
}
if (f.Kind() == reflect.Ptr || f.Kind() == reflect.Slice || f.Kind() == reflect.Map) && f.IsNil() {
continue // ignore unset fields
}
names = append(names, name)
}
for i, n := range names {
val := v.FieldByName(n)
ft, ok := v.Type().FieldByName(n)
if !ok {
panic(fmt.Sprintf("expected to find field %v on type %v, but was not found", n, v.Type()))
}
buf.WriteString(strings.Repeat(" ", indent+2))
buf.WriteString(n + ": ")
if tag := ft.Tag.Get("sensitive"); tag == "true" {
buf.WriteString("<sensitive>")
} else {
prettify(val, indent+2, buf)
}
if i < len(names)-1 {
buf.WriteString(",\n")
}
}
buf.WriteString("\n" + strings.Repeat(" ", indent) + "}")
case reflect.Slice:
strtype := v.Type().String()
if strtype == "[]uint8" {
fmt.Fprintf(buf, "<binary> len %d", v.Len())
break
}
nl, id, id2 := "", "", ""
if v.Len() > 3 {
nl, id, id2 = "\n", strings.Repeat(" ", indent), strings.Repeat(" ", indent+2)
}
buf.WriteString("[" + nl)
for i := 0; i < v.Len(); i++ {
buf.WriteString(id2)
prettify(v.Index(i), indent+2, buf)
if i < v.Len()-1 {
buf.WriteString("," + nl)
}
}
buf.WriteString(nl + id + "]")
case reflect.Map:
buf.WriteString("{\n")
for i, k := range v.MapKeys() {
buf.WriteString(strings.Repeat(" ", indent+2))
buf.WriteString(k.String() + ": ")
prettify(v.MapIndex(k), indent+2, buf)
if i < v.Len()-1 {
buf.WriteString(",\n")
}
}
buf.WriteString("\n" + strings.Repeat(" ", indent) + "}")
default:
if !v.IsValid() {
fmt.Fprint(buf, "<invalid value>")
return
}
format := "%v"
switch v.Interface().(type) {
case string:
format = "%q"
case io.ReadSeeker, io.Reader:
format = "buffer(%p)"
}
fmt.Fprintf(buf, format, v.Interface())
}
}
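// A minimal usage sketch; exampleCreds and examplePrettify are hypothetical.
type exampleCreds struct {
	User   string
	Secret string `sensitive:"true"`
}

func examplePrettify() string {
	// Fields tagged sensitive:"true" print as <sensitive>; nil pointer,
	// slice, and map fields are skipped entirely.
	return Prettify(exampleCreds{User: "jane", Secret: "hunter2"})
	// {
	//   User: "jane",
	//   Secret: <sensitive>
	// }
}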

@ -0,0 +1,90 @@
package awsutil
import (
"bytes"
"fmt"
"reflect"
"strings"
)
// StringValue returns the string representation of a value.
//
// Deprecated: Use Prettify instead.
func StringValue(i interface{}) string {
var buf bytes.Buffer
stringValue(reflect.ValueOf(i), 0, &buf)
return buf.String()
}
func stringValue(v reflect.Value, indent int, buf *bytes.Buffer) {
for v.Kind() == reflect.Ptr {
v = v.Elem()
}
switch v.Kind() {
case reflect.Struct:
buf.WriteString("{\n")
for i := 0; i < v.Type().NumField(); i++ {
ft := v.Type().Field(i)
fv := v.Field(i)
if ft.Name[0:1] == strings.ToLower(ft.Name[0:1]) {
continue // ignore unexported fields
}
if (fv.Kind() == reflect.Ptr || fv.Kind() == reflect.Slice) && fv.IsNil() {
continue // ignore unset fields
}
buf.WriteString(strings.Repeat(" ", indent+2))
buf.WriteString(ft.Name + ": ")
if tag := ft.Tag.Get("sensitive"); tag == "true" {
buf.WriteString("<sensitive>")
} else {
stringValue(fv, indent+2, buf)
}
buf.WriteString(",\n")
}
buf.WriteString("\n" + strings.Repeat(" ", indent) + "}")
case reflect.Slice:
nl, id, id2 := "", "", ""
if v.Len() > 3 {
nl, id, id2 = "\n", strings.Repeat(" ", indent), strings.Repeat(" ", indent+2)
}
buf.WriteString("[" + nl)
for i := 0; i < v.Len(); i++ {
buf.WriteString(id2)
stringValue(v.Index(i), indent+2, buf)
if i < v.Len()-1 {
buf.WriteString("," + nl)
}
}
buf.WriteString(nl + id + "]")
case reflect.Map:
buf.WriteString("{\n")
for i, k := range v.MapKeys() {
buf.WriteString(strings.Repeat(" ", indent+2))
buf.WriteString(k.String() + ": ")
stringValue(v.MapIndex(k), indent+2, buf)
if i < v.Len()-1 {
buf.WriteString(",\n")
}
}
buf.WriteString("\n" + strings.Repeat(" ", indent) + "}")
default:
format := "%v"
switch v.Interface().(type) {
case string:
format = "%q"
}
fmt.Fprintf(buf, format, v.Interface())
}
}

@ -0,0 +1,94 @@
package client
import (
"fmt"
"github.com/aws/aws-sdk-go/aws"
"github.com/aws/aws-sdk-go/aws/client/metadata"
"github.com/aws/aws-sdk-go/aws/request"
)
// A Config provides configuration to a service client instance.
type Config struct {
Config *aws.Config
Handlers request.Handlers
PartitionID string
Endpoint string
SigningRegion string
SigningName string
ResolvedRegion string
// States that the signing name did not come from a modeled source but
// was derived based on other data. Used by service client constructors
// to determine if the signing name can be overridden based on metadata the
// service has.
SigningNameDerived bool
}
// ConfigProvider provides a generic way for a service client to receive
// the ClientConfig without circular dependencies.
type ConfigProvider interface {
ClientConfig(serviceName string, cfgs ...*aws.Config) Config
}
// ConfigNoResolveEndpointProvider is the same as ConfigProvider, except it
// will not resolve the endpoint automatically. The service client's endpoint
// must be provided via the aws.Config.Endpoint field.
type ConfigNoResolveEndpointProvider interface {
ClientConfigNoResolveEndpoint(cfgs ...*aws.Config) Config
}
// A Client implements the base client request and response handling
// used by all service clients.
type Client struct {
request.Retryer
metadata.ClientInfo
Config aws.Config
Handlers request.Handlers
}
// New will return a pointer to a new initialized service client.
func New(cfg aws.Config, info metadata.ClientInfo, handlers request.Handlers, options ...func(*Client)) *Client {
svc := &Client{
Config: cfg,
ClientInfo: info,
Handlers: handlers.Copy(),
}
switch retryer, ok := cfg.Retryer.(request.Retryer); {
case ok:
svc.Retryer = retryer
case cfg.Retryer != nil && cfg.Logger != nil:
s := fmt.Sprintf("WARNING: %T does not implement request.Retryer; using DefaultRetryer instead", cfg.Retryer)
cfg.Logger.Log(s)
fallthrough
default:
maxRetries := aws.IntValue(cfg.MaxRetries)
if cfg.MaxRetries == nil || maxRetries == aws.UseServiceDefaultRetries {
maxRetries = DefaultRetryerMaxNumRetries
}
svc.Retryer = DefaultRetryer{NumMaxRetries: maxRetries}
}
svc.AddDebugHandlers()
for _, option := range options {
option(svc)
}
return svc
}
// NewRequest returns a new Request pointer for the service API
// operation and parameters.
func (c *Client) NewRequest(operation *request.Operation, params interface{}, data interface{}) *request.Request {
return request.New(c.Config, c.ClientInfo, c.Handlers, c.Retryer, operation, params, data)
}
// AddDebugHandlers injects debug logging handlers into the service to log request
// debug information.
func (c *Client) AddDebugHandlers() {
c.Handlers.Send.PushFrontNamed(LogHTTPRequestHandler)
c.Handlers.Send.PushBackNamed(LogHTTPResponseHandler)
}

@ -0,0 +1,177 @@
package client
import (
"math"
"strconv"
"time"
"github.com/aws/aws-sdk-go/aws/request"
"github.com/aws/aws-sdk-go/internal/sdkrand"
)
// DefaultRetryer implements basic retry logic using exponential backoff for
// most services. If you want to implement custom retry logic, you can implement the
// request.Retryer interface.
//
type DefaultRetryer struct {
// NumMaxRetries is the maximum number of retries that will be performed.
// By default, this is zero.
NumMaxRetries int
// MinRetryDelay is the minimum retry delay after which retry will be performed.
// If not set, the value is 0ns.
MinRetryDelay time.Duration
// MinThrottleDelay is the minimum retry delay when the request is throttled.
// If not set, the value is 0ns.
MinThrottleDelay time.Duration
// MaxRetryDelay is the maximum delay that will be applied between retries.
// If not set, the value is 0ns.
MaxRetryDelay time.Duration
// MaxThrottleDelay is the maximum retry delay when throttled.
// If not set, the value is 0ns.
MaxThrottleDelay time.Duration
}
const (
// DefaultRetryerMaxNumRetries sets maximum number of retries
DefaultRetryerMaxNumRetries = 3
// DefaultRetryerMinRetryDelay sets minimum retry delay
DefaultRetryerMinRetryDelay = 30 * time.Millisecond
// DefaultRetryerMinThrottleDelay sets minimum delay when throttled
DefaultRetryerMinThrottleDelay = 500 * time.Millisecond
// DefaultRetryerMaxRetryDelay sets maximum retry delay
DefaultRetryerMaxRetryDelay = 300 * time.Second
// DefaultRetryerMaxThrottleDelay sets maximum delay when throttled
DefaultRetryerMaxThrottleDelay = 300 * time.Second
)
// MaxRetries returns the maximum number of retries the service will make for
// an individual API request.
func (d DefaultRetryer) MaxRetries() int {
return d.NumMaxRetries
}
// setRetryerDefaults sets the default values of the retryer if not set
func (d *DefaultRetryer) setRetryerDefaults() {
if d.MinRetryDelay == 0 {
d.MinRetryDelay = DefaultRetryerMinRetryDelay
}
if d.MaxRetryDelay == 0 {
d.MaxRetryDelay = DefaultRetryerMaxRetryDelay
}
if d.MinThrottleDelay == 0 {
d.MinThrottleDelay = DefaultRetryerMinThrottleDelay
}
if d.MaxThrottleDelay == 0 {
d.MaxThrottleDelay = DefaultRetryerMaxThrottleDelay
}
}
// RetryRules returns the delay duration before retrying this request again
func (d DefaultRetryer) RetryRules(r *request.Request) time.Duration {
// if the number of max retries is zero, no retries will be performed.
if d.NumMaxRetries == 0 {
return 0
}
// Sets default value for retryer members
d.setRetryerDefaults()
// minDelay is the minimum retryer delay
minDelay := d.MinRetryDelay
var initialDelay time.Duration
isThrottle := r.IsErrorThrottle()
if isThrottle {
if delay, ok := getRetryAfterDelay(r); ok {
initialDelay = delay
}
minDelay = d.MinThrottleDelay
}
retryCount := r.RetryCount
// maxDelay the maximum retryer delay
maxDelay := d.MaxRetryDelay
if isThrottle {
maxDelay = d.MaxThrottleDelay
}
var delay time.Duration
// Guard against int64 overflow: use the exponential backoff form only while
// (1<<retryCount)*minDelay is guaranteed to fit; otherwise fall back to a
// jittered maxDelay/2.
actualRetryCount := int(math.Log2(float64(minDelay))) + 1
if actualRetryCount < 63-retryCount {
delay = time.Duration(1<<uint64(retryCount)) * getJitterDelay(minDelay)
if delay > maxDelay {
delay = getJitterDelay(maxDelay / 2)
}
} else {
delay = getJitterDelay(maxDelay / 2)
}
return delay + initialDelay
}
// getJitterDelay returns a jittered delay for retry
func getJitterDelay(duration time.Duration) time.Duration {
return time.Duration(sdkrand.SeededRand.Int63n(int64(duration)) + int64(duration))
}
// ShouldRetry returns true if the request should be retried.
func (d DefaultRetryer) ShouldRetry(r *request.Request) bool {
// ShouldRetry returns false if the number of max retries is 0.
if d.NumMaxRetries == 0 {
return false
}
// If one of the other handlers already set the retry state
// we don't want to override it based on the service's state
if r.Retryable != nil {
return *r.Retryable
}
return r.IsErrorRetryable() || r.IsErrorThrottle()
}
// getRetryAfterDelay looks in the Retry-After header (RFC 7231) for how long
// to wait before attempting another request
func getRetryAfterDelay(r *request.Request) (time.Duration, bool) {
if !canUseRetryAfterHeader(r) {
return 0, false
}
delayStr := r.HTTPResponse.Header.Get("Retry-After")
if len(delayStr) == 0 {
return 0, false
}
delay, err := strconv.Atoi(delayStr)
if err != nil {
return 0, false
}
return time.Duration(delay) * time.Second, true
}
// canUseRetryAfterHeader looks at the status code to see if the Retry-After
// header pertains to it.
func canUseRetryAfterHeader(r *request.Request) bool {
switch r.HTTPResponse.StatusCode {
case 429:
case 503:
default:
return false
}
return true
}
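// A minimal configuration sketch (the function and its values are
// illustrative, not recommendations): construct the retryer explicitly and
// pass it via aws.Config{Retryer: ...} when building a session or client.
func exampleCustomRetryer() DefaultRetryer {
	return DefaultRetryer{
		NumMaxRetries:    5,
		MinRetryDelay:    50 * time.Millisecond,
		MinThrottleDelay: time.Second,
		MaxRetryDelay:    30 * time.Second,
		MaxThrottleDelay: 60 * time.Second,
	}
}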

@ -0,0 +1,206 @@
package client
import (
"bytes"
"fmt"
"io"
"io/ioutil"
"net/http/httputil"
"github.com/aws/aws-sdk-go/aws"
"github.com/aws/aws-sdk-go/aws/request"
)
const logReqMsg = `DEBUG: Request %s/%s Details:
---[ REQUEST POST-SIGN ]-----------------------------
%s
-----------------------------------------------------`
const logReqErrMsg = `DEBUG ERROR: Request %s/%s:
---[ REQUEST DUMP ERROR ]-----------------------------
%s
------------------------------------------------------`
type logWriter struct {
// Logger is what we will use to log the payload of a response.
Logger aws.Logger
// buf stores the contents of what has been read
buf *bytes.Buffer
}
func (logger *logWriter) Write(b []byte) (int, error) {
return logger.buf.Write(b)
}
type teeReaderCloser struct {
// io.Reader will be a tee reader that is used during logging.
// This structure will read from a body and write the contents to a logger.
io.Reader
// Source is used just to close when we are done reading.
Source io.ReadCloser
}
func (reader *teeReaderCloser) Close() error {
return reader.Source.Close()
}
// LogHTTPRequestHandler is an SDK request handler to log the HTTP request sent
// to a service. Will include the HTTP request body if the LogLevel of the
// request matches LogDebugWithHTTPBody.
var LogHTTPRequestHandler = request.NamedHandler{
Name: "awssdk.client.LogRequest",
Fn: logRequest,
}
func logRequest(r *request.Request) {
if !r.Config.LogLevel.AtLeast(aws.LogDebug) || r.Config.Logger == nil {
return
}
logBody := r.Config.LogLevel.Matches(aws.LogDebugWithHTTPBody)
bodySeekable := aws.IsReaderSeekable(r.Body)
b, err := httputil.DumpRequestOut(r.HTTPRequest, logBody)
if err != nil {
r.Config.Logger.Log(fmt.Sprintf(logReqErrMsg,
r.ClientInfo.ServiceName, r.Operation.Name, err))
return
}
if logBody {
if !bodySeekable {
r.SetReaderBody(aws.ReadSeekCloser(r.HTTPRequest.Body))
}
// Reset the request body because dumpRequest will re-wrap the
// r.HTTPRequest's Body as a NoOpCloser and will not be reset after
// read by the HTTP client reader.
if err := r.Error; err != nil {
r.Config.Logger.Log(fmt.Sprintf(logReqErrMsg,
r.ClientInfo.ServiceName, r.Operation.Name, err))
return
}
}
r.Config.Logger.Log(fmt.Sprintf(logReqMsg,
r.ClientInfo.ServiceName, r.Operation.Name, string(b)))
}
// LogHTTPRequestHeaderHandler is an SDK request handler to log the HTTP request
// sent to a service. Will only log the HTTP request's headers. The request
// payload will not be read.
var LogHTTPRequestHeaderHandler = request.NamedHandler{
Name: "awssdk.client.LogRequestHeader",
Fn: logRequestHeader,
}
func logRequestHeader(r *request.Request) {
if !r.Config.LogLevel.AtLeast(aws.LogDebug) || r.Config.Logger == nil {
return
}
b, err := httputil.DumpRequestOut(r.HTTPRequest, false)
if err != nil {
r.Config.Logger.Log(fmt.Sprintf(logReqErrMsg,
r.ClientInfo.ServiceName, r.Operation.Name, err))
return
}
r.Config.Logger.Log(fmt.Sprintf(logReqMsg,
r.ClientInfo.ServiceName, r.Operation.Name, string(b)))
}
const logRespMsg = `DEBUG: Response %s/%s Details:
---[ RESPONSE ]--------------------------------------
%s
-----------------------------------------------------`
const logRespErrMsg = `DEBUG ERROR: Response %s/%s:
---[ RESPONSE DUMP ERROR ]-----------------------------
%s
-----------------------------------------------------`
// LogHTTPResponseHandler is an SDK request handler to log the HTTP response
// received from a service. Will include the HTTP response body if the LogLevel
// of the request matches LogDebugWithHTTPBody.
var LogHTTPResponseHandler = request.NamedHandler{
Name: "awssdk.client.LogResponse",
Fn: logResponse,
}
func logResponse(r *request.Request) {
if !r.Config.LogLevel.AtLeast(aws.LogDebug) || r.Config.Logger == nil {
return
}
lw := &logWriter{r.Config.Logger, bytes.NewBuffer(nil)}
if r.HTTPResponse == nil {
lw.Logger.Log(fmt.Sprintf(logRespErrMsg,
r.ClientInfo.ServiceName, r.Operation.Name, "request's HTTPResponse is nil"))
return
}
logBody := r.Config.LogLevel.Matches(aws.LogDebugWithHTTPBody)
if logBody {
r.HTTPResponse.Body = &teeReaderCloser{
Reader: io.TeeReader(r.HTTPResponse.Body, lw),
Source: r.HTTPResponse.Body,
}
}
handlerFn := func(req *request.Request) {
b, err := httputil.DumpResponse(req.HTTPResponse, false)
if err != nil {
lw.Logger.Log(fmt.Sprintf(logRespErrMsg,
req.ClientInfo.ServiceName, req.Operation.Name, err))
return
}
lw.Logger.Log(fmt.Sprintf(logRespMsg,
req.ClientInfo.ServiceName, req.Operation.Name, string(b)))
if logBody {
b, err := ioutil.ReadAll(lw.buf)
if err != nil {
lw.Logger.Log(fmt.Sprintf(logRespErrMsg,
req.ClientInfo.ServiceName, req.Operation.Name, err))
return
}
lw.Logger.Log(string(b))
}
}
const handlerName = "awsdk.client.LogResponse.ResponseBody"
r.Handlers.Unmarshal.SetBackNamed(request.NamedHandler{
Name: handlerName, Fn: handlerFn,
})
r.Handlers.UnmarshalError.SetBackNamed(request.NamedHandler{
Name: handlerName, Fn: handlerFn,
})
}
// LogHTTPResponseHeaderHandler is an SDK request handler to log the HTTP
// response received from a service. Will only log the HTTP response's headers.
// The response payload will not be read.
var LogHTTPResponseHeaderHandler = request.NamedHandler{
Name: "awssdk.client.LogResponseHeader",
Fn: logResponseHeader,
}
func logResponseHeader(r *request.Request) {
if !r.Config.LogLevel.AtLeast(aws.LogDebug) || r.Config.Logger == nil {
return
}
b, err := httputil.DumpResponse(r.HTTPResponse, false)
if err != nil {
r.Config.Logger.Log(fmt.Sprintf(logRespErrMsg,
r.ClientInfo.ServiceName, r.Operation.Name, err))
return
}
r.Config.Logger.Log(fmt.Sprintf(logRespMsg,
r.ClientInfo.ServiceName, r.Operation.Name, string(b)))
}
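A minimal sketch of turning these handlers on from application code: they are installed by the SDK's defaults, so raising the log level on the config is enough. The region value is a placeholder.

package main

import (
	"fmt"

	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/aws/session"
)

func main() {
	// LogDebugWithHTTPBody exercises the body-dumping path in logRequest and
	// logResponse above; plain LogDebug logs headers and status lines only.
	sess := session.Must(session.NewSession(&aws.Config{
		Region:   aws.String("us-east-1"), // placeholder region
		LogLevel: aws.LogLevel(aws.LogDebugWithHTTPBody),
		// LoggerFunc adapts an ordinary function to the aws.Logger interface.
		Logger: aws.LoggerFunc(func(args ...interface{}) {
			fmt.Println(args...)
		}),
	}))
	_ = sess // clients built from sess will dump their requests and responses
}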

@ -0,0 +1,15 @@
package metadata
// ClientInfo wraps immutable data from the client.Client structure.
type ClientInfo struct {
ServiceName string
ServiceID string
APIVersion string
PartitionID string
Endpoint string
SigningName string
SigningRegion string
JSONVersion string
TargetPrefix string
ResolvedRegion string
}
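ClientInfo is what the logging handlers above read (r.ClientInfo.ServiceName, and so on) when formatting messages. A rough sketch of how the fields get populated; service packages normally do this in generated code, and every value below is illustrative:

package main

import (
	"fmt"

	"github.com/aws/aws-sdk-go/aws/client/metadata"
)

func main() {
	// Hypothetical values; real ClientInfo is filled in by each service's
	// generated New() constructor.
	info := metadata.ClientInfo{
		ServiceName:   "myservice",
		APIVersion:    "2020-01-01",
		Endpoint:      "https://myservice.us-east-1.amazonaws.com",
		SigningName:   "myservice",
		SigningRegion: "us-east-1",
	}
	fmt.Println(info.ServiceName, info.Endpoint)
}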

@ -0,0 +1,28 @@
package client
import (
"time"
"github.com/aws/aws-sdk-go/aws/request"
)
// NoOpRetryer provides a retryer that performs no retries.
// It should be used when we do not want retries to be performed.
type NoOpRetryer struct{}
// MaxRetries returns the maximum number of retries the service will use to make
// an individual API request; for NoOpRetryer, MaxRetries is always zero.
func (d NoOpRetryer) MaxRetries() int {
return 0
}
// ShouldRetry will always return false for NoOpRetryer, as it should never retry.
func (d NoOpRetryer) ShouldRetry(_ *request.Request) bool {
return false
}
// RetryRules returns the delay duration before retrying this request again;
// since NoOpRetryer does not retry, RetryRules always returns 0.
func (d NoOpRetryer) RetryRules(_ *request.Request) time.Duration {
return 0
}
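To attach this retryer to a client, the request.WithRetryer helper (referenced in the Config documentation later in this drop) stores it on a config in a type-safe way. A small sketch:

package main

import (
	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/aws/client"
	"github.com/aws/aws-sdk-go/aws/request"
	"github.com/aws/aws-sdk-go/aws/session"
)

func main() {
	// Clients created from this config fail fast: zero retries, no backoff.
	cfg := request.WithRetryer(aws.NewConfig().WithRegion("us-east-1"),
		client.NoOpRetryer{})
	sess := session.Must(session.NewSession(cfg))
	_ = sess
}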

@ -0,0 +1,631 @@
package aws
import (
"net/http"
"time"
"github.com/aws/aws-sdk-go/aws/credentials"
"github.com/aws/aws-sdk-go/aws/endpoints"
)
// UseServiceDefaultRetries instructs the config to use the service's own
// default number of retries. This is also the default behavior when
// Config.MaxRetries is nil.
const UseServiceDefaultRetries = -1
// RequestRetryer is an alias for a type that implements the request.Retryer
// interface.
type RequestRetryer interface{}
// A Config provides service configuration for service clients. By default,
// all clients will use the defaults.DefaultConfig structure.
//
// // Create Session with MaxRetries configuration to be shared by multiple
// // service clients.
// sess := session.Must(session.NewSession(&aws.Config{
// MaxRetries: aws.Int(3),
// }))
//
// // Create S3 service client with a specific Region.
// svc := s3.New(sess, &aws.Config{
// Region: aws.String("us-west-2"),
// })
type Config struct {
// Enables verbose error printing of all credential chain errors.
// Should be used when wanting to see all errors while attempting to
// retrieve credentials.
CredentialsChainVerboseErrors *bool
// The credentials object to use when signing requests. Defaults to a
// chain of credential providers to search for credentials in environment
// variables, shared credential file, and EC2 Instance Roles.
Credentials *credentials.Credentials
// An optional endpoint URL (hostname only or fully qualified URI)
// that overrides the default generated endpoint for a client. Set this
// to `nil`, or set the value to `""`, to use the default generated endpoint.
//
// Note: You must still provide a `Region` value when specifying an
// endpoint for a client.
Endpoint *string
// The resolver to use for looking up endpoints for AWS service clients
// to use based on region.
EndpointResolver endpoints.Resolver
// EnforceShouldRetryCheck is used in the AfterRetryHandler to always call
// ShouldRetry regardless of whether request.Retryable is set.
// This will utilize the ShouldRetry method of custom retryers. If EnforceShouldRetryCheck
// is not set, then ShouldRetry will only be called if request.Retryable is nil.
// Proper handling of the request.Retryable field is important when setting this field.
EnforceShouldRetryCheck *bool
// The region to send requests to. This parameter is required and must
// be configured globally or on a per-client basis unless otherwise
// noted. A full list of regions is found in the "Regions and Endpoints"
// document.
//
// See http://docs.aws.amazon.com/general/latest/gr/rande.html for AWS
// Regions and Endpoints.
Region *string
// Set this to `true` to disable SSL when sending requests. Defaults
// to `false`.
DisableSSL *bool
// The HTTP client to use when sending requests. Defaults to
// `http.DefaultClient`.
HTTPClient *http.Client
// An integer value representing the logging level. The default log level
// is zero (LogOff), which represents no logging. To enable logging, set
// it to a LogLevel value.
LogLevel *LogLevelType
// The logger writer interface to write logging messages to. Defaults to
// standard out.
Logger Logger
// The maximum number of times that a request will be retried for failures.
// Defaults to -1, which defers the max retry setting to the service
// specific configuration.
MaxRetries *int
// Retryer guides how HTTP requests should be retried in case of
// recoverable failures.
//
// When nil or the value does not implement the request.Retryer interface,
// the client.DefaultRetryer will be used.
//
// When both Retryer and MaxRetries are non-nil, the former is used and
// the latter ignored.
//
// To set the Retryer field in a type-safe manner and with chaining, use
// the request.WithRetryer helper function:
//
// cfg := request.WithRetryer(aws.NewConfig(), myRetryer)
//
Retryer RequestRetryer
// Disables semantic parameter validation, which validates input for
// missing required fields and/or other semantic request input errors.
DisableParamValidation *bool
// Disables the computation of request and response checksums, e.g.,
// CRC32 checksums in Amazon DynamoDB.
DisableComputeChecksums *bool
// Set this to `true` to force the request to use path-style addressing,
// i.e., `http://s3.amazonaws.com/BUCKET/KEY`. By default, the S3 client
// will use virtual hosted bucket addressing when possible
// (`http://BUCKET.s3.amazonaws.com/KEY`).
//
// Note: This configuration option is specific to the Amazon S3 service.
//
// See http://docs.aws.amazon.com/AmazonS3/latest/dev/VirtualHosting.html
// for Amazon S3: Virtual Hosting of Buckets
S3ForcePathStyle *bool
// Set this to `true` to disable the SDK adding the `Expect: 100-Continue`
// header to PUT requests over 2MB of content. 100-Continue instructs the
// HTTP client not to send the body until the service responds with a
// `continue` status. This is useful to prevent sending the request body
// until after the request is authenticated and validated.
//
// http://docs.aws.amazon.com/AmazonS3/latest/API/RESTObjectPUT.html
//
// 100-Continue is only enabled for Go 1.6 and above. See `http.Transport`'s
// `ExpectContinueTimeout` for information on adjusting the continue wait
// timeout. https://golang.org/pkg/net/http/#Transport
//
// You should use this flag to disable 100-Continue if you experience issues
// with proxies or third party S3 compatible services.
S3Disable100Continue *bool
// Set this to `true` to enable the S3 Accelerate feature. All operations
// compatible with S3 Accelerate will use the accelerate endpoint for
// requests. Requests not compatible will fall back to normal S3 requests.
//
// The bucket must be enabled for accelerate in order to be used with an
// S3 client with accelerate enabled. If the bucket is not enabled for
// accelerate an error will be returned. The bucket name must also be DNS
// compatible to work with accelerate.
S3UseAccelerate *bool
// S3DisableContentMD5Validation config option is temporarily disabled
// for S3 GetObject API calls; see #1837.
//
// Set this to `true` to disable the S3 service client from automatically
// adding the ContentMD5 to S3 Object Put and Upload API calls. This option
// will also disable the SDK from performing object ContentMD5 validation
// on GetObject API calls.
S3DisableContentMD5Validation *bool
// Set this to `true` to have the S3 service client use the region specified
// in the ARN, when an ARN is provided as an argument to a bucket parameter.
S3UseARNRegion *bool
// Set this to `true` to enable the SDK to unmarshal API response header maps to
// normalized lower case map keys.
//
// For example S3's X-Amz-Meta prefixed header will be unmarshaled to lower case
// Metadata member's map keys. The value of the header in the map is unaffected.
//
// The AWS SDK for Go v2 uses lower case header maps by default. The v1
// SDK provides this option as an opt-in for backwards compatibility.
LowerCaseHeaderMaps *bool
// Set this to `true` to disable the EC2Metadata client from overriding the
// default http.Client's Timeout. This is helpful if you do not want the
// EC2Metadata client to create a new http.Client. This option is only
// meaningful if you're not already using a custom HTTP client with the
// SDK. Enabled by default.
//
// Must be set and provided to the session.NewSession() in order to disable
// the EC2Metadata overriding the timeout for default credentials chain.
//
// Example:
// sess := session.Must(session.NewSession(aws.NewConfig()
// .WithEC2MetadataDisableTimeoutOverride(true)))
//
// svc := s3.New(sess)
//
EC2MetadataDisableTimeoutOverride *bool
// Instructs the endpoint generated for a service client to be the
// dual stack endpoint. The dual stack endpoint will support
// both IPv4 and IPv6 addressing.
//
// Setting this for a service which does not support dual stack will cause
// requests to fail. It is not recommended to set this value on the session,
// as it will apply to all service clients created with the session, even
// services which don't support dual stack endpoints.
//
// If the Endpoint config value is also provided the UseDualStack flag
// will be ignored.
//
// Example:
//
// sess := session.Must(session.NewSession())
//
// svc := s3.New(sess, &aws.Config{
// UseDualStack: aws.Bool(true),
// })
//
// Deprecated: This option will continue to function for S3 and S3 Control for backwards compatibility.
// UseDualStackEndpoint should be used to enable usage of a service's dual-stack endpoint for all service clients
// moving forward. For S3 and S3 Control, when UseDualStackEndpoint is set to a non-zero value it takes higher
// precedence than this option.
UseDualStack *bool
// Sets the resolver to resolve a dual-stack endpoint for the service.
UseDualStackEndpoint endpoints.DualStackEndpointState
// UseFIPSEndpoint specifies the resolver must resolve a FIPS endpoint.
UseFIPSEndpoint endpoints.FIPSEndpointState
// SleepDelay is an override for the func the SDK will call when sleeping
// during the lifecycle of a request. Specifically this will be used for
// request delays. This value should only be used for testing. To adjust
// the delay of a request see the aws/client.DefaultRetryer and
// aws/request.Retryer.
//
// SleepDelay will prevent any Context from being used for canceling retry
// delay of an API operation. It is recommended to not use SleepDelay at all
// and specify a Retryer instead.
SleepDelay func(time.Duration)
// DisableRestProtocolURICleaning will not clean the URL path when making rest protocol requests.
// Defaults to false. This would only be used for empty directory names in s3 requests.
//
// Example:
// sess := session.Must(session.NewSession(&aws.Config{
// DisableRestProtocolURICleaning: aws.Bool(true),
// }))
//
// svc := s3.New(sess)
// out, err := svc.GetObject(&s3.GetObjectInput {
// Bucket: aws.String("bucketname"),
// Key: aws.String("//foo//bar//moo"),
// })
DisableRestProtocolURICleaning *bool
// EnableEndpointDiscovery will allow for endpoint discovery on operations that
// have it defined in their model. By default, endpoint discovery is off.
// To use EndpointDiscovery, Endpoint should be unset or set to an empty string.
//
// Example:
// sess := session.Must(session.NewSession(&aws.Config{
// EnableEndpointDiscovery: aws.Bool(true),
// }))
//
// svc := s3.New(sess)
// out, err := svc.GetObject(&s3.GetObjectInput {
// Bucket: aws.String("bucketname"),
// Key: aws.String("/foo/bar/moo"),
// })
EnableEndpointDiscovery *bool
// DisableEndpointHostPrefix will disable the SDK's behavior of prefixing
// request endpoint hosts with modeled information.
//
// Disabling this feature is useful when you want to use local endpoints
// for testing that do not support the modeled host prefix pattern.
DisableEndpointHostPrefix *bool
// STSRegionalEndpoint will enable regional or legacy endpoint resolving
STSRegionalEndpoint endpoints.STSRegionalEndpoint
// S3UsEast1RegionalEndpoint will enable regional or legacy endpoint resolving
S3UsEast1RegionalEndpoint endpoints.S3UsEast1RegionalEndpoint
}
// NewConfig returns a new Config pointer that can be chained with builder
// methods to set multiple configuration values inline without using pointers.
//
// // Create Session with MaxRetries configuration to be shared by multiple
// // service clients.
// sess := session.Must(session.NewSession(aws.NewConfig().
// WithMaxRetries(3),
// ))
//
// // Create S3 service client with a specific Region.
// svc := s3.New(sess, aws.NewConfig().
// WithRegion("us-west-2"),
// )
func NewConfig() *Config {
return &Config{}
}
// WithCredentialsChainVerboseErrors sets a config verbose errors boolean,
// returning a Config pointer.
func (c *Config) WithCredentialsChainVerboseErrors(verboseErrs bool) *Config {
c.CredentialsChainVerboseErrors = &verboseErrs
return c
}
// WithCredentials sets a config Credentials value returning a Config pointer
// for chaining.
func (c *Config) WithCredentials(creds *credentials.Credentials) *Config {
c.Credentials = creds
return c
}
// WithEndpoint sets a config Endpoint value returning a Config pointer for
// chaining.
func (c *Config) WithEndpoint(endpoint string) *Config {
c.Endpoint = &endpoint
return c
}
// WithEndpointResolver sets a config EndpointResolver value returning a
// Config pointer for chaining.
func (c *Config) WithEndpointResolver(resolver endpoints.Resolver) *Config {
c.EndpointResolver = resolver
return c
}
// WithRegion sets a config Region value returning a Config pointer for
// chaining.
func (c *Config) WithRegion(region string) *Config {
c.Region = &region
return c
}
// WithDisableSSL sets a config DisableSSL value returning a Config pointer
// for chaining.
func (c *Config) WithDisableSSL(disable bool) *Config {
c.DisableSSL = &disable
return c
}
// WithHTTPClient sets a config HTTPClient value returning a Config pointer
// for chaining.
func (c *Config) WithHTTPClient(client *http.Client) *Config {
c.HTTPClient = client
return c
}
// WithMaxRetries sets a config MaxRetries value returning a Config pointer
// for chaining.
func (c *Config) WithMaxRetries(max int) *Config {
c.MaxRetries = &max
return c
}
// WithDisableParamValidation sets a config DisableParamValidation value
// returning a Config pointer for chaining.
func (c *Config) WithDisableParamValidation(disable bool) *Config {
c.DisableParamValidation = &disable
return c
}
// WithDisableComputeChecksums sets a config DisableComputeChecksums value
// returning a Config pointer for chaining.
func (c *Config) WithDisableComputeChecksums(disable bool) *Config {
c.DisableComputeChecksums = &disable
return c
}
// WithLogLevel sets a config LogLevel value returning a Config pointer for
// chaining.
func (c *Config) WithLogLevel(level LogLevelType) *Config {
c.LogLevel = &level
return c
}
// WithLogger sets a config Logger value returning a Config pointer for
// chaining.
func (c *Config) WithLogger(logger Logger) *Config {
c.Logger = logger
return c
}
// WithS3ForcePathStyle sets a config S3ForcePathStyle value returning a Config
// pointer for chaining.
func (c *Config) WithS3ForcePathStyle(force bool) *Config {
c.S3ForcePathStyle = &force
return c
}
// WithS3Disable100Continue sets a config S3Disable100Continue value returning
// a Config pointer for chaining.
func (c *Config) WithS3Disable100Continue(disable bool) *Config {
c.S3Disable100Continue = &disable
return c
}
// WithS3UseAccelerate sets a config S3UseAccelerate value returning a Config
// pointer for chaining.
func (c *Config) WithS3UseAccelerate(enable bool) *Config {
c.S3UseAccelerate = &enable
return c
}
// WithS3DisableContentMD5Validation sets a config
// S3DisableContentMD5Validation value returning a Config pointer for chaining.
func (c *Config) WithS3DisableContentMD5Validation(enable bool) *Config {
c.S3DisableContentMD5Validation = &enable
return c
}
// WithS3UseARNRegion sets a config S3UseARNRegion value,
// returning a Config pointer for chaining.
func (c *Config) WithS3UseARNRegion(enable bool) *Config {
c.S3UseARNRegion = &enable
return c
}
// WithUseDualStack sets a config UseDualStack value returning a Config
// pointer for chaining.
func (c *Config) WithUseDualStack(enable bool) *Config {
c.UseDualStack = &enable
return c
}
// WithEC2MetadataDisableTimeoutOverride sets a config EC2MetadataDisableTimeoutOverride value
// returning a Config pointer for chaining.
func (c *Config) WithEC2MetadataDisableTimeoutOverride(enable bool) *Config {
c.EC2MetadataDisableTimeoutOverride = &enable
return c
}
// WithSleepDelay overrides the function used to sleep while waiting for the
// next retry. Defaults to time.Sleep.
func (c *Config) WithSleepDelay(fn func(time.Duration)) *Config {
c.SleepDelay = fn
return c
}
// WithEndpointDiscovery will set whether or not to use endpoint discovery.
func (c *Config) WithEndpointDiscovery(t bool) *Config {
c.EnableEndpointDiscovery = &t
return c
}
// WithDisableEndpointHostPrefix will set whether or not to use modeled host prefix
// when making requests.
func (c *Config) WithDisableEndpointHostPrefix(t bool) *Config {
c.DisableEndpointHostPrefix = &t
return c
}
// WithSTSRegionalEndpoint will set whether or not to use regional endpoint flag
// when resolving the endpoint for a service
func (c *Config) WithSTSRegionalEndpoint(sre endpoints.STSRegionalEndpoint) *Config {
c.STSRegionalEndpoint = sre
return c
}
// WithS3UsEast1RegionalEndpoint will set whether or not to use regional endpoint flag
// when resolving the endpoint for a service
func (c *Config) WithS3UsEast1RegionalEndpoint(sre endpoints.S3UsEast1RegionalEndpoint) *Config {
c.S3UsEast1RegionalEndpoint = sre
return c
}
// WithLowerCaseHeaderMaps sets a config LowerCaseHeaderMaps value
// returning a Config pointer for chaining.
func (c *Config) WithLowerCaseHeaderMaps(t bool) *Config {
c.LowerCaseHeaderMaps = &t
return c
}
// WithDisableRestProtocolURICleaning sets a config DisableRestProtocolURICleaning value
// returning a Config pointer for chaining.
func (c *Config) WithDisableRestProtocolURICleaning(t bool) *Config {
c.DisableRestProtocolURICleaning = &t
return c
}
// MergeIn merges the passed in configs into the existing config object.
func (c *Config) MergeIn(cfgs ...*Config) {
for _, other := range cfgs {
mergeInConfig(c, other)
}
}
func mergeInConfig(dst *Config, other *Config) {
if other == nil {
return
}
if other.CredentialsChainVerboseErrors != nil {
dst.CredentialsChainVerboseErrors = other.CredentialsChainVerboseErrors
}
if other.Credentials != nil {
dst.Credentials = other.Credentials
}
if other.Endpoint != nil {
dst.Endpoint = other.Endpoint
}
if other.EndpointResolver != nil {
dst.EndpointResolver = other.EndpointResolver
}
if other.Region != nil {
dst.Region = other.Region
}
if other.DisableSSL != nil {
dst.DisableSSL = other.DisableSSL
}
if other.HTTPClient != nil {
dst.HTTPClient = other.HTTPClient
}
if other.LogLevel != nil {
dst.LogLevel = other.LogLevel
}
if other.Logger != nil {
dst.Logger = other.Logger
}
if other.MaxRetries != nil {
dst.MaxRetries = other.MaxRetries
}
if other.Retryer != nil {
dst.Retryer = other.Retryer
}
if other.DisableParamValidation != nil {
dst.DisableParamValidation = other.DisableParamValidation
}
if other.DisableComputeChecksums != nil {
dst.DisableComputeChecksums = other.DisableComputeChecksums
}
if other.S3ForcePathStyle != nil {
dst.S3ForcePathStyle = other.S3ForcePathStyle
}
if other.S3Disable100Continue != nil {
dst.S3Disable100Continue = other.S3Disable100Continue
}
if other.S3UseAccelerate != nil {
dst.S3UseAccelerate = other.S3UseAccelerate
}
if other.S3DisableContentMD5Validation != nil {
dst.S3DisableContentMD5Validation = other.S3DisableContentMD5Validation
}
if other.S3UseARNRegion != nil {
dst.S3UseARNRegion = other.S3UseARNRegion
}
if other.UseDualStack != nil {
dst.UseDualStack = other.UseDualStack
}
if other.UseDualStackEndpoint != endpoints.DualStackEndpointStateUnset {
dst.UseDualStackEndpoint = other.UseDualStackEndpoint
}
if other.EC2MetadataDisableTimeoutOverride != nil {
dst.EC2MetadataDisableTimeoutOverride = other.EC2MetadataDisableTimeoutOverride
}
if other.SleepDelay != nil {
dst.SleepDelay = other.SleepDelay
}
if other.DisableRestProtocolURICleaning != nil {
dst.DisableRestProtocolURICleaning = other.DisableRestProtocolURICleaning
}
if other.EnforceShouldRetryCheck != nil {
dst.EnforceShouldRetryCheck = other.EnforceShouldRetryCheck
}
if other.EnableEndpointDiscovery != nil {
dst.EnableEndpointDiscovery = other.EnableEndpointDiscovery
}
if other.DisableEndpointHostPrefix != nil {
dst.DisableEndpointHostPrefix = other.DisableEndpointHostPrefix
}
if other.STSRegionalEndpoint != endpoints.UnsetSTSEndpoint {
dst.STSRegionalEndpoint = other.STSRegionalEndpoint
}
if other.S3UsEast1RegionalEndpoint != endpoints.UnsetS3UsEast1Endpoint {
dst.S3UsEast1RegionalEndpoint = other.S3UsEast1RegionalEndpoint
}
if other.LowerCaseHeaderMaps != nil {
dst.LowerCaseHeaderMaps = other.LowerCaseHeaderMaps
}
if other.UseFIPSEndpoint != endpoints.FIPSEndpointStateUnset {
dst.UseFIPSEndpoint = other.UseFIPSEndpoint
}
}
// Copy will return a shallow copy of the Config object. If any additional
// configurations are provided they will be merged into the new config returned.
func (c *Config) Copy(cfgs ...*Config) *Config {
dst := &Config{}
dst.MergeIn(c)
for _, cfg := range cfgs {
dst.MergeIn(cfg)
}
return dst
}
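Because every scalar field on Config is a pointer, MergeIn treats nil as "unset": only explicitly set fields override. A short sketch of how Copy and MergeIn compose; the region values are placeholders:

package main

import (
	"fmt"

	"github.com/aws/aws-sdk-go/aws"
)

func main() {
	base := aws.NewConfig().
		WithRegion("us-east-1").
		WithMaxRetries(3)

	// Copy returns a fresh Config with the override merged in; base is left
	// untouched, and the override's nil MaxRetries does not clobber base's 3.
	override := &aws.Config{Region: aws.String("eu-west-1")}
	merged := base.Copy(override)

	fmt.Println(aws.StringValue(merged.Region))  // eu-west-1
	fmt.Println(aws.IntValue(merged.MaxRetries)) // 3
}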

@ -0,0 +1,38 @@
//go:build !go1.9
// +build !go1.9
package aws
import "time"
// Context is a copy of the Go v1.7 stdlib's context.Context interface.
// It is represented as an SDK interface to enable you to use the "WithContext"
// API methods with Go v1.6 and a Context type such as golang.org/x/net/context.
//
// See https://golang.org/pkg/context on how to use contexts.
type Context interface {
// Deadline returns the time when work done on behalf of this context
// should be canceled. Deadline returns ok==false when no deadline is
// set. Successive calls to Deadline return the same results.
Deadline() (deadline time.Time, ok bool)
// Done returns a channel that's closed when work done on behalf of this
// context should be canceled. Done may return nil if this context can
// never be canceled. Successive calls to Done return the same value.
Done() <-chan struct{}
// Err returns a non-nil error value after Done is closed. Err returns
// Canceled if the context was canceled or DeadlineExceeded if the
// context's deadline passed. No other values for Err are defined.
// After Done is closed, successive calls to Err return the same value.
Err() error
// Value returns the value associated with this context for key, or nil
// if no value is associated with key. Successive calls to Value with
// the same key returns the same result.
//
// Use context values only for request-scoped data that transits
// processes and API boundaries, not for passing optional parameters to
// functions.
Value(key interface{}) interface{}
}

@ -0,0 +1,12 @@
//go:build go1.9
// +build go1.9
package aws
import "context"
// Context is an alias of the Go stdlib's context.Context interface.
// It can be used within the SDK's API operation "WithContext" methods.
//
// See https://golang.org/pkg/context on how to use contexts.
type Context = context.Context

@ -0,0 +1,23 @@
//go:build !go1.7
// +build !go1.7
package aws
import (
"github.com/aws/aws-sdk-go/internal/context"
)
// BackgroundContext returns a context that will never be canceled, has no
// values, and no deadline. This context is used by the SDK to provide
// backwards compatibility with non-context API operations and functionality.
//
// Go 1.6 and before:
// This context function is equivalent to context.Background in the Go stdlib.
//
// Go 1.7 and later:
// The context returned will be the value returned by context.Background()
//
// See https://golang.org/pkg/context for more information on Contexts.
func BackgroundContext() Context {
return context.BackgroundCtx
}

@ -0,0 +1,21 @@
//go:build go1.7
// +build go1.7
package aws
import "context"
// BackgroundContext returns a context that will never be canceled, has no
// values, and no deadline. This context is used by the SDK to provide
// backwards compatibility with non-context API operations and functionality.
//
// Go 1.6 and before:
// This context function is equivalent to context.Background in the Go stdlib.
//
// Go 1.7 and later:
// The context returned will be the value returned by context.Background()
//
// See https://golang.org/pkg/context for more information on Contexts.
func BackgroundContext() Context {
return context.Background()
}

@ -0,0 +1,24 @@
package aws
import (
"time"
)
// SleepWithContext will wait until the timer duration expires or the context
// is canceled, whichever happens first. If the context is canceled, the
// Context's error will be returned.
//
// Expects Context to always return a non-nil error if the Done channel is closed.
func SleepWithContext(ctx Context, dur time.Duration) error {
t := time.NewTimer(dur)
defer t.Stop()
select {
case <-t.C:
break
case <-ctx.Done():
return ctx.Err()
}
return nil
}
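A usage sketch: with a deadline-carrying context the sleep is cut short and the context's error surfaces, which is how the SDK's retry handling aborts backoff delays when a request is canceled.

package main

import (
	"context"
	"fmt"
	"time"

	"github.com/aws/aws-sdk-go/aws"
)

func main() {
	ctx, cancel := context.WithTimeout(context.Background(), 50*time.Millisecond)
	defer cancel()

	// The requested sleep (1s) exceeds the context deadline (50ms), so
	// SleepWithContext returns the context's error early.
	if err := aws.SleepWithContext(ctx, time.Second); err != nil {
		fmt.Println("sleep interrupted:", err) // context deadline exceeded
	}
}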

@ -0,0 +1,918 @@
package aws
import "time"
// String returns a pointer to the string value passed in.
func String(v string) *string {
return &v
}
// StringValue returns the value of the string pointer passed in or
// "" if the pointer is nil.
func StringValue(v *string) string {
if v != nil {
return *v
}
return ""
}
// StringSlice converts a slice of string values into a slice of
// string pointers
func StringSlice(src []string) []*string {
dst := make([]*string, len(src))
for i := 0; i < len(src); i++ {
dst[i] = &(src[i])
}
return dst
}
// StringValueSlice converts a slice of string pointers into a slice of
// string values
func StringValueSlice(src []*string) []string {
dst := make([]string, len(src))
for i := 0; i < len(src); i++ {
if src[i] != nil {
dst[i] = *(src[i])
}
}
return dst
}
// StringMap converts a string map of string values into a string
// map of string pointers
func StringMap(src map[string]string) map[string]*string {
dst := make(map[string]*string)
for k, val := range src {
v := val
dst[k] = &v
}
return dst
}
// StringValueMap converts a string map of string pointers into a string
// map of string values
func StringValueMap(src map[string]*string) map[string]string {
dst := make(map[string]string)
for k, val := range src {
if val != nil {
dst[k] = *val
}
}
return dst
}
// Bool returns a pointer to the bool value passed in.
func Bool(v bool) *bool {
return &v
}
// BoolValue returns the value of the bool pointer passed in or
// false if the pointer is nil.
func BoolValue(v *bool) bool {
if v != nil {
return *v
}
return false
}
// BoolSlice converts a slice of bool values into a slice of
// bool pointers
func BoolSlice(src []bool) []*bool {
dst := make([]*bool, len(src))
for i := 0; i < len(src); i++ {
dst[i] = &(src[i])
}
return dst
}
// BoolValueSlice converts a slice of bool pointers into a slice of
// bool values
func BoolValueSlice(src []*bool) []bool {
dst := make([]bool, len(src))
for i := 0; i < len(src); i++ {
if src[i] != nil {
dst[i] = *(src[i])
}
}
return dst
}
// BoolMap converts a string map of bool values into a string
// map of bool pointers
func BoolMap(src map[string]bool) map[string]*bool {
dst := make(map[string]*bool)
for k, val := range src {
v := val
dst[k] = &v
}
return dst
}
// BoolValueMap converts a string map of bool pointers into a string
// map of bool values
func BoolValueMap(src map[string]*bool) map[string]bool {
dst := make(map[string]bool)
for k, val := range src {
if val != nil {
dst[k] = *val
}
}
return dst
}
// Int returns a pointer to the int value passed in.
func Int(v int) *int {
return &v
}
// IntValue returns the value of the int pointer passed in or
// 0 if the pointer is nil.
func IntValue(v *int) int {
if v != nil {
return *v
}
return 0
}
// IntSlice converts a slice of int values into a slice of
// int pointers
func IntSlice(src []int) []*int {
dst := make([]*int, len(src))
for i := 0; i < len(src); i++ {
dst[i] = &(src[i])
}
return dst
}
// IntValueSlice converts a slice of int pointers into a slice of
// int values
func IntValueSlice(src []*int) []int {
dst := make([]int, len(src))
for i := 0; i < len(src); i++ {
if src[i] != nil {
dst[i] = *(src[i])
}
}
return dst
}
// IntMap converts a string map of int values into a string
// map of int pointers
func IntMap(src map[string]int) map[string]*int {
dst := make(map[string]*int)
for k, val := range src {
v := val
dst[k] = &v
}
return dst
}
// IntValueMap converts a string map of int pointers into a string
// map of int values
func IntValueMap(src map[string]*int) map[string]int {
dst := make(map[string]int)
for k, val := range src {
if val != nil {
dst[k] = *val
}
}
return dst
}
// Uint returns a pointer to the uint value passed in.
func Uint(v uint) *uint {
return &v
}
// UintValue returns the value of the uint pointer passed in or
// 0 if the pointer is nil.
func UintValue(v *uint) uint {
if v != nil {
return *v
}
return 0
}
// UintSlice converts a slice of uint values into a slice of
// uint pointers
func UintSlice(src []uint) []*uint {
dst := make([]*uint, len(src))
for i := 0; i < len(src); i++ {
dst[i] = &(src[i])
}
return dst
}
// UintValueSlice converts a slice of uint pointers into a slice of
// uint values
func UintValueSlice(src []*uint) []uint {
dst := make([]uint, len(src))
for i := 0; i < len(src); i++ {
if src[i] != nil {
dst[i] = *(src[i])
}
}
return dst
}
// UintMap converts a string map of uint values into a string
// map of uint pointers
func UintMap(src map[string]uint) map[string]*uint {
dst := make(map[string]*uint)
for k, val := range src {
v := val
dst[k] = &v
}
return dst
}
// UintValueMap converts a string map of uint pointers into a string
// map of uint values
func UintValueMap(src map[string]*uint) map[string]uint {
dst := make(map[string]uint)
for k, val := range src {
if val != nil {
dst[k] = *val
}
}
return dst
}
// Int8 returns a pointer to the int8 value passed in.
func Int8(v int8) *int8 {
return &v
}
// Int8Value returns the value of the int8 pointer passed in or
// 0 if the pointer is nil.
func Int8Value(v *int8) int8 {
if v != nil {
return *v
}
return 0
}
// Int8Slice converts a slice of int8 values into a slice of
// int8 pointers
func Int8Slice(src []int8) []*int8 {
dst := make([]*int8, len(src))
for i := 0; i < len(src); i++ {
dst[i] = &(src[i])
}
return dst
}
// Int8ValueSlice converts a slice of int8 pointers into a slice of
// int8 values
func Int8ValueSlice(src []*int8) []int8 {
dst := make([]int8, len(src))
for i := 0; i < len(src); i++ {
if src[i] != nil {
dst[i] = *(src[i])
}
}
return dst
}
// Int8Map converts a string map of int8 values into a string
// map of int8 pointers
func Int8Map(src map[string]int8) map[string]*int8 {
dst := make(map[string]*int8)
for k, val := range src {
v := val
dst[k] = &v
}
return dst
}
// Int8ValueMap converts a string map of int8 pointers into a string
// map of int8 values
func Int8ValueMap(src map[string]*int8) map[string]int8 {
dst := make(map[string]int8)
for k, val := range src {
if val != nil {
dst[k] = *val
}
}
return dst
}
// Int16 returns a pointer to the int16 value passed in.
func Int16(v int16) *int16 {
return &v
}
// Int16Value returns the value of the int16 pointer passed in or
// 0 if the pointer is nil.
func Int16Value(v *int16) int16 {
if v != nil {
return *v
}
return 0
}
// Int16Slice converts a slice of int16 values into a slice of
// int16 pointers
func Int16Slice(src []int16) []*int16 {
dst := make([]*int16, len(src))
for i := 0; i < len(src); i++ {
dst[i] = &(src[i])
}
return dst
}
// Int16ValueSlice converts a slice of int16 pointers into a slice of
// int16 values
func Int16ValueSlice(src []*int16) []int16 {
dst := make([]int16, len(src))
for i := 0; i < len(src); i++ {
if src[i] != nil {
dst[i] = *(src[i])
}
}
return dst
}
// Int16Map converts a string map of int16 values into a string
// map of int16 pointers
func Int16Map(src map[string]int16) map[string]*int16 {
dst := make(map[string]*int16)
for k, val := range src {
v := val
dst[k] = &v
}
return dst
}
// Int16ValueMap converts a string map of int16 pointers into a string
// map of int16 values
func Int16ValueMap(src map[string]*int16) map[string]int16 {
dst := make(map[string]int16)
for k, val := range src {
if val != nil {
dst[k] = *val
}
}
return dst
}
// Int32 returns a pointer to the int32 value passed in.
func Int32(v int32) *int32 {
return &v
}
// Int32Value returns the value of the int32 pointer passed in or
// 0 if the pointer is nil.
func Int32Value(v *int32) int32 {
if v != nil {
return *v
}
return 0
}
// Int32Slice converts a slice of int32 values into a slice of
// int32 pointers
func Int32Slice(src []int32) []*int32 {
dst := make([]*int32, len(src))
for i := 0; i < len(src); i++ {
dst[i] = &(src[i])
}
return dst
}
// Int32ValueSlice converts a slice of int32 pointers into a slice of
// int32 values
func Int32ValueSlice(src []*int32) []int32 {
dst := make([]int32, len(src))
for i := 0; i < len(src); i++ {
if src[i] != nil {
dst[i] = *(src[i])
}
}
return dst
}
// Int32Map converts a string map of int32 values into a string
// map of int32 pointers
func Int32Map(src map[string]int32) map[string]*int32 {
dst := make(map[string]*int32)
for k, val := range src {
v := val
dst[k] = &v
}
return dst
}
// Int32ValueMap converts a string map of int32 pointers into a string
// map of int32 values
func Int32ValueMap(src map[string]*int32) map[string]int32 {
dst := make(map[string]int32)
for k, val := range src {
if val != nil {
dst[k] = *val
}
}
return dst
}
// Int64 returns a pointer to the int64 value passed in.
func Int64(v int64) *int64 {
return &v
}
// Int64Value returns the value of the int64 pointer passed in or
// 0 if the pointer is nil.
func Int64Value(v *int64) int64 {
if v != nil {
return *v
}
return 0
}
// Int64Slice converts a slice of int64 values into a slice of
// int64 pointers
func Int64Slice(src []int64) []*int64 {
dst := make([]*int64, len(src))
for i := 0; i < len(src); i++ {
dst[i] = &(src[i])
}
return dst
}
// Int64ValueSlice converts a slice of int64 pointers into a slice of
// int64 values
func Int64ValueSlice(src []*int64) []int64 {
dst := make([]int64, len(src))
for i := 0; i < len(src); i++ {
if src[i] != nil {
dst[i] = *(src[i])
}
}
return dst
}
// Int64Map converts a string map of int64 values into a string
// map of int64 pointers
func Int64Map(src map[string]int64) map[string]*int64 {
dst := make(map[string]*int64)
for k, val := range src {
v := val
dst[k] = &v
}
return dst
}
// Int64ValueMap converts a string map of int64 pointers into a string
// map of int64 values
func Int64ValueMap(src map[string]*int64) map[string]int64 {
dst := make(map[string]int64)
for k, val := range src {
if val != nil {
dst[k] = *val
}
}
return dst
}
// Uint8 returns a pointer to the uint8 value passed in.
func Uint8(v uint8) *uint8 {
return &v
}
// Uint8Value returns the value of the uint8 pointer passed in or
// 0 if the pointer is nil.
func Uint8Value(v *uint8) uint8 {
if v != nil {
return *v
}
return 0
}
// Uint8Slice converts a slice of uint8 values into a slice of
// uint8 pointers
func Uint8Slice(src []uint8) []*uint8 {
dst := make([]*uint8, len(src))
for i := 0; i < len(src); i++ {
dst[i] = &(src[i])
}
return dst
}
// Uint8ValueSlice converts a slice of uint8 pointers into a slice of
// uint8 values
func Uint8ValueSlice(src []*uint8) []uint8 {
dst := make([]uint8, len(src))
for i := 0; i < len(src); i++ {
if src[i] != nil {
dst[i] = *(src[i])
}
}
return dst
}
// Uint8Map converts a string map of uint8 values into a string
// map of uint8 pointers
func Uint8Map(src map[string]uint8) map[string]*uint8 {
dst := make(map[string]*uint8)
for k, val := range src {
v := val
dst[k] = &v
}
return dst
}
// Uint8ValueMap converts a string map of uint8 pointers into a string
// map of uint8 values
func Uint8ValueMap(src map[string]*uint8) map[string]uint8 {
dst := make(map[string]uint8)
for k, val := range src {
if val != nil {
dst[k] = *val
}
}
return dst
}
// Uint16 returns a pointer to the uint16 value passed in.
func Uint16(v uint16) *uint16 {
return &v
}
// Uint16Value returns the value of the uint16 pointer passed in or
// 0 if the pointer is nil.
func Uint16Value(v *uint16) uint16 {
if v != nil {
return *v
}
return 0
}
// Uint16Slice converts a slice of uint16 values into a slice of
// uint16 pointers
func Uint16Slice(src []uint16) []*uint16 {
dst := make([]*uint16, len(src))
for i := 0; i < len(src); i++ {
dst[i] = &(src[i])
}
return dst
}
// Uint16ValueSlice converts a slice of uint16 pointers into a slice of
// uint16 values
func Uint16ValueSlice(src []*uint16) []uint16 {
dst := make([]uint16, len(src))
for i := 0; i < len(src); i++ {
if src[i] != nil {
dst[i] = *(src[i])
}
}
return dst
}
// Uint16Map converts a string map of uint16 values into a string
// map of uint16 pointers
func Uint16Map(src map[string]uint16) map[string]*uint16 {
dst := make(map[string]*uint16)
for k, val := range src {
v := val
dst[k] = &v
}
return dst
}
// Uint16ValueMap converts a string map of uint16 pointers into a string
// map of uint16 values
func Uint16ValueMap(src map[string]*uint16) map[string]uint16 {
dst := make(map[string]uint16)
for k, val := range src {
if val != nil {
dst[k] = *val
}
}
return dst
}
// Uint32 returns a pointer to the uint32 value passed in.
func Uint32(v uint32) *uint32 {
return &v
}
// Uint32Value returns the value of the uint32 pointer passed in or
// 0 if the pointer is nil.
func Uint32Value(v *uint32) uint32 {
if v != nil {
return *v
}
return 0
}
// Uint32Slice converts a slice of uint32 values into a slice of
// uint32 pointers
func Uint32Slice(src []uint32) []*uint32 {
dst := make([]*uint32, len(src))
for i := 0; i < len(src); i++ {
dst[i] = &(src[i])
}
return dst
}
// Uint32ValueSlice converts a slice of uint32 pointers into a slice of
// uint32 values
func Uint32ValueSlice(src []*uint32) []uint32 {
dst := make([]uint32, len(src))
for i := 0; i < len(src); i++ {
if src[i] != nil {
dst[i] = *(src[i])
}
}
return dst
}
// Uint32Map converts a string map of uint32 values into a string
// map of uint32 pointers
func Uint32Map(src map[string]uint32) map[string]*uint32 {
dst := make(map[string]*uint32)
for k, val := range src {
v := val
dst[k] = &v
}
return dst
}
// Uint32ValueMap converts a string map of uint32 pointers into a string
// map of uint32 values
func Uint32ValueMap(src map[string]*uint32) map[string]uint32 {
dst := make(map[string]uint32)
for k, val := range src {
if val != nil {
dst[k] = *val
}
}
return dst
}
// Uint64 returns a pointer to the uint64 value passed in.
func Uint64(v uint64) *uint64 {
return &v
}
// Uint64Value returns the value of the uint64 pointer passed in or
// 0 if the pointer is nil.
func Uint64Value(v *uint64) uint64 {
if v != nil {
return *v
}
return 0
}
// Uint64Slice converts a slice of uint64 values into a slice of
// uint64 pointers
func Uint64Slice(src []uint64) []*uint64 {
dst := make([]*uint64, len(src))
for i := 0; i < len(src); i++ {
dst[i] = &(src[i])
}
return dst
}
// Uint64ValueSlice converts a slice of uint64 pointers into a slice of
// uint64 values
func Uint64ValueSlice(src []*uint64) []uint64 {
dst := make([]uint64, len(src))
for i := 0; i < len(src); i++ {
if src[i] != nil {
dst[i] = *(src[i])
}
}
return dst
}
// Uint64Map converts a string map of uint64 values into a string
// map of uint64 pointers
func Uint64Map(src map[string]uint64) map[string]*uint64 {
dst := make(map[string]*uint64)
for k, val := range src {
v := val
dst[k] = &v
}
return dst
}
// Uint64ValueMap converts a string map of uint64 pointers into a string
// map of uint64 values
func Uint64ValueMap(src map[string]*uint64) map[string]uint64 {
dst := make(map[string]uint64)
for k, val := range src {
if val != nil {
dst[k] = *val
}
}
return dst
}
// Float32 returns a pointer to the float32 value passed in.
func Float32(v float32) *float32 {
return &v
}
// Float32Value returns the value of the float32 pointer passed in or
// 0 if the pointer is nil.
func Float32Value(v *float32) float32 {
if v != nil {
return *v
}
return 0
}
// Float32Slice converts a slice of float32 values into a slice of
// float32 pointers
func Float32Slice(src []float32) []*float32 {
dst := make([]*float32, len(src))
for i := 0; i < len(src); i++ {
dst[i] = &(src[i])
}
return dst
}
// Float32ValueSlice converts a slice of float32 pointers into a slice of
// float32 values
func Float32ValueSlice(src []*float32) []float32 {
dst := make([]float32, len(src))
for i := 0; i < len(src); i++ {
if src[i] != nil {
dst[i] = *(src[i])
}
}
return dst
}
// Float32Map converts a string map of float32 values into a string
// map of float32 pointers
func Float32Map(src map[string]float32) map[string]*float32 {
dst := make(map[string]*float32)
for k, val := range src {
v := val
dst[k] = &v
}
return dst
}
// Float32ValueMap converts a string map of float32 pointers into a string
// map of float32 values
func Float32ValueMap(src map[string]*float32) map[string]float32 {
dst := make(map[string]float32)
for k, val := range src {
if val != nil {
dst[k] = *val
}
}
return dst
}
// Float64 returns a pointer to the float64 value passed in.
func Float64(v float64) *float64 {
return &v
}
// Float64Value returns the value of the float64 pointer passed in or
// 0 if the pointer is nil.
func Float64Value(v *float64) float64 {
if v != nil {
return *v
}
return 0
}
// Float64Slice converts a slice of float64 values into a slice of
// float64 pointers
func Float64Slice(src []float64) []*float64 {
dst := make([]*float64, len(src))
for i := 0; i < len(src); i++ {
dst[i] = &(src[i])
}
return dst
}
// Float64ValueSlice converts a slice of float64 pointers into a slice of
// float64 values
func Float64ValueSlice(src []*float64) []float64 {
dst := make([]float64, len(src))
for i := 0; i < len(src); i++ {
if src[i] != nil {
dst[i] = *(src[i])
}
}
return dst
}
// Float64Map converts a string map of float64 values into a string
// map of float64 pointers
func Float64Map(src map[string]float64) map[string]*float64 {
dst := make(map[string]*float64)
for k, val := range src {
v := val
dst[k] = &v
}
return dst
}
// Float64ValueMap converts a string map of float64 pointers into a string
// map of float64 values
func Float64ValueMap(src map[string]*float64) map[string]float64 {
dst := make(map[string]float64)
for k, val := range src {
if val != nil {
dst[k] = *val
}
}
return dst
}
// Time returns a pointer to the time.Time value passed in.
func Time(v time.Time) *time.Time {
return &v
}
// TimeValue returns the value of the time.Time pointer passed in or
// time.Time{} if the pointer is nil.
func TimeValue(v *time.Time) time.Time {
if v != nil {
return *v
}
return time.Time{}
}
// SecondsTimeValue converts an int64 pointer to a time.Time value
// representing seconds since Epoch or time.Time{} if the pointer is nil.
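// Note that the value is divided by 1000 before conversion, so the input
// is effectively interpreted as milliseconds truncated to whole seconds.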
func SecondsTimeValue(v *int64) time.Time {
if v != nil {
return time.Unix((*v / 1000), 0)
}
return time.Time{}
}
// MillisecondsTimeValue converts an int64 pointer to a time.Time value
// representing milliseconds since Epoch or time.Time{} if the pointer is nil.
func MillisecondsTimeValue(v *int64) time.Time {
if v != nil {
return time.Unix(0, (*v * 1000000))
}
return time.Time{}
}
// TimeUnixMilli returns a Unix timestamp in milliseconds from "January 1, 1970 UTC".
// The result is undefined if the Unix time cannot be represented by an int64,
// which includes calling TimeUnixMilli on a zero Time.
//
// This utility is useful for service APIs such as CloudWatch Logs which require
// their unix time values to be in milliseconds.
//
// See Go stdlib https://golang.org/pkg/time/#Time.UnixNano for more information.
func TimeUnixMilli(t time.Time) int64 {
return t.UnixNano() / int64(time.Millisecond/time.Nanosecond)
}
// TimeSlice converts a slice of time.Time values into a slice of
// time.Time pointers
func TimeSlice(src []time.Time) []*time.Time {
dst := make([]*time.Time, len(src))
for i := 0; i < len(src); i++ {
dst[i] = &(src[i])
}
return dst
}
// TimeValueSlice converts a slice of time.Time pointers into a slice of
// time.Time values
func TimeValueSlice(src []*time.Time) []time.Time {
dst := make([]time.Time, len(src))
for i := 0; i < len(src); i++ {
if src[i] != nil {
dst[i] = *(src[i])
}
}
return dst
}
// TimeMap converts a string map of time.Time values into a string
// map of time.Time pointers
func TimeMap(src map[string]time.Time) map[string]*time.Time {
dst := make(map[string]*time.Time)
for k, val := range src {
v := val
dst[k] = &v
}
return dst
}
// TimeValueMap converts a string map of time.Time pointers into a string
// map of time.Time values
func TimeValueMap(src map[string]*time.Time) map[string]time.Time {
dst := make(map[string]time.Time)
for k, val := range src {
if val != nil {
dst[k] = *val
}
}
return dst
}
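These helpers exist because AWS API shapes model optional fields as pointers. A brief sketch of the typical round-trips:

package main

import (
	"fmt"
	"time"

	"github.com/aws/aws-sdk-go/aws"
)

func main() {
	// Literal -> pointer, for filling API input structs.
	name := aws.String("example")
	count := aws.Int64(42)

	// Pointer -> value, with a zero-value fallback when nil; the safe way
	// to read optional output fields.
	fmt.Println(aws.StringValue(name), aws.Int64Value(count)) // example 42
	var missing *string
	fmt.Printf("%q\n", aws.StringValue(missing)) // "" (nil-safe)

	// The slice and time helpers follow the same pattern.
	ptrs := aws.StringSlice([]string{"a", "b"})
	fmt.Println(aws.StringValueSlice(ptrs))            // [a b]
	fmt.Println(aws.TimeValue(nil).Equal(time.Time{})) // true
}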

@ -0,0 +1,232 @@
package corehandlers
import (
"bytes"
"fmt"
"io/ioutil"
"net/http"
"net/url"
"regexp"
"strconv"
"time"
"github.com/aws/aws-sdk-go/aws"
"github.com/aws/aws-sdk-go/aws/awserr"
"github.com/aws/aws-sdk-go/aws/credentials"
"github.com/aws/aws-sdk-go/aws/request"
)
// lener is an interface for matching types which also have a Len method.
type lener interface {
Len() int
}
// BuildContentLengthHandler builds the content length of a request based on the body,
// or will use the HTTPRequest.Header's "Content-Length" if defined. If unable
// to determine the request body's length, the request's Error is set and the
// request fails.
//
// The Content-Length will only be added to the request if the length of the body
// is greater than 0. If the body is empty or the current `Content-Length`
// header is <= 0, the header will also be stripped.
var BuildContentLengthHandler = request.NamedHandler{Name: "core.BuildContentLengthHandler", Fn: func(r *request.Request) {
var length int64
if slength := r.HTTPRequest.Header.Get("Content-Length"); slength != "" {
length, _ = strconv.ParseInt(slength, 10, 64)
} else {
if r.Body != nil {
var err error
length, err = aws.SeekerLen(r.Body)
if err != nil {
r.Error = awserr.New(request.ErrCodeSerialization, "failed to get request body's length", err)
return
}
}
}
if length > 0 {
r.HTTPRequest.ContentLength = length
r.HTTPRequest.Header.Set("Content-Length", fmt.Sprintf("%d", length))
} else {
r.HTTPRequest.ContentLength = 0
r.HTTPRequest.Header.Del("Content-Length")
}
}}
var reStatusCode = regexp.MustCompile(`^(\d{3})`)
// ValidateReqSigHandler is a request handler to ensure that the request's
// signature doesn't expire before it is sent. This can happen when a request
// is built and signed significantly before it is sent, or when significant
// delays occur while retrying requests, causing the signature to expire.
var ValidateReqSigHandler = request.NamedHandler{
Name: "core.ValidateReqSigHandler",
Fn: func(r *request.Request) {
// Requests using anonymous credentials are never signed; skip the check
if r.Config.Credentials == credentials.AnonymousCredentials {
return
}
signedTime := r.Time
if !r.LastSignedAt.IsZero() {
signedTime = r.LastSignedAt
}
// 5 minutes to allow for some clock skew/delays in transmission.
// Would be improved with aws/aws-sdk-go#423
if signedTime.Add(5 * time.Minute).After(time.Now()) {
return
}
fmt.Println("request expired, resigning")
r.Sign()
},
}
// SendHandler is a request handler to send service request using HTTP client.
var SendHandler = request.NamedHandler{
Name: "core.SendHandler",
Fn: func(r *request.Request) {
sender := sendFollowRedirects
if r.DisableFollowRedirects {
sender = sendWithoutFollowRedirects
}
if request.NoBody == r.HTTPRequest.Body {
// Strip off the request body if the NoBody reader was used as a
// placeholder for a request body. This prevents the SDK from
// making requests with a request body when it would be invalid
// to do so.
//
// Use a shallow copy of the http.Request to ensure the transport's
// race condition on Body will not be triggered.
reqOrig, reqCopy := r.HTTPRequest, *r.HTTPRequest
reqCopy.Body = nil
r.HTTPRequest = &reqCopy
defer func() {
r.HTTPRequest = reqOrig
}()
}
var err error
r.HTTPResponse, err = sender(r)
if err != nil {
handleSendError(r, err)
}
},
}
func sendFollowRedirects(r *request.Request) (*http.Response, error) {
return r.Config.HTTPClient.Do(r.HTTPRequest)
}
func sendWithoutFollowRedirects(r *request.Request) (*http.Response, error) {
transport := r.Config.HTTPClient.Transport
if transport == nil {
transport = http.DefaultTransport
}
return transport.RoundTrip(r.HTTPRequest)
}
func handleSendError(r *request.Request, err error) {
// Prevent leaking if an HTTPResponse was returned. Clean up
// the body.
if r.HTTPResponse != nil {
r.HTTPResponse.Body.Close()
}
// Capture the case where url.Error is returned for error processing
// response. e.g. 301 without location header comes back as string
// error and r.HTTPResponse is nil. Other URL redirect errors will
// come back in a similar manner.
if e, ok := err.(*url.Error); ok && e.Err != nil {
if s := reStatusCode.FindStringSubmatch(e.Err.Error()); s != nil {
code, _ := strconv.ParseInt(s[1], 10, 64)
r.HTTPResponse = &http.Response{
StatusCode: int(code),
Status: http.StatusText(int(code)),
Body: ioutil.NopCloser(bytes.NewReader([]byte{})),
}
return
}
}
if r.HTTPResponse == nil {
// Add a dummy request response object to ensure the HTTPResponse
// value is consistent.
r.HTTPResponse = &http.Response{
StatusCode: int(0),
Status: http.StatusText(int(0)),
Body: ioutil.NopCloser(bytes.NewReader([]byte{})),
}
}
// Catch all request errors, and let the default retrier determine
// if the error is retryable.
r.Error = awserr.New(request.ErrCodeRequestError, "send request failed", err)
// Override the error with a context canceled error, if the context was canceled.
ctx := r.Context()
select {
case <-ctx.Done():
r.Error = awserr.New(request.CanceledErrorCode,
"request context canceled", ctx.Err())
r.Retryable = aws.Bool(false)
default:
}
}
// ValidateResponseHandler is a request handler to validate service response.
var ValidateResponseHandler = request.NamedHandler{Name: "core.ValidateResponseHandler", Fn: func(r *request.Request) {
if r.HTTPResponse.StatusCode == 0 || r.HTTPResponse.StatusCode >= 300 {
// this may be replaced by an UnmarshalError handler
r.Error = awserr.New("UnknownError", "unknown error", r.Error)
}
}}
// AfterRetryHandler performs final checks to determine if the request should
// be retried and how long to delay.
var AfterRetryHandler = request.NamedHandler{
Name: "core.AfterRetryHandler",
Fn: func(r *request.Request) {
// If one of the other handlers already set the retry state
// we don't want to override it based on the service's state
if r.Retryable == nil || aws.BoolValue(r.Config.EnforceShouldRetryCheck) {
r.Retryable = aws.Bool(r.ShouldRetry(r))
}
if r.WillRetry() {
r.RetryDelay = r.RetryRules(r)
if sleepFn := r.Config.SleepDelay; sleepFn != nil {
// Support SleepDelay for backwards compatibility and testing
sleepFn(r.RetryDelay)
} else if err := aws.SleepWithContext(r.Context(), r.RetryDelay); err != nil {
r.Error = awserr.New(request.CanceledErrorCode,
"request context canceled", err)
r.Retryable = aws.Bool(false)
return
}
// when the expired token exception occurs the credentials
// need to be expired locally so that the next request to
// get credentials will trigger a credentials refresh.
if r.IsErrorExpired() {
r.Config.Credentials.Expire()
}
r.RetryCount++
r.Error = nil
}
}}
// ValidateEndpointHandler is a request handler to validate a request had the
// appropriate Region and Endpoint set. Will set r.Error if the endpoint or
// region is not valid.
var ValidateEndpointHandler = request.NamedHandler{Name: "core.ValidateEndpointHandler", Fn: func(r *request.Request) {
if r.ClientInfo.SigningRegion == "" && aws.StringValue(r.Config.Region) == "" {
r.Error = aws.ErrMissingRegion
} else if r.ClientInfo.Endpoint == "" {
// No endpoint was provided by the user, nor was one derived by the
// SDK's endpoint resolver.
r.Error = aws.ErrMissingEndpoint
}
}}
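These handlers are spliced onto every request by the SDK's defaults. Application code can add its own NamedHandler alongside them; a sketch with a hypothetical handler name, relying on clients inheriting the session's handler lists:

package main

import (
	"fmt"

	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/aws/request"
	"github.com/aws/aws-sdk-go/aws/session"
)

func main() {
	sess := session.Must(session.NewSession(
		aws.NewConfig().WithRegion("us-east-1")))

	// PushBackNamed appends to the Send list, so this runs after
	// core.SendHandler; clients built from sess inherit the list.
	sess.Handlers.Send.PushBackNamed(request.NamedHandler{
		Name: "example.LogSend", // hypothetical handler name
		Fn: func(r *request.Request) {
			fmt.Println("sent:", r.ClientInfo.ServiceName, r.Operation.Name)
		},
	})
	_ = sess
}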

@ -0,0 +1,17 @@
package corehandlers
import "github.com/aws/aws-sdk-go/aws/request"
// ValidateParametersHandler is a request handler to validate the input parameters.
// Validating parameters only has meaning if done prior to the request being sent.
var ValidateParametersHandler = request.NamedHandler{Name: "core.ValidateParametersHandler", Fn: func(r *request.Request) {
if !r.ParamsFilled() {
return
}
if v, ok := r.Params.(request.Validator); ok {
if err := v.Validate(); err != nil {
r.Error = err
}
}
}}
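The handler keys off the request.Validator interface: a single Validate() error method, which generated API input shapes implement. A sketch of a hand-rolled params type satisfying the same contract (the type name is illustrative):

package main

import (
	"errors"
	"fmt"
)

// exampleInput is a hypothetical params type; Validate() error is the whole
// request.Validator contract that ValidateParametersHandler checks for.
type exampleInput struct {
	Name *string
}

// Validate reports missing required fields, as generated input shapes do.
func (i *exampleInput) Validate() error {
	if i.Name == nil {
		return errors.New("missing required field: Name")
	}
	return nil
}

func main() {
	in := &exampleInput{}
	fmt.Println(in.Validate()) // missing required field: Name
}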

@ -0,0 +1,37 @@
package corehandlers
import (
"os"
"runtime"
"github.com/aws/aws-sdk-go/aws"
"github.com/aws/aws-sdk-go/aws/request"
)
// SDKVersionUserAgentHandler is a request handler for adding the SDK Version
// to the user agent.
var SDKVersionUserAgentHandler = request.NamedHandler{
Name: "core.SDKVersionUserAgentHandler",
Fn: request.MakeAddToUserAgentHandler(aws.SDKName, aws.SDKVersion,
runtime.Version(), runtime.GOOS, runtime.GOARCH),
}
const execEnvVar = `AWS_EXECUTION_ENV`
const execEnvUAKey = `exec-env`
// AddHostExecEnvUserAgentHander is a request handler appending the SDK's
// execution environment to the user agent.
//
// If the environment variable AWS_EXECUTION_ENV is set, its value will be
// appended to the user agent string.
var AddHostExecEnvUserAgentHander = request.NamedHandler{
Name: "core.AddHostExecEnvUserAgentHander",
Fn: func(r *request.Request) {
v := os.Getenv(execEnvVar)
if len(v) == 0 {
return
}
request.AddToUserAgent(r, execEnvUAKey+"/"+v)
},
}
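The exec-env handler only fires when AWS_EXECUTION_ENV is present (Lambda sets it, for instance). A sketch of the same composition using the exported AddToUserAgent helper directly on a bare request wrapper:

package main

import (
	"fmt"
	"net/http"
	"os"

	"github.com/aws/aws-sdk-go/aws/request"
)

func main() {
	os.Setenv("AWS_EXECUTION_ENV", "AWS_Lambda_go1.x") // illustrative value

	// A bare request.Request around an http.Request gives the helper a
	// User-Agent header to append to.
	httpReq, _ := http.NewRequest("GET", "https://example.com", nil)
	r := &request.Request{HTTPRequest: httpReq}

	request.AddToUserAgent(r, "exec-env/"+os.Getenv("AWS_EXECUTION_ENV"))
	fmt.Println(httpReq.Header.Get("User-Agent")) // exec-env/AWS_Lambda_go1.x
}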

@ -0,0 +1,100 @@
package credentials
import (
"github.com/aws/aws-sdk-go/aws/awserr"
)
var (
// ErrNoValidProvidersFoundInChain is returned when there are no valid
// providers in the ChainProvider.
//
// This has been deprecated. For verbose error messaging set
// aws.Config.CredentialsChainVerboseErrors to true.
ErrNoValidProvidersFoundInChain = awserr.New("NoCredentialProviders",
`no valid providers in chain. Deprecated.
For verbose messaging see aws.Config.CredentialsChainVerboseErrors`,
nil)
)
// A ChainProvider will search for a provider which returns credentials
// and cache that provider until Retrieve is called again.
//
// The ChainProvider provides a way of chaining multiple providers together
// which will pick the first available using priority order of the Providers
// in the list.
//
// If none of the Providers retrieve valid credentials Value, ChainProvider's
// Retrieve() will return the error ErrNoValidProvidersFoundInChain.
//
// If a Provider is found which returns valid credentials Value ChainProvider
// will cache that Provider for all calls to IsExpired(), until Retrieve is
// called again.
//
// Example of ChainProvider to be used with an EnvProvider and EC2RoleProvider.
// In this example EnvProvider will first check if any credentials are available
// via the environment variables. If there are none ChainProvider will check
// the next Provider in the list, EC2RoleProvider in this case. If EC2RoleProvider
// does not return any credentials ChainProvider will return the error
// ErrNoValidProvidersFoundInChain
//
// creds := credentials.NewChainCredentials(
// []credentials.Provider{
// &credentials.EnvProvider{},
// &ec2rolecreds.EC2RoleProvider{
// Client: ec2metadata.New(sess),
// },
// })
//
// // Usage of ChainCredentials with aws.Config
// svc := ec2.New(session.Must(session.NewSession(&aws.Config{
// Credentials: creds,
// })))
//
type ChainProvider struct {
Providers []Provider
curr Provider
VerboseErrors bool
}
// NewChainCredentials returns a pointer to a new Credentials object
// wrapping a chain of providers.
func NewChainCredentials(providers []Provider) *Credentials {
return NewCredentials(&ChainProvider{
Providers: append([]Provider{}, providers...),
})
}
// Retrieve returns the credentials value, or an error if no provider
// returned without error.
//
// If a provider is found it will be cached and any calls to IsExpired()
// will return the expired state of the cached provider.
func (c *ChainProvider) Retrieve() (Value, error) {
var errs []error
for _, p := range c.Providers {
creds, err := p.Retrieve()
if err == nil {
c.curr = p
return creds, nil
}
errs = append(errs, err)
}
c.curr = nil
var err error
err = ErrNoValidProvidersFoundInChain
if c.VerboseErrors {
err = awserr.NewBatchError("NoCredentialProviders", "no valid providers in chain", errs)
}
return Value{}, err
}
// IsExpired will return the expired state of the currently cached provider
// if there is one. If there is no current provider, true will be returned.
func (c *ChainProvider) IsExpired() bool {
if c.curr != nil {
return c.curr.IsExpired()
}
return true
}
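A hedged sketch of building such a chain by hand, with VerboseErrors enabled so each provider's individual failure is reported instead of the generic ErrNoValidProvidersFoundInChain:

package main

import (
	"log"

	"github.com/aws/aws-sdk-go/aws/credentials"
)

func main() {
	creds := credentials.NewCredentials(&credentials.ChainProvider{
		VerboseErrors: true,
		Providers: []credentials.Provider{
			&credentials.EnvProvider{},
			&credentials.SharedCredentialsProvider{}, // default file and profile
		},
	})
	if v, err := creds.Get(); err != nil {
		log.Println("no credentials resolved:", err)
	} else {
		log.Println("resolved via", v.ProviderName)
	}
}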

@ -0,0 +1,23 @@
//go:build !go1.7
// +build !go1.7
package credentials
import (
"github.com/aws/aws-sdk-go/internal/context"
)
// backgroundContext returns a context that will never be canceled, has no
// values, and no deadline. This context is used by the SDK to provide
// backwards compatibility with non-context API operations and functionality.
//
// Go 1.6 and before:
// This context function is equivalent to context.Background in the Go stdlib.
//
// Go 1.7 and later:
// The context returned will be the value returned by context.Background()
//
// See https://golang.org/pkg/context for more information on Contexts.
func backgroundContext() Context {
return context.BackgroundCtx
}

@ -0,0 +1,21 @@
//go:build go1.7
// +build go1.7
package credentials
import "context"
// backgroundContext returns a context that will never be canceled, has no
// values, and no deadline. This context is used by the SDK to provide
// backwards compatibility with non-context API operations and functionality.
//
// Go 1.6 and before:
// This context function is equivalent to context.Background in the Go stdlib.
//
// Go 1.7 and later:
// The context returned will be the value returned by context.Background()
//
// See https://golang.org/pkg/context for more information on Contexts.
func backgroundContext() Context {
return context.Background()
}

@ -0,0 +1,40 @@
//go:build !go1.9
// +build !go1.9
package credentials
import "time"
// Context is a copy of the Go v1.7 stdlib's context.Context interface.
// It is represented as an SDK interface to enable you to use the "WithContext"
// API methods with Go v1.6 and a Context type such as golang.org/x/net/context.
//
// This type, aws.Context, and context.Context are equivalent.
//
// See https://golang.org/pkg/context on how to use contexts.
type Context interface {
// Deadline returns the time when work done on behalf of this context
// should be canceled. Deadline returns ok==false when no deadline is
// set. Successive calls to Deadline return the same results.
Deadline() (deadline time.Time, ok bool)
// Done returns a channel that's closed when work done on behalf of this
// context should be canceled. Done may return nil if this context can
// never be canceled. Successive calls to Done return the same value.
Done() <-chan struct{}
// Err returns a non-nil error value after Done is closed. Err returns
// Canceled if the context was canceled or DeadlineExceeded if the
// context's deadline passed. No other values for Err are defined.
// After Done is closed, successive calls to Err return the same value.
Err() error
// Value returns the value associated with this context for key, or nil
// if no value is associated with key. Successive calls to Value with
// the same key return the same result.
//
// Use context values only for request-scoped data that transits
// processes and API boundaries, not for passing optional parameters to
// functions.
Value(key interface{}) interface{}
}

@ -0,0 +1,14 @@
//go:build go1.9
// +build go1.9
package credentials
import "context"
// Context is an alias of the Go stdlib's context.Context interface.
// It can be used within the SDK's API operation "WithContext" methods.
//
// This type, aws.Context, and context.Context are equivalent.
//
// See https://golang.org/pkg/context on how to use contexts.
type Context = context.Context

@ -0,0 +1,383 @@
// Package credentials provides credential retrieval and management
//
// The Credentials type is the primary method of getting access to and managing
// credential Values. Using dependency injection, retrieval of the credential
// values is handled by an object which satisfies the Provider interface.
//
// By default Credentials.Get() will cache the successful result of a
// Provider's Retrieve() until Provider.IsExpired() returns true, at which
// point Credentials will call the Provider's Retrieve() to get a new credentials Value.
//
// The Provider is responsible for determining when a credentials Value has expired.
// It is also important to note that Credentials will always call Retrieve the
// first time Credentials.Get() is called.
//
// Example of using the environment variable credentials.
//
// creds := credentials.NewEnvCredentials()
//
// // Retrieve the credentials value
// credValue, err := creds.Get()
// if err != nil {
// // handle error
// }
//
// Example of forcing credentials to expire and be refreshed on the next Get().
// This may be helpful to proactively expire credentials and refresh them sooner
// than they would naturally expire on their own.
//
// creds := credentials.NewCredentials(&ec2rolecreds.EC2RoleProvider{})
// creds.Expire()
// credsValue, err := creds.Get()
// // New credentials will be retrieved instead of from cache.
//
//
// Custom Provider
//
// Each Provider built into this package also provides a helper method to generate
// a Credentials pointer setup with the provider. To use a custom Provider just
// create a type which satisfies the Provider interface and pass it to the
// NewCredentials method.
//
// type MyProvider struct{}
// func (m *MyProvider) Retrieve() (Value, error) {...}
// func (m *MyProvider) IsExpired() bool {...}
//
// creds := credentials.NewCredentials(&MyProvider{})
// credValue, err := creds.Get()
//
package credentials
import (
"fmt"
"sync"
"time"
"github.com/aws/aws-sdk-go/aws/awserr"
"github.com/aws/aws-sdk-go/internal/sync/singleflight"
)
// AnonymousCredentials is an empty Credential object that can be used as
// dummy placeholder credentials for requests that do not need to be signed.
//
// This Credentials can be used to configure a service to not sign requests
// when making service API calls. For example, when accessing public
// s3 buckets.
//
// svc := s3.New(session.Must(session.NewSession(&aws.Config{
// Credentials: credentials.AnonymousCredentials,
// })))
// // Access public S3 buckets.
var AnonymousCredentials = NewStaticCredentials("", "", "")
// A Value is the AWS credentials value for individual credential fields.
type Value struct {
// AWS Access key ID
AccessKeyID string
// AWS Secret Access Key
SecretAccessKey string
// AWS Session Token
SessionToken string
// Provider used to get credentials
ProviderName string
}
// HasKeys returns true if the credentials Value has both the AccessKeyID and
// SecretAccessKey values set.
func (v Value) HasKeys() bool {
return len(v.AccessKeyID) != 0 && len(v.SecretAccessKey) != 0
}
// A Provider is the interface for any component which will provide credentials
// Value. A provider is required to manage its own Expired state, and what
// being expired means.
//
// The Provider should not need to implement its own mutexes, because
// that will be managed by Credentials.
type Provider interface {
// Retrieve returns nil if it successfully retrieved the value.
// An error is returned if the value was not obtainable, or empty.
Retrieve() (Value, error)
// IsExpired returns if the credentials are no longer valid, and need
// to be retrieved.
IsExpired() bool
}
// ProviderWithContext is a Provider that can retrieve credentials with a Context
type ProviderWithContext interface {
Provider
RetrieveWithContext(Context) (Value, error)
}
// An Expirer is an interface that Providers can implement to expose the expiration
// time, if known. If the Provider cannot accurately provide this info,
// it should not implement this interface.
type Expirer interface {
// The time at which the credentials are no longer valid
ExpiresAt() time.Time
}
// An ErrorProvider is a stub credentials provider that always returns an
// error. It is used by the SDK when constructing a known provider is not
// possible due to an error.
type ErrorProvider struct {
// The error to be returned from Retrieve
Err error
// The provider name to set on the Value returned by Retrieve
ProviderName string
}
// Retrieve will always return the error that the ErrorProvider was created with.
func (p ErrorProvider) Retrieve() (Value, error) {
return Value{ProviderName: p.ProviderName}, p.Err
}
// IsExpired will always return not expired.
func (p ErrorProvider) IsExpired() bool {
return false
}
// An Expiry provides shared expiration logic to be used by credentials
// providers to implement expiry functionality.
//
// The best method to use this struct is as an anonymous field within the
// provider's struct.
//
// Example:
// type EC2RoleProvider struct {
// Expiry
// ...
// }
type Expiry struct {
// The date/time at which the credentials expire
expiration time.Time
// If set will be used by IsExpired to determine the current time.
// Defaults to time.Now if CurrentTime is not set. Available for testing
// to be able to mock out the current time.
CurrentTime func() time.Time
}
// SetExpiration sets the expiration IsExpired will check when called.
//
// If window is greater than 0 the expiration time will be reduced by the
// window value.
//
// Using a window is helpful to trigger credentials to expire sooner than
// the expiration time given to ensure no requests are made with expired
// tokens.
func (e *Expiry) SetExpiration(expiration time.Time, window time.Duration) {
// Passed in expirations should have the monotonic clock values stripped.
// This ensures time comparisons will be based on wall-time.
e.expiration = expiration.Round(0)
if window > 0 {
e.expiration = e.expiration.Add(-window)
}
}
// IsExpired returns if the credentials are expired.
func (e *Expiry) IsExpired() bool {
curTime := e.CurrentTime
if curTime == nil {
curTime = time.Now
}
return e.expiration.Before(curTime())
}
// ExpiresAt returns the expiration time of the credential
func (e *Expiry) ExpiresAt() time.Time {
return e.expiration
}
// A Credentials provides concurrency safe retrieval of AWS credentials Value.
// Credentials will cache the credentials value until they expire. Once the value
// expires the next Get will attempt to retrieve valid credentials.
//
// Credentials is safe to use across multiple goroutines and will manage the
// synchronous state so the Providers do not need to implement their own
// synchronization.
//
// The first Credentials.Get() will always call Provider.Retrieve() to get the
// first instance of the credentials Value. All calls to Get() after that
// will return the cached credentials Value until IsExpired() returns true.
type Credentials struct {
sf singleflight.Group
m sync.RWMutex
creds Value
provider Provider
}
// NewCredentials returns a pointer to a new Credentials with the provider set.
func NewCredentials(provider Provider) *Credentials {
c := &Credentials{
provider: provider,
}
return c
}
// GetWithContext returns the credentials value, or error if the credentials
// Value failed to be retrieved. Will return early if the passed in context is
// canceled.
//
// Will return the cached credentials Value if it has not expired. If the
// credentials Value has expired the Provider's Retrieve() will be called
// to refresh the credentials.
//
// If Credentials.Expire() was called the credentials Value will be force
// expired, and the next call to Get() will cause them to be refreshed.
//
// Passed in Context is equivalent to aws.Context, and context.Context.
func (c *Credentials) GetWithContext(ctx Context) (Value, error) {
// Check if credentials are cached, and not expired.
select {
case curCreds, ok := <-c.asyncIsExpired():
// ok will only be true if the credentials were not expired. ok will be
// false and have no value if the credentials are expired.
if ok {
return curCreds, nil
}
case <-ctx.Done():
return Value{}, awserr.New("RequestCanceled",
"request context canceled", ctx.Err())
}
// Cannot pass context down to the actual retrieve, because the first
// context would cancel the whole group when there is no direct
// association of items in the group.
resCh := c.sf.DoChan("", func() (interface{}, error) {
return c.singleRetrieve(&suppressedContext{ctx})
})
select {
case res := <-resCh:
return res.Val.(Value), res.Err
case <-ctx.Done():
return Value{}, awserr.New("RequestCanceled",
"request context canceled", ctx.Err())
}
}
func (c *Credentials) singleRetrieve(ctx Context) (interface{}, error) {
c.m.Lock()
defer c.m.Unlock()
if curCreds := c.creds; !c.isExpiredLocked(curCreds) {
return curCreds, nil
}
var creds Value
var err error
if p, ok := c.provider.(ProviderWithContext); ok {
creds, err = p.RetrieveWithContext(ctx)
} else {
creds, err = c.provider.Retrieve()
}
if err == nil {
c.creds = creds
}
return creds, err
}
// Get returns the credentials value, or error if the credentials Value failed
// to be retrieved.
//
// Will return the cached credentials Value if it has not expired. If the
// credentials Value has expired the Provider's Retrieve() will be called
// to refresh the credentials.
//
// If Credentials.Expire() was called the credentials Value will be force
// expired, and the next call to Get() will cause them to be refreshed.
func (c *Credentials) Get() (Value, error) {
return c.GetWithContext(backgroundContext())
}
// Expire expires the credentials and forces them to be retrieved on the
// next call to Get().
//
// This will override the Provider's expired state, and force Credentials
// to call the Provider's Retrieve().
func (c *Credentials) Expire() {
c.m.Lock()
defer c.m.Unlock()
c.creds = Value{}
}
// IsExpired returns if the credentials are no longer valid, and need
// to be retrieved.
//
// If the Credentials were forced to be expired with Expire() this will
// reflect that override.
func (c *Credentials) IsExpired() bool {
c.m.RLock()
defer c.m.RUnlock()
return c.isExpiredLocked(c.creds)
}
// asyncIsExpired returns a channel of the cached credentials Value. If the
// credentials are not expired, the current Value is sent on the channel
// before it is closed; if they are expired, the channel is closed without
// a value being sent.
func (c *Credentials) asyncIsExpired() <-chan Value {
ch := make(chan Value, 1)
go func() {
c.m.RLock()
defer c.m.RUnlock()
if curCreds := c.creds; !c.isExpiredLocked(curCreds) {
ch <- curCreds
}
close(ch)
}()
return ch
}
// isExpiredLocked helper method wrapping the definition of expired credentials.
func (c *Credentials) isExpiredLocked(creds interface{}) bool {
return creds == nil || creds.(Value) == Value{} || c.provider.IsExpired()
}
// ExpiresAt provides access to the functionality of the Expirer interface of
// the underlying Provider, if it supports that interface. Otherwise, it returns
// an error.
func (c *Credentials) ExpiresAt() (time.Time, error) {
c.m.RLock()
defer c.m.RUnlock()
expirer, ok := c.provider.(Expirer)
if !ok {
return time.Time{}, awserr.New("ProviderNotExpirer",
fmt.Sprintf("provider %s does not support ExpiresAt()",
c.creds.ProviderName),
nil)
}
if c.creds == (Value{}) {
// set expiration time to the distant past
return time.Time{}, nil
}
return expirer.ExpiresAt(), nil
}
type suppressedContext struct {
Context
}
func (s *suppressedContext) Deadline() (deadline time.Time, ok bool) {
return time.Time{}, false
}
func (s *suppressedContext) Done() <-chan struct{} {
return nil
}
func (s *suppressedContext) Err() error {
return nil
}
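A minimal sketch of a custom Provider that embeds Expiry, so IsExpired and ExpiresAt come for free; the credential values and the one-hour lifetime are hypothetical placeholders.

package main

import (
	"fmt"
	"time"

	"github.com/aws/aws-sdk-go/aws/credentials"
)

// rotatingProvider is a hypothetical provider for short-lived keys.
type rotatingProvider struct {
	credentials.Expiry
}

func (p *rotatingProvider) Retrieve() (credentials.Value, error) {
	// In a real provider the keys would be fetched from an external source.
	v := credentials.Value{
		AccessKeyID:     "AKID",
		SecretAccessKey: "SECRET",
		SessionToken:    "TOKEN",
		ProviderName:    "rotatingProvider",
	}
	// Expire 5 minutes early so no request is signed with a stale token.
	p.SetExpiration(time.Now().Add(1*time.Hour), 5*time.Minute)
	return v, nil
}

func main() {
	creds := credentials.NewCredentials(&rotatingProvider{})
	v, err := creds.Get() // the first Get always calls Retrieve
	fmt.Println(v.ProviderName, err)
}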

@ -0,0 +1,188 @@
package ec2rolecreds
import (
"bufio"
"encoding/json"
"fmt"
"strings"
"time"
"github.com/aws/aws-sdk-go/aws"
"github.com/aws/aws-sdk-go/aws/awserr"
"github.com/aws/aws-sdk-go/aws/client"
"github.com/aws/aws-sdk-go/aws/credentials"
"github.com/aws/aws-sdk-go/aws/ec2metadata"
"github.com/aws/aws-sdk-go/aws/request"
"github.com/aws/aws-sdk-go/internal/sdkuri"
)
// ProviderName provides a name of EC2Role provider
const ProviderName = "EC2RoleProvider"
// An EC2RoleProvider retrieves credentials from the EC2 service, and keeps
// track of whether those credentials are expired.
//
// Example of how to configure the EC2RoleProvider with a custom http.Client,
// Endpoint, or ExpiryWindow:
//
// p := &ec2rolecreds.EC2RoleProvider{
// // Pass in a custom timeout to be used when requesting
// // IAM EC2 Role credentials.
// Client: ec2metadata.New(sess, aws.Config{
// HTTPClient: &http.Client{Timeout: 10 * time.Second},
// }),
//
// // Do not use early expiry of credentials. If a non-zero value is
// // specified the credentials will expire early
// ExpiryWindow: 0,
// }
type EC2RoleProvider struct {
credentials.Expiry
// Required EC2Metadata client to use when connecting to EC2 metadata service.
Client *ec2metadata.EC2Metadata
// ExpiryWindow will allow the credentials to trigger refreshing prior to
// the credentials actually expiring. This is beneficial so race conditions
// with expiring credentials do not cause requests to fail unexpectedly
// due to ExpiredTokenException exceptions.
//
// So an ExpiryWindow of 10s would cause calls to IsExpired() to return true
// 10 seconds before the credentials are actually expired.
//
// If ExpiryWindow is 0 or less it will be ignored.
ExpiryWindow time.Duration
}
// NewCredentials returns a pointer to a new Credentials object wrapping
// the EC2RoleProvider. Takes a ConfigProvider to create an EC2Metadata client.
// The ConfigProvider is satisfied by the session.Session type.
func NewCredentials(c client.ConfigProvider, options ...func(*EC2RoleProvider)) *credentials.Credentials {
p := &EC2RoleProvider{
Client: ec2metadata.New(c),
}
for _, option := range options {
option(p)
}
return credentials.NewCredentials(p)
}
// NewCredentialsWithClient returns a pointer to a new Credentials object wrapping
// the EC2RoleProvider. Takes an EC2Metadata client to use when connecting to the
// EC2 metadata service.
func NewCredentialsWithClient(client *ec2metadata.EC2Metadata, options ...func(*EC2RoleProvider)) *credentials.Credentials {
p := &EC2RoleProvider{
Client: client,
}
for _, option := range options {
option(p)
}
return credentials.NewCredentials(p)
}
// Retrieve retrieves credentials from the EC2 service.
// An error will be returned if the request fails, or if the desired
// credentials cannot be extracted.
func (m *EC2RoleProvider) Retrieve() (credentials.Value, error) {
return m.RetrieveWithContext(aws.BackgroundContext())
}
// RetrieveWithContext retrieves credentials from the EC2 service.
// An error will be returned if the request fails, or if the desired
// credentials cannot be extracted.
func (m *EC2RoleProvider) RetrieveWithContext(ctx credentials.Context) (credentials.Value, error) {
credsList, err := requestCredList(ctx, m.Client)
if err != nil {
return credentials.Value{ProviderName: ProviderName}, err
}
if len(credsList) == 0 {
return credentials.Value{ProviderName: ProviderName}, awserr.New("EmptyEC2RoleList", "empty EC2 Role list", nil)
}
credsName := credsList[0]
roleCreds, err := requestCred(ctx, m.Client, credsName)
if err != nil {
return credentials.Value{ProviderName: ProviderName}, err
}
m.SetExpiration(roleCreds.Expiration, m.ExpiryWindow)
return credentials.Value{
AccessKeyID: roleCreds.AccessKeyID,
SecretAccessKey: roleCreds.SecretAccessKey,
SessionToken: roleCreds.Token,
ProviderName: ProviderName,
}, nil
}
// An ec2RoleCredRespBody provides the shape for unmarshaling credential
// request responses.
type ec2RoleCredRespBody struct {
// Success State
Expiration time.Time
AccessKeyID string
SecretAccessKey string
Token string
// Error state
Code string
Message string
}
const iamSecurityCredsPath = "iam/security-credentials/"
// requestCredList requests a list of credentials from the EC2 service.
// If there are no credentials, or there is an error making or receiving
// the request, an error will be returned.
func requestCredList(ctx aws.Context, client *ec2metadata.EC2Metadata) ([]string, error) {
resp, err := client.GetMetadataWithContext(ctx, iamSecurityCredsPath)
if err != nil {
return nil, awserr.New("EC2RoleRequestError", "no EC2 instance role found", err)
}
credsList := []string{}
s := bufio.NewScanner(strings.NewReader(resp))
for s.Scan() {
credsList = append(credsList, s.Text())
}
if err := s.Err(); err != nil {
return nil, awserr.New(request.ErrCodeSerialization,
"failed to read EC2 instance role from metadata service", err)
}
return credsList, nil
}
// requestCred requests the credentials for a specific role from the EC2 service.
//
// If the credentials cannot be found, or there is an error reading the
// response, an error will be returned.
func requestCred(ctx aws.Context, client *ec2metadata.EC2Metadata, credsName string) (ec2RoleCredRespBody, error) {
resp, err := client.GetMetadataWithContext(ctx, sdkuri.PathJoin(iamSecurityCredsPath, credsName))
if err != nil {
return ec2RoleCredRespBody{},
awserr.New("EC2RoleRequestError",
fmt.Sprintf("failed to get %s EC2 instance role credentials", credsName),
err)
}
respCreds := ec2RoleCredRespBody{}
if err := json.NewDecoder(strings.NewReader(resp)).Decode(&respCreds); err != nil {
return ec2RoleCredRespBody{},
awserr.New(request.ErrCodeSerialization,
fmt.Sprintf("failed to decode %s EC2 instance role credentials", credsName),
err)
}
if respCreds.Code != "Success" {
// If an error code was returned something failed requesting the role.
return ec2RoleCredRespBody{}, awserr.New(respCreds.Code, respCreds.Message, nil)
}
return respCreds, nil
}
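A hedged sketch of the typical wiring, assuming the code runs on an EC2 instance with an attached role; s3 here is just an example service client, and the 5-minute ExpiryWindow is an illustrative value.

package main

import (
	"time"

	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/aws/credentials/ec2rolecreds"
	"github.com/aws/aws-sdk-go/aws/session"
	"github.com/aws/aws-sdk-go/service/s3"
)

func main() {
	sess := session.Must(session.NewSession())
	// Refresh instance-role credentials 5 minutes before they expire.
	creds := ec2rolecreds.NewCredentials(sess, func(p *ec2rolecreds.EC2RoleProvider) {
		p.ExpiryWindow = 5 * time.Minute
	})
	svc := s3.New(sess, &aws.Config{Credentials: creds})
	_ = svc // use svc to make API calls signed with the role credentials
}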

@ -0,0 +1,210 @@
// Package endpointcreds provides support for retrieving credentials from an
// arbitrary HTTP endpoint.
//
// The credentials endpoint Provider can receive both static and refreshable
// credentials that will expire. Credentials are static when an "Expiration"
// value is not provided in the endpoint's response.
//
// Static credentials will never expire once they have been retrieved. The format
// of the static credentials response:
// {
// "AccessKeyId" : "MUA...",
// "SecretAccessKey" : "/7PC5om....",
// }
//
// Refreshable credentials will expire within the "ExpiryWindow" of the Expiration
// value in the response. The format of the refreshable credentials response:
// {
// "AccessKeyId" : "MUA...",
// "SecretAccessKey" : "/7PC5om....",
// "Token" : "AQoDY....=",
// "Expiration" : "2016-02-25T06:03:31Z"
// }
//
// Errors should be returned in the following format and only returned with 400
// or 500 HTTP status codes.
// {
// "code": "ErrorCode",
// "message": "Helpful error message."
// }
package endpointcreds
import (
"encoding/json"
"time"
"github.com/aws/aws-sdk-go/aws"
"github.com/aws/aws-sdk-go/aws/awserr"
"github.com/aws/aws-sdk-go/aws/client"
"github.com/aws/aws-sdk-go/aws/client/metadata"
"github.com/aws/aws-sdk-go/aws/credentials"
"github.com/aws/aws-sdk-go/aws/request"
"github.com/aws/aws-sdk-go/private/protocol/json/jsonutil"
)
// ProviderName is the name of the credentials provider.
const ProviderName = `CredentialsEndpointProvider`
// Provider satisfies the credentials.Provider interface, and is a client to
// retrieve credentials from an arbitrary endpoint.
type Provider struct {
staticCreds bool
credentials.Expiry
// Requires an AWS Client to make HTTP requests to the endpoint with. The
// endpoint the request will be made to is provided by the aws.Config's
// Endpoint value.
Client *client.Client
// ExpiryWindow will allow the credentials to trigger refreshing prior to
// the credentials actually expiring. This is beneficial so race conditions
// with expiring credentials do not cause requests to fail unexpectedly
// due to ExpiredTokenException exceptions.
//
// So an ExpiryWindow of 10s would cause calls to IsExpired() to return true
// 10 seconds before the credentials are actually expired.
//
// If ExpiryWindow is 0 or less it will be ignored.
ExpiryWindow time.Duration
// Optional authorization token value; if set, it will be used as the value of
// the Authorization header of the endpoint credential request.
AuthorizationToken string
}
// NewProviderClient returns a credentials Provider for retrieving AWS credentials
// from an arbitrary endpoint.
func NewProviderClient(cfg aws.Config, handlers request.Handlers, endpoint string, options ...func(*Provider)) credentials.Provider {
p := &Provider{
Client: client.New(
cfg,
metadata.ClientInfo{
ServiceName: "CredentialsEndpoint",
Endpoint: endpoint,
},
handlers,
),
}
p.Client.Handlers.Unmarshal.PushBack(unmarshalHandler)
p.Client.Handlers.UnmarshalError.PushBack(unmarshalError)
p.Client.Handlers.Validate.Clear()
p.Client.Handlers.Validate.PushBack(validateEndpointHandler)
for _, option := range options {
option(p)
}
return p
}
// NewCredentialsClient returns a pointer to a new Credentials object
// wrapping the endpoint credentials Provider.
func NewCredentialsClient(cfg aws.Config, handlers request.Handlers, endpoint string, options ...func(*Provider)) *credentials.Credentials {
return credentials.NewCredentials(NewProviderClient(cfg, handlers, endpoint, options...))
}
// IsExpired returns true if the credentials retrieved are expired, or not yet
// retrieved.
func (p *Provider) IsExpired() bool {
if p.staticCreds {
return false
}
return p.Expiry.IsExpired()
}
// Retrieve will attempt to request the credentials from the endpoint the Provider
// was configured for. An error will be returned if the retrieval fails.
func (p *Provider) Retrieve() (credentials.Value, error) {
return p.RetrieveWithContext(aws.BackgroundContext())
}
// RetrieveWithContext will attempt to request the credentials from the endpoint the
// Provider was configured for. An error will be returned if the retrieval fails.
func (p *Provider) RetrieveWithContext(ctx credentials.Context) (credentials.Value, error) {
resp, err := p.getCredentials(ctx)
if err != nil {
return credentials.Value{ProviderName: ProviderName},
awserr.New("CredentialsEndpointError", "failed to load credentials", err)
}
if resp.Expiration != nil {
p.SetExpiration(*resp.Expiration, p.ExpiryWindow)
} else {
p.staticCreds = true
}
return credentials.Value{
AccessKeyID: resp.AccessKeyID,
SecretAccessKey: resp.SecretAccessKey,
SessionToken: resp.Token,
ProviderName: ProviderName,
}, nil
}
type getCredentialsOutput struct {
Expiration *time.Time
AccessKeyID string
SecretAccessKey string
Token string
}
type errorOutput struct {
Code string `json:"code"`
Message string `json:"message"`
}
func (p *Provider) getCredentials(ctx aws.Context) (*getCredentialsOutput, error) {
op := &request.Operation{
Name: "GetCredentials",
HTTPMethod: "GET",
}
out := &getCredentialsOutput{}
req := p.Client.NewRequest(op, nil, out)
req.SetContext(ctx)
req.HTTPRequest.Header.Set("Accept", "application/json")
if authToken := p.AuthorizationToken; len(authToken) != 0 {
req.HTTPRequest.Header.Set("Authorization", authToken)
}
return out, req.Send()
}
func validateEndpointHandler(r *request.Request) {
if len(r.ClientInfo.Endpoint) == 0 {
r.Error = aws.ErrMissingEndpoint
}
}
func unmarshalHandler(r *request.Request) {
defer r.HTTPResponse.Body.Close()
out := r.Data.(*getCredentialsOutput)
if err := json.NewDecoder(r.HTTPResponse.Body).Decode(&out); err != nil {
r.Error = awserr.New(request.ErrCodeSerialization,
"failed to decode endpoint credentials",
err,
)
}
}
func unmarshalError(r *request.Request) {
defer r.HTTPResponse.Body.Close()
var errOut errorOutput
err := jsonutil.UnmarshalJSONError(&errOut, r.HTTPResponse.Body)
if err != nil {
r.Error = awserr.NewRequestFailure(
awserr.New(request.ErrCodeSerialization,
"failed to decode error message", err),
r.HTTPResponse.StatusCode,
r.RequestID,
)
return
}
// Response body format is not consistent between metadata endpoints.
// Grab the error message as a string and include that as the source error
r.Error = awserr.New(errOut.Code, errOut.Message, nil)
}
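A hedged sketch of constructing the provider against a local credentials endpoint; the URL and token values are illustrative, and defaults.Config/defaults.Handlers supply the client plumbing under that assumption.

package main

import (
	"time"

	"github.com/aws/aws-sdk-go/aws/credentials/endpointcreds"
	"github.com/aws/aws-sdk-go/aws/defaults"
)

func main() {
	creds := endpointcreds.NewCredentialsClient(
		*defaults.Config(), defaults.Handlers(),
		"http://127.0.0.1:8080/creds", // hypothetical endpoint
		func(p *endpointcreds.Provider) {
			p.AuthorizationToken = "example-token" // sent as the Authorization header
			p.ExpiryWindow = 30 * time.Second
		})
	// Get will call the endpoint and cache the result until it expires.
	_, _ = creds.Get()
}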

@ -0,0 +1,74 @@
package credentials
import (
"os"
"github.com/aws/aws-sdk-go/aws/awserr"
)
// EnvProviderName provides a name of Env provider
const EnvProviderName = "EnvProvider"
var (
// ErrAccessKeyIDNotFound is returned when the AWS Access Key ID can't be
// found in the process's environment.
ErrAccessKeyIDNotFound = awserr.New("EnvAccessKeyNotFound", "AWS_ACCESS_KEY_ID or AWS_ACCESS_KEY not found in environment", nil)
// ErrSecretAccessKeyNotFound is returned when the AWS Secret Access Key
// can't be found in the process's environment.
ErrSecretAccessKeyNotFound = awserr.New("EnvSecretNotFound", "AWS_SECRET_ACCESS_KEY or AWS_SECRET_KEY not found in environment", nil)
)
// An EnvProvider retrieves credentials from the environment variables of the
// running process. Environment credentials never expire.
//
// Environment variables used:
//
// * Access Key ID: AWS_ACCESS_KEY_ID or AWS_ACCESS_KEY
//
// * Secret Access Key: AWS_SECRET_ACCESS_KEY or AWS_SECRET_KEY
type EnvProvider struct {
retrieved bool
}
// NewEnvCredentials returns a pointer to a new Credentials object
// wrapping the environment variable provider.
func NewEnvCredentials() *Credentials {
return NewCredentials(&EnvProvider{})
}
// Retrieve retrieves the keys from the environment.
func (e *EnvProvider) Retrieve() (Value, error) {
e.retrieved = false
id := os.Getenv("AWS_ACCESS_KEY_ID")
if id == "" {
id = os.Getenv("AWS_ACCESS_KEY")
}
secret := os.Getenv("AWS_SECRET_ACCESS_KEY")
if secret == "" {
secret = os.Getenv("AWS_SECRET_KEY")
}
if id == "" {
return Value{ProviderName: EnvProviderName}, ErrAccessKeyIDNotFound
}
if secret == "" {
return Value{ProviderName: EnvProviderName}, ErrSecretAccessKeyNotFound
}
e.retrieved = true
return Value{
AccessKeyID: id,
SecretAccessKey: secret,
SessionToken: os.Getenv("AWS_SESSION_TOKEN"),
ProviderName: EnvProviderName,
}, nil
}
// IsExpired returns true if the credentials have not been retrieved.
func (e *EnvProvider) IsExpired() bool {
return !e.retrieved
}
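A minimal usage sketch; the exported variable values are placeholders.

package main

import (
	"fmt"
	"os"

	"github.com/aws/aws-sdk-go/aws/credentials"
)

func main() {
	os.Setenv("AWS_ACCESS_KEY_ID", "AKID")       // placeholder
	os.Setenv("AWS_SECRET_ACCESS_KEY", "SECRET") // placeholder
	v, err := credentials.NewEnvCredentials().Get()
	fmt.Println(v.ProviderName, err) // EnvProvider <nil>
}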

@ -0,0 +1,12 @@
[default]
aws_access_key_id = accessKey
aws_secret_access_key = secret
aws_session_token = token
[no_token]
aws_access_key_id = accessKey
aws_secret_access_key = secret
[with_colon]
aws_access_key_id: accessKey
aws_secret_access_key: secret
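A hedged sketch reading a profile from a file like the one above; the path is hypothetical, and passing empty strings would fall back to the default shared credentials file and the default profile.

package main

import (
	"fmt"

	"github.com/aws/aws-sdk-go/aws/credentials"
)

func main() {
	// Load the "no_token" profile from an example file (hypothetical path).
	creds := credentials.NewSharedCredentials("testdata/example.ini", "no_token")
	v, err := creds.Get()
	fmt.Println(v.AccessKeyID, err) // accessKey <nil>
}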

