Diffstat (limited to 'vendor/github.com/rubenv/sql-migrate')
-rw-r--r--  vendor/github.com/rubenv/sql-migrate/LICENSE                21
-rw-r--r--  vendor/github.com/rubenv/sql-migrate/README.md             289
-rw-r--r--  vendor/github.com/rubenv/sql-migrate/doc.go                239
-rw-r--r--  vendor/github.com/rubenv/sql-migrate/migrate.go            674
-rw-r--r--  vendor/github.com/rubenv/sql-migrate/sqlparse/LICENSE       22
-rw-r--r--  vendor/github.com/rubenv/sql-migrate/sqlparse/README.md      7
-rw-r--r--  vendor/github.com/rubenv/sql-migrate/sqlparse/sqlparse.go  235
7 files changed, 1487 insertions, 0 deletions
diff --git a/vendor/github.com/rubenv/sql-migrate/LICENSE b/vendor/github.com/rubenv/sql-migrate/LICENSE
new file mode 100644
index 0000000..2b19587
--- /dev/null
+++ b/vendor/github.com/rubenv/sql-migrate/LICENSE
@@ -0,0 +1,21 @@
+MIT License
+
+Copyright (C) 2014-2017 by Ruben Vermeersch <ruben@rocketeer.be>
+
+Permission is hereby granted, free of charge, to any person obtaining a copy
+of this software and associated documentation files (the "Software"), to deal
+in the Software without restriction, including without limitation the rights
+to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
+copies of the Software, and to permit persons to whom the Software is
+furnished to do so, subject to the following conditions:
+
+The above copyright notice and this permission notice shall be included in all
+copies or substantial portions of the Software.
+
+THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
+AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
+LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
+OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
+SOFTWARE.
diff --git a/vendor/github.com/rubenv/sql-migrate/README.md b/vendor/github.com/rubenv/sql-migrate/README.md
new file mode 100644
index 0000000..9ac432e
--- /dev/null
+++ b/vendor/github.com/rubenv/sql-migrate/README.md
@@ -0,0 +1,289 @@
+# sql-migrate
+
+> SQL Schema migration tool for [Go](http://golang.org/). Based on [gorp](https://github.com/go-gorp/gorp) and [goose](https://bitbucket.org/liamstask/goose).
+
+[![Build Status](https://travis-ci.org/rubenv/sql-migrate.svg?branch=master)](https://travis-ci.org/rubenv/sql-migrate) [![GoDoc](https://godoc.org/github.com/rubenv/sql-migrate?status.png)](https://godoc.org/github.com/rubenv/sql-migrate)
+
+Using [modl](https://github.com/jmoiron/modl)? Check out [modl-migrate](https://github.com/rubenv/modl-migrate).
+
+## Features
+
+* Usable as a CLI tool or as a library
+* Supports SQLite, PostgreSQL, MySQL, MSSQL and Oracle databases (through [gorp](https://github.com/go-gorp/gorp))
+* Can embed migrations into your application
+* Migrations are defined with SQL for full flexibility
+* Atomic migrations
+* Up/down migrations to allow rollback
+* Supports multiple database types in one project
+
+## Installation
+
+To install the library and command line program, use the following:
+
+```bash
+go get -v github.com/rubenv/sql-migrate/...
+```
+
+## Usage
+
+### As a standalone tool
+
+```
+$ sql-migrate --help
+usage: sql-migrate [--version] [--help] <command> [<args>]
+
+Available commands are:
+ down Undo a database migration
+ new Create a new migration
+ redo Reapply the last migration
+ status Show migration status
+ up Migrates the database to the most recent version available
+```
+
+Each command requires a configuration file (which defaults to `dbconfig.yml`, but can be specified with the `-config` flag). This config file should specify one or more environments:
+
+```yml
+development:
+ dialect: sqlite3
+ datasource: test.db
+ dir: migrations/sqlite3
+
+production:
+ dialect: postgres
+ datasource: dbname=myapp sslmode=disable
+ dir: migrations/postgres
+ table: migrations
+```
+
+The `table` setting is optional and will default to `gorp_migrations`.
+
+The environment that will be used can be specified with the `-env` flag (defaults to `development`).
+
+Use the `--help` flag in combination with any of the commands to get an overview of its usage:
+
+```
+$ sql-migrate up --help
+Usage: sql-migrate up [options] ...
+
+ Migrates the database to the most recent version available.
+
+Options:
+
+ -config=config.yml Configuration file to use.
+ -env="development" Environment.
+ -limit=0 Limit the number of migrations (0 = unlimited).
+ -dryrun Don't apply migrations, just print them.
+```
+
+The `new` command creates a new empty migration template using the following pattern: `<current time>-<name>.sql`.
+
+The `up` command applies all available migrations. By contrast, `down` will only apply one migration by default. This behavior can be changed for both by using the `-limit` parameter.
+
+The `redo` command will unapply the last migration and reapply it. This is useful during development, when you're writing migrations.
+
+Use the `status` command to see the state of the applied migrations:
+
+```bash
+$ sql-migrate status
++---------------+-----------------------------------------+
+| MIGRATION | APPLIED |
++---------------+-----------------------------------------+
+| 1_initial.sql | 2014-09-13 08:19:06.788354925 +0000 UTC |
+| 2_record.sql | no |
++---------------+-----------------------------------------+
+```
+
+### MySQL Caveat
+
+If you are using MySQL, you must append `?parseTime=true` to the `datasource` configuration. For example:
+
+```yml
+production:
+ dialect: mysql
+ datasource: root@/dbname?parseTime=true
+ dir: migrations/mysql
+ table: migrations
+```
+
+See [here](https://github.com/go-sql-driver/mysql#parsetime) for more information.
+
+### As a library
+
+Import sql-migrate into your application:
+
+```go
+import "github.com/rubenv/sql-migrate"
+```
+
+Set up a source of migrations. This can be from memory, from a set of files, or from bindata (more on that later):
+
+```go
+// Hardcoded strings in memory:
+migrations := &migrate.MemoryMigrationSource{
+ Migrations: []*migrate.Migration{
+ &migrate.Migration{
+ Id: "123",
+ Up: []string{"CREATE TABLE people (id int)"},
+ Down: []string{"DROP TABLE people"},
+ },
+ },
+}
+
+// OR: Read migrations from a folder:
+migrations := &migrate.FileMigrationSource{
+ Dir: "db/migrations",
+}
+
+// OR: Use migrations from a packr box
+migrations := &migrate.PackrMigrationSource{
+ Box: packr.NewBox("./migrations"),
+}
+
+// OR: Use migrations from bindata:
+migrations := &migrate.AssetMigrationSource{
+ Asset: Asset,
+ AssetDir: AssetDir,
+ Dir: "migrations",
+}
+```
+
+Then use the `Exec` function to upgrade your database:
+
+```go
+db, err := sql.Open("sqlite3", filename)
+if err != nil {
+ // Handle errors!
+}
+
+n, err := migrate.Exec(db, "sqlite3", migrations, migrate.Up)
+if err != nil {
+ // Handle errors!
+}
+fmt.Printf("Applied %d migrations!\n", n)
+```
+
+Note that `n` can be greater than `0` even if there is an error: any migration that succeeded will remain applied even if a later one fails.
+
+Check [the GoDoc reference](https://godoc.org/github.com/rubenv/sql-migrate) for the full documentation.
+
+## Writing migrations
+
+Migrations are defined in SQL files, which contain a set of SQL statements. Special comments are used to distinguish up and down migrations.
+
+```sql
+-- +migrate Up
+-- SQL in section 'Up' is executed when this migration is applied
+CREATE TABLE people (id int);
+
+
+-- +migrate Down
+-- SQL section 'Down' is executed when this migration is rolled back
+DROP TABLE people;
+```
+
+You can put multiple statements in each block, as long as you end them with a semicolon (`;`).
+
+You can alternatively set up a separator string that matches an entire line by setting `sqlparse.LineSeparator`. This
+can be used to imitate, for example, MS SQL Query Analyzer functionality where commands can be separated by a line with
+contents of `GO`. If `sqlparse.LineSeparator` is matched, it will not be included in the resulting migration scripts.
+
+If you have complex statements which contain semicolons, use `StatementBegin` and `StatementEnd` to indicate boundaries:
+
+```sql
+-- +migrate Up
+CREATE TABLE people (id int);
+
+-- +migrate StatementBegin
+CREATE OR REPLACE FUNCTION do_something()
+returns void AS $$
+DECLARE
+ create_query text;
+BEGIN
+ -- Do something here
+END;
+$$
+language plpgsql;
+-- +migrate StatementEnd
+
+-- +migrate Down
+DROP FUNCTION do_something();
+DROP TABLE people;
+```
+
+The order in which migrations are applied is defined through the filename: sql-migrate will sort migrations based on their name. It's recommended to use an increasing version number or a timestamp as the first part of the filename.
+
+Normally each migration is run within a transaction in order to guarantee that it is fully atomic. However, some SQL commands (for example, creating an index concurrently in PostgreSQL) cannot be executed inside a transaction. To execute such a command in a migration, run the migration with the `notransaction` option:
+
+```sql
+-- +migrate Up notransaction
+CREATE UNIQUE INDEX people_unique_id_idx CONCURRENTLY ON people (id);
+
+-- +migrate Down
+DROP INDEX people_unique_id_idx;
+```
+
+## Embedding migrations with [packr](https://github.com/gobuffalo/packr)
+
+If you like your Go applications self-contained (that is, a single binary), use [packr](https://github.com/gobuffalo/packr) to embed the migration files.
+
+Just write your migration files as usual, as a set of SQL files in a folder.
+
+Use the `PackrMigrationSource` in your application to find the migrations:
+
+```go
+migrations := &migrate.PackrMigrationSource{
+ Box: packr.NewBox("./migrations"),
+}
+```
+
+If you already have a box and would like to use a subdirectory:
+
+```go
+migrations := &migrate.PackrMigrationSource{
+ Box: myBox,
+ Dir: "./migrations",
+}
+```
+
+## Embedding migrations with [bindata](https://github.com/shuLhan/go-bindata)
+
+As a slightly less maintained alternative, you can use [bindata](https://github.com/shuLhan/go-bindata) to embed the migration files.
+
+Just write your migration files as usual, as a set of SQL files in a folder.
+
+Then use bindata to generate a `.go` file with the migrations embedded:
+
+```bash
+go-bindata -pkg myapp -o bindata.go db/migrations/
+```
+
+The resulting `bindata.go` file will contain your migrations. Remember to regenerate your `bindata.go` file whenever you add or modify a migration (`go generate` can help with this).
+
+Use the `AssetMigrationSource` in your application to find the migrations:
+
+```go
+migrations := &migrate.AssetMigrationSource{
+ Asset: Asset,
+ AssetDir: AssetDir,
+ Dir: "db/migrations",
+}
+```
+
+Both `Asset` and `AssetDir` are functions provided by bindata.
+
+Then proceed as usual.
+
+## Extending
+
+Adding a new migration source means implementing `MigrationSource`.
+
+```go
+type MigrationSource interface {
+ FindMigrations() ([]*Migration, error)
+}
+```
+
+The resulting slice of migrations will be executed in the given order, so it should usually be sorted by the `Id` field.
+
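A custom source can be sketched without the rest of the library by re-declaring the two types involved (local, simplified stand-ins for illustration; the real `Migration` and `MigrationSource` live in the sql-migrate package). This hypothetical source serves migrations from a slice and returns them sorted by `Id`, as the contract requires:

```go
package main

import (
	"fmt"
	"sort"
)

// Local, simplified stand-ins for the library's types (illustration only).
type Migration struct {
	Id   string
	Up   []string
	Down []string
}

type MigrationSource interface {
	FindMigrations() ([]*Migration, error)
}

// SliceMigrationSource is a hypothetical MigrationSource backed by a slice.
type SliceMigrationSource struct {
	Migrations []*Migration
}

// FindMigrations returns a sorted copy so the source stays safe for
// concurrent use. A plain lexicographic sort is used here; the library's
// own ordering additionally understands numeric Id prefixes.
func (s SliceMigrationSource) FindMigrations() ([]*Migration, error) {
	out := make([]*Migration, len(s.Migrations))
	copy(out, s.Migrations)
	sort.Slice(out, func(i, j int) bool { return out[i].Id < out[j].Id })
	return out, nil
}

func main() {
	var src MigrationSource = SliceMigrationSource{Migrations: []*Migration{
		{Id: "2_record.sql"}, {Id: "1_initial.sql"},
	}}
	ms, err := src.FindMigrations()
	if err != nil {
		panic(err)
	}
	for _, m := range ms {
		fmt.Println(m.Id)
	}
}
```
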
+## License
+
+This library is distributed under the [MIT](LICENSE) license.
diff --git a/vendor/github.com/rubenv/sql-migrate/doc.go b/vendor/github.com/rubenv/sql-migrate/doc.go
new file mode 100644
index 0000000..eb4ed85
--- /dev/null
+++ b/vendor/github.com/rubenv/sql-migrate/doc.go
@@ -0,0 +1,239 @@
+/*
+
+SQL Schema migration tool for Go.
+
+Key features:
+
+ * Usable as a CLI tool or as a library
+ * Supports SQLite, PostgreSQL, MySQL, MSSQL and Oracle databases (through gorp)
+ * Can embed migrations into your application
+ * Migrations are defined with SQL for full flexibility
+ * Atomic migrations
+ * Up/down migrations to allow rollback
+ * Supports multiple database types in one project
+
+Installation
+
+To install the library and command line program, use the following:
+
+ go get -v github.com/rubenv/sql-migrate/...
+
+Command-line tool
+
+The main command is called sql-migrate.
+
+ $ sql-migrate --help
+ usage: sql-migrate [--version] [--help] <command> [<args>]
+
+ Available commands are:
+ down Undo a database migration
+ new Create a new migration
+ redo Reapply the last migration
+ status Show migration status
+ up Migrates the database to the most recent version available
+
+Each command requires a configuration file (which defaults to dbconfig.yml, but can be specified with the -config flag). This config file should specify one or more environments:
+
+ development:
+ dialect: sqlite3
+ datasource: test.db
+ dir: migrations/sqlite3
+
+ production:
+ dialect: postgres
+ datasource: dbname=myapp sslmode=disable
+ dir: migrations/postgres
+ table: migrations
+
+The `table` setting is optional and will default to `gorp_migrations`.
+
+The environment that will be used can be specified with the -env flag (defaults to development).
+
+Use the --help flag in combination with any of the commands to get an overview of its usage:
+
+ $ sql-migrate up --help
+ Usage: sql-migrate up [options] ...
+
+ Migrates the database to the most recent version available.
+
+ Options:
+
+ -config=config.yml Configuration file to use.
+ -env="development" Environment.
+ -limit=0 Limit the number of migrations (0 = unlimited).
+ -dryrun Don't apply migrations, just print them.
+
+The up command applies all available migrations. By contrast, down will only apply one migration by default. This behavior can be changed for both by using the -limit parameter.
+
+The redo command will unapply the last migration and reapply it. This is useful during development, when you're writing migrations.
+
+Use the status command to see the state of the applied migrations:
+
+ $ sql-migrate status
+ +---------------+-----------------------------------------+
+ | MIGRATION | APPLIED |
+ +---------------+-----------------------------------------+
+ | 1_initial.sql | 2014-09-13 08:19:06.788354925 +0000 UTC |
+ | 2_record.sql | no |
+ +---------------+-----------------------------------------+
+
+MySQL Caveat
+
+If you are using MySQL, you must append ?parseTime=true to the datasource configuration. For example:
+
+ production:
+ dialect: mysql
+ datasource: root@/dbname?parseTime=true
+ dir: migrations/mysql
+ table: migrations
+
+See https://github.com/go-sql-driver/mysql#parsetime for more information.
+
+Library
+
+Import sql-migrate into your application:
+
+ import "github.com/rubenv/sql-migrate"
+
+Set up a source of migrations. This can be from memory, from a set of files, or from bindata (more on that later):
+
+ // Hardcoded strings in memory:
+ migrations := &migrate.MemoryMigrationSource{
+ Migrations: []*migrate.Migration{
+ &migrate.Migration{
+ Id: "123",
+ Up: []string{"CREATE TABLE people (id int)"},
+ Down: []string{"DROP TABLE people"},
+ },
+ },
+ }
+
+ // OR: Read migrations from a folder:
+ migrations := &migrate.FileMigrationSource{
+ Dir: "db/migrations",
+ }
+
+ // OR: Use migrations from bindata:
+ migrations := &migrate.AssetMigrationSource{
+ Asset: Asset,
+ AssetDir: AssetDir,
+ Dir: "migrations",
+ }
+
+Then use the Exec function to upgrade your database:
+
+ db, err := sql.Open("sqlite3", filename)
+ if err != nil {
+ // Handle errors!
+ }
+
+ n, err := migrate.Exec(db, "sqlite3", migrations, migrate.Up)
+ if err != nil {
+ // Handle errors!
+ }
+ fmt.Printf("Applied %d migrations!\n", n)
+
+Note that n can be greater than 0 even if there is an error: any migration that succeeded will remain applied even if a later one fails.
+
+The full set of capabilities can be found in the API docs below.
+
+Writing migrations
+
+Migrations are defined in SQL files, which contain a set of SQL statements. Special comments are used to distinguish up and down migrations.
+
+ -- +migrate Up
+ -- SQL in section 'Up' is executed when this migration is applied
+ CREATE TABLE people (id int);
+
+
+ -- +migrate Down
+ -- SQL section 'Down' is executed when this migration is rolled back
+ DROP TABLE people;
+
+You can put multiple statements in each block, as long as you end them with a semicolon (;).
+
+If you have complex statements which contain semicolons, use StatementBegin and StatementEnd to indicate boundaries:
+
+ -- +migrate Up
+ CREATE TABLE people (id int);
+
+ -- +migrate StatementBegin
+ CREATE OR REPLACE FUNCTION do_something()
+ returns void AS $$
+ DECLARE
+ create_query text;
+ BEGIN
+ -- Do something here
+ END;
+ $$
+ language plpgsql;
+ -- +migrate StatementEnd
+
+ -- +migrate Down
+ DROP FUNCTION do_something();
+ DROP TABLE people;
+
+The order in which migrations are applied is defined through the filename: sql-migrate will sort migrations based on their name. It's recommended to use an increasing version number or a timestamp as the first part of the filename.
+
+Normally each migration is run within a transaction in order to guarantee that it is fully atomic. However, some SQL commands (for example, creating an index concurrently in PostgreSQL) cannot be executed inside a transaction. To execute such a command in a migration, run the migration with the notransaction option:
+
+ -- +migrate Up notransaction
+ CREATE UNIQUE INDEX people_unique_id_idx CONCURRENTLY ON people (id);
+
+ -- +migrate Down
+ DROP INDEX people_unique_id_idx;
+
+Embedding migrations with packr
+
+If you like your Go applications self-contained (that is, a single binary), use packr (https://github.com/gobuffalo/packr) to embed the migration files.
+
+Just write your migration files as usual, as a set of SQL files in a folder.
+
+Use the PackrMigrationSource in your application to find the migrations:
+
+ migrations := &migrate.PackrMigrationSource{
+ Box: packr.NewBox("./migrations"),
+ }
+
+If you already have a box and would like to use a subdirectory:
+
+ migrations := &migrate.PackrMigrationSource{
+ Box: myBox,
+ Dir: "./migrations",
+ }
+
+Embedding migrations with bindata
+
+As a slightly less maintained alternative, you can use bindata (https://github.com/shuLhan/go-bindata) to embed the migration files.
+
+Just write your migration files as usual, as a set of SQL files in a folder.
+
+Then use bindata to generate a .go file with the migrations embedded:
+
+ go-bindata -pkg myapp -o bindata.go db/migrations/
+
+The resulting bindata.go file will contain your migrations. Remember to regenerate your bindata.go file whenever you add or modify a migration (go generate can help with this).
+
+Use the AssetMigrationSource in your application to find the migrations:
+
+ migrations := &migrate.AssetMigrationSource{
+ Asset: Asset,
+ AssetDir: AssetDir,
+ Dir: "db/migrations",
+ }
+
+Both Asset and AssetDir are functions provided by bindata.
+
+Then proceed as usual.
+
+Extending
+
+Adding a new migration source means implementing MigrationSource.
+
+ type MigrationSource interface {
+ FindMigrations() ([]*Migration, error)
+ }
+
+The resulting slice of migrations will be executed in the given order, so it should usually be sorted by the Id field.
+*/
+package migrate
diff --git a/vendor/github.com/rubenv/sql-migrate/migrate.go b/vendor/github.com/rubenv/sql-migrate/migrate.go
new file mode 100644
index 0000000..02f59f7
--- /dev/null
+++ b/vendor/github.com/rubenv/sql-migrate/migrate.go
@@ -0,0 +1,674 @@
+package migrate
+
+import (
+ "bytes"
+ "database/sql"
+ "errors"
+ "fmt"
+ "io"
+ "net/http"
+ "path"
+ "regexp"
+ "sort"
+ "strconv"
+ "strings"
+ "time"
+
+ "github.com/rubenv/sql-migrate/sqlparse"
+ "gopkg.in/gorp.v1"
+)
+
+type MigrationDirection int
+
+const (
+ Up MigrationDirection = iota
+ Down
+)
+
+var tableName = "gorp_migrations"
+var schemaName = ""
+var numberPrefixRegex = regexp.MustCompile(`^(\d+).*$`)
+
+// PlanError is returned when no migration plan can be created from the set of
+// already applied migrations and the migrations found for the current
+// operation, for example when the database contains a migration which is not
+// among the migrations found for that operation.
+type PlanError struct {
+	Migration    *Migration
+	ErrorMessage string
+}
+
+func newPlanError(migration *Migration, errorMessage string) error {
+	return &PlanError{
+		Migration:    migration,
+		ErrorMessage: errorMessage,
+	}
+}
+
+func (p *PlanError) Error() string {
+	return fmt.Sprintf("Unable to create migration plan because of %s: %s",
+		p.Migration.Id, p.ErrorMessage)
+}
+
+// TxError is returned when any error is encountered during a database
+// transaction. It contains the relevant *Migration and notes its Id in the
+// Error function output.
+type TxError struct {
+ Migration *Migration
+ Err error
+}
+
+func newTxError(migration *PlannedMigration, err error) error {
+ return &TxError{
+ Migration: migration.Migration,
+ Err: err,
+ }
+}
+
+func (e *TxError) Error() string {
+ return e.Err.Error() + " handling " + e.Migration.Id
+}
+
+// SetTable sets the name of the table used to store migration info.
+//
+// It should be called before any other call such as Exec or ExecMax.
+func SetTable(name string) {
+ if name != "" {
+ tableName = name
+ }
+}
+
+// SetSchema sets the name of the schema that the migration table will be referenced with.
+func SetSchema(name string) {
+ if name != "" {
+ schemaName = name
+ }
+}
+
+type Migration struct {
+ Id string
+ Up []string
+ Down []string
+
+ DisableTransactionUp bool
+ DisableTransactionDown bool
+}
+
+func (m Migration) Less(other *Migration) bool {
+ switch {
+ case m.isNumeric() && other.isNumeric() && m.VersionInt() != other.VersionInt():
+ return m.VersionInt() < other.VersionInt()
+ case m.isNumeric() && !other.isNumeric():
+ return true
+ case !m.isNumeric() && other.isNumeric():
+ return false
+ default:
+ return m.Id < other.Id
+ }
+}
+
+func (m Migration) isNumeric() bool {
+ return len(m.NumberPrefixMatches()) > 0
+}
+
+func (m Migration) NumberPrefixMatches() []string {
+ return numberPrefixRegex.FindStringSubmatch(m.Id)
+}
+
+func (m Migration) VersionInt() int64 {
+ v := m.NumberPrefixMatches()[1]
+ value, err := strconv.ParseInt(v, 10, 64)
+ if err != nil {
+ panic(fmt.Sprintf("Could not parse %q into int64: %s", v, err))
+ }
+ return value
+}
+
+type PlannedMigration struct {
+ *Migration
+
+ DisableTransaction bool
+ Queries []string
+}
+
+type byId []*Migration
+
+func (b byId) Len() int { return len(b) }
+func (b byId) Swap(i, j int) { b[i], b[j] = b[j], b[i] }
+func (b byId) Less(i, j int) bool { return b[i].Less(b[j]) }
+
+type MigrationRecord struct {
+ Id string `db:"id"`
+ AppliedAt time.Time `db:"applied_at"`
+}
+
+var MigrationDialects = map[string]gorp.Dialect{
+ "sqlite3": gorp.SqliteDialect{},
+ "postgres": gorp.PostgresDialect{},
+ "mysql": gorp.MySQLDialect{Engine: "InnoDB", Encoding: "UTF8"},
+ "mssql": gorp.SqlServerDialect{},
+ "oci8": gorp.OracleDialect{},
+}
+
+type MigrationSource interface {
+ // Finds the migrations.
+ //
+ // The resulting slice of migrations should be sorted by Id.
+ FindMigrations() ([]*Migration, error)
+}
+
+// A hardcoded set of migrations, in-memory.
+type MemoryMigrationSource struct {
+ Migrations []*Migration
+}
+
+var _ MigrationSource = (*MemoryMigrationSource)(nil)
+
+func (m MemoryMigrationSource) FindMigrations() ([]*Migration, error) {
+	// Make sure migrations are sorted. To keep MemoryMigrationSource safe for
+	// concurrent use, FindMigrations sorts a copy of m.Migrations instead of
+	// mutating it in place.
+ migrations := make([]*Migration, len(m.Migrations))
+ copy(migrations, m.Migrations)
+ sort.Sort(byId(migrations))
+ return migrations, nil
+}
+
+// HttpFileSystemMigrationSource is a set of migrations loaded from an
+// http.FileSystem.
+type HttpFileSystemMigrationSource struct {
+ FileSystem http.FileSystem
+}
+
+var _ MigrationSource = (*HttpFileSystemMigrationSource)(nil)
+
+func (f HttpFileSystemMigrationSource) FindMigrations() ([]*Migration, error) {
+ return findMigrations(f.FileSystem)
+}
+
+// A set of migrations loaded from a directory.
+type FileMigrationSource struct {
+ Dir string
+}
+
+var _ MigrationSource = (*FileMigrationSource)(nil)
+
+func (f FileMigrationSource) FindMigrations() ([]*Migration, error) {
+ filesystem := http.Dir(f.Dir)
+ return findMigrations(filesystem)
+}
+
+func findMigrations(dir http.FileSystem) ([]*Migration, error) {
+ migrations := make([]*Migration, 0)
+
+ file, err := dir.Open("/")
+ if err != nil {
+ return nil, err
+ }
+
+ files, err := file.Readdir(0)
+ if err != nil {
+ return nil, err
+ }
+
+ for _, info := range files {
+ if strings.HasSuffix(info.Name(), ".sql") {
+ file, err := dir.Open(info.Name())
+ if err != nil {
+ return nil, fmt.Errorf("Error while opening %s: %s", info.Name(), err)
+ }
+
+ migration, err := ParseMigration(info.Name(), file)
+ if err != nil {
+ return nil, fmt.Errorf("Error while parsing %s: %s", info.Name(), err)
+ }
+
+ migrations = append(migrations, migration)
+ }
+ }
+
+ // Make sure migrations are sorted
+ sort.Sort(byId(migrations))
+
+ return migrations, nil
+}
+
+// Migrations from a bindata asset set.
+type AssetMigrationSource struct {
+ // Asset should return content of file in path if exists
+ Asset func(path string) ([]byte, error)
+
+ // AssetDir should return list of files in the path
+ AssetDir func(path string) ([]string, error)
+
+ // Path in the bindata to use.
+ Dir string
+}
+
+var _ MigrationSource = (*AssetMigrationSource)(nil)
+
+func (a AssetMigrationSource) FindMigrations() ([]*Migration, error) {
+ migrations := make([]*Migration, 0)
+
+ files, err := a.AssetDir(a.Dir)
+ if err != nil {
+ return nil, err
+ }
+
+ for _, name := range files {
+ if strings.HasSuffix(name, ".sql") {
+ file, err := a.Asset(path.Join(a.Dir, name))
+ if err != nil {
+ return nil, err
+ }
+
+ migration, err := ParseMigration(name, bytes.NewReader(file))
+ if err != nil {
+ return nil, err
+ }
+
+ migrations = append(migrations, migration)
+ }
+ }
+
+ // Make sure migrations are sorted
+ sort.Sort(byId(migrations))
+
+ return migrations, nil
+}
+
+// PackrBox avoids pulling in the packr library for everyone; it mimics the
+// bits of packr.Box that we need.
+type PackrBox interface {
+ List() []string
+ Bytes(name string) []byte
+}
+
+// Migrations from a packr box.
+type PackrMigrationSource struct {
+ Box PackrBox
+
+ // Path in the box to use.
+ Dir string
+}
+
+var _ MigrationSource = (*PackrMigrationSource)(nil)
+
+func (p PackrMigrationSource) FindMigrations() ([]*Migration, error) {
+ migrations := make([]*Migration, 0)
+ items := p.Box.List()
+
+ prefix := ""
+ dir := path.Clean(p.Dir)
+ if dir != "." {
+ prefix = fmt.Sprintf("%s/", dir)
+ }
+
+ for _, item := range items {
+ if !strings.HasPrefix(item, prefix) {
+ continue
+ }
+ name := strings.TrimPrefix(item, prefix)
+ if strings.Contains(name, "/") {
+ continue
+ }
+
+ if strings.HasSuffix(name, ".sql") {
+ file := p.Box.Bytes(item)
+
+ migration, err := ParseMigration(name, bytes.NewReader(file))
+ if err != nil {
+ return nil, err
+ }
+
+ migrations = append(migrations, migration)
+ }
+ }
+
+ // Make sure migrations are sorted
+ sort.Sort(byId(migrations))
+
+ return migrations, nil
+}
+
+// Migration parsing
+func ParseMigration(id string, r io.ReadSeeker) (*Migration, error) {
+ m := &Migration{
+ Id: id,
+ }
+
+ parsed, err := sqlparse.ParseMigration(r)
+ if err != nil {
+ return nil, fmt.Errorf("Error parsing migration (%s): %s", id, err)
+ }
+
+ m.Up = parsed.UpStatements
+ m.Down = parsed.DownStatements
+
+ m.DisableTransactionUp = parsed.DisableTransactionUp
+ m.DisableTransactionDown = parsed.DisableTransactionDown
+
+ return m, nil
+}
+
+type SqlExecutor interface {
+ Exec(query string, args ...interface{}) (sql.Result, error)
+ Insert(list ...interface{}) error
+ Delete(list ...interface{}) (int64, error)
+}
+
+// Execute a set of migrations
+//
+// Returns the number of applied migrations.
+func Exec(db *sql.DB, dialect string, m MigrationSource, dir MigrationDirection) (int, error) {
+ return ExecMax(db, dialect, m, dir, 0)
+}
+
+// Execute a set of migrations
+//
+// Will apply at most `max` migrations. Pass 0 for no limit (or use Exec).
+//
+// Returns the number of applied migrations.
+func ExecMax(db *sql.DB, dialect string, m MigrationSource, dir MigrationDirection, max int) (int, error) {
+ migrations, dbMap, err := PlanMigration(db, dialect, m, dir, max)
+ if err != nil {
+ return 0, err
+ }
+
+ // Apply migrations
+ applied := 0
+ for _, migration := range migrations {
+ var executor SqlExecutor
+
+ if migration.DisableTransaction {
+ executor = dbMap
+ } else {
+ executor, err = dbMap.Begin()
+ if err != nil {
+ return applied, newTxError(migration, err)
+ }
+ }
+
+ for _, stmt := range migration.Queries {
+ if _, err := executor.Exec(stmt); err != nil {
+ if trans, ok := executor.(*gorp.Transaction); ok {
+ trans.Rollback()
+ }
+
+ return applied, newTxError(migration, err)
+ }
+ }
+
+ switch dir {
+ case Up:
+ err = executor.Insert(&MigrationRecord{
+ Id: migration.Id,
+ AppliedAt: time.Now(),
+ })
+ if err != nil {
+ if trans, ok := executor.(*gorp.Transaction); ok {
+ trans.Rollback()
+ }
+
+ return applied, newTxError(migration, err)
+ }
+ case Down:
+ _, err := executor.Delete(&MigrationRecord{
+ Id: migration.Id,
+ })
+ if err != nil {
+ if trans, ok := executor.(*gorp.Transaction); ok {
+ trans.Rollback()
+ }
+
+ return applied, newTxError(migration, err)
+ }
+ default:
+ panic("Not possible")
+ }
+
+ if trans, ok := executor.(*gorp.Transaction); ok {
+ if err := trans.Commit(); err != nil {
+ return applied, newTxError(migration, err)
+ }
+ }
+
+ applied++
+ }
+
+ return applied, nil
+}
+
+// Plan a migration.
+func PlanMigration(db *sql.DB, dialect string, m MigrationSource, dir MigrationDirection, max int) ([]*PlannedMigration, *gorp.DbMap, error) {
+ dbMap, err := getMigrationDbMap(db, dialect)
+ if err != nil {
+ return nil, nil, err
+ }
+
+ migrations, err := m.FindMigrations()
+ if err != nil {
+ return nil, nil, err
+ }
+
+ var migrationRecords []MigrationRecord
+ _, err = dbMap.Select(&migrationRecords, fmt.Sprintf("SELECT * FROM %s", dbMap.Dialect.QuotedTableForQuery(schemaName, tableName)))
+ if err != nil {
+ return nil, nil, err
+ }
+
+ // Sort migrations that have been run by Id.
+ var existingMigrations []*Migration
+ for _, migrationRecord := range migrationRecords {
+ existingMigrations = append(existingMigrations, &Migration{
+ Id: migrationRecord.Id,
+ })
+ }
+ sort.Sort(byId(existingMigrations))
+
+ // Make sure all migrations in the database are among the found migrations which
+ // are to be applied.
+ migrationsSearch := make(map[string]struct{})
+ for _, migration := range migrations {
+ migrationsSearch[migration.Id] = struct{}{}
+ }
+ for _, existingMigration := range existingMigrations {
+ if _, ok := migrationsSearch[existingMigration.Id]; !ok {
+ return nil, nil, newPlanError(existingMigration, "unknown migration in database")
+ }
+ }
+
+ // Get last migration that was run
+ record := &Migration{}
+ if len(existingMigrations) > 0 {
+ record = existingMigrations[len(existingMigrations)-1]
+ }
+
+ result := make([]*PlannedMigration, 0)
+
+ // Add missing migrations up to the last run migration.
+ // This can happen for example when merges happened.
+ if len(existingMigrations) > 0 {
+ result = append(result, ToCatchup(migrations, existingMigrations, record)...)
+ }
+
+ // Figure out which migrations to apply
+ toApply := ToApply(migrations, record.Id, dir)
+ toApplyCount := len(toApply)
+ if max > 0 && max < toApplyCount {
+ toApplyCount = max
+ }
+ for _, v := range toApply[0:toApplyCount] {
+
+ if dir == Up {
+ result = append(result, &PlannedMigration{
+ Migration: v,
+ Queries: v.Up,
+ DisableTransaction: v.DisableTransactionUp,
+ })
+ } else if dir == Down {
+ result = append(result, &PlannedMigration{
+ Migration: v,
+ Queries: v.Down,
+ DisableTransaction: v.DisableTransactionDown,
+ })
+ }
+ }
+
+ return result, dbMap, nil
+}
+
+// SkipMax records a set of migrations as applied without executing them.
+//
+// Will skip at most `max` migrations. Pass 0 for no limit.
+//
+// Returns the number of skipped migrations.
+func SkipMax(db *sql.DB, dialect string, m MigrationSource, dir MigrationDirection, max int) (int, error) {
+ migrations, dbMap, err := PlanMigration(db, dialect, m, dir, max)
+ if err != nil {
+ return 0, err
+ }
+
+ // Skip migrations
+ applied := 0
+ for _, migration := range migrations {
+ var executor SqlExecutor
+
+ if migration.DisableTransaction {
+ executor = dbMap
+ } else {
+ executor, err = dbMap.Begin()
+ if err != nil {
+ return applied, newTxError(migration, err)
+ }
+ }
+
+ err = executor.Insert(&MigrationRecord{
+ Id: migration.Id,
+ AppliedAt: time.Now(),
+ })
+ if err != nil {
+ if trans, ok := executor.(*gorp.Transaction); ok {
+ trans.Rollback()
+ }
+
+ return applied, newTxError(migration, err)
+ }
+
+ if trans, ok := executor.(*gorp.Transaction); ok {
+ if err := trans.Commit(); err != nil {
+ return applied, newTxError(migration, err)
+ }
+ }
+
+ applied++
+ }
+
+ return applied, nil
+}
+
+// Filter a slice of migrations into ones that should be applied.
+func ToApply(migrations []*Migration, current string, direction MigrationDirection) []*Migration {
+ var index = -1
+ if current != "" {
+ for index < len(migrations)-1 {
+ index++
+ if migrations[index].Id == current {
+ break
+ }
+ }
+ }
+
+ if direction == Up {
+ return migrations[index+1:]
+ } else if direction == Down {
+ if index == -1 {
+ return []*Migration{}
+ }
+
+ // Add in reverse order
+ toApply := make([]*Migration, index+1)
+ for i := 0; i < index+1; i++ {
+ toApply[index-i] = migrations[i]
+ }
+ return toApply
+ }
+
+ panic("Not possible")
+}
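The slicing behavior above can be demonstrated standalone. The sketch below mirrors `ToApply`'s logic with plain string Ids instead of `*Migration` values (`toApplySketch` is an illustrative name, not part of the library):

```go
package main

import "fmt"

// toApplySketch mirrors ToApply: given sorted migration ids and the
// last-applied id, return the migrations to run in the given direction.
func toApplySketch(ids []string, current string, up bool) []string {
	index := -1
	if current != "" {
		for index < len(ids)-1 {
			index++
			if ids[index] == current {
				break
			}
		}
	}
	if up {
		// Up: everything after the current migration.
		return ids[index+1:]
	}
	// Down: everything up to and including current, in reverse order.
	if index == -1 {
		return nil
	}
	out := make([]string, index+1)
	for i := 0; i <= index; i++ {
		out[index-i] = ids[i]
	}
	return out
}

func main() {
	ids := []string{"001", "002", "003"}
	fmt.Println(toApplySketch(ids, "002", true))  // [003]
	fmt.Println(toApplySketch(ids, "002", false)) // [002 001]
}
```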
+
+func ToCatchup(migrations, existingMigrations []*Migration, lastRun *Migration) []*PlannedMigration {
+ missing := make([]*PlannedMigration, 0)
+ for _, migration := range migrations {
+ found := false
+ for _, existing := range existingMigrations {
+ if existing.Id == migration.Id {
+ found = true
+ break
+ }
+ }
+ if !found && migration.Less(lastRun) {
+ missing = append(missing, &PlannedMigration{
+ Migration: migration,
+ Queries: migration.Up,
+ DisableTransaction: migration.DisableTransactionUp,
+ })
+ }
+ }
+ return missing
+}
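The catch-up selection rule can also be shown in isolation. This sketch uses lexicographic string comparison where the real code calls `Migration.Less` (which is more involved), so it is an approximation of the idea, not the library's code path:

```go
package main

import "fmt"

// catchupSketch mirrors ToCatchup's selection rule: a known migration is
// "missing" if it is not yet recorded as applied but sorts before the
// most recently applied migration.
func catchupSketch(known, applied []string, lastRun string) []string {
	appliedSet := make(map[string]bool, len(applied))
	for _, id := range applied {
		appliedSet[id] = true
	}
	var missing []string
	for _, id := range known {
		if !appliedSet[id] && id < lastRun {
			missing = append(missing, id)
		}
	}
	return missing
}

func main() {
	// 002 was merged into the codebase after 003 had already been applied.
	known := []string{"001", "002", "003"}
	applied := []string{"001", "003"}
	fmt.Println(catchupSketch(known, applied, "003")) // [002]
}
```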
+
+func GetMigrationRecords(db *sql.DB, dialect string) ([]*MigrationRecord, error) {
+ dbMap, err := getMigrationDbMap(db, dialect)
+ if err != nil {
+ return nil, err
+ }
+
+ var records []*MigrationRecord
+ query := fmt.Sprintf("SELECT * FROM %s ORDER BY id ASC", dbMap.Dialect.QuotedTableForQuery(schemaName, tableName))
+ _, err = dbMap.Select(&records, query)
+ if err != nil {
+ return nil, err
+ }
+
+ return records, nil
+}
+
+func getMigrationDbMap(db *sql.DB, dialect string) (*gorp.DbMap, error) {
+ d, ok := MigrationDialects[dialect]
+ if !ok {
+ return nil, fmt.Errorf("Unknown dialect: %s", dialect)
+ }
+
+ // When using the mysql driver, make sure that the parseTime option is
+ // configured, otherwise it won't map time columns to time.Time. See
+ // https://github.com/rubenv/sql-migrate/issues/2
+ if dialect == "mysql" {
+ var out *time.Time
+ err := db.QueryRow("SELECT NOW()").Scan(&out)
+ if err != nil {
+ if err.Error() == "sql: Scan error on column index 0: unsupported driver -> Scan pair: []uint8 -> *time.Time" ||
+ err.Error() == "sql: Scan error on column index 0: unsupported Scan, storing driver.Value type []uint8 into type *time.Time" {
+ return nil, errors.New(`Cannot parse dates.
+
+Make sure that the parseTime option is supplied to your database connection.
+Check https://github.com/go-sql-driver/mysql#parsetime for more info.`)
+ } else {
+ return nil, err
+ }
+ }
+ }
+
+ // Create migration database map
+ dbMap := &gorp.DbMap{Db: db, Dialect: d}
+ dbMap.AddTableWithNameAndSchema(MigrationRecord{}, schemaName, tableName).SetKeys(false, "Id")
+ //dbMap.TraceOn("", log.New(os.Stdout, "migrate: ", log.Lmicroseconds))
+
+ err := dbMap.CreateTablesIfNotExists()
+ if err != nil {
+ return nil, err
+ }
+
+ return dbMap, nil
+}
+
+// TODO: Run migration + record insert in transaction.
diff --git a/vendor/github.com/rubenv/sql-migrate/sqlparse/LICENSE b/vendor/github.com/rubenv/sql-migrate/sqlparse/LICENSE
new file mode 100644
index 0000000..9c12525
--- /dev/null
+++ b/vendor/github.com/rubenv/sql-migrate/sqlparse/LICENSE
@@ -0,0 +1,22 @@
+MIT License
+
+Copyright (C) 2014-2017 by Ruben Vermeersch <ruben@rocketeer.be>
+Copyright (C) 2012-2014 by Liam Staskawicz
+
+Permission is hereby granted, free of charge, to any person obtaining a copy
+of this software and associated documentation files (the "Software"), to deal
+in the Software without restriction, including without limitation the rights
+to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
+copies of the Software, and to permit persons to whom the Software is
+furnished to do so, subject to the following conditions:
+
+The above copyright notice and this permission notice shall be included in
+all copies or substantial portions of the Software.
+
+THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
+AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
+LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
+OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN
+THE SOFTWARE.
diff --git a/vendor/github.com/rubenv/sql-migrate/sqlparse/README.md b/vendor/github.com/rubenv/sql-migrate/sqlparse/README.md
new file mode 100644
index 0000000..fa5341a
--- /dev/null
+++ b/vendor/github.com/rubenv/sql-migrate/sqlparse/README.md
@@ -0,0 +1,7 @@
+# SQL migration parser
+
+Based on the [goose](https://bitbucket.org/liamstask/goose) migration parser.
+
+## License
+
+This library is distributed under the [MIT](LICENSE) license.
diff --git a/vendor/github.com/rubenv/sql-migrate/sqlparse/sqlparse.go b/vendor/github.com/rubenv/sql-migrate/sqlparse/sqlparse.go
new file mode 100644
index 0000000..d336e77
--- /dev/null
+++ b/vendor/github.com/rubenv/sql-migrate/sqlparse/sqlparse.go
@@ -0,0 +1,235 @@
+package sqlparse
+
+import (
+ "bufio"
+ "bytes"
+ "errors"
+ "fmt"
+ "io"
+ "strings"
+)
+
+const (
+ sqlCmdPrefix = "-- +migrate "
+ optionNoTransaction = "notransaction"
+)
+
+type ParsedMigration struct {
+ UpStatements []string
+ DownStatements []string
+
+ DisableTransactionUp bool
+ DisableTransactionDown bool
+}
+
+var (
+ // LineSeparator can be used to split migrations by an exact line match. This line
+ // will be removed from the output. It defaults to blank, which disables the
+ // behavior, so you must set it manually to use it.
+ // Use case: in MSSQL, it is convenient to separate commands by GO statements like in
+ // SQL Query Analyzer.
+ LineSeparator = ""
+)
+
+func errNoTerminator() error {
+ if len(LineSeparator) == 0 {
+ return errors.New(`ERROR: The last statement must be ended by a semicolon or '-- +migrate StatementEnd' marker.
+ See https://github.com/rubenv/sql-migrate for details.`)
+ }
+
+ return fmt.Errorf(`ERROR: The last statement must be ended by a semicolon, a line whose contents are %q, or '-- +migrate StatementEnd' marker.
+ See https://github.com/rubenv/sql-migrate for details.`, LineSeparator)
+}
+
+// endsWithSemicolon reports whether the line ends with a
+// statement-terminating semicolon, ignoring any trailing
+// double-dash comment.
+func endsWithSemicolon(line string) bool {
+ prev := ""
+ scanner := bufio.NewScanner(strings.NewReader(line))
+ scanner.Split(bufio.ScanWords)
+
+ for scanner.Scan() {
+ word := scanner.Text()
+ if strings.HasPrefix(word, "--") {
+ break
+ }
+ prev = word
+ }
+
+ return strings.HasSuffix(prev, ";")
+}
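Because the function scans word by word and stops at the first `--`, a trailing comment does not hide (or fake) the terminator. A standalone copy of the same logic demonstrates this:

```go
package main

import (
	"bufio"
	"fmt"
	"strings"
)

// endsWithSemicolon (same logic as above): scan the line word by word,
// stop at the first "--" comment marker, and check whether the last
// word before it ends with ";".
func endsWithSemicolon(line string) bool {
	prev := ""
	scanner := bufio.NewScanner(strings.NewReader(line))
	scanner.Split(bufio.ScanWords)
	for scanner.Scan() {
		word := scanner.Text()
		if strings.HasPrefix(word, "--") {
			break
		}
		prev = word
	}
	return strings.HasSuffix(prev, ";")
}

func main() {
	fmt.Println(endsWithSemicolon("DROP TABLE foo; -- cleanup")) // true
	fmt.Println(endsWithSemicolon("DROP TABLE foo -- not yet;")) // false
}
```

Note that a semicolon inside the trailing comment (second example) does not count as a terminator, which is exactly the behavior the word scan buys over a plain suffix check.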
+
+type migrationDirection int
+
+const (
+ directionNone migrationDirection = iota
+ directionUp
+ directionDown
+)
+
+type migrateCommand struct {
+ Command string
+ Options []string
+}
+
+func (c *migrateCommand) HasOption(opt string) bool {
+ for _, specifiedOption := range c.Options {
+ if specifiedOption == opt {
+ return true
+ }
+ }
+
+ return false
+}
+
+func parseCommand(line string) (*migrateCommand, error) {
+ cmd := &migrateCommand{}
+
+ if !strings.HasPrefix(line, sqlCmdPrefix) {
+ return nil, errors.New("ERROR: not a sql-migrate command")
+ }
+
+ fields := strings.Fields(line[len(sqlCmdPrefix):])
+ if len(fields) == 0 {
+ return nil, errors.New(`ERROR: incomplete migration command`)
+ }
+
+ cmd.Command = fields[0]
+ cmd.Options = fields[1:]
+
+ return cmd, nil
+}
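The command grammar is simply the `-- +migrate ` prefix, one command word, then zero or more options. A standalone copy of the parsing logic (returning plain values instead of the `migrateCommand` struct, for brevity):

```go
package main

import (
	"errors"
	"fmt"
	"strings"
)

const sqlCmdPrefix = "-- +migrate "

// parseCommand (same logic as above): strip the prefix, then the first
// field is the command and the remaining fields are its options.
func parseCommand(line string) (cmd string, opts []string, err error) {
	if !strings.HasPrefix(line, sqlCmdPrefix) {
		return "", nil, errors.New("not a sql-migrate command")
	}
	fields := strings.Fields(line[len(sqlCmdPrefix):])
	if len(fields) == 0 {
		return "", nil, errors.New("incomplete migration command")
	}
	return fields[0], fields[1:], nil
}

func main() {
	cmd, opts, _ := parseCommand("-- +migrate Up notransaction")
	fmt.Println(cmd, opts) // Up [notransaction]
}
```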
+
+// Split the given sql script into individual statements.
+//
+// The base case is to simply split on semicolons, as these
+// naturally terminate a statement.
+//
+// However, more complex cases like pl/pgsql can have semicolons
+// within a statement. For these cases, we provide the explicit annotations
+// 'StatementBegin' and 'StatementEnd' to allow the script to
+// tell us to ignore semicolons.
+func ParseMigration(r io.ReadSeeker) (*ParsedMigration, error) {
+ p := &ParsedMigration{}
+
+ _, err := r.Seek(0, 0)
+ if err != nil {
+ return nil, err
+ }
+
+ var buf bytes.Buffer
+ scanner := bufio.NewScanner(r)
+ scanner.Buffer(make([]byte, 0, 64*1024), 1024*1024)
+
+ statementEnded := false
+ ignoreSemicolons := false
+ currentDirection := directionNone
+
+ for scanner.Scan() {
+ line := scanner.Text()
+ // Ignore comment lines, except those beginning with '-- +'.
+ if strings.HasPrefix(line, "-- ") && !strings.HasPrefix(line, "-- +") {
+ continue
+ }
+
+ // handle any migrate-specific commands
+ if strings.HasPrefix(line, sqlCmdPrefix) {
+ cmd, err := parseCommand(line)
+ if err != nil {
+ return nil, err
+ }
+
+ switch cmd.Command {
+ case "Up":
+ if len(strings.TrimSpace(buf.String())) > 0 {
+ return nil, errNoTerminator()
+ }
+ currentDirection = directionUp
+ if cmd.HasOption(optionNoTransaction) {
+ p.DisableTransactionUp = true
+ }
+
+ case "Down":
+ if len(strings.TrimSpace(buf.String())) > 0 {
+ return nil, errNoTerminator()
+ }
+ currentDirection = directionDown
+ if cmd.HasOption(optionNoTransaction) {
+ p.DisableTransactionDown = true
+ }
+
+ case "StatementBegin":
+ if currentDirection != directionNone {
+ ignoreSemicolons = true
+ }
+
+ case "StatementEnd":
+ if currentDirection != directionNone {
+ statementEnded = ignoreSemicolons
+ ignoreSemicolons = false
+ }
+ }
+ }
+
+ if currentDirection == directionNone {
+ continue
+ }
+
+ isLineSeparator := !ignoreSemicolons && len(LineSeparator) > 0 && line == LineSeparator
+
+ if !isLineSeparator && !strings.HasPrefix(line, "-- +") {
+ if _, err := buf.WriteString(line + "\n"); err != nil {
+ return nil, err
+ }
+ }
+
+ // Wrap up the two supported cases: 1) basic statements terminated by a
+ // semicolon (or by the configured line separator); 2) statement blocks
+ // delimited by StatementBegin/StatementEnd. A semicolon inside such a
+ // block does not conclude the statement.
+ if (!ignoreSemicolons && (endsWithSemicolon(line) || isLineSeparator)) || statementEnded {
+ statementEnded = false
+ switch currentDirection {
+ case directionUp:
+ p.UpStatements = append(p.UpStatements, buf.String())
+
+ case directionDown:
+ p.DownStatements = append(p.DownStatements, buf.String())
+
+ default:
+ panic("impossible state")
+ }
+
+ buf.Reset()
+ }
+ }
+
+ if err := scanner.Err(); err != nil {
+ return nil, err
+ }
+
+ // diagnose likely migration script errors
+ if ignoreSemicolons {
+ return nil, errors.New("ERROR: saw '-- +migrate StatementBegin' with no matching '-- +migrate StatementEnd'")
+ }
+
+ if currentDirection == directionNone {
+ return nil, errors.New(`ERROR: no Up/Down annotations found, so no statements were executed.
+ See https://github.com/rubenv/sql-migrate for details.`)
+ }
+
+ // Allow a trailing comment with no SQL statement. Example:
+ // -- +migrate Down
+ // -- nothing to downgrade!
+ if len(strings.TrimSpace(buf.String())) > 0 && !strings.HasPrefix(buf.String(), "-- +") {
+ return nil, errNoTerminator()
+ }
+
+ return p, nil
+}