https://about.gitlab.com/update/#centos-7

I wanted to upgrade gitlab-ee, but the instance is currently running 10.4.x,
so it cannot be upgraded straight to gitlab-ee 11.9.x.

You first have to upgrade to an intermediate 10.8.x release, and only then move on to the 11.x series.

https://packages.gitlab.com/gitlab/gitlab-ee
The 10.8.x packages can be found on that page.
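If you would rather check from the server itself, yum can also list every version available in the already-configured gitlab_gitlab-ee repository (a quick sketch, assuming the repo is set up on the machine as it is here):

yum --showduplicates list gitlab-ee | grep 10.8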

sudo yum install gitlab-ee-10.8.7-ee.0.el7.x86_64
sudo yum install -y gitlab-ee

Proceed with the two commands above, in that order.

[root@dev ~]# yum list | grep gitlab-ee
gitlab-ee.x86_64 10.4.4-ee.0.el7 @gitlab_gitlab-ee
gitlab-ee.x86_64 11.9.6-ee.0.el7 gitlab_gitlab-ee

[root@dev ~]# sudo gitlab-rake gitlab:backup:create STRATEGY=copy
Dumping database …
Dumping PostgreSQL database gitlabhq_production … [DONE]
done
Dumping repositories …
* arduino/arduino … [DONE]
* arduino/arduino.wiki … [SKIPPED]

-----------------------------------------------------
Backup in progress (remaining repository output omitted)
-----------------------------------------------------

done
Dumping uploads …
done
Dumping builds …
done
Dumping artifacts …
done
Dumping pages …
done
Dumping lfs objects …
done
Dumping container registry images …
[DISABLED]
Creating backup archive: 1555383625_2019_04_16_10.4.4-ee_gitlab_backup.tar … done
Uploading backup archive to remote storage … skipped
Deleting tmp directories … done
done
done
done
done
done
done
done
Deleting old backups … done. (1 removed)
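Before touching the package, it is worth confirming that the backup archive actually landed on disk. For a default Omnibus install the archives go to /var/opt/gitlab/backups (adjust the path if backup_path was changed in gitlab.rb):

ls -lh /var/opt/gitlab/backups/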

Attempting the gitlab-ee upgrade:
[root@withkdev ~]# sudo yum install -y gitlab-ee
Loaded plugins: fastestmirror, langpacks
Loading mirror speeds from cached hostfile
epel/x86_64/metalink | 8.5 kB 00:00
* base: centos.mirror.moack.net
* epel: mirror.premi.st
* extras: centos.mirror.moack.net
* remi-php71: ftp.riken.jp
* remi-php73: ftp.riken.jp
* remi-safe: ftp.riken.jp
* updates: centos.mirror.moack.net
base | 3.6 kB 00:00
epel | 4.7 kB 00:00
extras | 3.4 kB 00:00
gitlab_gitlab-ee/x86_64/signature | 836 B 00:00
gitlab_gitlab-ee/x86_64/signature | 1.0 kB 00:00
gitlab_gitlab-ee-source/signature | 836 B 00:00
gitlab_gitlab-ee-source/signature | 951 B 00:00
mysql-connectors-community | 2.5 kB 00:00
mysql-tools-community | 2.5 kB 00:00
mysql56-community | 2.5 kB 00:00
nginx | 2.9 kB 00:00
remi-php71 | 3.0 kB 00:00
remi-php73 | 3.0 kB 00:00
remi-safe | 3.0 kB 00:00
updates | 3.4 kB 00:00
(1/7): epel/x86_64/updateinfo | 987 kB 00:00
(2/7): epel/x86_64/primary_db | 6.7 MB 00:00
(3/7): updates/7/x86_64/primary_db | 3.4 MB 00:00
(4/7): nginx/primary_db | 139 kB 00:00
(5/7): remi-php73/primary_db | 195 kB 00:00
(6/7): remi-php71/primary_db | 240 kB 00:00
(7/7): remi-safe/primary_db | 1.4 MB 00:00
gitlab_gitlab-ee/x86_64/primary | 1.8 MB 00:00
gitlab_gitlab-ee
Resolving Dependencies
--> Running transaction check
---> Package gitlab-ee.x86_64 0:10.4.4-ee.0.el7 will be updated
---> Package gitlab-ee.x86_64 0:11.9.8-ee.0.el7 will be an update
--> Finished Dependency Resolution

Dependencies Resolved

==================================================================================================================
Package Arch Version Repository
==================================================================================================================
Updating:
gitlab-ee x86_64 11.9.8-ee.0.el7 gitlab_gitlab-ee

Transaction Summary
==================================================================================================================
Upgrade 1 Package

Total download size: 611 M
Downloading packages:
No Presto metadata available for gitlab_gitlab-ee
gitlab-ee-11.9.8-ee.0.el7.x86_64.rpm | 611 MB 00:00
Running transaction check
Running transaction test
Transaction test succeeded
Running transaction
gitlab preinstall: It seems you are upgrading from 10.x version series
gitlab preinstall: to 11.x series. It is recommended to upgrade
gitlab preinstall: to the last minor version in a major version series first before
gitlab preinstall: jumping to the next major version.
gitlab preinstall: Please follow the upgrade documentation at https://docs.gitlab.com/ee/policy/maintenance.html#upgrade-recommendations
gitlab preinstall: and upgrade to 10.8 first.
error: %pre(gitlab-ee-11.9.8-ee.0.el7.x86_64) scriptlet failed, exit status 1
Error in PREIN scriptlet in rpm package gitlab-ee-11.9.8-ee.0.el7.x86_64
gitlab-ee-10.4.4-ee.0.el7.x86_64 was supposed to be removed but is not!
Verifying : gitlab-ee-10.4.4-ee.0.el7.x86_64
Verifying : gitlab-ee-11.9.8-ee.0.el7.x86_64

Failed:
gitlab-ee.x86_64 0:10.4.4-ee.0.el7 gitlab-ee.x86_64 0:11.9.8-ee.0.el7

Complete!

An error occurred: it will not upgrade straight from 10.4.4.
The preinstall script tells us to install 10.8 first.
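The failed %pre scriptlet means the new rpm never actually replaced the old one, so nothing should be broken. If you want to be sure the failed transaction did not leave duplicate gitlab-ee entries in the rpm database, yum-utils can check (an optional extra step, not something the log above required):

package-cleanup --dupes | grep gitlab-ee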

[root@withkdev ~]# sudo yum install gitlab-ee-10.8.7-ee.0.el7.x86_64
Loaded plugins: fastestmirror, langpacks
Loading mirror speeds from cached hostfile
* base: centos.mirror.moack.net
* epel: mirror.premi.st
* extras: centos.mirror.moack.net
* remi-php71: ftp.riken.jp
* remi-php73: ftp.riken.jp
* remi-safe: ftp.riken.jp
* updates: centos.mirror.moack.net
gitlab_gitlab-ee/x86_64/signature | 836 B 00:00
gitlab_gitlab-ee/x86_64/signature | 1.0 kB 00:00
gitlab_gitlab-ee-source/signature | 836 B 00:00
gitlab_gitlab-ee-source/signature | 951 B 00:00
Resolving Dependencies
--> Running transaction check
---> Package gitlab-ee.x86_64 0:10.4.4-ee.0.el7 will be updated
---> Package gitlab-ee.x86_64 0:10.8.7-ee.0.el7 will be an update
--> Finished Dependency Resolution

Dependencies Resolved

==================================================================================================================
Package Arch Version Repository
==================================================================================================================
Updating:
gitlab-ee x86_64 10.8.7-ee.0.el7 gitlab_gitlab-ee

Transaction Summary
==================================================================================================================
Upgrade 1 Package

Total download size: 455 M
Is this ok [y/d/N]: y
Downloading packages:
No Presto metadata available for gitlab_gitlab-ee
gitlab-ee-10.8.7-ee.0.el7.x86_64.rpm | 455 MB 00:00
Running transaction check
Running transaction test
Transaction test succeeded
Running transaction
gitlab preinstall: Automatically backing up only the GitLab SQL database (excluding everything else!)
Dumping database …
Dumping PostgreSQL database gitlabhq_production … [DONE]
done
Dumping repositories …
[SKIPPED]
Dumping uploads …
[SKIPPED]
Dumping builds …
[SKIPPED]
Dumping artifacts …
[SKIPPED]
Dumping pages …
[SKIPPED]
Dumping lfs objects …
[SKIPPED]
Dumping container registry images …
[DISABLED]
Creating backup archive: 1555384363_2019_04_16_10.4.4-ee_gitlab_backup.tar … done
Uploading backup archive to remote storage … skipped
Deleting tmp directories … done
done
Deleting old backups … done. (0 removed)
Updating : gitlab-ee-10.8.7-ee.0.el7.x86_64
Cleanup : gitlab-ee-10.4.4-ee.0.el7.x86_64
Checking PostgreSQL executables:Starting Chef Client, version 13.6.4
resolving cookbooks for run list: [“gitlab::config”, “postgresql::bin”]
Synchronizing Cookbooks:
– gitlab (0.0.1)
– postgresql (0.1.0)
– package (0.1.0)
– mattermost (0.1.0)
– registry (0.1.0)
– consul (0.0.0)
– gitaly (0.1.0)
– letsencrypt (0.1.0)
– nginx (0.1.0)
– runit (0.14.2)
– acme (3.1.0)
– crond (0.1.0)
– compat_resource (12.19.0)
Installing Cookbook Gems:
Compiling Cookbooks…
Converging 1 resources
Recipe: postgresql::bin
* ruby_block[Link postgresql bin files to the correct version] action run (skipped due to only_if)

Running handlers:
Running handlers complete
Chef Client finished, 0/1 resources updated in 33 seconds
Checking PostgreSQL executables: OK
Shutting down all GitLab services except those needed for migrations
ok: down: gitlab-monitor: 0s, normally up
ok: down: gitlab-workhorse: 1s, normally up
ok: down: logrotate: 0s, normally up
ok: down: nginx: 0s, normally up
ok: down: node-exporter: 1s, normally up
ok: down: postgres-exporter: 0s, normally up
ok: down: prometheus: 0s, normally up
ok: down: redis-exporter: 0s, normally up
ok: down: sidekiq: 0s, normally up
ok: down: unicorn: 0s, normally up
Ensuring the required services are running
ok: run: postgresql: (pid 4393) 224929s
ok: run: redis: (pid 4395) 224929s
ok: run: gitaly: (pid 4482) 224929s
run: postgresql: (pid 4393) 224930s; run: log: (pid 4387) 224930s
run: redis: (pid 4395) 224930s; run: log: (pid 4389) 224930s
run: gitaly: (pid 4482) 224930s; run: log: (pid 4473) 224930s
Reconfiguring GitLab to apply migrations
Starting Chef Client, version 13.6.4
resolving cookbooks for run list: [“gitlab-ee”]
Synchronizing Cookbooks:
– package (0.1.0)
– gitlab (0.0.1)
– consul (0.0.0)
– runit (0.14.2)
– repmgr (0.1.0)
– postgresql (0.1.0)
– registry (0.1.0)
– mattermost (0.1.0)
– gitaly (0.1.0)
– letsencrypt (0.1.0)
– nginx (0.1.0)
– gitlab-ee (0.0.1)
– acme (3.1.0)
– crond (0.1.0)
– compat_resource (12.19.0)
Installing Cookbook Gems:
Compiling Cookbooks…
Recipe: gitlab::default
* directory[/etc/gitlab] action create (up to date)
Converging 471 resources
* directory[/etc/gitlab] action create (up to date)
* directory[Create /var/opt/gitlab] action create (up to date)
* directory[/opt/gitlab/embedded/etc] action create (up to date)
* template[/opt/gitlab/embedded/etc/gitconfig] action create (up to date)
Recipe: gitlab::web-server
* account[Webserver user and group] action create
* group[Webserver user and group] action create (up to date)
* linux_user[Webserver user and group] action create (up to date)
(up to date)
Recipe: gitlab::users
* directory[/var/opt/gitlab] action create (up to date)
* account[GitLab user and group] action create
* group[GitLab user and group] action create (up to date)
* linux_user[GitLab user and group] action create (up to date)
(up to date)
* template[/var/opt/gitlab/.gitconfig] action create (up to date)
Recipe: gitlab::gitlab-shell
* storage_directory[/var/opt/gitlab/.ssh] action create
* ruby_block[directory resource: /var/opt/gitlab/.ssh] action run (skipped due to not_if)
(up to date)
* directory[/var/log/gitlab/gitlab-shell/] action create (up to date)
* directory[/var/opt/gitlab/gitlab-shell] action create (up to date)
* templatesymlink[Create a config.yml and create a symlink to Rails root] action create
* template[/var/opt/gitlab/gitlab-shell/config.yml] action create
– update content in file /var/opt/gitlab/gitlab-shell/config.yml from 3c2596 to 681cf5
— /var/opt/gitlab/gitlab-shell/config.yml 2018-03-09 14:15:17.240738823 +0900
+++ /var/opt/gitlab/gitlab-shell/.chef-config20190416-23896-1uopiqr.yml 2019-04-16 12:15:56.533842720 +090
@@ -34,6 +34,7 @@
# Log level. INFO by default
log_level:

+
# Audit usernames.
# Set to true to see real usernames in the logs instead of key ids, which is easier to follow, but
# incurs an extra API call on every gitlab-shell command.
– restore selinux security context
* link[Link /opt/gitlab/embedded/service/gitlab-shell/config.yml to /var/opt/gitlab/gitlab-shell/config.yml] a

* link[/opt/gitlab/embedded/service/gitlab-shell/.gitlab_shell_secret] action create (up to date)
* execute[/opt/gitlab/embedded/service/gitlab-shell/bin/gitlab-keys check-permissions] action run
– execute /opt/gitlab/embedded/service/gitlab-shell/bin/gitlab-keys check-permissions
* bash[Set proper security context on ssh files for selinux] action run
[execute] restorecon reset /var/opt/gitlab/gitlab-shell/config.yml context unconfined_u:object_r:var_t:s0->unc
restorecon reset /var/opt/gitlab/gitlab-rails/etc/gitlab_shell_secret context unconfined_u:object_r:
– execute “bash” “/tmp/chef-script20190416-23896-nij721”
Recipe: gitlab::gitlab-rails
* storage_directory[/var/opt/gitlab/git-data] action create
* ruby_block[directory resource: /var/opt/gitlab/git-data] action run (skipped due to not_if)
(up to date)
* storage_directory[/var/opt/gitlab/git-data/repositories] action create
* ruby_block[directory resource: /var/opt/gitlab/git-data/repositories] action run (skipped due to not_if)
(up to date)
* directory[/var/log/gitlab] action create (up to date)
* storage_directory[/var/opt/gitlab/gitlab-rails/shared] action create
* ruby_block[directory resource: /var/opt/gitlab/gitlab-rails/shared] action run (skipped due to not_if)
(up to date)
* storage_directory[/var/opt/gitlab/gitlab-rails/shared/artifacts] action create
* ruby_block[directory resource: /var/opt/gitlab/gitlab-rails/shared/artifacts] action run (skipped due to not
(up to date)
* storage_directory[/var/opt/gitlab/gitlab-rails/shared/lfs-objects] action create
* ruby_block[directory resource: /var/opt/gitlab/gitlab-rails/shared/lfs-objects] action run (skipped due to n
(up to date)
* storage_directory[/var/opt/gitlab/gitlab-rails/uploads] action create
* ruby_block[directory resource: /var/opt/gitlab/gitlab-rails/uploads] action run (skipped due to not_if)
(up to date)
* storage_directory[/var/opt/gitlab/gitlab-ci/builds] action create
* ruby_block[directory resource: /var/opt/gitlab/gitlab-ci/builds] action run (skipped due to not_if)
(up to date)
* storage_directory[/var/opt/gitlab/gitlab-rails/shared/cache] action create
* ruby_block[directory resource: /var/opt/gitlab/gitlab-rails/shared/cache] action run
– execute the ruby block directory resource: /var/opt/gitlab/gitlab-rails/shared/cache

* storage_directory[/var/opt/gitlab/gitlab-rails/shared/tmp] action create
* ruby_block[directory resource: /var/opt/gitlab/gitlab-rails/shared/tmp] action run
– execute the ruby block directory resource: /var/opt/gitlab/gitlab-rails/shared/tmp

* storage_directory[/var/opt/gitlab/gitlab-rails/shared/pages] action create
* ruby_block[directory resource: /var/opt/gitlab/gitlab-rails/shared/pages] action run (skipped due to not_if)
(up to date)
* directory[create /var/opt/gitlab/gitlab-rails/etc] action create (up to date)
* directory[create /opt/gitlab/etc/gitlab-rails] action create (up to date)
* directory[create /var/opt/gitlab/gitlab-rails/working] action create (up to date)
* directory[create /var/opt/gitlab/gitlab-rails/tmp] action create (up to date)
* directory[create /var/opt/gitlab/gitlab-rails/upgrade-status] action create (up to date)
* directory[create /var/log/gitlab/gitlab-rails] action create (up to date)
* storage_directory[/var/opt/gitlab/backups] action create
* ruby_block[directory resource: /var/opt/gitlab/backups] action run (skipped due to not_if)
(up to date)
* directory[/var/opt/gitlab/gitlab-rails] action create (up to date)
* directory[/var/opt/gitlab/gitlab-ci] action create (up to date)
* file[/var/opt/gitlab/gitlab-rails/etc/gitlab-registry.key] action create (skipped due to only_if)
* template[/opt/gitlab/etc/gitlab-rails/gitlab-rails-rc] action create (up to date)
* file[/opt/gitlab/embedded/service/gitlab-rails/.secret] action delete (up to date)
* file[/var/opt/gitlab/gitlab-rails/etc/secret] action delete (up to date)
* templatesymlink[Create a database.yml and create a symlink to Rails root] action create
* template[/var/opt/gitlab/gitlab-rails/etc/database.yml] action create
– update content in file /var/opt/gitlab/gitlab-rails/etc/database.yml from d6bb37 to 00a743
— /var/opt/gitlab/gitlab-rails/etc/database.yml 2018-02-20 13:39:49.675881360 +0900
+++ /var/opt/gitlab/gitlab-rails/etc/.chef-database20190416-23896-17k3v1g.yml 2019-04-16 12:16:10.841420
@@ -19,4 +19,5 @@
load_balancing: {“hosts”:[]}
prepared_statements: false
statements_limit: 1000
+ fdw:
– restore selinux security context
* link[Link /opt/gitlab/embedded/service/gitlab-rails/config/database.yml to /var/opt/gitlab/gitlab-rails/etc/

* templatesymlink[Create a secrets.yml and create a symlink to Rails root] action create
* template[/var/opt/gitlab/gitlab-rails/etc/secrets.yml] action create (up to date)
* link[Link /opt/gitlab/embedded/service/gitlab-rails/config/secrets.yml to /var/opt/gitlab/gitlab-rails/etc/s
(up to date)
* templatesymlink[Create a resque.yml and create a symlink to Rails root] action create
* template[/var/opt/gitlab/gitlab-rails/etc/resque.yml] action create (up to date)
* link[Link /opt/gitlab/embedded/service/gitlab-rails/config/resque.yml to /var/opt/gitlab/gitlab-rails/etc/re
(up to date)
* templatesymlink[Create a redis.cache.yml and create a symlink to Rails root] action create (skipped due to not
* templatesymlink[Create a redis.queues.yml and create a symlink to Rails root] action create (skipped due to no
* templatesymlink[Create a redis.shared_state.yml and create a symlink to Rails root] action create (skipped due
* templatesymlink[Create a aws.yml and create a symlink to Rails root] action delete
* file[/var/opt/gitlab/gitlab-rails/etc/aws.yml] action delete (up to date)
* link[/opt/gitlab/embedded/service/gitlab-rails/config/aws.yml] action delete (up to date)
(up to date)
* templatesymlink[Create a smtp_settings.rb and create a symlink to Rails root] action delete
* file[/var/opt/gitlab/gitlab-rails/etc/smtp_settings.rb] action delete (up to date)
* link[/opt/gitlab/embedded/service/gitlab-rails/config/initializers/smtp_settings.rb] action delete (up to da
(up to date)
* templatesymlink[Create a gitlab.yml and create a symlink to Rails root] action create
* template[/var/opt/gitlab/gitlab-rails/etc/gitlab.yml] action create
– update content in file /var/opt/gitlab/gitlab-rails/etc/gitlab.yml from 39e800 to ff91b8
— /var/opt/gitlab/gitlab-rails/etc/gitlab.yml 2018-11-21 12:11:57.119835009 +0900
+++ /var/opt/gitlab/gitlab-rails/etc/.chef-gitlab20190416-23896-ttas69.yml 2019-04-16 12:16:10.897418
@@ -122,7 +122,9 @@
path: /var/opt/gitlab/gitlab-rails/shared/artifacts
object_store:
enabled: false
+ direct_upload: false
background_upload: true
+ proxy_download: false
remote_directory: “artifacts”
connection: {}

@@ -133,10 +135,24 @@
storage_path: /var/opt/gitlab/gitlab-rails/shared/lfs-objects
object_store:
enabled: false
+ direct_upload: false
background_upload: true
+ proxy_download: false
remote_directory: “lfs-objects”
connection: {}

+ ## Uploads
+ uploads:
+ # The location where uploads objects are stored (default: public/).
+ storage_path: /opt/gitlab/embedded/service/gitlab-rails/public
+ object_store:
+ enabled: false
+ direct_upload: false
+ background_upload: true
+ proxy_download: false
+ remote_directory: “uploads”
+ connection: {}
+
## Container Registry
registry:
enabled: false
@@ -169,6 +185,10 @@
plain_url: # default: http://www.gravatar.com/avatar/%{hash}?s=%{size}&d=identicon
ssl_url: # default: https://secure.gravatar.com/avatar/%{hash}?s=%{size}&d=identicon

+ ## Sidekiq
+ sidekiq:
+ log_format: default
+
## Auxiliary jobs
# Periodically executed jobs, to self-heal GitLab, do external synchronizations, etc.
# Please read here for more information: https://github.com/ondrejbartas/sidekiq-cron#adding-cron-job
@@ -194,6 +214,10 @@
repository_archive_cache_worker:
cron:

+ # Verify custom GitLab Pages domains
+ pages_domain_verification_cron_worker:
+ cron:
+
##
# GitLab EE only jobs:

@@ -212,6 +236,15 @@
# GitLab Geo file download dispatch worker
# NOTE: This will only take effect if Geo is enabled

+ # GitLab Geo repository verification primary batch worker
+ # NOTE: This will only take effect if Geo is enabled
+
+ # GitLab Geo repository verification secondary scheduler worker
+ # NOTE: This will only take effect if Geo is enabled
+
+ # GitLab Geo migrated local files clean up worker
+ # NOTE: This will only take effect if Geo is enabled (secondary nodes only)
+
#
# 2. GitLab CI settings
# ==========================
@@ -246,6 +279,7 @@
password:
active_directory:
allow_username_or_email_login:
+ lowercase_usernames:
base:
user_filter:

@@ -368,9 +402,9 @@
## Backup settings
backup:
path: “/var/opt/gitlab/backups” # Relative paths are relative to Rails.root (default: tmp/backups/)
– archive_permissions: # Permissions for the resulting backup.tar file (default: 0600)
– keep_time: 259200 # default: 0 (forever) (in seconds)
– pg_schema: # default: nil, it means that all schemas will be backed up
+ archive_permissions: # Permissions for the resulting backup.tar file (default: 0600)
+ keep_time: # default: 0 (forever) (in seconds)
+ pg_schema: # default: nil, it means that all schemas will be backed up
upload:
# Fog storage connection settings, see http://fog.io/storage/ .
connection:
– restore selinux security context
* link[Link /opt/gitlab/embedded/service/gitlab-rails/config/gitlab.yml to /var/opt/gitlab/gitlab-rails/etc/gi

* templatesymlink[Create a rack_attack.rb and create a symlink to Rails root] action create
* template[/var/opt/gitlab/gitlab-rails/etc/rack_attack.rb] action create (up to date)
* link[Link /opt/gitlab/embedded/service/gitlab-rails/config/initializers/rack_attack.rb to /var/opt/gitlab/gi
(up to date)
* templatesymlink[Create a gitlab_workhorse_secret and create a symlink to Rails root] action create
* template[/var/opt/gitlab/gitlab-rails/etc/gitlab_workhorse_secret] action create (up to date)
* link[Link /opt/gitlab/embedded/service/gitlab-rails/.gitlab_workhorse_secret to /var/opt/gitlab/gitlab-rails
(up to date)
* templatesymlink[Create a gitlab_shell_secret and create a symlink to Rails root] action create
* template[/var/opt/gitlab/gitlab-rails/etc/gitlab_shell_secret] action create (up to date)
* link[Link /opt/gitlab/embedded/service/gitlab-rails/.gitlab_shell_secret to /var/opt/gitlab/gitlab-rails/etc
(up to date)
* link[/opt/gitlab/embedded/service/gitlab-rails/config/initializers/relative_url.rb] action delete (up to date)
* file[/var/opt/gitlab/gitlab-rails/etc/relative_url.rb] action delete (up to date)
* env_dir[/opt/gitlab/etc/gitlab-rails/env] action create
* directory[/opt/gitlab/etc/gitlab-rails/env] action create (up to date)
* file[/opt/gitlab/etc/gitlab-rails/env/HOME] action create (up to date)
* file[/opt/gitlab/etc/gitlab-rails/env/RAILS_ENV] action create (up to date)
* file[/opt/gitlab/etc/gitlab-rails/env/LD_PRELOAD] action create (up to date)
* file[/opt/gitlab/etc/gitlab-rails/env/SIDEKIQ_MEMORY_KILLER_MAX_RSS] action create (up to date)
* file[/opt/gitlab/etc/gitlab-rails/env/BUNDLE_GEMFILE] action create (up to date)
* file[/opt/gitlab/etc/gitlab-rails/env/PATH] action create (up to date)
* file[/opt/gitlab/etc/gitlab-rails/env/ICU_DATA] action create (up to date)
* file[/opt/gitlab/etc/gitlab-rails/env/PYTHONPATH] action create (up to date)
* file[/opt/gitlab/etc/gitlab-rails/env/EXECJS_RUNTIME] action create (up to date)
(up to date)
* link[/opt/gitlab/embedded/service/gitlab-rails/tmp] action create (up to date)
* link[/opt/gitlab/embedded/service/gitlab-rails/public/uploads] action create (up to date)
* link[/opt/gitlab/embedded/service/gitlab-rails/log] action create (up to date)
* link[/var/log/gitlab/gitlab-rails/sidekiq.log] action create (skipped due to not_if)
* file[/opt/gitlab/embedded/service/gitlab-rails/db/schema.rb] action create
– change owner from ‘root’ to ‘git’
– restore selinux security context
* remote_file[/var/opt/gitlab/gitlab-rails/VERSION] action create
– update content in file /var/opt/gitlab/gitlab-rails/VERSION from fdd432 to 796bef
— /var/opt/gitlab/gitlab-rails/VERSION 2018-02-20 13:39:50.637857614 +0900
+++ /var/opt/gitlab/gitlab-rails/.chef-VERSION20190416-23896-9onjw3 2019-04-16 12:16:11.021415249 +0900
@@ -1,2 +1,2 @@
-10.4.4-ee
+10.8.7-ee
– restore selinux security context
* remote_file[/var/opt/gitlab/gitlab-rails/REVISION] action create
– update content in file /var/opt/gitlab/gitlab-rails/REVISION from 2e1846 to 657e64
— /var/opt/gitlab/gitlab-rails/REVISION 2018-02-20 13:39:50.687856380 +0900
+++ /var/opt/gitlab/gitlab-rails/.chef-REVISION20190416-23896-2chnm9 2019-04-16 12:16:11.049414423 +090
@@ -1,2 +1,2 @@
-e8a592b
+075705a
– restore selinux security context
* file[/var/opt/gitlab/gitlab-rails/RUBY_VERSION] action create
– update content in file /var/opt/gitlab/gitlab-rails/RUBY_VERSION from 05b5bf to 3dd12e
— /var/opt/gitlab/gitlab-rails/RUBY_VERSION 2018-02-20 13:39:50.724855467 +0900
+++ /var/opt/gitlab/gitlab-rails/.chef-RUBY_VERSION20190416-23896-12qbcax 2019-04-16 12:16:11.081413479 +090
@@ -1,2 +1,2 @@
-ruby 2.3.6p384 (2017-12-14 revision 61254) [x86_64-linux]
+ruby 2.3.7p456 (2018-03-28 revision 63024) [x86_64-linux]
– restore selinux security context
* execute[chown -R root:root /opt/gitlab/embedded/service/gitlab-rails/public] action run
– execute chown -R root:root /opt/gitlab/embedded/service/gitlab-rails/public
* execute[clear the gitlab-rails cache] action nothing (skipped due to action :nothing)
* file[/var/opt/gitlab/gitlab-rails/config.ru] action delete (up to date)
Recipe: gitlab::selinux
* execute[semodule -i /opt/gitlab/embedded/selinux/rhel/7/gitlab-7.2.0-ssh-keygen.pp] action run (skipped due to
* execute[semodule -i /opt/gitlab/embedded/selinux/rhel/7/gitlab-10.5.0-ssh-authorized-keys.pp] action run
– execute semodule -i /opt/gitlab/embedded/selinux/rhel/7/gitlab-10.5.0-ssh-authorized-keys.pp
Recipe: gitlab::add_trusted_certs
* directory[/etc/gitlab/trusted-certs] action create (up to date)
* directory[/opt/gitlab/embedded/ssl/certs] action create (up to date)
* file[/opt/gitlab/embedded/ssl/certs/README] action create (up to date)
* ruby_block[Move existing certs and link to /opt/gitlab/embedded/ssl/certs] action run

* Moving existing certificates found in /opt/gitlab/embedded/ssl/certs

* Symlinking existing certificates found in /etc/gitlab/trusted-certs

– execute the ruby block Move existing certs and link to /opt/gitlab/embedded/ssl/certs
Recipe: gitlab::default
* service[create a temporary unicorn service] action nothing (skipped due to action :nothing)
* service[create a temporary sidekiq service] action nothing (skipped due to action :nothing)
* service[create a temporary mailroom service] action nothing (skipped due to action :nothing)
Recipe: runit::systemd
* directory[/usr/lib/systemd/system] action create (up to date)
* cookbook_file[/usr/lib/systemd/system/gitlab-runsvdir.service] action create (up to date)
* file[/etc/systemd/system/default.target.wants/gitlab-runsvdir.service] action delete (up to date)
* execute[systemctl daemon-reload] action nothing (skipped due to action :nothing)
* execute[systemctl enable gitlab-runsvdir] action nothing (skipped due to action :nothing)
* execute[systemctl start gitlab-runsvdir] action nothing (skipped due to action :nothing)
Recipe: gitlab::redis
* account[user and group for redis] action create
* group[user and group for redis] action create (up to date)
* linux_user[user and group for redis] action create (up to date)
(up to date)
* group[Socket group] action create (up to date)
* directory[/var/opt/gitlab/redis] action create (up to date)
* directory[/var/log/gitlab/redis] action create (up to date)
* template[/var/opt/gitlab/redis/redis.conf] action create
– update content in file /var/opt/gitlab/redis/redis.conf from c1ed62 to d493be
— /var/opt/gitlab/redis/redis.conf 2018-02-20 13:40:06.409468258 +0900
+++ /var/opt/gitlab/redis/.chef-redis20190416-23896-r3uo1b.conf 2019-04-16 12:16:24.488017903 +0900
@@ -467,9 +467,9 @@
# There is no need to use both the options if you need to override just
# the port or the IP address.
#
-# slave-announce-ip 5.5.5.5
-# slave-announce-port 1234

+
+
################################## SECURITY ###################################

# Require clients to issue AUTH before processing any other
@@ -541,6 +541,7 @@
# output buffers (but this is not needed if the policy is ‘noeviction’).
#
# maxmemory
+maxmemory 0

# MAXMEMORY POLICY: how Redis will select what to remove when maxmemory
# is reached. You can select among five behaviors:
@@ -564,6 +565,7 @@
# The default is:
#
# maxmemory-policy noeviction
+maxmemory-policy noeviction

# LRU and minimal TTL algorithms are not precise algorithms but approximated
# algorithms (in order to save memory), so you can tune it for speed or
@@ -575,6 +577,7 @@
# true LRU but costs a bit more CPU. 3 is very fast but not very accurate.
#
# maxmemory-samples 5
+maxmemory-samples 5

############################## APPEND ONLY MODE ###############################

– restore selinux security context
* service[redis] action restart
– restart service service[redis]
* directory[/opt/gitlab/sv/redis] action create (up to date)
* directory[/opt/gitlab/sv/redis/log] action create (up to date)
* directory[/opt/gitlab/sv/redis/log/main] action create (up to date)
* template[/opt/gitlab/sv/redis/run] action create (up to date)
* template[/opt/gitlab/sv/redis/log/run] action create (up to date)
* template[/var/log/gitlab/redis/config] action create (up to date)
* ruby_block[reload redis svlogd configuration] action nothing (skipped due to action :nothing)
* ruby_block[restart redis svlogd configuration] action nothing (skipped due to action :nothing)
* file[/opt/gitlab/sv/redis/down] action delete (up to date)
* link[/opt/gitlab/init/redis] action create (up to date)
* link[/opt/gitlab/service/redis] action create (up to date)
* ruby_block[supervise_redis_sleep] action run (skipped due to not_if)
* directory[/opt/gitlab/sv/redis/supervise] action create (up to date)
* directory[/opt/gitlab/sv/redis/log/supervise] action create (up to date)
* file[/opt/gitlab/sv/redis/supervise/ok] action touch (skipped due to only_if)
* file[/opt/gitlab/sv/redis/log/supervise/ok] action touch (skipped due to only_if)
* file[/opt/gitlab/sv/redis/supervise/control] action touch (skipped due to only_if)
* file[/opt/gitlab/sv/redis/log/supervise/control] action touch (skipped due to only_if)
* service[redis] action nothing (skipped due to action :nothing)
Recipe: postgresql::user
* account[Postgresql user and group] action create
* group[Postgresql user and group] action create (up to date)
* linux_user[Postgresql user and group] action create (up to date)
(up to date)
Recipe: postgresql::enable
* directory[/var/opt/gitlab/postgresql] action create (up to date)
* directory[/var/opt/gitlab/postgresql/data] action create (up to date)
* directory[/var/log/gitlab/postgresql] action create (up to date)
* link[/var/opt/gitlab/postgresql/data] action create (skipped due to not_if)
* file[/var/opt/gitlab/postgresql/.profile] action create (up to date)
* sysctl[kernel.shmmax] action create
* directory[create /etc/sysctl.d for kernel.shmmax] action create (up to date)
* file[create /opt/gitlab/embedded/etc/90-omnibus-gitlab-kernel.shmmax.conf kernel.shmmax] action create (up t
* link[/etc/sysctl.d/90-omnibus-gitlab-kernel.shmmax.conf] action create (up to date)
* file[delete /etc/sysctl.d/90-postgresql.conf kernel.shmmax] action delete (skipped due to only_if)
* file[delete /etc/sysctl.d/90-unicorn.conf kernel.shmmax] action delete (skipped due to only_if)
* file[delete /opt/gitlab/embedded/etc/90-omnibus-gitlab.conf kernel.shmmax] action delete (skipped due to onl
* file[delete /etc/sysctl.d/90-omnibus-gitlab.conf kernel.shmmax] action delete (skipped due to only_if)
* execute[load sysctl conf kernel.shmmax] action nothing (skipped due to action :nothing)
(up to date)
* sysctl[kernel.shmall] action create
* directory[create /etc/sysctl.d for kernel.shmall] action create (up to date)
* file[create /opt/gitlab/embedded/etc/90-omnibus-gitlab-kernel.shmall.conf kernel.shmall] action create (up t
* link[/etc/sysctl.d/90-omnibus-gitlab-kernel.shmall.conf] action create (up to date)
* file[delete /etc/sysctl.d/90-postgresql.conf kernel.shmall] action delete (skipped due to only_if)
* file[delete /etc/sysctl.d/90-unicorn.conf kernel.shmall] action delete (skipped due to only_if)
* file[delete /opt/gitlab/embedded/etc/90-omnibus-gitlab.conf kernel.shmall] action delete (skipped due to onl
* file[delete /etc/sysctl.d/90-omnibus-gitlab.conf kernel.shmall] action delete (skipped due to only_if)
* execute[load sysctl conf kernel.shmall] action nothing (skipped due to action :nothing)
(up to date)
* sysctl[kernel.sem] action create
* directory[create /etc/sysctl.d for kernel.sem] action create (up to date)
* file[create /opt/gitlab/embedded/etc/90-omnibus-gitlab-kernel.sem.conf kernel.sem] action create (up to date
* link[/etc/sysctl.d/90-omnibus-gitlab-kernel.sem.conf] action create (up to date)
* file[delete /etc/sysctl.d/90-postgresql.conf kernel.sem] action delete (skipped due to only_if)
* file[delete /etc/sysctl.d/90-unicorn.conf kernel.sem] action delete (skipped due to only_if)
* file[delete /opt/gitlab/embedded/etc/90-omnibus-gitlab.conf kernel.sem] action delete (skipped due to only_i
* file[delete /etc/sysctl.d/90-omnibus-gitlab.conf kernel.sem] action delete (skipped due to only_if)
* execute[load sysctl conf kernel.sem] action nothing (skipped due to action :nothing)
(up to date)
* execute[/opt/gitlab/embedded/bin/initdb -D /var/opt/gitlab/postgresql/data -E UTF8] action run (skipped due to
* file[/var/opt/gitlab/postgresql/data/server.crt] action create (up to date)
* file[/var/opt/gitlab/postgresql/data/server.key] action create (up to date)
* template[/var/opt/gitlab/postgresql/data/postgresql.conf] action create (up to date)
* template[/var/opt/gitlab/postgresql/data/runtime.conf] action create
– update content in file /var/opt/gitlab/postgresql/data/runtime.conf from 643b59 to f801d2
— /var/opt/gitlab/postgresql/data/runtime.conf 2018-02-20 13:40:15.260249729 +0900
+++ /var/opt/gitlab/postgresql/data/.chef-runtime20190416-23896-tvprkl.conf 2019-04-16 12:16:25.107999610 +090
@@ -20,7 +20,7 @@

# – Archiving –
archive_command = ” # command to use to archive a logfile segment
-archive_timeout = 60 # force a logfile segment switch after this
+archive_timeout = 0 # force a logfile segment switch after this
# number of seconds; 0 disables

# – Replication
@@ -40,7 +40,7 @@
#seq_page_cost = 1.0 # measured on an arbitrary scale
random_page_cost = 2.0 # same scale as above

-effective_cache_size = 7921MB # Default 128MB
+effective_cache_size = 7920MB # Default 128MB

log_min_duration_statement = -1 # -1 is disabled, 0 logs all statements
# and their durations, > 0 logs only
@@ -72,6 +72,7 @@
log_temp_files = -1 # log temporary files equal or larger
# than the specified size in kilobytes;
# -1 disables, 0 logs all temp files
+

# – Autovacuum parameters –
autovacuum = on # Enable autovacuum subprocess? ‘on’
– restore selinux security context
* execute[reload postgresql] action run
– execute /opt/gitlab/bin/gitlab-ctl hup postgresql
* execute[start postgresql] action run (skipped due to not_if)
* template[/var/opt/gitlab/postgresql/data/pg_hba.conf] action create (up to date)
* template[/var/opt/gitlab/postgresql/data/pg_ident.conf] action create (up to date)
* directory[/opt/gitlab/sv/postgresql] action create (up to date)
* directory[/opt/gitlab/sv/postgresql/log] action create (up to date)
* directory[/opt/gitlab/sv/postgresql/log/main] action create (up to date)
* template[/opt/gitlab/sv/postgresql/run] action create (up to date)
* template[/opt/gitlab/sv/postgresql/log/run] action create (up to date)
* template[/var/log/gitlab/postgresql/config] action create (up to date)
* ruby_block[reload postgresql svlogd configuration] action nothing (skipped due to action :nothing)
* ruby_block[restart postgresql svlogd configuration] action nothing (skipped due to action :nothing)
* file[/opt/gitlab/sv/postgresql/down] action delete (up to date)
* directory[/opt/gitlab/sv/postgresql/control] action create (up to date)
* template[/opt/gitlab/sv/postgresql/control/t] action create (up to date)
* link[/opt/gitlab/init/postgresql] action create (up to date)
* link[/opt/gitlab/service/postgresql] action create (up to date)
* ruby_block[supervise_postgresql_sleep] action run (skipped due to not_if)
* directory[/opt/gitlab/sv/postgresql/supervise] action create (up to date)
* directory[/opt/gitlab/sv/postgresql/log/supervise] action create (up to date)
* file[/opt/gitlab/sv/postgresql/supervise/ok] action touch (skipped due to only_if)
* file[/opt/gitlab/sv/postgresql/log/supervise/ok] action touch (skipped due to only_if)
* file[/opt/gitlab/sv/postgresql/supervise/control] action touch (skipped due to only_if)
* file[/opt/gitlab/sv/postgresql/log/supervise/control] action touch (skipped due to only_if)
* service[postgresql] action nothing (skipped due to action :nothing)
Recipe: postgresql::bin
* ruby_block[Link postgresql bin files to the correct version] action run (skipped due to only_if)
Recipe: postgresql::enable
* template[/opt/gitlab/etc/gitlab-psql-rc] action create
– update content in file /opt/gitlab/etc/gitlab-psql-rc from eaeb56 to 4fdb89
— /opt/gitlab/etc/gitlab-psql-rc 2018-02-20 13:40:18.407172027 +0900
+++ /opt/gitlab/etc/.chef-gitlab-psql-rc20190416-23896-1gsb1uc 2019-04-16 12:16:25.860977392 +0900
@@ -1,4 +1,5 @@
psql_user=’gitlab-psql’
psql_host=’/var/opt/gitlab/postgresql’
psql_port=’5432′
+psql_dbname=’gitlabhq_production’
– restore selinux security context
* postgresql_user[gitlab] action create
* execute[create gitlab postgresql user] action run
[execute] psql: FATAL: the database system is starting up
(skipped due to not_if)
(up to date)
* execute[create gitlabhq_production database] action run (skipped due to not_if)
* postgresql_user[gitlab_replicator] action create
* execute[create gitlab_replicator postgresql user] action run (skipped due to not_if)
* execute[set options for gitlab_replicator postgresql user] action run (skipped due to not_if)
(up to date)
* postgresql_extension[pg_trgm] action enable
* postgresql_query[enable pg_trgm extension] action run (skipped due to only_if)
(up to date)
* execute[reload postgresql] action nothing (skipped due to action :nothing)
* execute[start postgresql] action nothing (skipped due to action :nothing)
Recipe: gitlab::database_migrations
* bash[migrate gitlab-rails database] action run
[execute] == 20170301101006 AddCiRunnerNamespaces: migrating ============================
— create_table(:ci_runner_namespaces)
-> 0.1775s
== 20170301101006 AddCiRunnerNamespaces: migrated (0.1776s) ===================

== 20170827123848 AddIndexOnMergeRequestDiffCommitSha: migrating ==============
— transaction_open?()
-> 0.0000s
— execute(“SET statement_timeout TO 0”)
-> 0.0004s
— index_exists?(:merge_request_diff_commits, :sha, {:length=>nil, :algorithm=>:concurrently})
-> 0.0044s
— add_index(:merge_request_diff_commits, :sha, {:length=>nil, :algorithm=>:concurrently})
-> 0.0389s
== 20170827123848 AddIndexOnMergeRequestDiffCommitSha: migrated (0.0440s) =====

== 20170906133745 AddRunnersTokenToGroups: migrating ==========================
— add_column(:namespaces, :runners_token, :string)
-> 0.0013s
== 20170906133745 AddRunnersTokenToGroups: migrated (0.0014s) =================

== 20171130151759 CreateGeoUploadDeletedEvents: migrating =====================
— create_table(:geo_upload_deleted_events, {:id=>:bigserial})
-> 0.0511s
— add_column(:geo_event_log, :upload_deleted_event_id, :integer, {:limit=>8})
-> 0.0121s
== 20171130151759 CreateGeoUploadDeletedEvents: migrated (0.0635s) ============

== 20171130152602 AddGeoUploadDeletedEventsForeignKey: migrating ==============
— transaction_open?()
-> 0.0000s
— execute(“SET statement_timeout TO 0”)
-> 0.0006s
— foreign_keys(:geo_event_log)
-> 0.0113s
— execute(“ALTER TABLE geo_event_log\nADD CONSTRAINT fk_c1f241c70d\nFOREIGN KEY (upload_deleted_eve
-> 0.0116s
— execute(“ALTER TABLE geo_event_log VALIDATE CONSTRAINT fk_c1f241c70d;”)
-> 0.0164s
== 20171130152602 AddGeoUploadDeletedEventsForeignKey: migrated (0.0406s) =====

== 20171207185153 AddMergeRequestStateIndex: migrating ========================
— transaction_open?()
-> 0.0000s
— execute(“SET statement_timeout TO 0″)
-> 0.0008s
— index_exists?(:merge_requests, [:source_project_id, :source_branch], {:where=>”state = ‘opened'”,
-> 0.0151s
— add_index(:merge_requests, [:source_project_id, :source_branch], {:where=>”state = ‘opened'”, :na
-> 0.0355s
== 20171207185153 AddMergeRequestStateIndex: migrated (0.0518s) ===============

== 20171211131502 AddExternalClassificationAuthorizationSettingsToApplictionSettings: migrating
— transaction_open?()
-> 0.0000s
— execute(“SET statement_timeout TO 0”)
-> 0.0004s
— transaction()
— add_column(:application_settings, :external_authorization_service_enabled, :boolean, {:default=>n
-> 0.0026s
— change_column_default(:application_settings, :external_authorization_service_enabled, false)
-> 0.0215s
-> 0.0318s
— transaction_open?()
-> 0.0000s
— exec_query(“SELECT COUNT(*) AS count FROM \”application_settings\””)
-> 0.0011s
— exec_query(“SELECT \”application_settings\”.\”id\” FROM \”application_settings\” ORDER BY \”app
-> 0.0009s
— exec_query(“SELECT \”application_settings\”.\”id\” FROM \”application_settings\” WHERE \”applica
-> 0.0007s
— execute(“UPDATE \”application_settings\” SET \”external_authorization_service_enabled\” = ‘f’ WHE
-> 0.0077s
— change_column_null(:application_settings, :external_authorization_service_enabled, false)
-> 0.0083s
— add_column(:application_settings, :external_authorization_service_url, :string)
-> 0.0083s
— add_column(:application_settings, :external_authorization_service_default_label, :string)
-> 0.0083s
== 20171211131502 AddExternalClassificationAuthorizationSettingsToApplictionSettings: migrated (0.09

== 20171214144320 AddStoreColumnToUploads: migrating ==========================
— add_column(:uploads, :store, :integer)
-> 0.0186s
== 20171214144320 AddStoreColumnToUploads: migrated (0.0187s) =================

== 20171218140451 AddExternalAuthorizationServiceClassificationLabelToProjects: migrating
— add_column(:projects, :external_authorization_classification_label, :string)
-> 0.0009s
== 20171218140451 AddExternalAuthorizationServiceClassificationLabelToProjects: migrated (0.0010s)

== 20171222115326 AddConfidentialNoteEventsToWebHooks: migrating ==============
— add_column(:web_hooks, :confidential_note_events, :boolean)
-> 0.0076s
== 20171222115326 AddConfidentialNoteEventsToWebHooks: migrated (0.0077s) =====

== 20171222151344 AddRegexpUsesRe2ToPushRules: migrating ======================
— add_column(:push_rules, :regexp_uses_re2, :boolean)
-> 0.0005s
— change_column_default(:push_rules, :regexp_uses_re2, true)
-> 0.0022s
== 20171222151344 AddRegexpUsesRe2ToPushRules: migrated (0.0028s) =============

== 20180101160629 CreatePrometheusMetrics: migrating ==========================
— create_table(:prometheus_metrics)
-> 0.1155s
== 20180101160629 CreatePrometheusMetrics: migrated (0.1156s) =================

== 20180102220145 AddPagesHttpsOnlyToProjects: migrating ======================
— add_column(:projects, :pages_https_only, :boolean)
-> 0.0013s
== 20180102220145 AddPagesHttpsOnlyToProjects: migrated (0.0014s) =============

== 20180103123548 AddConfidentialNoteEventsToServices: migrating ==============
— add_column(:services, :confidential_note_events, :boolean)
-> 0.0078s
— change_column_default(:services, :confidential_note_events, true)
-> 0.0083s
== 20180103123548 AddConfidentialNoteEventsToServices: migrated (0.0164s) =====

== 20180104131052 ScheduleSetConfidentialNoteEventsOnWebhooks: migrating ======
== 20180104131052 ScheduleSetConfidentialNoteEventsOnWebhooks: migrated (0.0216s)

== 20180105212544 AddCommitsCountToMergeRequestDiff: migrating ================
— add_column(:merge_request_diffs, :commits_count, :integer)
-> 0.0062s
— Populating the MergeRequestDiff `commits_count`
== 20180105212544 AddCommitsCountToMergeRequestDiff: migrated (0.0129s) =======

== 20180109150457 AddRemoteNameToRemoteMirrors: migrating =====================
— column_exists?(:remote_mirrors, :remote_name)
-> 0.0033s
— add_column(:remote_mirrors, :remote_name, :string)
-> 0.0011s
== 20180109150457 AddRemoteNameToRemoteMirrors: migrated (0.0046s) ============

== 20180109183319 ChangeDefaultValueForPagesHttpsOnly: migrating ==============
— change_column_default(:projects, :pages_https_only, true)
-> 0.0058s
== 20180109183319 ChangeDefaultValueForPagesHttpsOnly: migrated (0.0059s) =====

== 20180115013218 CreateSamlProviders: migrating ==============================
— create_table(:saml_providers)
-> 0.0658s
— add_foreign_key(:saml_providers, :namespaces, {:column=>:group_id, :on_delete=>:cascade})
-> 0.0030s
== 20180115013218 CreateSamlProviders: migrated (0.0690s) =====================

== 20180115094742 AddDefaultProjectCreationSetting: migrating =================
— transaction_open?()
-> 0.0000s
— execute(“SET statement_timeout TO 0”)
-> 0.0005s
— transaction()
— add_column(:application_settings, :default_project_creation, :integer, {:default=>nil})
-> 0.0017s
— change_column_default(:application_settings, :default_project_creation, 2)
-> 0.0218s
-> 0.0313s
— transaction_open?()
-> 0.0000s
— exec_query(“SELECT COUNT(*) AS count FROM \”application_settings\””)
-> 0.0015s
— exec_query(“SELECT \”application_settings\”.\”id\” FROM \”application_settings\” ORDER BY \”app
-> 0.0009s
— exec_query(“SELECT \”application_settings\”.\”id\” FROM \”application_settings\” WHERE \”applica
-> 0.0008s
— execute(“UPDATE \”application_settings\” SET \”default_project_creation\” = 2 WHERE \”application
-> 0.0035s
— change_column_null(:application_settings, :default_project_creation, false)
-> 0.0082s
== 20180115094742 AddDefaultProjectCreationSetting: migrated (0.0486s) ========

== 20180115113902 AddProjectCreationLevelToGroups: migrating ==================
— add_column(:namespaces, :project_creation_level, :integer)
-> 0.0011s
== 20180115113902 AddProjectCreationLevelToGroups: migrated (0.0012s) =========

== 20180115201419 AddIndexUpdatedAtToIssues: migrating ========================
— transaction_open?()
-> 0.0000s
— execute(“SET statement_timeout TO 0”)
-> 0.0002s
— index_exists?(:issues, :updated_at, {:algorithm=>:concurrently})
-> 0.0043s
— add_index(:issues, :updated_at, {:algorithm=>:concurrently})
-> 0.0445s
== 20180115201419 AddIndexUpdatedAtToIssues: migrated (0.0491s) ===============

== 20180116193854 CreateLfsFileLocks: migrating ===============================
— create_table(:lfs_file_locks)
-> 0.1413s
— add_index(:lfs_file_locks, [:project_id, :path], {:unique=>true})
-> 0.0334s
== 20180116193854 CreateLfsFileLocks: migrated (0.1749s) ======================

== 20180119121225 RemoveRedundantPipelineStages: migrating ====================
— execute(“SET statement_timeout TO 0”)
-> 0.0005s
— execute(“UPDATE ci_builds SET stage_id = NULL WHERE stage_id IN (SELECT id FROM ci_stages WHERE ()\n)\n”)
-> 0.0144s
— execute(“DELETE FROM ci_stages WHERE id IN (SELECT id FROM ci_stages WHERE (pipeline_id, name) IN
-> 0.0019s
— index_exists?(:ci_stages, [:pipeline_id, :name])
-> 0.0046s
— transaction_open?()
-> 0.0000s
— select_one(“SELECT current_setting(‘server_version_num’) AS v”)
-> 0.0009s
— execute(“SET statement_timeout TO 0”)
-> 0.0005s
— index_exists?(:ci_stages, [:pipeline_id, :name], {:algorithm=>:concurrently})
-> 0.0046s
— remove_index(:ci_stages, {:algorithm=>:concurrently, :column=>[:pipeline_id, :name]})
-> 0.0558s
— transaction_open?()
-> 0.0000s
— execute(“SET statement_timeout TO 0”)
-> 0.0003s
— index_exists?(:ci_stages, [:pipeline_id, :name], {:unique=>true, :algorithm=>:concurrently})
-> 0.0019s
— add_index(:ci_stages, [:pipeline_id, :name], {:unique=>true, :algorithm=>:concurrently})
-> 0.0351s
== 20180119121225 RemoveRedundantPipelineStages: migrated (0.1215s) ===========

== 20180119135717 AddUploaderIndexToUploads: migrating ========================
— transaction_open?()
-> 0.0000s
— select_one(“SELECT current_setting(‘server_version_num’) AS v”)
-> 0.0007s
— execute(“SET statement_timeout TO 0”)
-> 0.0003s
— index_exists?(:uploads, :path, {:algorithm=>:concurrently})
-> 0.0045s
— remove_index(:uploads, {:algorithm=>:concurrently, :column=>:path})
-> 0.0144s
— transaction_open?()
-> 0.0000s
— execute(“SET statement_timeout TO 0”)
-> 0.0005s
— index_exists?(:uploads, [:uploader, :path], {:using=>:btree, :algorithm=>:concurrently})
-> 0.0038s
— add_index(:uploads, [:uploader, :path], {:using=>:btree, :algorithm=>:concurrently})
-> 0.0307s
== 20180119135717 AddUploaderIndexToUploads: migrated (0.0557s) ===============

== 20180119160751 OptimizeCiJobArtifacts: migrating ===========================
— transaction_open?()
-> 0.0000s
— execute(“SET statement_timeout TO 0”)
-> 0.0004s
— index_exists?(:ci_job_artifacts, [:expire_at, :job_id], {:algorithm=>:concurrently})
-> 0.0039s
— add_index(:ci_job_artifacts, [:expire_at, :job_id], {:algorithm=>:concurrently})
-> 0.0336s
— transaction_open?()
-> 0.0000s
— execute(“SET statement_timeout TO 0”)
-> 0.0002s
— index_exists?(:ci_builds, [:artifacts_expire_at], {:where=>”artifacts_file <> ””, :algorithm=>:c
-> 0.0033s
— add_index(:ci_builds, [:artifacts_expire_at], {:where=>”artifacts_file <> ””, :algorithm=>:concu
-> 0.0214s
== 20180119160751 OptimizeCiJobArtifacts: migrated (0.0632s) ==================

== 20180122154930 ScheduleSetConfidentialNoteEventsOnServices: migrating ======
== 20180122154930 ScheduleSetConfidentialNoteEventsOnServices: migrated (0.0148s)

== 20180122162010 AddAutoDevopsDomainToApplicationSettings: migrating =========
— add_column(:application_settings, :auto_devops_domain, :string)
-> 0.0005s
== 20180122162010 AddAutoDevopsDomainToApplicationSettings: migrated (0.0005s)

== 20180125214301 CreateUserCallouts: migrating ===============================
— create_table(:user_callouts)
-> 0.0341s
— add_index(:user_callouts, [:user_id, :feature_name], {:unique=>true})
-> 0.0215s
== 20180125214301 CreateUserCallouts: migrated (0.0558s) ======================

== 20180126165535 GeoSelectiveSyncByShard: migrating ==========================
— add_column(:geo_nodes, :selective_sync_type, :string)
-> 0.0016s
— add_column(:geo_nodes, :selective_sync_shards, :text)
-> 0.0007s
== 20180126165535 GeoSelectiveSyncByShard: migrated (0.0141s) =================

== 20180129193323 AddUploadsBuilderContext: migrating =========================
— add_column(:uploads, :mount_point, :string)
-> 0.0009s
— add_column(:uploads, :secret, :string)
-> 0.0007s
== 20180129193323 AddUploadsBuilderContext: migrated (0.0017s) ================

== 20180131104538 AddDateIndexesToEpics: migrating ============================
— transaction_open?()
-> 0.0000s
— execute(“SET statement_timeout TO 0”)
-> 0.0006s
— index_exists?(:epics, :start_date, {:algorithm=>:concurrently})
-> 0.0069s
— add_index(:epics, :start_date, {:algorithm=>:concurrently})
-> 0.0302s
— transaction_open?()
-> 0.0000s
— execute(“SET statement_timeout TO 0”)
-> 0.0004s
— index_exists?(:epics, :end_date, {:algorithm=>:concurrently})
-> 0.0065s
— add_index(:epics, :end_date, {:algorithm=>:concurrently})
-> 0.0342s
== 20180131104538 AddDateIndexesToEpics: migrated (0.0795s) ===================

== 20180201101405 ChangeGeoNodeStatusColumnSize: migrating ====================
— change_column(:geo_node_statuses, :replication_slots_max_retained_wal_bytes, :integer, {:limit=>8
-> 0.0640s
== 20180201101405 ChangeGeoNodeStatusColumnSize: migrated (0.0642s) ===========

== 20180201102129 AddUniqueConstraintToTrendingProjectsProjectId: migrating ===
— transaction_open?()
-> 0.0000s
— execute(“SET statement_timeout TO 0”)
-> 0.0005s
— index_exists?(:trending_projects, :project_id, {:unique=>true, :name=>”index_trending_projects_on
-> 0.0027s
— add_index(:trending_projects, :project_id, {:unique=>true, :name=>”index_trending_projects_on_pro
-> 0.0329s
— transaction_open?()
-> 0.0000s
— select_one(“SELECT current_setting(‘server_version_num’) AS v”)
-> 0.0008s
— execute(“SET statement_timeout TO 0”)
-> 0.0005s
— indexes(:trending_projects)
-> 0.0035s
— remove_index(:trending_projects, {:algorithm=>:concurrently, :name=>”index_trending_projects_on_p
-> 0.0193s
— rename_index(:trending_projects, “index_trending_projects_on_project_id_unique”, “index_trending_
-> 0.0034s
== 20180201102129 AddUniqueConstraintToTrendingProjectsProjectId: migrated (0.0643s)

== 20180201110056 AddForeignKeysToTodos: migrating ============================
— transaction_open?()
-> 0.0000s
— execute(“SET statement_timeout TO 0”)
-> 0.0003s
— foreign_keys(:todos)
-> 0.0063s
— execute(“ALTER TABLE todos\nADD CONSTRAINT fk_d94154aa95\nFOREIGN KEY (user_id)\nREFERENCES users
-> 0.0098s
— execute(“ALTER TABLE todos VALIDATE CONSTRAINT fk_d94154aa95;”)
-> 0.0253s
— transaction_open?()
-> 0.0000s
— execute(“SET statement_timeout TO 0”)
-> 0.0005s
— foreign_keys(:todos)
-> 0.0057s
— execute(“ALTER TABLE todos\nADD CONSTRAINT fk_ccf0373936\nFOREIGN KEY (author_id)\nREFERENCES use
-> 0.0098s
— execute(“ALTER TABLE todos VALIDATE CONSTRAINT fk_ccf0373936;”)
-> 0.0083s
— transaction_open?()
-> 0.0000s
— execute(“SET statement_timeout TO 0”)
-> 0.0004s
— foreign_keys(:todos)
-> 0.0059s
— execute(“ALTER TABLE todos\nADD CONSTRAINT fk_91d1f47b13\nFOREIGN KEY (note_id)\nREFERENCES notes
-> 0.0100s
— execute(“ALTER TABLE todos VALIDATE CONSTRAINT fk_91d1f47b13;”)
-> 0.0081s
== 20180201110056 AddForeignKeysToTodos: migrated (0.1146s) ===================

== 20180201145907 MigrateRemainingIssuesClosedAt: migrating ===================
— columns(“issues”)
-> 0.0024s
— columns(“issues”)
-> 0.0022s
— transaction_open?()
-> 0.0000s
— columns(:issues)
-> 0.0021s
— add_column(:issues, “closed_at_for_type_change”, :datetime_with_timezone, {:limit=>nil, :precisio
-> 0.0056s
— quote_table_name(:issues)
-> 0.0001s
— quote_column_name(:closed_at)
-> 0.0000s
— quote_column_name(“closed_at_for_type_change”)
-> 0.0000s
— execute(“CREATE OR REPLACE FUNCTION trigger_08acb26c5ecf()\nRETURNS trigger AS\n$BODY$\nBEGIN\n
-> 0.0161s
— execute(“CREATE TRIGGER trigger_08acb26c5ecf\nBEFORE INSERT OR UPDATE\nON \”issues\”\nFOR EACH RO
-> 0.0164s
— transaction_open?()
-> 0.0000s
— exec_query(“SELECT COUNT(*) AS count FROM \”issues\””)
-> 0.0191s
— exec_query(“SELECT \”issues\”.\”id\” FROM \”issues\” ORDER BY \”issues\”.\”id\” ASC LIMIT 1″)
-> 0.0009s
— exec_query(“SELECT \”issues\”.\”id\” FROM \”issues\” WHERE \”issues\”.\”id\” >= 1 ORDER BY \”is
-> 0.0010s
— execute(“UPDATE \”issues\” SET \”closed_at_for_type_change\” = \”issues\”.\”closed_at\” WHERE \”i
-> 0.0073s
— exec_query(“SELECT \”issues\”.\”id\” FROM \”issues\” WHERE \”issues\”.\”id\” >= 2 ORDER BY \”is
-> 0.0009s
— execute(“UPDATE \”issues\” SET \”closed_at_for_type_change\” = \”issues\”.\”closed_at\” WHERE \”i
-> 0.0069s
— indexes(:issues)
-> 0.0137s
— foreign_keys(:issues)
-> 0.0062s
— transaction()
— execute(“DROP TRIGGER IF EXISTS trigger_08acb26c5ecf ON issues”)
-> 0.0006s
— execute(“DROP FUNCTION IF EXISTS trigger_08acb26c5ecf()”)
-> 0.0073s
— remove_column(:issues, :closed_at)
-> 0.0017s
— rename_column(:issues, “closed_at_for_type_change”, :closed_at)
-> 0.0137s
-> 0.0383s
== 20180201145907 MigrateRemainingIssuesClosedAt: migrated (0.1567s) ==========

== 20180201192230 StoreVersionAndRevisionInGeoNodeStatus: migrating ===========
— add_column(:geo_node_statuses, :version, :string)
-> 0.0010s
— add_column(:geo_node_statuses, :revision, :string)
-> 0.0007s
== 20180201192230 StoreVersionAndRevisionInGeoNodeStatus: migrated (0.0019s) ==

== 20180204200836 ChangeAuthorIdToNotNullInTodos: migrating ===================
— change_column_null(:todos, :author_id, false)
-> 0.0036s
== 20180204200836 ChangeAuthorIdToNotNullInTodos: migrated (0.0063s) ==========

== 20180206184810 CreateProjectRepositoryStates: migrating ====================
— create_table(:project_repository_states)
-> 0.0572s
== 20180206184810 CreateProjectRepositoryStates: migrated (0.0573s) ===========

== 20180206200543 ResetEventsPrimaryKeySequence: migrating ====================
— reset_pk_sequence!(“events”)
-> 0.0413s
== 20180206200543 ResetEventsPrimaryKeySequence: migrated (0.0414s) ===========

== 20180209115333 CreateChatopsTables: migrating ==============================
— create_table(:ci_pipeline_chat_data, {:id=>:bigserial})
-> 0.0662s
— add_foreign_key(:ci_pipeline_chat_data, :ci_pipelines, {:column=>:pipeline_id, :on_delete=>:casca
-> 0.0170s
== 20180209115333 CreateChatopsTables: migrated (0.0834s) =====================

== 20180209165249 AddClosedByToIssues: migrating ==============================
— add_column(:issues, :closed_by_id, :integer)
-> 0.0079s
— transaction_open?()
-> 0.0000s
— execute(“SET statement_timeout TO 0”)
-> 0.0002s
— foreign_keys(:issues)
-> 0.0017s
— execute(“ALTER TABLE issues\nADD CONSTRAINT fk_c63cbf6c25\nFOREIGN KEY (closed_by_id)\nREFERENCES
-> 0.0063s
— execute(“ALTER TABLE issues VALIDATE CONSTRAINT fk_c63cbf6c25;”)
-> 0.0085s
== 20180209165249 AddClosedByToIssues: migrated (0.0249s) =====================

== 20180212030105 AddExternalIpToClustersApplicationsIngress: migrating =======
— add_column(:clusters_applications_ingress, :external_ip, :string)
-> 0.0004s
== 20180212030105 AddExternalIpToClustersApplicationsIngress: migrated (0.0004s)

== 20180212101828 AddTmpPartialNullIndexToBuilds: migrating ===================
— transaction_open?()
-> 0.0000s
— execute(“SET statement_timeout TO 0″)
-> 0.0003s
— index_exists?(:ci_builds, :id, {:where=>”stage_id IS NULL”, :name=>”tmp_id_partial_null_index”, :
-> 0.0102s
— add_index(:ci_builds, :id, {:where=>”stage_id IS NULL”, :name=>”tmp_id_partial_null_index”, :algo
-> 0.0266s
== 20180212101828 AddTmpPartialNullIndexToBuilds: migrated (0.0374s) ==========

== 20180212101928 ScheduleBuildStageMigration: migrating ======================
== 20180212101928 ScheduleBuildStageMigration: migrated (0.0000s) =============

== 20180212102028 RemoveTmpPartialNullIndexFromBuilds: migrating ==============
— transaction_open?()
-> 0.0000s
— select_one(“SELECT current_setting(‘server_version_num’) AS v”)
-> 0.0007s
— execute(“SET statement_timeout TO 0″)
-> 0.0004s
— indexes(:ci_builds)
-> 0.0152s
— remove_index(:ci_builds, {:algorithm=>:concurrently, :name=>”tmp_id_partial_null_index”})
-> 0.0116s
== 20180212102028 RemoveTmpPartialNullIndexFromBuilds: migrated (0.0284s) =====

== 20180213131630 AddPartialIndexToProjectsForIndexOnlyScans: migrating =======
— index_exists?(:projects, :id, {:name=>”index_projects_on_id_partial_for_visibility”})
-> 0.0163s
— transaction_open?()
-> 0.0000s
— execute(“SET statement_timeout TO 0″)
-> 0.0004s
— index_exists?(:projects, :id, {:name=>”index_projects_on_id_partial_for_visibility”, :unique=>tru
-> 0.0161s
— add_index(:projects, :id, {:name=>”index_projects_on_id_partial_for_visibility”, :unique=>true, :
-> 0.0368s
== 20180213131630 AddPartialIndexToProjectsForIndexOnlyScans: migrated (0.0701s)

== 20180214093516 CreateBadges: migrating =====================================
— create_table(:badges)
-> 0.0737s
— add_foreign_key(:badges, :namespaces, {:column=>:group_id, :on_delete=>:cascade})
-> 0.0028s
== 20180214093516 CreateBadges: migrated (0.0767s) ============================

== 20180214155405 CreateClustersApplicationsRunners: migrating ================
— create_table(:clusters_applications_runners)
-> 0.1574s
— transaction_open?()
-> 0.0000s
— execute(“SET statement_timeout TO 0”)
-> 0.0005s
— foreign_keys(:clusters_applications_runners)
-> 0.0056s
— execute(“ALTER TABLE clusters_applications_runners\nADD CONSTRAINT fk_02de2ded36\nFOREIGN KEY (ru
-> 0.0096s
— execute(“ALTER TABLE clusters_applications_runners VALIDATE CONSTRAINT fk_02de2ded36;”)
-> 0.0082s
== 20180214155405 CreateClustersApplicationsRunners: migrated (0.1822s) =======

== 20180215143644 AddMirrorOverwritesDivergedBranchesToProject: migrating =====
— add_column(:projects, :mirror_overwrites_diverged_branches, :boolean)
-> 0.0009s
== 20180215143644 AddMirrorOverwritesDivergedBranchesToProject: migrated (0.0010s)

== 20180215181245 UsersNameLowerIndex: migrating ==============================
— execute(“CREATE INDEX CONCURRENTLY index_on_users_name_lower ON users (LOWER(name))”)
-> 0.0293s
== 20180215181245 UsersNameLowerIndex: migrated (0.0293s) =====================

== 20180216120000 AddPagesDomainVerification: migrating =======================
— add_column(:pages_domains, :verified_at, :datetime_with_timezone)
-> 0.0005s
— add_column(:pages_domains, :verification_code, :string)
-> 0.0002s
== 20180216120000 AddPagesDomainVerification: migrated (0.0008s) ==============

== 20180216120010 AddPagesDomainVerifiedAtIndex: migrating ====================
— transaction_open?()
-> 0.0000s
— execute(“SET statement_timeout TO 0”)
-> 0.0005s
— index_exists?(:pages_domains, :verified_at, {:algorithm=>:concurrently})
-> 0.0040s
— add_index(:pages_domains, :verified_at, {:algorithm=>:concurrently})
-> 0.0402s
== 20180216120010 AddPagesDomainVerifiedAtIndex: migrated (0.0451s) ===========

== 20180216120020 AllowDomainVerificationToBeDisabled: migrating ==============
— add_column(:application_settings, :pages_domain_verification_enabled, :boolean, {:default=>true,
-> 0.0969s
== 20180216120020 AllowDomainVerificationToBeDisabled: migrated (0.0970s) =====

== 20180216120030 AddPagesDomainEnabledUntil: migrating =======================
— add_column(:pages_domains, :enabled_until, :datetime_with_timezone)
-> 0.0009s
== 20180216120030 AddPagesDomainEnabledUntil: migrated (0.0010s) ==============

== 20180216120040 AddPagesDomainEnabledUntilIndex: migrating ==================
— transaction_open?()
-> 0.0000s
— execute(“SET statement_timeout TO 0”)
-> 0.0004s
— index_exists?(:pages_domains, [:project_id, :enabled_until], {:algorithm=>:concurrently})
-> 0.0044s
— add_index(:pages_domains, [:project_id, :enabled_until], {:algorithm=>:concurrently})
-> 0.0311s
— transaction_open?()
-> 0.0000s
— execute(“SET statement_timeout TO 0″)
-> 0.0005s
— index_exists?(:pages_domains, [:verified_at, :enabled_until], {:algorithm=>:concurrently})
-> 0.0053s
— add_index(:pages_domains, [:verified_at, :enabled_until], {:algorithm=>:concurrently})
-> 0.0357s
== 20180216120040 AddPagesDomainEnabledUntilIndex: migrated (0.0780s) =========

== 20180216120050 PagesDomainsVerificationGracePeriod: migrating ==============
== 20180216120050 PagesDomainsVerificationGracePeriod: migrated (0.0076s) =====

== 20180216121020 FillPagesDomainVerificationCode: migrating ==================
— change_column_null(:pages_domains, :verification_code, false)
-> 0.0104s
== 20180216121020 FillPagesDomainVerificationCode: migrated (0.0146s) =========

== 20180216121030 EnqueueVerifyPagesDomainWorkers: migrating ==================
== 20180216121030 EnqueueVerifyPagesDomainWorkers: migrated (0.0024s) =========

== 20180219153455 AddMaximumTimeoutToCiRunners: migrating =====================
— add_column(:ci_runners, :maximum_timeout, :integer)
-> 0.0011s
== 20180219153455 AddMaximumTimeoutToCiRunners: migrated (0.0012s) ============

== 20180220150310 RemoveEmptyExternUidAuth0Identities: migrating ==============
== 20180220150310 RemoveEmptyExternUidAuth0Identities: migrated (0.0069s) =====

== 20180221151752 AddAllowMaintainerToPushToMergeRequests: migrating ==========
— add_column(:merge_requests, :allow_maintainer_to_push, :boolean)
-> 0.0076s
== 20180221151752 AddAllowMaintainerToPushToMergeRequests: migrated (0.0077s) =

== 20180222043024 AddIpAddressToRunner: migrating =============================
— add_column(:ci_runners, :ip_address, :string)
-> 0.0007s
== 20180222043024 AddIpAddressToRunner: migrated (0.0007s) ====================

== 20180223120443 CreateUserInteractedProjectsTable: migrating ================
— create_table(:user_interacted_projects, {:id=>false})
-> 0.0014s
— add_index(:user_interacted_projects, [:project_id, :user_id], {:name=>”user_interacted_projects_n
-> 0.0254s
== 20180223120443 CreateUserInteractedProjectsTable: migrated (0.0269s) =======

== 20180223124427 BuildUserInteractedProjectsTable: migrating =================
— index_exists?(:events, [:author_id, :project_id], {:name=>”events_user_interactions_temp”, :where
-> 0.0019s
— transaction_open?()
-> 0.0000s
— execute(“SET statement_timeout TO 0″)
-> 0.0001s
— index_exists?(:events, [:author_id, :project_id], {:name=>”events_user_interactions_temp”, :where
-> 0.0017s
— add_index(:events, [:author_id, :project_id], {:name=>”events_user_interactions_temp”, :where=>”p
-> 0.0328s
— execute(“INSERT INTO user_interacted_projects (user_id, project_id)\nSELECT e.user_id, e.project_r_interacted_projects ucp USING (user_id, project_id)\nWHERE ucp.user_id IS NULL\nLIMIT 100000\n”)
-> 0.0208s
— execute(“INSERT INTO user_interacted_projects (user_id, project_id)\nSELECT e.user_id, e.project_r_interacted_projects ucp USING (user_id, project_id)\nWHERE ucp.user_id IS NULL\nLIMIT 100000\n”)
-> 0.0019s
— execute(“WITH numbered AS (select ctid, ROW_NUMBER() OVER (PARTITION BY (user_id, project_id)) asECT ctid FROM numbered WHERE row_number > 1);\n”)
-> 0.0016s
— execute(“LOCK TABLE user_interacted_projects IN SHARE MODE”)
-> 0.0004s
— execute(“WITH numbered AS (select ctid, ROW_NUMBER() OVER (PARTITION BY (user_id, project_id)) asECT ctid FROM numbered WHERE row_number > 1);\n”)
-> 0.0010s
— indexes(:user_interacted_projects)
-> 0.0031s
— add_index(:user_interacted_projects, [:project_id, :user_id], {:unique=>true, :name=>”index_user_
-> 0.0166s
— execute(“DELETE FROM user_interacted_projects WHERE NOT EXISTS (SELECT 1 FROM projects WHERE id =
-> 0.0019s
— execute(“LOCK TABLE user_interacted_projects, projects IN SHARE MODE”)
-> 0.0005s
— execute(“DELETE FROM user_interacted_projects WHERE NOT EXISTS (SELECT 1 FROM projects WHERE id =
-> 0.0011s
— foreign_keys(:user_interacted_projects)
-> 0.0058s
— add_foreign_key(:user_interacted_projects, :projects, {:column=>:project_id, :on_delete=>:cascade
-> 0.0039s
— execute(“DELETE FROM user_interacted_projects WHERE NOT EXISTS (SELECT 1 FROM users WHERE id = us
-> 0.0019s
— execute(“LOCK TABLE user_interacted_projects, users IN SHARE MODE”)
-> 0.0005s
— execute(“DELETE FROM user_interacted_projects WHERE NOT EXISTS (SELECT 1 FROM users WHERE id = us
-> 0.0010s
— foreign_keys(:user_interacted_projects)
-> 0.0060s
— add_foreign_key(:user_interacted_projects, :users, {:column=>:user_id, :on_delete=>:cascade})
-> 0.0028s
— index_exists?(:events, [:author_id, :project_id], {:name=>”events_user_interactions_temp”, :where
-> 0.0057s
— transaction_open?()
-> 0.0000s
— select_one(“SELECT current_setting(‘server_version_num’) AS v”)
-> 0.0006s
— execute(“SET statement_timeout TO 0″)
-> 0.0003s
— index_exists?(:events, [:author_id, :project_id], {:name=>”events_user_interactions_temp”, :where
-> 0.0062s
— remove_index(:events, {:name=>”events_user_interactions_temp”, :where=>”project_id IS NOT NULL”,
-> 0.0165s
— execute(“ANALYZE user_interacted_projects”)
-> 0.0119s
— indexes(:user_interacted_projects)
-> 0.0038s
— transaction_open?()
-> 0.0000s
— select_one(“SELECT current_setting(‘server_version_num’) AS v”)
-> 0.0006s
— execute(“SET statement_timeout TO 0”)
-> 0.0004s
— indexes(:user_interacted_projects)
-> 0.0039s
— remove_index(:user_interacted_projects, {:algorithm=>:concurrently, :name=>”user_interacted_proje
-> 0.0118s
== 20180223124427 BuildUserInteractedProjectsTable: migrated (5.2031s) ========

== 20180223144945 AddAllowLocalRequestsFromHooksAndServicesToApplicationSettings: migrating
— transaction_open?()
-> 0.0000s
— execute(“SET statement_timeout TO 0”)
-> 0.0004s
— transaction()
— add_column(:application_settings, :allow_local_requests_from_hooks_and_services, :boolean, {:defa
-> 0.0012s
— change_column_default(:application_settings, :allow_local_requests_from_hooks_and_services, false
-> 0.0231s
-> 0.3492s
— transaction_open?()
-> 0.0000s
— exec_query(“SELECT COUNT(*) AS count FROM \”application_settings\””)
-> 0.0016s
— exec_query(“SELECT \”application_settings\”.\”id\” FROM \”application_settings\” ORDER BY \”app
-> 0.0009s
— exec_query(“SELECT \”application_settings\”.\”id\” FROM \”application_settings\” WHERE \”applica
-> 0.0009s
— execute(“UPDATE \”application_settings\” SET \”allow_local_requests_from_hooks_and_services\” = ‘
-> 0.3778s
— change_column_null(:application_settings, :allow_local_requests_from_hooks_and_services, false)
-> 0.3034s
== 20180223144945 AddAllowLocalRequestsFromHooksAndServicesToApplicationSettings: migrated (1.0360s)

== 20180225180932 AddGeoNodeVerificationStatus: migrating =====================
— add_column(:geo_node_statuses, :repositories_verified_count, :integer)
-> 0.0010s
— add_column(:geo_node_statuses, :repositories_verification_failed_count, :integer)
-> 0.0007s
— add_column(:geo_node_statuses, :wikis_verified_count, :integer)
-> 0.0008s
— add_column(:geo_node_statuses, :wikis_verification_failed_count, :integer)
-> 0.0008s
== 20180225180932 AddGeoNodeVerificationStatus: migrated (0.0035s) ============

== 20180226050030 AddChecksumToCiJobArtifacts: migrating ======================
— add_column(:ci_job_artifacts, :file_sha256, :binary)
-> 0.0009s
== 20180226050030 AddChecksumToCiJobArtifacts: migrated (0.0010s) =============

== 20180227182112 AddGroupIdToBoardsCe: migrating =============================
— column_exists?(:boards, :group_id)
-> 0.0020s
== 20180227182112 AddGroupIdToBoardsCe: migrated (0.0021s) ====================

== 20180301010859 CreateCiBuildsMetadataTable: migrating ======================
— create_table(:ci_builds_metadata)
-> 0.0708s
== 20180301010859 CreateCiBuildsMetadataTable: migrated (0.0709s) =============

== 20180301084653 ChangeProjectNamespaceIdNotNull: migrating ==================
— change_column_null(:projects, :namespace_id, false)
-> 0.0041s
== 20180301084653 ChangeProjectNamespaceIdNotNull: migrated (0.0143s) =========

== 20180302152117 EnsureForeignKeysOnClustersApplications: migrating ==========
— foreign_keys(:clusters_applications_ingress)
-> 0.0042s
— foreign_keys(:clusters_applications_prometheus)
-> 0.0039s
— transaction_open?()
-> 0.0000s
— execute(“SET statement_timeout TO 0”)
-> 0.0004s
— foreign_keys(:clusters_applications_prometheus)
-> 0.0045s
— execute(“ALTER TABLE clusters_applications_prometheus\nADD CONSTRAINT fk_557e773639\nFOREIGN KEY
-> 0.0074s
— execute(“ALTER TABLE clusters_applications_prometheus VALIDATE CONSTRAINT fk_557e773639;”)
-> 0.0084s
== 20180302152117 EnsureForeignKeysOnClustersApplications: migrated (0.1075s) =

== 20180302230551 AddExternalWebhookTokenToProjects: migrating ================
— add_column(:projects, :external_webhook_token, :string)
-> 0.0015s
== 20180302230551 AddExternalWebhookTokenToProjects: migrated (0.0016s) =======

== 20180305095250 CreateInternalIdsTable: migrating ===========================
— create_table(:internal_ids, {:id=>:bigserial})
-> 0.0510s
== 20180305095250 CreateInternalIdsTable: migrated (0.0511s) ==================

== 20180305100050 RemovePermanentFromRedirectRoutes: migrating ================
— execute(“SET statement_timeout TO 0”)
-> 0.0007s
— execute(“DROP INDEX CONCURRENTLY IF EXISTS index_redirect_routes_on_path_text_pattern_ops_where_p
-> 0.0005s
— execute(“DROP INDEX CONCURRENTLY IF EXISTS index_redirect_routes_on_path_text_pattern_ops_where_t
-> 0.0005s
— remove_column(:redirect_routes, :permanent)
-> 0.0050s
== 20180305100050 RemovePermanentFromRedirectRoutes: migrated (0.0070s) =======

== 20180305144721 AddPrivilegedToRunner: migrating ============================
— transaction_open?()
-> 0.0000s
— execute(“SET statement_timeout TO 0”)
-> 0.0004s
— transaction()
— add_column(:clusters_applications_runners, :privileged, :boolean, {:default=>nil})
-> 0.0010s
— change_column_default(:clusters_applications_runners, :privileged, true)
-> 0.0029s
-> 0.0153s
— transaction_open?()
-> 0.0000s
— exec_query(“SELECT COUNT(*) AS count FROM \”clusters_applications_runners\””)
-> 0.0012s
— change_column_null(:clusters_applications_runners, :privileged, false)
-> 0.0063s
== 20180305144721 AddPrivilegedToRunner: migrated (0.0241s) ===================

== 20180306074045 MigrateCreateTraceArtifactSidekiqQueue: migrating ===========
== 20180306074045 MigrateCreateTraceArtifactSidekiqQueue: migrated (0.0004s) ==

== 20180306134842 AddMissingIndexesActsAsTaggableOnEngine: migrating ==========
— index_exists?(:taggings, :tag_id)
-> 0.0048s
— transaction_open?()
-> 0.0000s
— execute(“SET statement_timeout TO 0”)
-> 0.0004s
— index_exists?(:taggings, :tag_id, {:algorithm=>:concurrently})
-> 0.0044s
— add_index(:taggings, :tag_id, {:algorithm=>:concurrently})
-> 0.0373s
— index_exists?(:taggings, [:taggable_id, :taggable_type])
-> 0.0055s
— transaction_open?()
-> 0.0000s
— execute(“SET statement_timeout TO 0”)
-> 0.0005s
— index_exists?(:taggings, [:taggable_id, :taggable_type], {:algorithm=>:concurrently})
-> 0.0054s
— add_index(:taggings, [:taggable_id, :taggable_type], {:algorithm=>:concurrently})
-> 0.0298s
== 20180306134842 AddMissingIndexesActsAsTaggableOnEngine: migrated (0.0891s) =

== 20180306164012 AddPathIndexToRedirectRoutes: migrating =====================
— execute(“SET statement_timeout TO 0”)
-> 0.0004s
— indexes(:redirect_routes)
-> 0.0041s
— execute(“CREATE UNIQUE INDEX CONCURRENTLY index_redirect_routes_on_path_unique_text_pattern_ops O
-> 0.0326s
== 20180306164012 AddPathIndexToRedirectRoutes: migrated (0.0386s) ============

== 20180307012445 MigrateUpdateHeadPipelineForMergeRequestSidekiqQueue: migrating
== 20180307012445 MigrateUpdateHeadPipelineForMergeRequestSidekiqQueue: migrated (0.0004s)

== 20180307164427 DisableMirroringForProjectsWithInvalidMirrorUsers: migrating
— execute(“UPDATE projects\nSET mirror = FALSE, mirror_user_id = NULL\nWHERE mirror = true AND\n N
-> 0.0027s
== 20180307164427 DisableMirroringForProjectsWithInvalidMirrorUsers: migrated (0.0028s)

== 20180308052825 AddSectionNameIdIndexOnCiBuildTraceSections: migrating ======
— index_exists?(:ci_build_trace_sections, :section_name_id, {:name=>”index_ci_build_trace_sections_
-> 0.0039s
— transaction_open?()
-> 0.0000s
— execute(“SET statement_timeout TO 0”)
-> 0.0004s
— index_exists?(:ci_build_trace_sections, :section_name_id, {:name=>”index_ci_build_trace_sections_
-> 0.0036s
— add_index(:ci_build_trace_sections, :section_name_id, {:name=>”index_ci_build_trace_sections_on_s
-> 0.0229s
== 20180308052825 AddSectionNameIdIndexOnCiBuildTraceSections: migrated (0.0313s)

== 20180308234102 AddPartialIndexToProjectRepositoryStatesChecksumColumns: migrating
— index_exists?(:project_repository_states, [:repository_verification_checksum, :wiki_verification_
-> 0.0032s
— transaction_open?()
-> 0.0000s
— execute(“SET statement_timeout TO 0″)
-> 0.0005s
— index_exists?(:project_repository_states, [:repository_verification_checksum, :wiki_verification_ IS NULL OR wiki_verification_checksum IS NULL”, :algorithm=>:concurrently})
-> 0.0028s
— add_index(:project_repository_states, [:repository_verification_checksum, :wiki_verification_checNULL OR wiki_verification_checksum IS NULL”, :algorithm=>:concurrently})
-> 0.0234s
== 20180308234102 AddPartialIndexToProjectRepositoryStatesChecksumColumns: migrated (0.0305s)

== 20180309121820 RescheduleCommitsCountForMergeRequestDiff: migrating ========
— Populating the MergeRequestDiff `commits_count` (reschedule)
— execute(“SET statement_timeout TO ’60s'”)
-> 0.0003s
== 20180309121820 RescheduleCommitsCountForMergeRequestDiff: migrated (0.0016s)

== 20180309160427 AddPartialIndexesOnTodos: migrating =========================
— index_exists?(:todos, [:user_id, :id], {:name=>”index_todos_on_user_id_and_id_pending”})
-> 0.0061s
— transaction_open?()
-> 0.0000s
— execute(“SET statement_timeout TO 0″)
-> 0.0005s
— index_exists?(:todos, [:user_id, :id], {:where=>”state=’pending'”, :name=>”index_todos_on_user_id
-> 0.0066s
— add_index(:todos, [:user_id, :id], {:where=>”state=’pending'”, :name=>”index_todos_on_user_id_and
-> 0.0260s
— index_exists?(:todos, [:user_id, :id], {:name=>”index_todos_on_user_id_and_id_done”})
-> 0.0087s
— transaction_open?()
-> 0.0000s
— execute(“SET statement_timeout TO 0″)
-> 0.0004s
— index_exists?(:todos, [:user_id, :id], {:where=>”state=’done'”, :name=>”index_todos_on_user_id_an
-> 0.0079s
— add_index(:todos, [:user_id, :id], {:where=>”state=’done'”, :name=>”index_todos_on_user_id_and_id
-> 0.0321s
== 20180309160427 AddPartialIndexesOnTodos: migrated (0.0892s) ================

== 20180309215236 RemoveLastVericationAtColumnsFromProjectRepositoryStates: migrating
— remove_column(:project_repository_states, :last_repository_verification_at)
-> 0.0005s
— remove_column(:project_repository_states, :last_wiki_verification_at)
-> 0.0003s
== 20180309215236 RemoveLastVericationAtColumnsFromProjectRepositoryStates: migrated (0.0008s)

== 20180314100728 AddExternalAuthorizationServiceTimeoutToApplicationSettings: migrating
— add_column(:application_settings, :external_authorization_service_timeout, :float, {:default=>0.5
-> 0.0749s
== 20180314100728 AddExternalAuthorizationServiceTimeoutToApplicationSettings: migrated (0.0750s)

== 20180314145917 AddHeaderAndFooterBannersToAppearancesTable: migrating ======
— add_column(:appearances, :header_message, :text)
-> 0.0012s
— add_column(:appearances, :header_message_html, :text)
-> 0.0008s
— add_column(:appearances, :footer_message, :text)
-> 0.0007s
— add_column(:appearances, :footer_message_html, :text)
-> 0.0008s
— add_column(:appearances, :message_background_color, :text)
-> 0.0008s
— add_column(:appearances, :message_font_color, :text)
-> 0.0008s
== 20180314145917 AddHeaderAndFooterBannersToAppearancesTable: migrated (0.0053s)

== 20180314172513 RemoveLastVericationFailedColumnsFromProjectRepositoryStates: migrating
— remove_column(:project_repository_states, :last_repository_verification_failed)
-> 0.0012s
— remove_column(:project_repository_states, :last_wiki_verification_failed)
-> 0.0009s
== 20180314172513 RemoveLastVericationFailedColumnsFromProjectRepositoryStates: migrated (0.0023s)

== 20180314174825 AddPartialIndexToProjectRepositoryStatesVerificationColumns: migrating
— index_exists?(:project_repository_states, :last_repository_verification_failure, {:name=>”idx_rep
-> 0.0045s
— transaction_open?()
-> 0.0000s
— execute(“SET statement_timeout TO 0”)
-> 0.0005s
— index_exists?(:project_repository_states, :last_repository_verification_failure, {:name=>”idx_rep :algorithm=>:concurrently})
-> 0.0040s
— add_index(:project_repository_states, :last_repository_verification_failure, {:name=>”idx_repositgorithm=>:concurrently})
-> 0.0292s
— index_exists?(:project_repository_states, :last_wiki_verification_failure, {:name=>”idx_repositor
-> 0.0049s
— transaction_open?()
-> 0.0000s
— execute(“SET statement_timeout TO 0”)
-> 0.0004s
— index_exists?(:project_repository_states, :last_wiki_verification_failure, {:name=>”idx_repositorurrently})
-> 0.0044s
— add_index(:project_repository_states, :last_wiki_verification_failure, {:name=>”idx_repository_stntly})
-> 0.0318s
== 20180314174825 AddPartialIndexToProjectRepositoryStatesVerificationColumns: migrated (0.0806s)

== 20180315160435 AddExternalAuthMutualTlsFieldsToProjectSettings: migrating ==
— add_column(:application_settings, :external_auth_client_cert, :text)
-> 0.0013s
— add_column(:application_settings, :encrypted_external_auth_client_key, :text)
-> 0.0009s
— add_column(:application_settings, :encrypted_external_auth_client_key_iv, :string)
-> 0.0008s
— add_column(:application_settings, :encrypted_external_auth_client_key_pass, :string)
-> 0.0010s
— add_column(:application_settings, :encrypted_external_auth_client_key_pass_iv, :string)
-> 0.0010s
== 20180315160435 AddExternalAuthMutualTlsFieldsToProjectSettings: migrated (0.0054s)

== 20180317020334 AddSamlProviderToIdentities: migrating ======================
— add_column(:identities, :saml_provider_id, :integer)
-> 0.0077s
== 20180317020334 AddSamlProviderToIdentities: migrated (0.0078s) =============

== 20180319190020 CreateDeployTokens: migrating ===============================
— create_table(:deploy_tokens)
-> 0.0969s
== 20180319190020 CreateDeployTokens: migrated (0.0971s) ======================

== 20180320182229 AddIndexesForUserActivityQueries: migrating =================
— index_exists?(:events, [:author_id, :project_id])
-> 0.0059s
— transaction_open?()
-> 0.0000s
— execute(“SET statement_timeout TO 0”)
-> 0.0004s
— index_exists?(:events, [:author_id, :project_id], {:algorithm=>:concurrently})
-> 0.0059s
— add_index(:events, [:author_id, :project_id], {:algorithm=>:concurrently})
-> 0.0263s
— index_exists?(:user_interacted_projects, :user_id)
-> 0.0030s
— transaction_open?()
-> 0.0000s
— execute(“SET statement_timeout TO 0”)
-> 0.0005s
— index_exists?(:user_interacted_projects, :user_id, {:algorithm=>:concurrently})
-> 0.0027s
— add_index(:user_interacted_projects, :user_id, {:algorithm=>:concurrently})
-> 0.0351s
== 20180320182229 AddIndexesForUserActivityQueries: migrated (0.0808s) ========

== 20180323150945 AddPushToMergeRequestToNotificationSettings: migrating ======
— add_column(:notification_settings, :push_to_merge_request, :boolean)
-> 0.0011s
== 20180323150945 AddPushToMergeRequestToNotificationSettings: migrated (0.0012s)

== 20180325034910 CreateProtectedBranchUnprotectAccessLevels: migrating =======
— create_table(:protected_branch_unprotect_access_levels)
-> 0.1154s
— add_foreign_key(:protected_branch_unprotect_access_levels, :namespaces, {:column=>:group_id, :on_
-> 0.0031s
== 20180325034910 CreateProtectedBranchUnprotectAccessLevels: migrated (0.1187s)

== 20180326202229 CreateCiBuildTraceChunks: migrating =========================
— create_table(:ci_build_trace_chunks, {:id=>:bigserial})
-> 0.0831s
== 20180326202229 CreateCiBuildTraceChunks: migrated (0.0832s) ================

== 20180327101207 RemoveIndexFromEventsTable: migrating =======================
— transaction_open?()
-> 0.0000s
— select_one(“SELECT current_setting(‘server_version_num’) AS v”)
-> 0.0009s
— execute(“SET statement_timeout TO 0”)
-> 0.0004s
— index_exists?(:events, :author_id, {:algorithm=>:concurrently})
-> 0.0067s
— remove_index(:events, {:algorithm=>:concurrently, :column=>:author_id})
-> 0.0272s
== 20180327101207 RemoveIndexFromEventsTable: migrated (0.0356s) ==============

== 20180329230151 AddMissingOnPrimaryCountsToGeoNodeStatuses: migrating =======
— add_column(:geo_node_statuses, :lfs_objects_synced_missing_on_primary_count, :integer)
-> 0.0004s
— add_column(:geo_node_statuses, :job_artifacts_synced_missing_on_primary_count, :integer)
-> 0.0003s
— add_column(:geo_node_statuses, :attachments_synced_missing_on_primary_count, :integer)
-> 0.0003s
== 20180329230151 AddMissingOnPrimaryCountsToGeoNodeStatuses: migrated (0.0010s)

== 20180330121048 AddIssueDueToNotificationSettings: migrating ================
— add_column(:notification_settings, :issue_due, :boolean)
-> 0.0003s
== 20180330121048 AddIssueDueToNotificationSettings: migrated (0.0003s) =======

== 20180401213713 AddEmailAdditionalTextToApplicationSettings: migrating ======
— add_column(:application_settings, :email_additional_text, :string, {:length=>10000})
-> 0.0005s
== 20180401213713 AddEmailAdditionalTextToApplicationSettings: migrated (0.0005s)

== 20180403035759 CreateProjectCiCdSettings: migrating ========================
— table_exists?(:project_ci_cd_settings)
-> 0.0005s
— create_table(:project_ci_cd_settings)
-> 0.0325s
— execute(“SET statement_timeout TO 0”)
-> 0.0004s
— execute(“INSERT INTO project_ci_cd_settings (project_id) SELECT id FROM projects”)
-> 0.0078s
— transaction_open?()
-> 0.0000s
— execute(“SET statement_timeout TO 0”)
-> 0.0004s
— index_exists?(:project_ci_cd_settings, :project_id, {:unique=>true, :algorithm=>:concurrently})
-> 0.0016s
— add_index(:project_ci_cd_settings, :project_id, {:unique=>true, :algorithm=>:concurrently})
-> 0.0382s
— execute(“DELETE FROM project_ci_cd_settings\nWHERE NOT EXISTS (\n SELECT 1\n FROM projects\n W
-> 0.0015s
— transaction_open?()
-> 0.0000s
— execute(“SET statement_timeout TO 0”)
-> 0.0005s
— foreign_keys(:project_ci_cd_settings)
-> 0.0061s
— execute(“ALTER TABLE project_ci_cd_settings\nADD CONSTRAINT fk_24c15d2f2e\nFOREIGN KEY (project_i
-> 0.0091s
— execute(“ALTER TABLE project_ci_cd_settings VALIDATE CONSTRAINT fk_24c15d2f2e;”)
-> 0.0082s
== 20180403035759 CreateProjectCiCdSettings: migrated (0.1082s) ===============

== 20180405101928 RescheduleBuildsStagesMigration: migrating ==================
— execute(“SET statement_timeout TO 0”)
-> 0.0005s
== 20180405101928 RescheduleBuildsStagesMigration: migrated (0.0093s) =========

== 20180405142733 CreateProjectDeployTokens: migrating ========================
— create_table(:project_deploy_tokens)
-> 0.0544s
== 20180405142733 CreateProjectDeployTokens: migrated (0.0544s) ===============

== 20180406204716 AddLimitsCiBuildTraceChunksRawDataForMysql: migrating =======
== 20180406204716 AddLimitsCiBuildTraceChunksRawDataForMysql: migrated (0.0000s)

== 20180409170809 PopulateMissingProjectCiCdSettings: migrating ===============
— execute(“INSERT INTO project_ci_cd_settings (project_id)\nSELECT id\nFROM projects\nWHERE NOT EXI
-> 0.0017s
== 20180409170809 PopulateMissingProjectCiCdSettings: migrated (0.0019s) ======

== 20180413022611 CreateMissingNamespaceForInternalUsers: migrating ===========
— column_exists?(:users, :support_bot)
-> 0.0058s
== 20180413022611 CreateMissingNamespaceForInternalUsers: migrated (0.0133s) ==

== 20180416112831 DropNullConstraintGeoEventsStoragePath: migrating ===========
— change_column_null(:geo_hashed_storage_migrated_events, :repository_storage_path, true)
-> 0.0010s
— change_column_null(:geo_repository_created_events, :repository_storage_path, true)
-> 0.0007s
— change_column_null(:geo_repository_deleted_events, :repository_storage_path, true)
-> 0.0008s
— change_column_null(:geo_repository_renamed_events, :repository_storage_path, true)
-> 0.0008s
== 20180416112831 DropNullConstraintGeoEventsStoragePath: migrated (0.0037s) ==

== 20180416155103 AddFurtherScopeColumnsToInternalIdTable: migrating ==========
— change_column_null(:internal_ids, :project_id, true)
-> 0.0008s
— add_column(:internal_ids, :namespace_id, :integer, {:null=>true})
-> 0.0008s
== 20180416155103 AddFurtherScopeColumnsToInternalIdTable: migrated (0.0018s) =

== 20180416205949 AddChecksumFieldsToGeoNodeStatuses: migrating ===============
— add_column(:geo_node_statuses, :repositories_checksummed_count, :integer)
-> 0.0012s
— add_column(:geo_node_statuses, :repositories_checksum_failed_count, :integer)
-> 0.0008s
— add_column(:geo_node_statuses, :repositories_checksum_mismatch_count, :integer)
-> 0.0008s
— add_column(:geo_node_statuses, :wikis_checksummed_count, :integer)
-> 0.0011s
— add_column(:geo_node_statuses, :wikis_checksum_failed_count, :integer)
-> 0.0009s
— add_column(:geo_node_statuses, :wikis_checksum_mismatch_count, :integer)
-> 0.0009s
== 20180416205949 AddChecksumFieldsToGeoNodeStatuses: migrated (0.0061s) ======

== 20180417090132 AddIndexConstraintsToInternalIdTable: migrating =============
— transaction_open?()
-> 0.0000s
— execute(“SET statement_timeout TO 0”)
-> 0.0005s
— index_exists?(:internal_ids, [:usage, :namespace_id], {:unique=>true, :where=>”namespace_id IS NO
-> 0.0035s
— add_index(:internal_ids, [:usage, :namespace_id], {:unique=>true, :where=>”namespace_id IS NOT NU
-> 0.0351s
— index_exists?(:internal_ids, [:usage, :project_id], {:name=>”index_internal_ids_on_usage_and_proj
-> 0.0043s
— rename_index(:internal_ids, “index_internal_ids_on_usage_and_project_id”, “index_internal_ids_on_
-> 0.0132s
— transaction_open?()
-> 0.0000s
— execute(“SET statement_timeout TO 0″)
-> 0.0004s
— index_exists?(:internal_ids, [:usage, :project_id], {:unique=>true, :where=>”project_id IS NOT NU
-> 0.0042s
— add_index(:internal_ids, [:usage, :project_id], {:unique=>true, :where=>”project_id IS NOT NULL”,
-> 0.0354s
— transaction_open?()
-> 0.0000s
— select_one(“SELECT current_setting(‘server_version_num’) AS v”)
-> 0.0007s
— execute(“SET statement_timeout TO 0”)
-> 0.0003s
— indexes(:internal_ids)
-> 0.0049s
— remove_index(:internal_ids, {:algorithm=>:concurrently, :name=>”index_internal_ids_on_usage_and_p
-> 0.0165s
— transaction_open?()
-> 0.0000s
— execute(“SET statement_timeout TO 0”)
-> 0.0004s
— foreign_keys(:internal_ids)
-> 0.0061s
— execute(“ALTER TABLE internal_ids\nADD CONSTRAINT fk_162941d509\nFOREIGN KEY (namespace_id)\nREFE
-> 0.0043s
— execute(“ALTER TABLE internal_ids VALIDATE CONSTRAINT fk_162941d509;”)
-> 0.0083s
== 20180417090132 AddIndexConstraintsToInternalIdTable: migrated (0.1403s) ====

== 20180417101040 AddTmpStagePriorityIndexToCiBuilds: migrating ===============
— transaction_open?()
-> 0.0000s
— execute(“SET statement_timeout TO 0″)
-> 0.0004s
— index_exists?(:ci_builds, [:stage_id, :stage_idx], {:where=>”stage_idx IS NOT NULL”, :name=>”tmp_
-> 0.0149s
— add_index(:ci_builds, [:stage_id, :stage_idx], {:where=>”stage_idx IS NOT NULL”, :name=>”tmp_buil
-> 0.0327s
== 20180417101040 AddTmpStagePriorityIndexToCiBuilds: migrated (0.0485s) ======

== 20180417101940 AddIndexToCiStage: migrating ================================
— add_column(:ci_stages, :position, :integer)
-> 0.0010s
== 20180417101940 AddIndexToCiStage: migrated (0.0011s) =======================

== 20180418053107 AddIndexToCiJobArtifactsFileStore: migrating ================
— transaction_open?()
-> 0.0000s
— execute(“SET statement_timeout TO 0”)
-> 0.0005s
— index_exists?(:ci_job_artifacts, :file_store, {:algorithm=>:concurrently})
-> 0.0045s
— add_index(:ci_job_artifacts, :file_store, {:algorithm=>:concurrently})
-> 0.0340s
== 20180418053107 AddIndexToCiJobArtifactsFileStore: migrated (0.0393s) =======

== 20180419031622 AddIndexForTrackingMirroredCiCdRepositories: migrating ======
— execute(“CREATE INDEX CONCURRENTLY IF NOT EXISTS index_projects_on_mirror_and_mirror_trigger_buil
-> 0.0311s
== 20180419031622 AddIndexForTrackingMirroredCiCdRepositories: migrated (0.0320s)

== 20180419171038 CreateVulnerabilityFeedback: migrating ======================
— create_table(:vulnerability_feedback)
-> 0.1113s
== 20180419171038 CreateVulnerabilityFeedback: migrated (0.1114s) =============

== 20180420010016 AddPipelineBuildForeignKey: migrating =======================
— execute(“DELETE FROM ci_builds WHERE project_id IS NULL OR commit_id IS NULL\n”)
-> 0.0015s
— execute(“DELETE FROM ci_builds WHERE NOT EXISTS\n (SELECT true FROM ci_pipelines WHERE ci_pipeli
-> 0.0012s
— transaction_open?()
-> 0.0000s
— execute(“SET statement_timeout TO 0”)
-> 0.0003s
— foreign_keys(:ci_builds)
-> 0.0057s
— execute(“ALTER TABLE ci_builds\nADD CONSTRAINT fk_d3130c9a7f\nFOREIGN KEY (commit_id)\nREFERENCES
-> 0.0062s
— execute(“ALTER TABLE ci_builds VALIDATE CONSTRAINT fk_d3130c9a7f;”)
-> 0.0085s
== 20180420010016 AddPipelineBuildForeignKey: migrated (0.0243s) ==============

== 20180420010616 CleanupBuildStageMigration: migrating =======================
— execute(“SET statement_timeout TO 0”)
-> 0.0005s
— indexes(:ci_builds)
-> 0.0181s
— transaction_open?()
-> 0.0000s
— execute(“SET statement_timeout TO 0″)
-> 0.0003s
— index_exists?(:ci_builds, :id, {:where=>”stage_id IS NULL”, :name=>”tmp_id_stage_partial_null_ind
-> 0.0147s
— add_index(:ci_builds, :id, {:where=>”stage_id IS NULL”, :name=>”tmp_id_stage_partial_null_index”,
-> 0.0348s
— transaction_open?()
-> 0.0000s
— select_one(“SELECT current_setting(‘server_version_num’) AS v”)
-> 0.0007s
— execute(“SET statement_timeout TO 0″)
-> 0.0005s
— indexes(:ci_builds)
-> 0.0189s
— remove_index(:ci_builds, {:algorithm=>:concurrently, :name=>”tmp_id_stage_partial_null_index”})
-> 0.0636s
== 20180420010616 CleanupBuildStageMigration: migrated (0.1602s) ==============

== 20180420080616 ScheduleStagesIndexMigration: migrating =====================
— execute(“SET statement_timeout TO 0”)
-> 0.0004s
== 20180420080616 ScheduleStagesIndexMigration: migrated (0.0058s) ============

== 20180424090541 AddEnforceTermsToApplicationSettings: migrating =============
— add_column(:application_settings, :enforce_terms, :boolean, {:default=>false})
-> 0.0505s
== 20180424090541 AddEnforceTermsToApplicationSettings: migrated (0.0506s) ====

== 20180424134533 CreateApplicationSettingTerms: migrating ====================
— create_table(:application_setting_terms)
-> 0.0377s
== 20180424134533 CreateApplicationSettingTerms: migrated (0.0378s) ===========

== 20180425075446 CreateTermAgreements: migrating =============================
— create_table(:term_agreements)
-> 0.1154s
— add_index(:term_agreements, [:user_id, :term_id], {:unique=>true, :name=>”term_agreements_unique_
-> 0.0333s
== 20180425075446 CreateTermAgreements: migrated (0.1490s) ====================

== 20180425131009 AssureCommitsCountForMergeRequestDiff: migrating ============
== 20180425131009 AssureCommitsCountForMergeRequestDiff: migrated (0.0043s) ===

== 20180426102016 AddAcceptedTermToUsers: migrating ===========================
— change_table(:users)
-> 0.0074s
— transaction_open?()
-> 0.0000s
— execute(“SET statement_timeout TO 0”)
-> 0.0006s
— foreign_keys(:users)
-> 0.0060s
— execute(“ALTER TABLE users\nADD CONSTRAINT fk_789cd90b35\nFOREIGN KEY (accepted_term_id)\nREFEREN
-> 0.0095s
— execute(“ALTER TABLE users VALIDATE CONSTRAINT fk_789cd90b35;”)
-> 0.0082s
== 20180426102016 AddAcceptedTermToUsers: migrated (0.0325s) ==================

== 20180430101916 AddRunnerTypeToCiRunners: migrating =========================
— add_column(:ci_runners, :runner_type, :smallint)
-> 0.0012s
== 20180430101916 AddRunnerTypeToCiRunners: migrated (0.0013s) ================

== 20180430143705 BackfillRunnerTypeForCiRunnersPostMigrate: migrating ========
— transaction_open?()
-> 0.0000s
— exec_query(“SELECT COUNT(*) AS count FROM \”ci_runners\” WHERE \”ci_runners\”.\”is_shared\” = ‘t’
-> 0.0010s
— transaction_open?()
-> 0.0000s
— exec_query(“SELECT COUNT(*) AS count FROM \”ci_runners\” WHERE \”ci_runners\”.\”is_shared\” = ‘f’
-> 0.0009s
== 20180430143705 BackfillRunnerTypeForCiRunnersPostMigrate: migrated (0.0058s)

== 20180502122856 CreateProjectMirrorData: migrating ==========================
— table_exists?(:project_mirror_data)
-> 0.0014s
— column_exists?(:project_mirror_data, :status)
-> 0.0021s
— add_column(:project_mirror_data, :status, :string)
-> 0.0191s
— column_exists?(:project_mirror_data, :jid)
-> 0.0021s
— add_column(:project_mirror_data, :jid, :string)
-> 0.0010s
— column_exists?(:project_mirror_data, :last_error)
-> 0.0019s
— add_column(:project_mirror_data, :last_error, :text)
-> 0.0009s
== 20180502122856 CreateProjectMirrorData: migrated (0.0292s) =================

== 20180502124117 AddMissingColumnsToProjectMirrorData: migrating =============
— column_exists?(:project_mirror_data, :last_update_at)
-> 0.0025s
— add_column(:project_mirror_data, :last_update_at, :datetime_with_timezone)
-> 0.0042s
— column_exists?(:project_mirror_data, :last_successful_update_at)
-> 0.0025s
— add_column(:project_mirror_data, :last_successful_update_at, :datetime_with_timezone)
-> 0.0056s
== 20180502124117 AddMissingColumnsToProjectMirrorData: migrated (0.0150s) ====

== 20180502125859 AddSamlProviderIndexAndConstraintToIdentities: migrating ====
— transaction_open?()
-> 0.0000s
— execute(“SET statement_timeout TO 0″)
-> 0.0005s
— index_exists?(:identities, :saml_provider_id, {:where=>”saml_provider_id IS NOT NULL”, :algorithm
-> 0.0032s
— add_index(:identities, :saml_provider_id, {:where=>”saml_provider_id IS NOT NULL”, :algorithm=>:c
-> 0.0275s
— transaction_open?()
-> 0.0000s
— execute(“SET statement_timeout TO 0”)
-> 0.0004s
— foreign_keys(:identities)
-> 0.0052s
— execute(“ALTER TABLE identities\nADD CONSTRAINT fk_aade90f0fc\nFOREIGN KEY (saml_provider_id)\nRE
-> 0.0115s
— execute(“ALTER TABLE identities VALIDATE CONSTRAINT fk_aade90f0fc;”)
-> 0.0082s
== 20180502125859 AddSamlProviderIndexAndConstraintToIdentities: migrated (0.0575s)

== 20180502130136 MigrateMirrorAttributesDataFromProjectsToImportState: migrating
== 20180502130136 MigrateMirrorAttributesDataFromProjectsToImportState: migrated (0.0068s)

== 20180502134117 MigrateImportAttributesDataFromProjectsToProjectMirrorData: migrating
== 20180502134117 MigrateImportAttributesDataFromProjectsToProjectMirrorData: migrated (0.0054s)

== 20180503131624 CreateRemoteMirrors: migrating ==============================
— table_exists?(:remote_mirrors)
-> 0.0015s
== 20180503131624 CreateRemoteMirrors: migrated (0.0016s) =====================

== 20180503141722 AddRemoteMirrorAvailableOverriddenToProjects: migrating =====
— column_exists?(:projects, :remote_mirror_available_overridden)
-> 0.0052s
== 20180503141722 AddRemoteMirrorAvailableOverriddenToProjects: migrated (0.0053s)

== 20180503150427 AddIndexToNamespacesRunnersToken: migrating =================
— transaction_open?()
-> 0.0000s
— execute(“SET statement_timeout TO 0”)
-> 0.0005s
— index_exists?(:namespaces, :runners_token, {:unique=>true, :algorithm=>:concurrently})
-> 0.0136s
— add_index(:namespaces, :runners_token, {:unique=>true, :algorithm=>:concurrently})
-> 0.0341s
== 20180503150427 AddIndexToNamespacesRunnersToken: migrated (0.0486s) ========

== 20180503154922 AddIndexesToProjectMirrorDataEE: migrating ==================
— transaction_open?()
-> 0.0000s
— execute(“SET statement_timeout TO 0”)
-> 0.0005s
— index_exists?(:project_mirror_data, :last_successful_update_at, {:algorithm=>:concurrently})
-> 0.0042s
— add_index(:project_mirror_data, :last_successful_update_at, {:algorithm=>:concurrently})
-> 0.0349s
== 20180503154922 AddIndexesToProjectMirrorDataEE: migrated (0.0401s) =========

== 20180503175053 EnsureMissingColumnsToProjectMirrorData: migrating ==========
— column_exists?(:project_mirror_data, :status)
-> 0.0023s
— column_exists?(:project_mirror_data, :jid)
-> 0.0021s
— column_exists?(:project_mirror_data, :last_error)
-> 0.0021s
== 20180503175053 EnsureMissingColumnsToProjectMirrorData: migrated (0.0067s) =

== 20180503175054 AddIndexesToProjectMirrorData: migrating ====================
— transaction_open?()
-> 0.0000s
— execute(“SET statement_timeout TO 0”)
-> 0.0005s
— index_exists?(:project_mirror_data, :jid, {:algorithm=>:concurrently})
-> 0.0051s
— add_index(:project_mirror_data, :jid, {:algorithm=>:concurrently})
-> 0.0259s
— transaction_open?()
-> 0.0000s
— execute(“SET statement_timeout TO 0”)
-> 0.0005s
— index_exists?(:project_mirror_data, :status, {:algorithm=>:concurrently})
-> 0.0059s
— add_index(:project_mirror_data, :status, {:algorithm=>:concurrently})
-> 0.0267s
== 20180503175054 AddIndexesToProjectMirrorData: migrated (0.0654s) ===========

== 20180503193542 AddIndexesToRemoteMirror: migrating =========================
— index_exists?(:remote_mirrors, :last_successful_update_at)
-> 0.0039s
== 20180503193542 AddIndexesToRemoteMirror: migrated (0.0040s) ================

== 20180503193953 AddMirrorAvailableToApplicationSettings: migrating ==========
— column_exists?(:application_settings, :mirror_available)
-> 0.0273s
== 20180503193953 AddMirrorAvailableToApplicationSettings: migrated (0.0274s) =

== 20180503200320 EnablePrometheusMetricsByDefault: migrating =================
— change_column_default(:application_settings, :prometheus_metrics_enabled, true)
-> 0.0235s
== 20180503200320 EnablePrometheusMetricsByDefault: migrated (0.0236s) ========

== 20180508055821 MakeRemoteMirrorsDisabledByDefault: migrating ===============
— change_column_default(:remote_mirrors, :enabled, false)
-> 0.0023s
== 20180508055821 MakeRemoteMirrorsDisabledByDefault: migrated (0.0023s) ======

== 20180508100222 AddNotNullConstraintToProjectMirrorDataForeignKey: migrating
— change_column_null(:project_mirror_data, :project_id, false)
-> 0.0006s
== 20180508100222 AddNotNullConstraintToProjectMirrorDataForeignKey: migrated (0.0020s)

== 20180508102840 AddUniqueConstraintToProjectMirrorDataProjectIdIndex: migrating
— transaction_open?()
-> 0.0000s
— execute(“SET statement_timeout TO 0”)
-> 0.0007s
— index_exists?(:project_mirror_data, :project_id, {:unique=>true, :name=>”index_project_mirror_dat
-> 0.0065s
— add_index(:project_mirror_data, :project_id, {:unique=>true, :name=>”index_project_mirror_data_on
-> 0.0329s
— transaction_open?()
-> 0.0000s
— select_one(“SELECT current_setting(‘server_version_num’) AS v”)
-> 0.0004s
— execute(“SET statement_timeout TO 0”)
-> 0.0002s
— indexes(:project_mirror_data)
-> 0.0021s
— remove_index(:project_mirror_data, {:algorithm=>:concurrently, :name=>”index_project_mirror_data_
-> 0.0088s
— rename_index(:project_mirror_data, “index_project_mirror_data_on_project_id_unique”, “index_proje
-> 0.0059s
== 20180508102840 AddUniqueConstraintToProjectMirrorDataProjectIdIndex: migrated (0.0583s)

== 20180509091305 RemoveProjectMirrorDataCreatedAtUpdatedAt: migrating ========
— column_exists?(:project_mirror_data, :created_at)
-> 0.0014s
— remove_column(:project_mirror_data, :created_at)
-> 0.0005s
— column_exists?(:project_mirror_data, :updated_at)
-> 0.0016s
— remove_column(:project_mirror_data, :updated_at)
-> 0.0006s
== 20180509091305 RemoveProjectMirrorDataCreatedAtUpdatedAt: migrated (0.0044s)

== 20180529093006 EnsureRemoteMirrorColumns: migrating ========================
— column_exists?(:remote_mirrors, :last_update_started_at)
-> 0.0026s
— column_exists?(:remote_mirrors, :remote_name)
-> 0.0022s
— column_exists?(:remote_mirrors, :only_protected_branches)
-> 0.0022s
== 20180529093006 EnsureRemoteMirrorColumns: migrated (0.0073s) ===============
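That is the last schema migration of the 10.4 → 11.9 jump. Before moving on it is worth confirming that nothing is left pending; the standard Rails task exposed through the Omnibus wrapper prints the status of every migration:

sudo gitlab-rake db:migrate:status   # every migration should be listed with status "up"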

- execute "bash" "/tmp/chef-script20190416-23896-1lyelex"
Recipe: gitlab::gitlab-rails
* execute[clear the gitlab-rails cache] action run
- execute /opt/gitlab/bin/gitlab-rake cache:clear
Recipe: gitlab::logrotate_folders_and_configs
* directory[/var/opt/gitlab/logrotate] action create (up to date)
* directory[/var/opt/gitlab/logrotate/logrotate.d] action create (up to date)
* directory[/var/log/gitlab/logrotate] action create (up to date)
* template[/var/opt/gitlab/logrotate/logrotate.conf] action create (up to date)
* template[/var/opt/gitlab/logrotate/logrotate.d/nginx] action create (up to date)
* template[/var/opt/gitlab/logrotate/logrotate.d/unicorn] action create (up to date)
* template[/var/opt/gitlab/logrotate/logrotate.d/gitlab-rails] action create (up to date)
* template[/var/opt/gitlab/logrotate/logrotate.d/gitlab-shell] action create (up to date)
* template[/var/opt/gitlab/logrotate/logrotate.d/gitlab-workhorse] action create (up to date)
* template[/var/opt/gitlab/logrotate/logrotate.d/gitlab-pages] action create (up to date)
Recipe: gitlab::unicorn
* directory[/var/log/gitlab/unicorn] action create (up to date)
* directory[/opt/gitlab/var/unicorn] action create (up to date)
* directory[/var/opt/gitlab/gitlab-rails/sockets] action create (up to date)
* directory[/var/opt/gitlab/gitlab-rails/etc] action create (up to date)
* template[/var/opt/gitlab/gitlab-rails/etc/unicorn.rb] action create (up to date)
* directory[/opt/gitlab/sv/unicorn] action create (up to date)
* directory[/opt/gitlab/sv/unicorn/log] action create (up to date)
* directory[/opt/gitlab/sv/unicorn/log/main] action create (up to date)
* template[/opt/gitlab/sv/unicorn/run] action create (up to date)
* template[/opt/gitlab/sv/unicorn/log/run] action create (up to date)
* template[/var/log/gitlab/unicorn/config] action create (up to date)
* ruby_block[reload unicorn svlogd configuration] action nothing (skipped due to action :nothing)
* ruby_block[restart unicorn svlogd configuration] action nothing (skipped due to action :nothing)
* file[/opt/gitlab/sv/unicorn/down] action delete (up to date)
* directory[/opt/gitlab/sv/unicorn/control] action create (up to date)
* template[/opt/gitlab/sv/unicorn/control/t] action create (up to date)
* link[/opt/gitlab/init/unicorn] action create (up to date)
* link[/opt/gitlab/service/unicorn] action create (up to date)
* ruby_block[supervise_unicorn_sleep] action run (skipped due to not_if)
* directory[/opt/gitlab/sv/unicorn/supervise] action create (up to date)
* directory[/opt/gitlab/sv/unicorn/log/supervise] action create (up to date)
* file[/opt/gitlab/sv/unicorn/supervise/ok] action touch (skipped due to only_if)
* file[/opt/gitlab/sv/unicorn/log/supervise/ok] action touch (skipped due to only_if)
* file[/opt/gitlab/sv/unicorn/supervise/control] action touch (skipped due to only_if)
* file[/opt/gitlab/sv/unicorn/log/supervise/control] action touch (skipped due to only_if)
* service[unicorn] action nothing (skipped due to action :nothing)
* sysctl[net.core.somaxconn] action create
* directory[create /etc/sysctl.d for net.core.somaxconn] action create (up to date)
* file[create /opt/gitlab/embedded/etc/90-omnibus-gitlab-net.core.somaxconn.conf net.core.somaxconn] action cr
* link[/etc/sysctl.d/90-omnibus-gitlab-net.core.somaxconn.conf] action create (up to date)
* file[delete /etc/sysctl.d/90-postgresql.conf net.core.somaxconn] action delete (skipped due to only_if)
* file[delete /etc/sysctl.d/90-unicorn.conf net.core.somaxconn] action delete (skipped due to only_if)
* file[delete /opt/gitlab/embedded/etc/90-omnibus-gitlab.conf net.core.somaxconn] action delete (skipped due t
* file[delete /etc/sysctl.d/90-omnibus-gitlab.conf net.core.somaxconn] action delete (skipped due to only_if)
* execute[load sysctl conf net.core.somaxconn] action nothing (skipped due to action :nothing)
(up to date)
Recipe: gitlab::sidekiq
* directory[/var/log/gitlab/sidekiq] action create (up to date)
* directory[/opt/gitlab/sv/sidekiq] action create (up to date)
* directory[/opt/gitlab/sv/sidekiq/log] action create (up to date)
* directory[/opt/gitlab/sv/sidekiq/log/main] action create (up to date)
* template[/opt/gitlab/sv/sidekiq/run] action create (up to date)
* template[/opt/gitlab/sv/sidekiq/log/run] action create (up to date)
* template[/var/log/gitlab/sidekiq/config] action create (up to date)
* ruby_block[reload sidekiq svlogd configuration] action nothing (skipped due to action :nothing)
* ruby_block[restart sidekiq svlogd configuration] action nothing (skipped due to action :nothing)
* file[/opt/gitlab/sv/sidekiq/down] action delete (up to date)
* link[/opt/gitlab/init/sidekiq] action create (up to date)
* link[/opt/gitlab/service/sidekiq] action create (up to date)
* ruby_block[supervise_sidekiq_sleep] action run (skipped due to not_if)
* directory[/opt/gitlab/sv/sidekiq/supervise] action create (up to date)
* directory[/opt/gitlab/sv/sidekiq/log/supervise] action create (up to date)
* file[/opt/gitlab/sv/sidekiq/supervise/ok] action touch (skipped due to only_if)
* file[/opt/gitlab/sv/sidekiq/log/supervise/ok] action touch (skipped due to only_if)
* file[/opt/gitlab/sv/sidekiq/supervise/control] action touch (skipped due to only_if)
* file[/opt/gitlab/sv/sidekiq/log/supervise/control] action touch (skipped due to only_if)
* service[sidekiq] action nothing (skipped due to action :nothing)
Recipe: gitlab::gitlab-workhorse
* directory[/var/opt/gitlab/gitlab-workhorse] action create (up to date)
* directory[/var/log/gitlab/gitlab-workhorse] action create (up to date)
* directory[/opt/gitlab/etc/gitlab-workhorse] action create (up to date)
* env_dir[/opt/gitlab/etc/gitlab-workhorse/env] action create
* directory[/opt/gitlab/etc/gitlab-workhorse/env] action create (up to date)
* file[/opt/gitlab/etc/gitlab-workhorse/env/PATH] action create (up to date)
* file[/opt/gitlab/etc/gitlab-workhorse/env/HOME] action create (up to date)
(up to date)
* directory[/opt/gitlab/sv/gitlab-workhorse] action create (up to date)
* directory[/opt/gitlab/sv/gitlab-workhorse/log] action create (up to date)
* directory[/opt/gitlab/sv/gitlab-workhorse/log/main] action create (up to date)
* template[/opt/gitlab/sv/gitlab-workhorse/run] action create (up to date)
* template[/opt/gitlab/sv/gitlab-workhorse/log/run] action create (up to date)
* template[/var/log/gitlab/gitlab-workhorse/config] action create (up to date)
* ruby_block[reload gitlab-workhorse svlogd configuration] action nothing (skipped due to action :nothing)
* ruby_block[restart gitlab-workhorse svlogd configuration] action nothing (skipped due to action :nothing)
* file[/opt/gitlab/sv/gitlab-workhorse/down] action delete (up to date)
* link[/opt/gitlab/init/gitlab-workhorse] action create (up to date)
* link[/opt/gitlab/service/gitlab-workhorse] action create (up to date)
* ruby_block[supervise_gitlab-workhorse_sleep] action run (skipped due to not_if)
* directory[/opt/gitlab/sv/gitlab-workhorse/supervise] action create (up to date)
* directory[/opt/gitlab/sv/gitlab-workhorse/log/supervise] action create (up to date)
* file[/opt/gitlab/sv/gitlab-workhorse/supervise/ok] action touch (skipped due to only_if)
* file[/opt/gitlab/sv/gitlab-workhorse/log/supervise/ok] action touch (skipped due to only_if)
* file[/opt/gitlab/sv/gitlab-workhorse/supervise/control] action touch (skipped due to only_if)
* file[/opt/gitlab/sv/gitlab-workhorse/log/supervise/control] action touch (skipped due to only_if)
* service[gitlab-workhorse] action nothing (skipped due to action :nothing)
* file[/var/opt/gitlab/gitlab-workhorse/VERSION] action create
- update content in file /var/opt/gitlab/gitlab-workhorse/VERSION from 361166 to 28a20c
--- /var/opt/gitlab/gitlab-workhorse/VERSION 2018-02-20 13:41:16.898969455 +0900
+++ /var/opt/gitlab/gitlab-workhorse/.chef-VERSION20190416-23896-fb7bbi 2019-04-16 12:17:05.432809785 +090
@@ -1,2 +1,2 @@
-gitlab-workhorse v3.3.1-20180216.173531
+gitlab-workhorse v4.2.1-20180711.082039
- restore selinux security context
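Bundled components are versioned inside the Omnibus package itself, so gitlab-workhorse jumps from v3.3.1 to v4.2.1 here without any separate install step. To confirm what is in place after the upgrade, the file Chef just rewrote can simply be read back:

cat /var/opt/gitlab/gitlab-workhorse/VERSION   # should now report gitlab-workhorse v4.2.1-...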
* template[/var/opt/gitlab/gitlab-workhorse/config.toml] action create (up to date)
Recipe: gitlab::mailroom_disable
* link[/opt/gitlab/service/mailroom] action delete (up to date)
* directory[/opt/gitlab/sv/mailroom] action delete (up to date)
Recipe: gitlab::nginx
* directory[/var/opt/gitlab/nginx] action create (up to date)
* directory[/var/opt/gitlab/nginx/conf] action create (up to date)
* directory[/var/log/gitlab/nginx] action create (up to date)
* link[/var/opt/gitlab/nginx/logs] action create (up to date)
* template[/var/opt/gitlab/nginx/conf/gitlab-http.conf] action create
- update content in file /var/opt/gitlab/nginx/conf/gitlab-http.conf from bd94df to 3e6762
--- /var/opt/gitlab/nginx/conf/gitlab-http.conf 2018-06-18 15:18:50.354364773 +0900
+++ /var/opt/gitlab/nginx/conf/.chef-gitlab-http20190416-23896-4nx32e.conf 2019-04-16 12:17:05.494807956 +090
@@ -53,7 +53,7 @@
ssl_certificate_key /etc/letsencrypt/live/gitlab.theksystem.kr/privkey.pem;

# GitLab needs backwards compatible ciphers to retain compatibility with Java IDEs
- ssl_ciphers 'ECDHE-RSA-AES256-GCM-SHA384:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-RSA-AES256-SHA384:ECDHE-RSA-AES1A256:AES128-SHA256:AES256-SHA:AES128-SHA:DES-CBC3-SHA:!aNULL:!eNULL:!EXPORT:!DES:!MD5:!PSK:!RC4';
+ ssl_ciphers 'ECDHE-RSA-AES256-GCM-SHA384:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-RSA-AES256-SHA384:ECDHE-RSA-AES156-SHA:AES128-SHA:!aNULL:!eNULL:!EXPORT:!DES:!MD5:!PSK:!RC4';
ssl_protocols TLSv1.1 TLSv1.2;
ssl_prefer_server_ciphers on;
ssl_session_cache builtin:1000 shared:SSL:10m;
@@ -80,9 +80,15 @@
set $http_host_with_default $http_host;
}

- ## If you use HTTPS make sure you disable gzip compression
- ## to be safe against BREACH attack.
- gzip off;
+ gzip on;
+ gzip_static on;
+ gzip_comp_level 2;
+ gzip_http_version 1.1;
+ gzip_vary on;
+ gzip_disable "msie6";
+ gzip_min_length 10240;
+ gzip_proxied no-cache no-store private expired auth;
+ gzip_types text/plain text/css text/xml text/javascript application/x-javascript application/json applicati

## https://github.com/gitlabhq/gitlabhq/issues/694
## Some requests take more than 30 seconds.
- restore selinux security context
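Everything under /var/opt/gitlab/nginx/conf is regenerated by Chef on every reconfigure, which is why the diff above shows gitlab-http.conf being rewritten to the new package's template (gzip directives added, DES-CBC3-SHA dropped from ssl_ciphers). Manual edits to these generated files do not survive an upgrade; persistent overrides belong in /etc/gitlab/gitlab.rb instead. A minimal sketch, assuming the nginx['ssl_ciphers'] key supported by Omnibus GitLab (check gitlab.rb.template for this version before relying on it):

nginx['ssl_ciphers'] = "ECDHE-RSA-AES256-GCM-SHA384:ECDHE-RSA-AES128-GCM-SHA256"   # example override in /etc/gitlab/gitlab.rb; key name is an assumption
sudo gitlab-ctl reconfigure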
* template[/var/opt/gitlab/nginx/conf/gitlab-pages.conf] action delete (up to date)
* template[/var/opt/gitlab/nginx/conf/gitlab-registry.conf] action delete (up to date)
* template[/var/opt/gitlab/nginx/conf/gitlab-mattermost-http.conf] action delete (up to date)
* template[/var/opt/gitlab/nginx/conf/nginx-status.conf] action create (up to date)
* template[/var/opt/gitlab/nginx/conf/nginx.conf] action create (up to date)
Recipe: nginx::enable
* directory[/opt/gitlab/sv/nginx] action create (up to date)
* directory[/opt/gitlab/sv/nginx/log] action create (up to date)
* directory[/opt/gitlab/sv/nginx/log/main] action create (up to date)
* template[/opt/gitlab/sv/nginx/run] action create (up to date)
* template[/opt/gitlab/sv/nginx/log/run] action create (up to date)
* template[/var/log/gitlab/nginx/config] action create (up to date)
* ruby_block[reload nginx svlogd configuration] action nothing (skipped due to action :nothing)
* ruby_block[restart nginx svlogd configuration] action nothing (skipped due to action :nothing)
* file[/opt/gitlab/sv/nginx/down] action delete (up to date)
* link[/opt/gitlab/init/nginx] action create (up to date)
* link[/opt/gitlab/service/nginx] action create (up to date)
* ruby_block[supervise_nginx_sleep] action run (skipped due to not_if)
* directory[/opt/gitlab/sv/nginx/supervise] action create (up to date)
* directory[/opt/gitlab/sv/nginx/log/supervise] action create (up to date)
* file[/opt/gitlab/sv/nginx/supervise/ok] action touch (skipped due to only_if)
* file[/opt/gitlab/sv/nginx/log/supervise/ok] action touch (skipped due to only_if)
* file[/opt/gitlab/sv/nginx/supervise/control] action touch (skipped due to only_if)
* file[/opt/gitlab/sv/nginx/log/supervise/control] action touch (skipped due to only_if)
* service[nginx] action nothing (skipped due to action :nothing)
* execute[reload nginx] action nothing (skipped due to action :nothing)
Recipe: gitlab::remote-syslog_disable
* link[/opt/gitlab/service/remote-syslog] action delete (up to date)
* directory[/opt/gitlab/sv/remote-syslog] action delete (up to date)
Recipe: gitlab::logrotate
* directory[/opt/gitlab/sv/logrotate] action create (up to date)
* directory[/opt/gitlab/sv/logrotate/log] action create (up to date)
* directory[/opt/gitlab/sv/logrotate/log/main] action create (up to date)
* template[/opt/gitlab/sv/logrotate/run] action create (up to date)
* template[/opt/gitlab/sv/logrotate/log/run] action create (up to date)
* template[/var/log/gitlab/logrotate/config] action create (up to date)
* ruby_block[reload logrotate svlogd configuration] action nothing (skipped due to action :nothing)
* ruby_block[restart logrotate svlogd configuration] action nothing (skipped due to action :nothing)
* file[/opt/gitlab/sv/logrotate/down] action delete (up to date)
* directory[/opt/gitlab/sv/logrotate/control] action create (up to date)
* template[/opt/gitlab/sv/logrotate/control/t] action create (up to date)
* link[/opt/gitlab/init/logrotate] action create (up to date)
* link[/opt/gitlab/service/logrotate] action create (up to date)
* ruby_block[supervise_logrotate_sleep] action run (skipped due to not_if)
* directory[/opt/gitlab/sv/logrotate/supervise] action create (up to date)
* directory[/opt/gitlab/sv/logrotate/log/supervise] action create (up to date)
* file[/opt/gitlab/sv/logrotate/supervise/ok] action touch (skipped due to only_if)
* file[/opt/gitlab/sv/logrotate/log/supervise/ok] action touch (skipped due to only_if)
* file[/opt/gitlab/sv/logrotate/supervise/control] action touch (skipped due to only_if)
* file[/opt/gitlab/sv/logrotate/log/supervise/control] action touch (skipped due to only_if)
* service[logrotate] action nothing (skipped due to action :nothing)
Recipe: gitlab::gitlab-pages_disable
* link[/opt/gitlab/service/gitlab-pages] action delete (up to date)
* directory[/opt/gitlab/sv/gitlab-pages] action delete (up to date)
Recipe: gitlab::storage-check_disable
* link[/opt/gitlab/service/storage-check] action delete (up to date)
* directory[/opt/gitlab/sv/storage-check] action delete (up to date)
Recipe: registry::disable
* link[/opt/gitlab/service/registry] action delete (up to date)
* directory[/opt/gitlab/sv/registry] action delete (up to date)
Recipe: gitaly::enable
* directory[/var/opt/gitlab/gitaly] action create (up to date)
* directory[/var/log/gitlab/gitaly] action create (up to date)
* env_dir[/opt/gitlab/etc/gitaly] action create
* directory[/opt/gitlab/etc/gitaly] action create (up to date)
* file[/opt/gitlab/etc/gitaly/HOME] action create (up to date)
* file[/opt/gitlab/etc/gitaly/PATH] action create (up to date)
* file[/opt/gitlab/etc/gitaly/TZ] action create (up to date)
(up to date)
* template[Create Gitaly config.toml] action create (up to date)
* directory[/opt/gitlab/sv/gitaly] action create (up to date)
* directory[/opt/gitlab/sv/gitaly/log] action create (up to date)
* directory[/opt/gitlab/sv/gitaly/log/main] action create (up to date)
* template[/opt/gitlab/sv/gitaly/run] action create (up to date)
* template[/opt/gitlab/sv/gitaly/log/run] action create (up to date)
* template[/var/log/gitlab/gitaly/config] action create (up to date)
* ruby_block[reload gitaly svlogd configuration] action nothing (skipped due to action :nothing)
* ruby_block[restart gitaly svlogd configuration] action nothing (skipped due to action :nothing)
* file[/opt/gitlab/sv/gitaly/down] action delete (up to date)
* link[/opt/gitlab/init/gitaly] action create (up to date)
* link[/opt/gitlab/service/gitaly] action create (up to date)
* ruby_block[supervise_gitaly_sleep] action run (skipped due to not_if)
* directory[/opt/gitlab/sv/gitaly/supervise] action create (up to date)
* directory[/opt/gitlab/sv/gitaly/log/supervise] action create (up to date)
* file[/opt/gitlab/sv/gitaly/supervise/ok] action touch (skipped due to only_if)
* file[/opt/gitlab/sv/gitaly/log/supervise/ok] action touch (skipped due to only_if)
* file[/opt/gitlab/sv/gitaly/supervise/control] action touch (skipped due to only_if)
* file[/opt/gitlab/sv/gitaly/log/supervise/control] action touch (skipped due to only_if)
* service[gitaly] action nothing (skipped due to action :nothing)
* file[/var/opt/gitlab/gitaly/VERSION] action create
– create new file /var/opt/gitlab/gitaly/VERSION
– update content in file /var/opt/gitlab/gitaly/VERSION from none to c417e2
— /var/opt/gitlab/gitaly/VERSION 2019-04-16 12:17:05.735800845 +0900
+++ /var/opt/gitlab/gitaly/.chef-VERSION20190416-23896-bgcm30 2019-04-16 12:17:05.735800845 +0900
@@ -1 +1,2 @@
+Gitaly, version 0.100.1, built 20180726.011732
– restore selinux security context
Recipe: mattermost::disable
* link[/opt/gitlab/service/mattermost] action delete (up to date)
* directory[/opt/gitlab/sv/mattermost] action delete (up to date)
Recipe: gitlab::gitlab-healthcheck
* template[/opt/gitlab/etc/gitlab-healthcheck-rc] action create (up to date)
Recipe: gitlab::prometheus_user
* account[Prometheus user and group] action create
* group[Prometheus user and group] action create (up to date)
* linux_user[Prometheus user and group] action create (up to date)
(up to date)
Recipe: gitlab::node-exporter
* directory[/var/log/gitlab/node-exporter] action create (up to date)
* directory[/var/opt/gitlab/node-exporter/textfile_collector] action create (up to date)
* directory[/opt/gitlab/sv/node-exporter] action create (up to date)
* directory[/opt/gitlab/sv/node-exporter/log] action create (up to date)
* directory[/opt/gitlab/sv/node-exporter/log/main] action create (up to date)
* template[/opt/gitlab/sv/node-exporter/run] action create (up to date)
* template[/opt/gitlab/sv/node-exporter/log/run] action create (up to date)
* template[/var/log/gitlab/node-exporter/config] action create (up to date)
* ruby_block[reload node-exporter svlogd configuration] action nothing (skipped due to action :nothing)
* ruby_block[restart node-exporter svlogd configuration] action nothing (skipped due to action :nothing)
* file[/opt/gitlab/sv/node-exporter/down] action delete (up to date)
* link[/opt/gitlab/init/node-exporter] action create (up to date)
* link[/opt/gitlab/service/node-exporter] action create (up to date)
* ruby_block[supervise_node-exporter_sleep] action run (skipped due to not_if)
* directory[/opt/gitlab/sv/node-exporter/supervise] action create (up to date)
* directory[/opt/gitlab/sv/node-exporter/log/supervise] action create (up to date)
* file[/opt/gitlab/sv/node-exporter/supervise/ok] action touch (skipped due to only_if)
* file[/opt/gitlab/sv/node-exporter/log/supervise/ok] action touch (skipped due to only_if)
* file[/opt/gitlab/sv/node-exporter/supervise/control] action touch (skipped due to only_if)
* file[/opt/gitlab/sv/node-exporter/log/supervise/control] action touch (skipped due to only_if)
* service[node-exporter] action nothing (skipped due to action :nothing)
Recipe: gitlab::gitlab-monitor
* directory[/var/opt/gitlab/gitlab-monitor] action create (up to date)
* directory[/var/log/gitlab/gitlab-monitor] action create (up to date)
* template[/var/opt/gitlab/gitlab-monitor/gitlab-monitor.yml] action create (up to date)
* directory[/opt/gitlab/sv/gitlab-monitor] action create (up to date)
* directory[/opt/gitlab/sv/gitlab-monitor/log] action create (up to date)
* directory[/opt/gitlab/sv/gitlab-monitor/log/main] action create (up to date)
* template[/opt/gitlab/sv/gitlab-monitor/run] action create (up to date)
* template[/opt/gitlab/sv/gitlab-monitor/log/run] action create (up to date)
* template[/var/log/gitlab/gitlab-monitor/config] action create (up to date)
* ruby_block[reload gitlab-monitor svlogd configuration] action nothing (skipped due to action :nothing)
* ruby_block[restart gitlab-monitor svlogd configuration] action nothing (skipped due to action :nothing)
* file[/opt/gitlab/sv/gitlab-monitor/down] action delete (up to date)
* link[/opt/gitlab/init/gitlab-monitor] action create (up to date)
* link[/opt/gitlab/service/gitlab-monitor] action create (up to date)
* ruby_block[supervise_gitlab-monitor_sleep] action run (skipped due to not_if)
* directory[/opt/gitlab/sv/gitlab-monitor/supervise] action create (up to date)
* directory[/opt/gitlab/sv/gitlab-monitor/log/supervise] action create (up to date)
* file[/opt/gitlab/sv/gitlab-monitor/supervise/ok] action touch (skipped due to only_if)
* file[/opt/gitlab/sv/gitlab-monitor/log/supervise/ok] action touch (skipped due to only_if)
* file[/opt/gitlab/sv/gitlab-monitor/supervise/control] action touch (skipped due to only_if)
* file[/opt/gitlab/sv/gitlab-monitor/log/supervise/control] action touch (skipped due to only_if)
* service[gitlab-monitor] action nothing (skipped due to action :nothing)
Recipe: gitlab::redis-exporter
* directory[/var/log/gitlab/redis-exporter] action create (up to date)
* directory[/opt/gitlab/sv/redis-exporter] action create (up to date)
* directory[/opt/gitlab/sv/redis-exporter/log] action create (up to date)
* directory[/opt/gitlab/sv/redis-exporter/log/main] action create (up to date)
* template[/opt/gitlab/sv/redis-exporter/run] action create (up to date)
* template[/opt/gitlab/sv/redis-exporter/log/run] action create (up to date)
* template[/var/log/gitlab/redis-exporter/config] action create (up to date)
* ruby_block[reload redis-exporter svlogd configuration] action nothing (skipped due to action :nothing)
* ruby_block[restart redis-exporter svlogd configuration] action nothing (skipped due to action :nothing)
* file[/opt/gitlab/sv/redis-exporter/down] action delete (up to date)
* link[/opt/gitlab/init/redis-exporter] action create (up to date)
* link[/opt/gitlab/service/redis-exporter] action create (up to date)
* ruby_block[supervise_redis-exporter_sleep] action run (skipped due to not_if)
* directory[/opt/gitlab/sv/redis-exporter/supervise] action create (up to date)
* directory[/opt/gitlab/sv/redis-exporter/log/supervise] action create (up to date)
* file[/opt/gitlab/sv/redis-exporter/supervise/ok] action touch (skipped due to only_if)
* file[/opt/gitlab/sv/redis-exporter/log/supervise/ok] action touch (skipped due to only_if)
* file[/opt/gitlab/sv/redis-exporter/supervise/control] action touch (skipped due to only_if)
* file[/opt/gitlab/sv/redis-exporter/log/supervise/control] action touch (skipped due to only_if)
* service[redis-exporter] action nothing (skipped due to action :nothing)
Recipe: gitlab::prometheus
* directory[/var/opt/gitlab/prometheus] action create (up to date)
* directory[/var/log/gitlab/prometheus] action create (up to date)
* file[Prometheus config] action create (up to date)
* directory[/opt/gitlab/sv/prometheus] action create (up to date)
* directory[/opt/gitlab/sv/prometheus/log] action create (up to date)
* directory[/opt/gitlab/sv/prometheus/log/main] action create (up to date)
* template[/opt/gitlab/sv/prometheus/run] action create
– update content in file /opt/gitlab/sv/prometheus/run from 647ec7 to d15715
— /opt/gitlab/sv/prometheus/run 2018-02-20 13:41:42.130426824 +0900
+++ /opt/gitlab/sv/prometheus/.chef-run20190416-23896-1dneluz 2019-04-16 12:17:05.876796685 +0900
@@ -3,5 +3,5 @@

umask 077
exec chpst -P -U gitlab-prometheus -u gitlab-prometheus \
– /opt/gitlab/embedded/bin/prometheus -web.listen-address=localhost:9090 -storage.local.path=/var/opt/gitlab/gitlab/prometheus/prometheus.yml
+ /opt/gitlab/embedded/bin/prometheus -web.listen-address=localhost:9090 -storage.local.path=/var/opt/gitlab/gitlab/prometheus/prometheus.yml
– restore selinux security context
* template[/opt/gitlab/sv/prometheus/log/run] action create (up to date)
* template[/var/log/gitlab/prometheus/config] action create (up to date)
* ruby_block[reload prometheus svlogd configuration] action nothing (skipped due to action :nothing)
* ruby_block[restart prometheus svlogd configuration] action nothing (skipped due to action :nothing)
* file[/opt/gitlab/sv/prometheus/down] action delete (up to date)
* link[/opt/gitlab/init/prometheus] action create (up to date)
* link[/opt/gitlab/service/prometheus] action create (up to date)
* ruby_block[supervise_prometheus_sleep] action run (skipped due to not_if)
* directory[/opt/gitlab/sv/prometheus/supervise] action create (up to date)
* directory[/opt/gitlab/sv/prometheus/log/supervise] action create (up to date)
* file[/opt/gitlab/sv/prometheus/supervise/ok] action touch (skipped due to only_if)
* file[/opt/gitlab/sv/prometheus/log/supervise/ok] action touch (skipped due to only_if)
* file[/opt/gitlab/sv/prometheus/supervise/control] action touch (skipped due to only_if)
* file[/opt/gitlab/sv/prometheus/log/supervise/control] action touch (skipped due to only_if)
* service[prometheus] action nothing (skipped due to action :nothing)
Recipe: gitlab::alertmanager
* directory[/var/opt/gitlab/alertmanager] action create
– create new directory /var/opt/gitlab/alertmanager
– change mode from ” to ‘0750’
– change owner from ” to ‘gitlab-prometheus’
– restore selinux security context
* directory[/var/log/gitlab/alertmanager] action create
– create new directory /var/log/gitlab/alertmanager
– change mode from ” to ‘0700’
– change owner from ” to ‘gitlab-prometheus’
– restore selinux security context
* file[Alertmanager config] action create
– create new file /var/opt/gitlab/alertmanager/alertmanager.yml
– update content in file /var/opt/gitlab/alertmanager/alertmanager.yml from none to 21b7be
— /var/opt/gitlab/alertmanager/alertmanager.yml 2019-04-16 12:17:05.974793793 +0900
+++ /var/opt/gitlab/alertmanager/.chef-alertmanager20190416-23896-1ptrgr9.yml 2019-04-16 12:17:05.974793
@@ -1 +1,10 @@
+---
+global: {}
+templates: []
+route:
+ receiver: default-receiver
+ routes: []
+receivers:
+- name: default-receiver
+inhibit_rules: []
– change mode from ” to ‘0644’
– change owner from ” to ‘gitlab-prometheus’
– restore selinux security context
* directory[/opt/gitlab/sv/alertmanager] action create
– create new directory /opt/gitlab/sv/alertmanager
– change mode from ” to ‘0755’
– change owner from ” to ‘root’
– change group from ” to ‘root’
– restore selinux security context
* directory[/opt/gitlab/sv/alertmanager/log] action create
– create new directory /opt/gitlab/sv/alertmanager/log
– change mode from ” to ‘0755’
– change owner from ” to ‘root’
– change group from ” to ‘root’
– restore selinux security context
* directory[/opt/gitlab/sv/alertmanager/log/main] action create
– create new directory /opt/gitlab/sv/alertmanager/log/main
– change mode from ” to ‘0755’
– change owner from ” to ‘root’
– change group from ” to ‘root’
– restore selinux security context
* template[/opt/gitlab/sv/alertmanager/run] action create
– create new file /opt/gitlab/sv/alertmanager/run
– update content in file /opt/gitlab/sv/alertmanager/run from none to ac906d
— /opt/gitlab/sv/alertmanager/run 2019-04-16 12:17:06.122789427 +0900
+++ /opt/gitlab/sv/alertmanager/.chef-run20190416-23896-1toru5j 2019-04-16 12:17:06.121789456 +0900
@@ -1 +1,7 @@
+#!/bin/sh
+exec 2>&1
+
+umask 077
+exec chpst -P -U gitlab-prometheus -u gitlab-prometheus \
+ /opt/gitlab/embedded/bin/alertmanager --web.listen-address=localhost:9093 --storage.path=/var/opt/gitlab/al
– change mode from ” to ‘0755’
– change owner from ” to ‘root’
– change group from ” to ‘root’
– restore selinux security context
* template[/opt/gitlab/sv/alertmanager/log/run] action create
– create new file /opt/gitlab/sv/alertmanager/log/run
– update content in file /opt/gitlab/sv/alertmanager/log/run from none to 2feab9
— /opt/gitlab/sv/alertmanager/log/run 2019-04-16 12:17:06.164788187 +0900
+++ /opt/gitlab/sv/alertmanager/log/.chef-run20190416-23896-nrefj 2019-04-16 12:17:06.163788217 +0900
@@ -1 +1,3 @@
+#!/bin/sh
+exec svlogd -tt /var/log/gitlab/alertmanager
– change mode from ” to ‘0755’
– change owner from ” to ‘root’
– change group from ” to ‘root’
– restore selinux security context
* template[/var/log/gitlab/alertmanager/config] action create
– create new file /var/log/gitlab/alertmanager/config
– update content in file /var/log/gitlab/alertmanager/config from none to 623c00
— /var/log/gitlab/alertmanager/config 2019-04-16 12:17:06.196787243 +0900
+++ /var/log/gitlab/alertmanager/.chef-config20190416-23896-i3lm8v 2019-04-16 12:17:06.196787243 +0900
@@ -1 +1,7 @@
+s209715200
+n30
+t86400
+!gzip
+
+
– change owner from ” to ‘root’
– change group from ” to ‘root’
– restore selinux security context
* ruby_block[reload alertmanager svlogd configuration] action nothing (skipped due to action :nothing)
* ruby_block[restart alertmanager svlogd configuration] action nothing (skipped due to action :nothing)
* file[/opt/gitlab/sv/alertmanager/down] action delete (up to date)
* link[/opt/gitlab/init/alertmanager] action create
– create symlink at /opt/gitlab/init/alertmanager to /opt/gitlab/embedded/bin/sv
* link[/opt/gitlab/service/alertmanager] action create
– create symlink at /opt/gitlab/service/alertmanager to /opt/gitlab/sv/alertmanager
* ruby_block[supervise_alertmanager_sleep] action run
– execute the ruby block supervise_alertmanager_sleep
* directory[/opt/gitlab/sv/alertmanager/supervise] action create
– change mode from ‘0700’ to ‘0755’
– restore selinux security context
* directory[/opt/gitlab/sv/alertmanager/log/supervise] action create
– change mode from ‘0700’ to ‘0755’
– restore selinux security context
* file[/opt/gitlab/sv/alertmanager/supervise/ok] action touch (skipped due to only_if)
* file[/opt/gitlab/sv/alertmanager/log/supervise/ok] action touch (skipped due to only_if)
* file[/opt/gitlab/sv/alertmanager/supervise/control] action touch (skipped due to only_if)
* file[/opt/gitlab/sv/alertmanager/log/supervise/control] action touch (skipped due to only_if)
* service[alertmanager] action nothing (skipped due to action :nothing)
Recipe: gitlab::postgres-exporter
* directory[/var/log/gitlab/postgres-exporter] action create (up to date)
* directory[/var/opt/gitlab/postgres-exporter] action create (up to date)
* env_dir[/opt/gitlab/etc/postgres-exporter/env] action create
* directory[/opt/gitlab/etc/postgres-exporter/env] action create (up to date)
* file[/opt/gitlab/etc/postgres-exporter/env/DATA_SOURCE_NAME] action create (up to date)
(up to date)
* directory[/opt/gitlab/sv/postgres-exporter] action create (up to date)
* directory[/opt/gitlab/sv/postgres-exporter/log] action create (up to date)
* directory[/opt/gitlab/sv/postgres-exporter/log/main] action create (up to date)
* template[/opt/gitlab/sv/postgres-exporter/run] action create (up to date)
* template[/opt/gitlab/sv/postgres-exporter/log/run] action create (up to date)
* template[/var/log/gitlab/postgres-exporter/config] action create (up to date)
* ruby_block[reload postgres-exporter svlogd configuration] action nothing (skipped due to action :nothing)
* ruby_block[restart postgres-exporter svlogd configuration] action nothing (skipped due to action :nothing)
* file[/opt/gitlab/sv/postgres-exporter/down] action delete (up to date)
* link[/opt/gitlab/init/postgres-exporter] action create (up to date)
* link[/opt/gitlab/service/postgres-exporter] action create (up to date)
* ruby_block[supervise_postgres-exporter_sleep] action run (skipped due to not_if)
* directory[/opt/gitlab/sv/postgres-exporter/supervise] action create (up to date)
* directory[/opt/gitlab/sv/postgres-exporter/log/supervise] action create (up to date)
* file[/opt/gitlab/sv/postgres-exporter/supervise/ok] action touch (skipped due to only_if)
* file[/opt/gitlab/sv/postgres-exporter/log/supervise/ok] action touch (skipped due to only_if)
* file[/opt/gitlab/sv/postgres-exporter/supervise/control] action touch (skipped due to only_if)
* file[/opt/gitlab/sv/postgres-exporter/log/supervise/control] action touch (skipped due to only_if)
* service[postgres-exporter] action nothing (skipped due to action :nothing)
* template[/var/opt/gitlab/postgres-exporter/queries.yaml] action create (up to date)
Recipe: gitlab::deprecate-skip-auto-migrations
* file[/etc/gitlab/skip-auto-reconfigure] action create (skipped due to only_if)
* ruby_block[skip-auto-migrations deprecation] action run (skipped due to only_if)
Recipe: gitlab-ee::sentinel_disable
* account[user and group for sentinel] action create
* group[user and group for sentinel] action create (up to date)
* linux_user[user and group for sentinel] action create (up to date)
(up to date)
* link[/opt/gitlab/service/sentinel] action delete (up to date)
* directory[/opt/gitlab/sv/sentinel] action delete (up to date)
* file[/var/opt/gitlab/sentinel/sentinel.conf] action delete (up to date)
* directory[/var/opt/gitlab/sentinel] action delete (up to date)
Recipe: gitlab-ee::sidekiq-cluster_disable
* link[/opt/gitlab/service/sidekiq-cluster] action delete (up to date)
* directory[/opt/gitlab/sv/sidekiq-cluster] action delete (up to date)
Recipe: gitlab-ee::geo-postgresql_disable
* link[/opt/gitlab/service/geo-postgresql] action delete (up to date)
* directory[/opt/gitlab/sv/geo-postgresql] action delete (up to date)
Recipe: gitlab-ee::geo-logcursor_disable
* link[/opt/gitlab/service/geo-logcursor] action delete (up to date)
* directory[/opt/gitlab/sv/geo-logcursor] action delete (up to date)
Recipe: gitlab-ee::pgbouncer_disable
* link[/opt/gitlab/service/pgbouncer] action delete (up to date)
* directory[/opt/gitlab/sv/pgbouncer] action delete (up to date)
Recipe: consul::disable_daemon
* link[/opt/gitlab/service/consul] action delete (up to date)
* directory[/opt/gitlab/sv/consul] action delete (up to date)
Recipe: repmgr::repmgrd_disable
* link[/opt/gitlab/service/repmgrd] action delete (up to date)
* directory[/opt/gitlab/sv/repmgrd] action delete (up to date)
Recipe: gitlab-ee::geo-secondary_disable
* templatesymlink[Removes database_geo.yml symlink] action delete
* file[/var/opt/gitlab/gitlab-rails/etc/database_geo.yml] action delete (up to date)
* link[/opt/gitlab/embedded/service/gitlab-rails/config/database_geo.yml] action delete (up to date)
(up to date)
Recipe: gitlab::gitlab-rails
* execute[clear the gitlab-rails cache] action run
– execute /opt/gitlab/bin/gitlab-rake cache:clear
Recipe: gitlab::gitlab-workhorse
* service[gitlab-workhorse] action restart
– restart service service[gitlab-workhorse]
Recipe: gitaly::enable
* service[gitaly] action restart
– restart service service[gitaly]
Recipe: gitlab::alertmanager
* service[alertmanager] action restart
– restart service service[alertmanager]
* ruby_block[restart alertmanager svlogd configuration] action create
– execute the ruby block restart alertmanager svlogd configuration
* ruby_block[reload alertmanager svlogd configuration] action create
– execute the ruby block reload alertmanager svlogd configuration

Running handlers:
Running handlers complete
Chef Client finished, 50/583 resources updated in 01 minutes 56 seconds
gitlab Reconfigured!
Checking for an omnibus managed postgresql: OK
Checking for a newer version of PostgreSQL to install
No new version of PostgreSQL installed, nothing to upgrade to
Ensuring PostgreSQL is updated: OK
Restarting previously running GitLab services
ok: run: gitaly: (pid 25143) 1s
ok: run: gitlab-monitor: (pid 25197) 1s
ok: run: gitlab-workhorse: (pid 25128) 3s
ok: run: logrotate: (pid 25278) 0s
ok: run: nginx: (pid 25284) 1s
ok: run: node-exporter: (pid 25294) 0s
ok: run: postgres-exporter: (pid 25300) 0s
ok: run: postgresql: (pid 24768) 58s
ok: run: prometheus: (pid 25309) 0s
ok: run: redis: (pid 24745) 60s
ok: run: redis-exporter: (pid 25327) 0s
ok: run: sidekiq: (pid 25335) 1s
ok: run: unicorn: (pid 25344) 0s

_______ __ __ __
/ ____(_) /_/ / ____ _/ /_
/ / __/ / __/ / / __ `/ __ \
/ /_/ / / /_/ /___/ /_/ / /_/ /
\____/_/\__/_____/\__,_/_.___/

Upgrade complete! If your GitLab server is misbehaving try running
sudo gitlab-ctl restart
before anything else.
If you need to roll back to the previous version you can use the database
backup made during the upgrade (scroll up for the filename).

Verifying : gitlab-ee-10.8.7-ee.0.el7.x86_64
Verifying : gitlab-ee-10.4.4-ee.0.el7.x86_64

Updated:
gitlab-ee.x86_64 0:10.8.7-ee.0.el7

Complete!

Upgrade to 10.8.7 complete.
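Before moving on to 11.x, it is worth double-checking that the intermediate version is really the one running and that all services came back up. A minimal check, using only standard Omnibus commands (the VERSION path is the same one that appears in the reconfigure log):

cat /var/opt/gitlab/gitlab-rails/VERSION      # should now read 10.8.7-ee
sudo gitlab-ctl status                        # every service should show "run:"
sudo gitlab-rake gitlab:check SANITIZE=true   # optional, slower sanity check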

Next, upgrade to version 11.
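A plain sudo yum install -y gitlab-ee (as below) simply pulls the newest 11.9.x available in the repository, 11.9.8 at the time of this run. If you prefer to pin the exact release, the versioned package name from the download line below can be installed instead, for example:

sudo yum install -y gitlab-ee-11.9.8-ee.0.el7.x86_64

Either way, the preinstall script takes an automatic database-only backup before the upgrade proceeds (visible further down in the output).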

[root@withkdev ~]# sudo yum install -y gitlab-ee
Loaded plugins: fastestmirror, langpacks
Loading mirror speeds from cached hostfile
* base: centos.mirror.moack.net
* epel: mirror.premi.st
* extras: centos.mirror.moack.net
* remi-php71: ftp.riken.jp
* remi-php73: ftp.riken.jp
* remi-safe: ftp.riken.jp
* updates: centos.mirror.moack.net
gitlab_gitlab-ee/x86_64/signature
gitlab_gitlab-ee/x86_64/signature
gitlab_gitlab-ee-source/signature
gitlab_gitlab-ee-source/signature
Resolving Dependencies
--> Running transaction check
---> Package gitlab-ee.x86_64 0:10.8.7-ee.0.el7 will be updated
---> Package gitlab-ee.x86_64 0:11.9.8-ee.0.el7 will be an update
--> Finished Dependency Resolution

Dependencies Resolved

==================================================================================================================
Package Arch Version
==================================================================================================================
Updating:
gitlab-ee x86_64 11.9.8-e

Transaction Summary
==================================================================================================================
Upgrade 1 Package

Total download size: 611 M
Downloading packages:
No Presto metadata available for gitlab_gitlab-ee
gitlab-ee-11.9.8-ee.0.el7.x86_64.rpm | 611 MB 00:00:30
Running transaction check
Running transaction test
Transaction test succeeded
Running transaction
gitlab preinstall: Automatically backing up only the GitLab SQL database (excluding everything else!)
Dumping database …
Dumping PostgreSQL database gitlabhq_production … [DONE]
done
Dumping repositories …
[SKIPPED]
Dumping uploads …
[SKIPPED]
Dumping builds …
[SKIPPED]
Dumping artifacts …
[SKIPPED]
Dumping pages …
[SKIPPED]
Dumping lfs objects …
[SKIPPED]
Dumping container registry images …
[DISABLED]
Creating backup archive: 1555384748_2019_04_16_10.8.7-ee_gitlab_backup.tar … done
Uploading backup archive to remote storage … skipped
Deleting tmp directories … done
done
Deleting old backups … skipping
Updating : gitlab-ee-11.9.8-ee.0.el7.x86_64 1/2
Cleanup : gitlab-ee-10.8.7-ee.0.el7.x86_64 2/2
Checking PostgreSQL executables:Starting Chef Client, version 13.6.4
resolving cookbooks for run list: [“gitlab::config”, “postgresql::bin”]
Synchronizing Cookbooks:
– gitlab (0.0.1)
– postgresql (0.1.0)
– package (0.1.0)
– redis (0.1.0)
– registry (0.1.0)
– mattermost (0.1.0)
– consul (0.1.0)
– gitaly (0.1.0)
– nginx (0.1.0)
– letsencrypt (0.1.0)
– runit (4.3.0)
– acme (3.1.0)
– crond (0.1.0)
– compat_resource (12.19.1)
Installing Cookbook Gems:
Compiling Cookbooks…
Converging 1 resources
Recipe: postgresql::bin
* ruby_block[Link postgresql bin files to the correct version] action run (skipped due to only_if)

Running handlers:
Running handlers complete
Chef Client finished, 0/1 resources updated in 29 seconds
Checking PostgreSQL executables: OK
Shutting down all GitLab services except those needed for migrations
ok: down: alertmanager: 0s, normally up
ok: down: gitlab-monitor: 1s, normally up
ok: down: gitlab-workhorse: 0s, normally up
ok: down: logrotate: 1s, normally up
ok: down: nginx: 0s, normally up
ok: down: node-exporter: 1s, normally up
ok: down: postgres-exporter: 0s, normally up
ok: down: prometheus: 0s, normally up
ok: down: redis-exporter: 1s, normally up
ok: down: sidekiq: 1s, normally up
ok: down: unicorn: 0s, normally up
Ensuring the required services are running
ok: run: postgresql: (pid 24768) 319s
ok: run: redis: (pid 24745) 320s
ok: run: gitaly: (pid 25143) 264s
run: postgresql: (pid 24768) 319s; run: log: (pid 4387) 225311s
run: redis: (pid 24745) 320s; run: log: (pid 4389) 225311s
run: gitaly: (pid 25143) 264s; run: log: (pid 4473) 225311s
Reconfiguring GitLab to apply migrations
Starting Chef Client, version 13.6.4
resolving cookbooks for run list: [“gitlab-ee”]
Synchronizing Cookbooks:
– gitlab-ee (0.0.1)
– package (0.1.0)
– gitlab (0.0.1)
– consul (0.1.0)
– runit (4.3.0)
– repmgr (0.1.0)
– postgresql (0.1.0)
– redis (0.1.0)
– registry (0.1.0)
– mattermost (0.1.0)
– gitaly (0.1.0)
– letsencrypt (0.1.0)
– nginx (0.1.0)
– acme (3.1.0)
– crond (0.1.0)
– compat_resource (12.19.1)
Installing Cookbook Gems:
Compiling Cookbooks…
Recipe: gitlab::default
* directory[/etc/gitlab] action create (up to date)
Converging 258 resources
* directory[/etc/gitlab] action create (up to date)
* directory[Create /var/opt/gitlab] action create (up to date)
* directory[/opt/gitlab/embedded/etc] action create (up to date)
* template[/opt/gitlab/embedded/etc/gitconfig] action create
– update content in file /opt/gitlab/embedded/etc/gitconfig from 987af3 to f8c837
— /opt/gitlab/embedded/etc/gitconfig 2018-02-20 13:39:43.593031507 +0900
+++ /opt/gitlab/embedded/etc/.chef-gitconfig20190416-27010-1ubirj7 2019-04-16 12:22:16.319900025 +0900
@@ -8,4 +8,5 @@
[transfer]
hideRefs=^refs/tmp/
hideRefs=^refs/keep-around/
+hideRefs=^refs/remotes/
– restore selinux security context
Recipe: gitlab::web-server
* account[Webserver user and group] action create
* group[Webserver user and group] action create (up to date)
* linux_user[Webserver user and group] action create (up to date)
(up to date)
Recipe: gitlab::users
* directory[/var/opt/gitlab] action create (up to date)
* account[GitLab user and group] action create
* group[GitLab user and group] action create (up to date)
* linux_user[GitLab user and group] action create (up to date)
(up to date)
* template[/var/opt/gitlab/.gitconfig] action create (up to date)
* directory[/var/opt/gitlab/.bundle] action create
– create new directory /var/opt/gitlab/.bundle
– change owner from ” to ‘git’
– change group from ” to ‘git’
– restore selinux security context
Recipe: gitlab::gitlab-shell
* storage_directory[/var/opt/gitlab/.ssh] action create
* ruby_block[directory resource: /var/opt/gitlab/.ssh] action run (skipped due to not_if)
(up to date)
* directory[/var/log/gitlab/gitlab-shell/] action create (up to date)
* directory[/var/opt/gitlab/gitlab-shell] action create (up to date)
* templatesymlink[Create a config.yml and create a symlink to Rails root] action create
* template[/var/opt/gitlab/gitlab-shell/config.yml] action create
– update content in file /var/opt/gitlab/gitlab-shell/config.yml from 681cf5 to 7546e5
— /var/opt/gitlab/gitlab-shell/config.yml 2019-04-16 12:15:56.533842720 +0900
+++ /var/opt/gitlab/gitlab-shell/.chef-config20190416-27010-s80jfw.yml 2019-04-16 12:22:16.536893910 +0900
@@ -19,14 +19,6 @@
# File used as authorized_keys for gitlab user
auth_file: “/var/opt/gitlab/.ssh/authorized_keys”

-# Redis settings used for pushing commit notices to gitlab
-redis:
-  host: 127.0.0.1
-  port:
-  socket: /var/opt/gitlab/redis/redis.socket
-  database:
-  namespace: resque:gitlab

# Log file.
# Default is gitlab-shell.log in the root directory.
log_file: “/var/log/gitlab/gitlab-shell/gitlab-shell.log”
– change mode from ‘0644’ to ‘0640’
– change group from ‘root’ to ‘git’
– restore selinux security context
* link[Link /opt/gitlab/embedded/service/gitlab-shell/config.yml to /var/opt/gitlab/gitlab-shell/config.yml] action create (up to date)

* link[/opt/gitlab/embedded/service/gitlab-shell/.gitlab_shell_secret] action create (up to date)
* execute[/opt/gitlab/embedded/service/gitlab-shell/bin/gitlab-keys check-permissions] action run
– execute /opt/gitlab/embedded/service/gitlab-shell/bin/gitlab-keys check-permissions
* bash[Set proper security context on ssh files for selinux] action run
– execute “bash” “/tmp/chef-script20190416-27010-9nshnu”
Recipe: gitlab::gitlab-rails
* storage_directory[/var/opt/gitlab/git-data] action create
* ruby_block[directory resource: /var/opt/gitlab/git-data] action run (skipped due to not_if)
(up to date)
* storage_directory[/var/opt/gitlab/git-data/repositories] action create
* ruby_block[directory resource: /var/opt/gitlab/git-data/repositories] action run (skipped due to not_if)
(up to date)
* directory[/var/log/gitlab] action create (up to date)
* storage_directory[/var/opt/gitlab/gitlab-rails/shared] action create
* ruby_block[directory resource: /var/opt/gitlab/gitlab-rails/shared] action run (skipped due to not_if)
(up to date)
* storage_directory[/var/opt/gitlab/gitlab-rails/shared/artifacts] action create
* ruby_block[directory resource: /var/opt/gitlab/gitlab-rails/shared/artifacts] action run (skipped due to not_if)
(up to date)
* storage_directory[/var/opt/gitlab/gitlab-rails/shared/external-diffs] action create
* ruby_block[directory resource: /var/opt/gitlab/gitlab-rails/shared/external-diffs] action run
– execute the ruby block directory resource: /var/opt/gitlab/gitlab-rails/shared/external-diffs

* storage_directory[/var/opt/gitlab/gitlab-rails/shared/lfs-objects] action create
* ruby_block[directory resource: /var/opt/gitlab/gitlab-rails/shared/lfs-objects] action run (skipped due to not_if)
(up to date)
* storage_directory[/var/opt/gitlab/gitlab-rails/shared/packages] action create
* ruby_block[directory resource: /var/opt/gitlab/gitlab-rails/shared/packages] action run
– execute the ruby block directory resource: /var/opt/gitlab/gitlab-rails/shared/packages

* storage_directory[/var/opt/gitlab/gitlab-rails/uploads] action create
* ruby_block[directory resource: /var/opt/gitlab/gitlab-rails/uploads] action run (skipped due to not_if)
(up to date)
* storage_directory[/var/opt/gitlab/gitlab-ci/builds] action create
* ruby_block[directory resource: /var/opt/gitlab/gitlab-ci/builds] action run (skipped due to not_if)
(up to date)
* storage_directory[/var/opt/gitlab/gitlab-rails/shared/cache] action create
* ruby_block[directory resource: /var/opt/gitlab/gitlab-rails/shared/cache] action run (skipped due to not_if)
(up to date)
* storage_directory[/var/opt/gitlab/gitlab-rails/shared/tmp] action create
* ruby_block[directory resource: /var/opt/gitlab/gitlab-rails/shared/tmp] action run (skipped due to not_if)
(up to date)
* storage_directory[/var/opt/gitlab/gitlab-rails/shared/pages] action create
* ruby_block[directory resource: /var/opt/gitlab/gitlab-rails/shared/pages] action run (skipped due to not_if)
(up to date)
* directory[create /var/opt/gitlab/gitlab-rails/etc] action create (up to date)
* directory[create /opt/gitlab/etc/gitlab-rails] action create (up to date)
* directory[create /var/opt/gitlab/gitlab-rails/working] action create (up to date)
* directory[create /var/opt/gitlab/gitlab-rails/tmp] action create (up to date)
* directory[create /var/opt/gitlab/gitlab-rails/upgrade-status] action create (up to date)
* directory[create /var/log/gitlab/gitlab-rails] action create (up to date)
* storage_directory[/var/opt/gitlab/backups] action create
* ruby_block[directory resource: /var/opt/gitlab/backups] action run (skipped due to not_if)
(up to date)
* directory[/var/opt/gitlab/gitlab-rails] action create (up to date)
* directory[/var/opt/gitlab/gitlab-ci] action create (up to date)
* file[/var/opt/gitlab/gitlab-rails/etc/gitlab-registry.key] action create (skipped due to only_if)
* template[/opt/gitlab/etc/gitlab-rails/gitlab-rails-rc] action create
– update content in file /opt/gitlab/etc/gitlab-rails/gitlab-rails-rc from 15c7d9 to 81d695
— /opt/gitlab/etc/gitlab-rails/gitlab-rails-rc 2018-02-20 13:39:49.625882595 +0900
+++ /opt/gitlab/etc/gitlab-rails/.chef-gitlab-rails-rc20190416-27010-gnmexj 2019-04-16 12:22:28.463557780 +0900
@@ -1,2 +1,3 @@
gitlab_user='git'
+gitlab_group='git'
– restore selinux security context
* file[/opt/gitlab/embedded/service/gitlab-rails/.secret] action delete (up to date)
* file[/var/opt/gitlab/gitlab-rails/etc/secret] action delete (up to date)
* templatesymlink[Create a database.yml and create a symlink to Rails root] action create
* template[/var/opt/gitlab/gitlab-rails/etc/database.yml] action create
– update content in file /var/opt/gitlab/gitlab-rails/etc/database.yml from 00a743 to ba7f50
— /var/opt/gitlab/gitlab-rails/etc/database.yml 2019-04-16 12:16:10.841420560 +0900
+++ /var/opt/gitlab/gitlab-rails/etc/.chef-database20190416-27010-16uakx6.yml 2019-04-16 12:22:28.506556568 +0900
@@ -14,6 +14,7 @@
port: 5432
socket:
sslmode:
+ sslcompression: 0
sslrootcert:
sslca:
load_balancing: {“hosts”:[]}
– change mode from ‘0644’ to ‘0640’
– change group from ‘root’ to ‘git’
– restore selinux security context
* link[Link /opt/gitlab/embedded/service/gitlab-rails/config/database.yml to /var/opt/gitlab/gitlab-rails/etc/database.yml] action create (up to date)

* templatesymlink[Create a secrets.yml and create a symlink to Rails root] action create
* template[/var/opt/gitlab/gitlab-rails/etc/secrets.yml] action create (up to date)
* link[Link /opt/gitlab/embedded/service/gitlab-rails/config/secrets.yml to /var/opt/gitlab/gitlab-rails/etc/secrets.yml] action create (up to date)
(up to date)
* templatesymlink[Create a resque.yml and create a symlink to Rails root] action create
* template[/var/opt/gitlab/gitlab-rails/etc/resque.yml] action create (up to date)
* link[Link /opt/gitlab/embedded/service/gitlab-rails/config/resque.yml to /var/opt/gitlab/gitlab-rails/etc/resque.yml] action create (up to date)
(up to date)
* templatesymlink[Create a redis.cache.yml and create a symlink to Rails root] action create (skipped due to not_if)
* templatesymlink[Create a redis.queues.yml and create a symlink to Rails root] action create (skipped due to not_if)
* templatesymlink[Create a redis.shared_state.yml and create a symlink to Rails root] action create (skipped due to not_if)
* templatesymlink[Create a smtp_settings.rb and create a symlink to Rails root] action delete
* file[/var/opt/gitlab/gitlab-rails/etc/smtp_settings.rb] action delete (up to date)
* link[/opt/gitlab/embedded/service/gitlab-rails/config/initializers/smtp_settings.rb] action delete (up to date)
(up to date)
* templatesymlink[Create a gitlab.yml and create a symlink to Rails root] action create
* template[/var/opt/gitlab/gitlab-rails/etc/gitlab.yml] action create
– update content in file /var/opt/gitlab/gitlab-rails/etc/gitlab.yml from ff91b8 to adb97b
— /var/opt/gitlab/gitlab-rails/etc/gitlab.yml 2019-04-16 12:16:10.897418908 +0900
+++ /var/opt/gitlab/gitlab-rails/etc/.chef-gitlab20190416-27010-h53rcp.yml 2019-04-16 12:22:28.583554398 +0900
@@ -82,6 +82,9 @@
# The default is ‘tmp/repositories’ relative to the root of the Rails app.
repository_downloads_path:

+ ## Impersonation settings
+ impersonation_enabled:
+
usage_ping_enabled:

## Reply by email
@@ -128,6 +131,19 @@
remote_directory: “artifacts”
connection: {}

+ ## External merge request diffs
+ external_diffs:
+ enabled:
+ # The location where merge request diffs are stored (default: shared/external-diffs).
+ storage_path: /var/opt/gitlab/gitlab-rails/shared/external-diffs
+ object_store:
+ enabled: false
+ direct_upload: false
+ background_upload: true
+ proxy_download: false
+ remote_directory: “external-diffs”
+ connection: {}
+
## Git LFS
lfs:
enabled:
@@ -153,6 +169,19 @@
remote_directory: “uploads”
connection: {}

+ ## Packages (maven repository so far) EE only
+ packages:
+ enabled:
+ # The location where build packages are stored (default: shared/packages).
+ storage_path: /var/opt/gitlab/gitlab-rails/shared/packages
+ object_store:
+ enabled: false
+ direct_upload: false
+ background_upload: true
+ proxy_download: false
+ remote_directory: “packages”
+ connection: {}
+
## Container Registry
registry:
enabled: false
@@ -170,6 +199,7 @@
## GitLab Pages
pages:
enabled: false
+ access_control: false
path: /var/opt/gitlab/gitlab-rails/shared/pages
host:
port:
@@ -177,6 +207,9 @@
external_http: null
external_https: null
artifacts_server: true
+ admin:
+ address: unix:/var/opt/gitlab/gitlab-pages/admin.socket
+ certificate:

## Gravatar
## For Libravatar see: https://docs.gitlab.com/ce/customization/libravatar.html
@@ -214,6 +247,10 @@
repository_archive_cache_worker:
cron:

+ # Archive live traces which have not been archived yet
+ ci_archive_traces_cron_worker:
+ cron:
+
# Verify custom GitLab Pages domains
pages_domain_verification_cron_worker:
cron:
@@ -230,6 +267,9 @@
# GitLab LDAP group sync worker
# NOTE: This will only take effect if LDAP is enabled

+ # GitLab Geo prune event log worker
+ # NOTE: This will only take effect if Geo is enabled (primary node only)
+
# GitLab Geo repository sync worker
# NOTE: This will only take effect if Geo is enabled

@@ -245,6 +285,8 @@
# GitLab Geo migrated local files clean up worker
# NOTE: This will only take effect if Geo is enabled (secondary nodes only)

+ # Export pseudonymized data in CSV format for analysis
+
#
# 2. GitLab CI settings
# ==========================
@@ -289,6 +331,17 @@
sync_ssh_keys:
sync_time:

+ ## Smartcard authentication settings
+ smartcard:
+ # Allow smartcard authentication
+ enabled: false
+
+ # Path to a file containing a CA certificate
+ ca_file: “/etc/gitlab/ssl/CA.pem”
+
+ # Port where the client side certificate is requested by the webserver (NGINX/Apache)
+ client_certificate_required_port: 3444
+
## Kerberos settings
kerberos:
# Allow the HTTP Negotiate authentication method for Git clients
@@ -315,7 +368,7 @@
## OmniAuth settings
omniauth:
# Allow login via Twitter, Google, etc. using OmniAuth providers
– enabled: false
+ enabled:

# Uncomment this to automatically sign in with a specific omniauth provider’s without
# showing GitLab’s sign-in page (default: show the GitLab sign-in page)
@@ -412,8 +465,16 @@
remote_directory:
multipart_chunk_size:
encryption:
+ encryption_key:
storage_class:

+ ## Pseudonymizer settings
+ pseudonymizer:
+ manifest:
+ upload:
+ remote_directory:
+ connection: {}
+
## GitLab Shell settings
gitlab_shell:
path: /opt/gitlab/embedded/service/gitlab-shell/
@@ -440,7 +501,8 @@
unicorn_sampler_interval: 10
# IP whitelist controlling access to monitoring endpoints
ip_whitelist:
-  - 127.0.0.0/8
+  - "127.0.0.0/8"
+  - "::1/128"
# Sidekiq exporter is webserver built in to Sidekiq to expose Prometheus metrics
sidekiq_exporter:
enabled: true
– change mode from ‘0644’ to ‘0640’
– change group from ‘root’ to ‘git’
– restore selinux security context
* link[Link /opt/gitlab/embedded/service/gitlab-rails/config/gitlab.yml to /var/opt/gitlab/gitlab-rails/etc/gitlab.yml] action create (up to date)

* templatesymlink[Create a rack_attack.rb and create a symlink to Rails root] action create
* template[/var/opt/gitlab/gitlab-rails/etc/rack_attack.rb] action create (up to date)
* link[Link /opt/gitlab/embedded/service/gitlab-rails/config/initializers/rack_attack.rb to /var/opt/gitlab/gitlab-rails/etc/rack_attack.rb] action create (up to date)
(up to date)
* templatesymlink[Create a gitlab_workhorse_secret and create a symlink to Rails root] action create
* template[/var/opt/gitlab/gitlab-rails/etc/gitlab_workhorse_secret] action create (up to date)
* link[Link /opt/gitlab/embedded/service/gitlab-rails/.gitlab_workhorse_secret to /var/opt/gitlab/gitlab-rails/etc/gitlab_workhorse_secret] action create (up to date)
(up to date)
* templatesymlink[Create a gitlab_shell_secret and create a symlink to Rails root] action create
* template[/var/opt/gitlab/gitlab-rails/etc/gitlab_shell_secret] action create (up to date)
* link[Link /opt/gitlab/embedded/service/gitlab-rails/.gitlab_shell_secret to /var/opt/gitlab/gitlab-rails/etc/gitlab_shell_secret] action create (up to date)
(up to date)
* templatesymlink[Create a gitlab_pages_secret and create a symlink to Rails root] action create (skipped due to only_if)
* link[/opt/gitlab/embedded/service/gitlab-rails/config/initializers/relative_url.rb] action delete (up to date)
* file[/var/opt/gitlab/gitlab-rails/etc/relative_url.rb] action delete (up to date)
* env_dir[/opt/gitlab/etc/gitlab-rails/env] action create
* directory[/opt/gitlab/etc/gitlab-rails/env] action create (up to date)
* file[/opt/gitlab/etc/gitlab-rails/env/HOME] action create (up to date)
* file[/opt/gitlab/etc/gitlab-rails/env/RAILS_ENV] action create (up to date)
* file[/opt/gitlab/etc/gitlab-rails/env/LD_PRELOAD] action create (up to date)
* file[/opt/gitlab/etc/gitlab-rails/env/SIDEKIQ_MEMORY_KILLER_MAX_RSS] action create
– update content in file /opt/gitlab/etc/gitlab-rails/env/SIDEKIQ_MEMORY_KILLER_MAX_RSS from 6cce36 to dd80d7
— /opt/gitlab/etc/gitlab-rails/env/SIDEKIQ_MEMORY_KILLER_MAX_RSS 2018-02-20 13:39:50.343864871 +0900
+++ /opt/gitlab/etc/gitlab-rails/env/.chef-SIDEKIQ_MEMORY_KILLER_MAX_RSS20190416-27010-y5eaax 2019-04-16 12:22:28.643552707 +0900
@@ -1,2 +1,2 @@
-1000000
+2000000
– restore selinux security context
* file[/opt/gitlab/etc/gitlab-rails/env/BUNDLE_GEMFILE] action create (up to date)
* file[/opt/gitlab/etc/gitlab-rails/env/PATH] action create (up to date)
* file[/opt/gitlab/etc/gitlab-rails/env/ICU_DATA] action create (up to date)
* file[/opt/gitlab/etc/gitlab-rails/env/PYTHONPATH] action create (up to date)
* file[/opt/gitlab/etc/gitlab-rails/env/EXECJS_RUNTIME] action create (up to date)
* file[/opt/gitlab/etc/gitlab-rails/env/TZ] action create
– create new file /opt/gitlab/etc/gitlab-rails/env/TZ
– update content in file /opt/gitlab/etc/gitlab-rails/env/TZ from none to 983a95
— /opt/gitlab/etc/gitlab-rails/env/TZ 2019-04-16 12:22:28.672551890 +0900
+++ /opt/gitlab/etc/gitlab-rails/env/.chef-TZ20190416-27010-19n63qw 2019-04-16 12:22:28.671551918 +0900
@@ -1 +1,2 @@
+:/etc/localtime
– restore selinux security context

* link[/opt/gitlab/embedded/service/gitlab-rails/tmp] action create (up to date)
* link[/opt/gitlab/embedded/service/gitlab-rails/public/uploads] action create (up to date)
* link[/opt/gitlab/embedded/service/gitlab-rails/log] action create (up to date)
* link[/var/log/gitlab/gitlab-rails/sidekiq.log] action create (skipped due to not_if)
* file[/opt/gitlab/embedded/service/gitlab-rails/db/schema.rb] action create
– change owner from ‘root’ to ‘git’
– restore selinux security context
* remote_file[/var/opt/gitlab/gitlab-rails/VERSION] action create
– update content in file /var/opt/gitlab/gitlab-rails/VERSION from 796bef to f7ef08
— /var/opt/gitlab/gitlab-rails/VERSION 2019-04-16 12:16:11.021415249 +0900
+++ /var/opt/gitlab/gitlab-rails/.chef-VERSION20190416-27010-ocgotr 2019-04-16 12:22:28.720550538 +0900
@@ -1,2 +1,2 @@
-10.8.7-ee
+11.9.8-ee
– restore selinux security context
* remote_file[/var/opt/gitlab/gitlab-rails/REVISION] action create
– update content in file /var/opt/gitlab/gitlab-rails/REVISION from 657e64 to 4da29e
— /var/opt/gitlab/gitlab-rails/REVISION 2019-04-16 12:16:11.049414423 +0900
+++ /var/opt/gitlab/gitlab-rails/.chef-REVISION20190416-27010-1ctmmjg 2019-04-16 12:22:28.745549833 +0900
@@ -1,2 +1,2 @@
-075705a
+c970180
– restore selinux security context
* file[/var/opt/gitlab/gitlab-rails/RUBY_VERSION] action create
– update content in file /var/opt/gitlab/gitlab-rails/RUBY_VERSION from 3dd12e to 07c7b9
— /var/opt/gitlab/gitlab-rails/RUBY_VERSION 2019-04-16 12:16:11.081413479 +0900
+++ /var/opt/gitlab/gitlab-rails/.chef-RUBY_VERSION20190416-27010-10792k3 2019-04-16 12:22:28.778548903 +0900
@@ -1,2 +1,2 @@
-ruby 2.3.7p456 (2018-03-28 revision 63024) [x86_64-linux]
+ruby 2.5.3p105 (2018-10-18 revision 65156) [x86_64-linux]
– restore selinux security context
* execute[clear the gitlab-rails cache] action nothing (skipped due to action :nothing)
* file[/var/opt/gitlab/gitlab-rails/config.ru] action delete (up to date)
Recipe: gitlab::selinux
* execute[semodule -i /opt/gitlab/embedded/selinux/rhel/7/gitlab-7.2.0-ssh-keygen.pp] action run (skipped due to not_if)
* execute[semodule -i /opt/gitlab/embedded/selinux/rhel/7/gitlab-10.5.0-ssh-authorized-keys.pp] action run (skipped due to not_if)
Recipe: gitlab::add_trusted_certs
* directory[/etc/gitlab/trusted-certs] action create (up to date)
* directory[/opt/gitlab/embedded/ssl/certs] action create (up to date)
* file[/opt/gitlab/embedded/ssl/certs/README] action create (up to date)
* ruby_block[Move existing certs and link to /opt/gitlab/embedded/ssl/certs] action run

* Moving existing certificates found in /opt/gitlab/embedded/ssl/certs

* Symlinking existing certificates found in /etc/gitlab/trusted-certs

– execute the ruby block Move existing certs and link to /opt/gitlab/embedded/ssl/certs
Recipe: gitlab::default
* service[create a temporary unicorn service] action nothing (skipped due to action :nothing)
* service[create a temporary puma service] action nothing (skipped due to action :nothing)
* service[create a temporary sidekiq service] action nothing (skipped due to action :nothing)
* service[create a temporary mailroom service] action nothing (skipped due to action :nothing)
Recipe: package::runit_systemd
* directory[/usr/lib/systemd/system] action create (up to date)
* cookbook_file[/usr/lib/systemd/system/gitlab-runsvdir.service] action create
– update content in file /usr/lib/systemd/system/gitlab-runsvdir.service from bf758a to 6ca59d
— /usr/lib/systemd/system/gitlab-runsvdir.service 2018-02-20 13:40:05.932480034 +0900
+++ /usr/lib/systemd/system/.chef-gitlab-runsvdir20190416-27010-yrvqap.service 2019-04-16 12:22:33.101427071 +0900
@@ -1,11 +1,11 @@
[Unit]
Description=GitLab Runit supervision process
-After=basic.target
+After=multi-user.target

[Service]
ExecStart=/opt/gitlab/embedded/bin/runsvdir-start
Restart=always

[Install]
-WantedBy=basic.target
+WantedBy=multi-user.target
– restore selinux security context
* execute[systemctl daemon-reload] action run
– execute systemctl daemon-reload
* execute[systemctl enable gitlab-runsvdir] action run
[execute] Created symlink from /etc/systemd/system/multi-user.target.wants/gitlab-runsvdir.service to /usr/lib/systemd/system/gitlab-runsvdir.service.
– execute systemctl enable gitlab-runsvdir
* execute[systemctl start gitlab-runsvdir] action run
– execute systemctl start gitlab-runsvdir
* file[/etc/systemd/system/default.target.wants/gitlab-runsvdir.service] action delete (up to date)
* file[/etc/systemd/system/basic.target.wants/gitlab-runsvdir.service] action delete
– delete file /etc/systemd/system/basic.target.wants/gitlab-runsvdir.service
* execute[systemctl daemon-reload] action nothing (skipped due to action :nothing)
* execute[systemctl enable gitlab-runsvdir] action nothing (skipped due to action :nothing)
* execute[systemctl start gitlab-runsvdir] action nothing (skipped due to action :nothing)
Recipe: redis::enable
* account[user and group for redis] action create
* group[user and group for redis] action create (up to date)
* linux_user[user and group for redis] action create (up to date)
(up to date)
* group[Socket group] action create (up to date)
* directory[/var/opt/gitlab/redis] action create (up to date)
* directory[/var/log/gitlab/redis] action create (up to date)
* template[/var/opt/gitlab/redis/redis.conf] action create
– update content in file /var/opt/gitlab/redis/redis.conf from d493be to 46b4a3
— /var/opt/gitlab/redis/redis.conf 2019-04-16 12:16:24.488017903 +0900
+++ /var/opt/gitlab/redis/.chef-redis20190416-27010-iow5yy.conf 2019-04-16 12:22:34.304393168 +0900
@@ -94,7 +94,7 @@
# will silently truncate it to the value of /proc/sys/net/core/somaxconn so
# make sure to raise both the value of somaxconn and tcp_max_syn_backlog
# in order to get the desired effect.
-# tcp-backlog 511
+tcp-backlog 511

# Unix socket.
#
– restore selinux security context
Recipe:
* service[redis] action restart
– restart service service[redis]
* service[redis] action nothing (skipped due to action :nothing)
Recipe: redis::enable
* runit_service[redis] action enable
* ruby_block[restart_service] action nothing (skipped due to action :nothing)
* ruby_block[restart_log_service] action nothing (skipped due to action :nothing)
* ruby_block[reload_log_service] action nothing (skipped due to action :nothing)
* directory[/opt/gitlab/sv/redis] action create (up to date)
* template[/opt/gitlab/sv/redis/run] action create
– update content in file /opt/gitlab/sv/redis/run from 535f80 to da365d
— /opt/gitlab/sv/redis/run 2018-02-20 13:40:06.622462999 +0900
+++ /opt/gitlab/sv/redis/.chef-run20190416-27010-cpppa7 2019-04-16 12:22:34.812378851 +0900
@@ -2,5 +2,5 @@
exec 2>&1

umask 077
-exec chpst -P -U gitlab-redis -u gitlab-redis /opt/gitlab/embedded/bin/redis-server /var/opt/gitlab/redis/redis.conf
+exec chpst -P -U gitlab-redis:gitlab-redis -u gitlab-redis:gitlab-redis /opt/gitlab/embedded/bin/redis-server /var/opt/gitlab/redis/redis.conf
– restore selinux security context
* directory[/opt/gitlab/sv/redis/log] action create (up to date)
* directory[/opt/gitlab/sv/redis/log/main] action create (up to date)
* template[/opt/gitlab/sv/redis/log/run] action create (up to date)
* template[/var/log/gitlab/redis/config] action create (up to date)
* directory[/opt/gitlab/sv/redis/env] action create
– create new directory /opt/gitlab/sv/redis/env
– change mode from ” to ‘0755’
– change owner from ” to ‘root’
– change group from ” to ‘root’
– restore selinux security context
* ruby_block[Delete unmanaged env files for redis service] action run (skipped due to only_if)
* template[/opt/gitlab/sv/redis/check] action create (skipped due to only_if)
* template[/opt/gitlab/sv/redis/finish] action create (skipped due to only_if)
* directory[/opt/gitlab/sv/redis/control] action create
– create new directory /opt/gitlab/sv/redis/control
– change mode from ” to ‘0755’
– change owner from ” to ‘root’
– change group from ” to ‘root’
– restore selinux security context
* link[/opt/gitlab/init/redis] action create (up to date)
* file[/opt/gitlab/sv/redis/down] action delete (up to date)
* ruby_block[restart_service] action run
* ruby_block[restart_service] action nothing (skipped due to action :nothing)
* ruby_block[restart_log_service] action nothing (skipped due to action :nothing)
* ruby_block[reload_log_service] action nothing (skipped due to action :nothing)
* directory[/opt/gitlab/sv/redis] action create (up to date)
* template[/opt/gitlab/sv/redis/run] action create (up to date)
* directory[/opt/gitlab/sv/redis/log] action create (up to date)
* directory[/opt/gitlab/sv/redis/log/main] action create (up to date)
* template[/opt/gitlab/sv/redis/log/run] action create (up to date)
* template[/var/log/gitlab/redis/config] action create (up to date)
* directory[/opt/gitlab/sv/redis/env] action create (up to date)
* ruby_block[Delete unmanaged env files for redis service] action run (skipped due to only_if)
* template[/opt/gitlab/sv/redis/check] action create (skipped due to only_if)
* template[/opt/gitlab/sv/redis/finish] action create (skipped due to only_if)
* directory[/opt/gitlab/sv/redis/control] action create (up to date)
* link[/opt/gitlab/init/redis] action create (up to date)
* file[/opt/gitlab/sv/redis/down] action delete (up to date)
* directory[/opt/gitlab/service] action create (up to date)
* link[/opt/gitlab/service/redis] action create (up to date)
* ruby_block[wait for redis service socket] action run (skipped due to not_if)
– execute the ruby block restart_service
* directory[/opt/gitlab/service] action create (up to date)
* link[/opt/gitlab/service/redis] action create (up to date)
* ruby_block[wait for redis service socket] action run (skipped due to not_if)

Recipe: gitaly::enable
* directory[/var/opt/gitlab/gitaly] action create (up to date)
* directory[/var/log/gitlab/gitaly] action create (up to date)
* env_dir[/opt/gitlab/etc/gitaly/env] action create
* directory[/opt/gitlab/etc/gitaly/env] action create
– create new directory /opt/gitlab/etc/gitaly/env
– restore selinux security context
* file[/opt/gitlab/etc/gitaly/env/HOME] action create
– create new file /opt/gitlab/etc/gitaly/env/HOME
– update content in file /opt/gitlab/etc/gitaly/env/HOME from none to 205bb9
— /opt/gitlab/etc/gitaly/env/HOME 2019-04-16 12:22:34.979374145 +0900
+++ /opt/gitlab/etc/gitaly/env/.chef-HOME20190416-27010-nwg3ab 2019-04-16 12:22:34.979374145 +0900
@@ -1 +1,2 @@
+/var/opt/gitlab
– restore selinux security context
* file[/opt/gitlab/etc/gitaly/env/PATH] action create
– create new file /opt/gitlab/etc/gitaly/env/PATH
– update content in file /opt/gitlab/etc/gitaly/env/PATH from none to d5dc07
— /opt/gitlab/etc/gitaly/env/PATH 2019-04-16 12:22:35.002373497 +0900
+++ /opt/gitlab/etc/gitaly/env/.chef-PATH20190416-27010-1y00iq9 2019-04-16 12:22:35.001373525 +0900
@@ -1 +1,2 @@
+/opt/gitlab/bin:/opt/gitlab/embedded/bin:/bin:/usr/bin
– restore selinux security context
* file[/opt/gitlab/etc/gitaly/env/TZ] action create
– create new file /opt/gitlab/etc/gitaly/env/TZ
– update content in file /opt/gitlab/etc/gitaly/env/TZ from none to 983a95
— /opt/gitlab/etc/gitaly/env/TZ 2019-04-16 12:22:35.025372849 +0900
+++ /opt/gitlab/etc/gitaly/env/.chef-TZ20190416-27010-12ljuc6 2019-04-16 12:22:35.024372877 +0900
@@ -1 +1,2 @@
+:/etc/localtime
– restore selinux security context
* file[/opt/gitlab/etc/gitaly/env/PYTHONPATH] action create
– create new file /opt/gitlab/etc/gitaly/env/PYTHONPATH
– update content in file /opt/gitlab/etc/gitaly/env/PYTHONPATH from none to 990cc2
— /opt/gitlab/etc/gitaly/env/PYTHONPATH 2019-04-16 12:22:35.050372144 +0900
+++ /opt/gitlab/etc/gitaly/env/.chef-PYTHONPATH20190416-27010-1kfa6d7 2019-04-16 12:22:35.050372144 +0900
@@ -1 +1,2 @@
+/opt/gitlab/embedded/lib/python3.4/site-packages
– restore selinux security context
* file[/opt/gitlab/etc/gitaly/env/ICU_DATA] action create
– create new file /opt/gitlab/etc/gitaly/env/ICU_DATA
– update content in file /opt/gitlab/etc/gitaly/env/ICU_DATA from none to a04260
— /opt/gitlab/etc/gitaly/env/ICU_DATA 2019-04-16 12:22:35.093370932 +0900
+++ /opt/gitlab/etc/gitaly/env/.chef-ICU_DATA20190416-27010-1y8u8v0 2019-04-16 12:22:35.093370932 +0900
@@ -1 +1,2 @@
+/opt/gitlab/embedded/share/icu/current
– restore selinux security context
* file[/opt/gitlab/etc/gitaly/env/SSL_CERT_DIR] action create
– create new file /opt/gitlab/etc/gitaly/env/SSL_CERT_DIR
– update content in file /opt/gitlab/etc/gitaly/env/SSL_CERT_DIR from none to 4f45cf
— /opt/gitlab/etc/gitaly/env/SSL_CERT_DIR 2019-04-16 12:22:35.162368988 +0900
+++ /opt/gitlab/etc/gitaly/env/.chef-SSL_CERT_DIR20190416-27010-jr109c 2019-04-16 12:22:35.162368988 +0900
@@ -1 +1,2 @@
+/opt/gitlab/embedded/ssl/certs/
– restore selinux security context

* file[/opt/gitlab/etc/gitaly/HOME] action delete
– delete file /opt/gitlab/etc/gitaly/HOME
* file[/opt/gitlab/etc/gitaly/PATH] action delete
– delete file /opt/gitlab/etc/gitaly/PATH
* file[/opt/gitlab/etc/gitaly/TZ] action delete
– delete file /opt/gitlab/etc/gitaly/TZ
* file[/opt/gitlab/etc/gitaly/PYTHONPATH] action delete (up to date)
* file[/opt/gitlab/etc/gitaly/ICU_DATA] action delete (up to date)
* file[/opt/gitlab/etc/gitaly/SSL_CERT_DIR] action delete (up to date)
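The run above moves Gitaly's environment variables out of /opt/gitlab/etc/gitaly and into one file per variable under /opt/gitlab/etc/gitaly/env (HOME, PATH, TZ, PYTHONPATH, ICU_DATA, SSL_CERT_DIR). A quick way to confirm the new layout once reconfigure finishes (paths taken from the log above, commands are just a suggested check):

ls -l /opt/gitlab/etc/gitaly/env
cat /opt/gitlab/etc/gitaly/env/PATH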
* template[Create Gitaly config.toml] action create
– update content in file /var/opt/gitlab/gitaly/config.toml from e62e01 to 3a6251
— /var/opt/gitlab/gitaly/config.toml 2018-02-20 13:41:25.420787589 +0900
+++ /var/opt/gitlab/gitaly/.chef-config20190416-27010-jdwjk9.toml 2019-04-16 12:22:35.240366789 +0900
@@ -8,6 +8,7 @@
bin_dir = ‘/opt/gitlab/embedded/bin’

+
# Optional: export metrics via Prometheus
prometheus_listen_addr = ‘localhost:9236’

– change mode from ‘0644’ to ‘0640’
– change group from ‘root’ to ‘git’
– restore selinux security context
Recipe: <Dynamically Defined Resource>
* service[gitaly] action nothing (skipped due to action :nothing)
Recipe: gitaly::enable
* runit_service[gitaly] action enable
* ruby_block[restart_service] action nothing (skipped due to action :nothing)
* ruby_block[restart_log_service] action nothing (skipped due to action :nothing)
* ruby_block[reload_log_service] action nothing (skipped due to action :nothing)
* directory[/opt/gitlab/sv/gitaly] action create (up to date)
* template[/opt/gitlab/sv/gitaly/run] action create
– update content in file /opt/gitlab/sv/gitaly/run from ca190e to 302140
— /opt/gitlab/sv/gitaly/run 2018-02-20 13:41:25.545784910 +0900
+++ /opt/gitlab/sv/gitaly/.chef-run20190416-27010-a9zo84 2019-04-16 12:22:35.285365521 +0900
@@ -8,8 +8,8 @@

cd /var/opt/gitlab/gitaly

-exec chpst -e /opt/gitlab/etc/gitaly -P \
– -U git \
– -u git \
+exec chpst -e /opt/gitlab/etc/gitaly/env -P \
+ -U git:git \
+ -u git:git \
/opt/gitlab/embedded/bin/gitaly /var/opt/gitlab/gitaly/config.toml
– restore selinux security context
* directory[/opt/gitlab/sv/gitaly/log] action create (up to date)
* directory[/opt/gitlab/sv/gitaly/log/main] action create (up to date)
* template[/opt/gitlab/sv/gitaly/log/run] action create (up to date)
* template[/var/log/gitlab/gitaly/config] action create (up to date)
* directory[/opt/gitlab/sv/gitaly/env] action create
– create new directory /opt/gitlab/sv/gitaly/env
– change mode from '' to '0755'
– change owner from '' to 'root'
– change group from '' to 'root'
– restore selinux security context
* ruby_block[Delete unmanaged env files for gitaly service] action run (skipped due to only_if)
* template[/opt/gitlab/sv/gitaly/check] action create (skipped due to only_if)
* template[/opt/gitlab/sv/gitaly/finish] action create (skipped due to only_if)
* directory[/opt/gitlab/sv/gitaly/control] action create
– create new directory /opt/gitlab/sv/gitaly/control
– change mode from '' to '0755'
– change owner from '' to 'root'
– change group from '' to 'root'
– restore selinux security context
* link[/opt/gitlab/init/gitaly] action create (up to date)
* file[/opt/gitlab/sv/gitaly/down] action delete (up to date)
* ruby_block[restart_service] action run
* ruby_block[restart_service] action nothing (skipped due to action :nothing)
* ruby_block[restart_log_service] action nothing (skipped due to action :nothing)
* ruby_block[reload_log_service] action nothing (skipped due to action :nothing)
* directory[/opt/gitlab/sv/gitaly] action create (up to date)
* template[/opt/gitlab/sv/gitaly/run] action create (up to date)
* directory[/opt/gitlab/sv/gitaly/log] action create (up to date)
* directory[/opt/gitlab/sv/gitaly/log/main] action create (up to date)
* template[/opt/gitlab/sv/gitaly/log/run] action create (up to date)
* template[/var/log/gitlab/gitaly/config] action create (up to date)
* directory[/opt/gitlab/sv/gitaly/env] action create (up to date)
* ruby_block[Delete unmanaged env files for gitaly service] action run (skipped due to only_if)
* template[/opt/gitlab/sv/gitaly/check] action create (skipped due to only_if)
* template[/opt/gitlab/sv/gitaly/finish] action create (skipped due to only_if)
* directory[/opt/gitlab/sv/gitaly/control] action create (up to date)
* link[/opt/gitlab/init/gitaly] action create (up to date)
* file[/opt/gitlab/sv/gitaly/down] action delete (up to date)
* directory[/opt/gitlab/service] action create (up to date)
* link[/opt/gitlab/service/gitaly] action create (up to date)
* ruby_block[wait for gitaly service socket] action run (skipped due to not_if)
– execute the ruby block restart_service
* directory[/opt/gitlab/service] action create (up to date)
* link[/opt/gitlab/service/gitaly] action create (up to date)
* ruby_block[wait for gitaly service socket] action run (skipped due to not_if)

* file[/var/opt/gitlab/gitaly/VERSION] action create
– update content in file /var/opt/gitlab/gitaly/VERSION from c417e2 to b19b04
— /var/opt/gitlab/gitaly/VERSION 2019-04-16 12:17:05.735800845 +0900
+++ /var/opt/gitlab/gitaly/.chef-VERSION20190416-27010-1kcnd71 2019-04-16 12:22:35.851349570 +0900
@@ -1,2 +1,2 @@
-Gitaly, version 0.100.1, built 20180726.011732
+Gitaly, version 1.27.1
– restore selinux security context
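The VERSION file shows Gitaly going from 0.100.1 (the 10.x build) to 1.27.1. A simple sanity check after reconfigure is to read that file back and make sure the runit service is actually running; these are standard omnibus commands rather than anything specific to this host:

cat /var/opt/gitlab/gitaly/VERSION
sudo gitlab-ctl status gitaly
sudo gitlab-ctl tail gitaly    # follow the Gitaly log if the service keeps restarting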
Recipe: postgresql::user
* account[Postgresql user and group] action create
* group[Postgresql user and group] action create (up to date)
* linux_user[Postgresql user and group] action create (up to date)
(up to date)
Recipe: postgresql::enable
* directory[/var/opt/gitlab/postgresql] action create (up to date)
* directory[/var/opt/gitlab/postgresql/data] action create (up to date)
* directory[/var/log/gitlab/postgresql] action create (up to date)
* link[/var/opt/gitlab/postgresql/data] action create (skipped due to not_if)
* file[/var/opt/gitlab/postgresql/.profile] action create (up to date)
* sysctl[kernel.shmmax] action create
* directory[create /etc/sysctl.d for kernel.shmmax] action create (up to date)
* file[create /opt/gitlab/embedded/etc/90-omnibus-gitlab-kernel.shmmax.conf kernel.shmmax] action create (up to date)
* link[/etc/sysctl.d/90-omnibus-gitlab-kernel.shmmax.conf] action create (up to date)
* file[delete /etc/sysctl.d/90-postgresql.conf kernel.shmmax] action delete (skipped due to only_if)
* file[delete /etc/sysctl.d/90-unicorn.conf kernel.shmmax] action delete (skipped due to only_if)
* file[delete /opt/gitlab/embedded/etc/90-omnibus-gitlab.conf kernel.shmmax] action delete (skipped due to only_if)
* file[delete /etc/sysctl.d/90-omnibus-gitlab.conf kernel.shmmax] action delete (skipped due to only_if)
* execute[load sysctl conf kernel.shmmax] action nothing (skipped due to action :nothing)
(up to date)
* sysctl[kernel.shmall] action create
* directory[create /etc/sysctl.d for kernel.shmall] action create (up to date)
* file[create /opt/gitlab/embedded/etc/90-omnibus-gitlab-kernel.shmall.conf kernel.shmall] action create (up to date)
* link[/etc/sysctl.d/90-omnibus-gitlab-kernel.shmall.conf] action create (up to date)
* file[delete /etc/sysctl.d/90-postgresql.conf kernel.shmall] action delete (skipped due to only_if)
* file[delete /etc/sysctl.d/90-unicorn.conf kernel.shmall] action delete (skipped due to only_if)
* file[delete /opt/gitlab/embedded/etc/90-omnibus-gitlab.conf kernel.shmall] action delete (skipped due to only_if)
* file[delete /etc/sysctl.d/90-omnibus-gitlab.conf kernel.shmall] action delete (skipped due to only_if)
* execute[load sysctl conf kernel.shmall] action nothing (skipped due to action :nothing)
(up to date)
* sysctl[kernel.sem] action create
* directory[create /etc/sysctl.d for kernel.sem] action create (up to date)
* file[create /opt/gitlab/embedded/etc/90-omnibus-gitlab-kernel.sem.conf kernel.sem] action create (up to date)
* link[/etc/sysctl.d/90-omnibus-gitlab-kernel.sem.conf] action create (up to date)
* file[delete /etc/sysctl.d/90-postgresql.conf kernel.sem] action delete (skipped due to only_if)
* file[delete /etc/sysctl.d/90-unicorn.conf kernel.sem] action delete (skipped due to only_if)
* file[delete /opt/gitlab/embedded/etc/90-omnibus-gitlab.conf kernel.sem] action delete (skipped due to only_if)
* file[delete /etc/sysctl.d/90-omnibus-gitlab.conf kernel.sem] action delete (skipped due to only_if)
* execute[load sysctl conf kernel.sem] action nothing (skipped due to action :nothing)
(up to date)
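All three kernel.shmmax / kernel.shmall / kernel.sem resources report (up to date), so the shared-memory settings did not change. If PostgreSQL later fails to start with shared-memory errors, it is worth checking that the values the package drops under /etc/sysctl.d are really loaded, for example:

sysctl kernel.shmmax kernel.shmall kernel.sem
cat /etc/sysctl.d/90-omnibus-gitlab-kernel.shmmax.conf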
* execute[/opt/gitlab/embedded/bin/initdb -D /var/opt/gitlab/postgresql/data -E UTF8] action run (skipped due to not_if)
* file[/var/opt/gitlab/postgresql/data/server.crt] action create (up to date)
* file[/var/opt/gitlab/postgresql/data/server.key] action create (up to date)
* template[/var/opt/gitlab/postgresql/data/postgresql.conf] action create (up to date)
* template[/var/opt/gitlab/postgresql/data/runtime.conf] action create
– update content in file /var/opt/gitlab/postgresql/data/runtime.conf from f801d2 to aff97a
— /var/opt/gitlab/postgresql/data/runtime.conf 2019-04-16 12:16:25.107999610 +0900
+++ /var/opt/gitlab/postgresql/data/.chef-runtime20190416-27010-xr9yw9.conf 2019-04-16 12:22:35.919347654 +0900
@@ -97,6 +97,9 @@
# autovacuum, -1 means use
# vacuum_cost_limit

+# Parameters for gathering statistics
+default_statistics_target = 1000
+
# – Client connection timeouts
statement_timeout = 60000

– restore selinux security context
* execute[reload postgresql] action run
– execute /opt/gitlab/bin/gitlab-ctl hup postgresql
* execute[start postgresql] action run (skipped due to not_if)
* template[/var/opt/gitlab/postgresql/data/pg_hba.conf] action create (up to date)
* template[/var/opt/gitlab/postgresql/data/pg_ident.conf] action create (up to date)
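runtime.conf gains default_statistics_target = 1000 and the recipe reloads PostgreSQL with gitlab-ctl hup. To confirm the running server picked the new value up, the bundled gitlab-psql wrapper can be used (a minimal check against the gitlabhq_production database configured above):

sudo gitlab-psql -c 'SHOW default_statistics_target;'
sudo gitlab-psql -c 'SHOW statement_timeout;'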
Recipe: <Dynamically Defined Resource>
* service[postgresql] action nothing (skipped due to action :nothing)
Recipe: postgresql::enable
* runit_service[postgresql] action enable
* ruby_block[restart_service] action nothing (skipped due to action :nothing)
* ruby_block[restart_log_service] action nothing (skipped due to action :nothing)
* ruby_block[reload_log_service] action nothing (skipped due to action :nothing)
* directory[/opt/gitlab/sv/postgresql] action create (up to date)
* template[/opt/gitlab/sv/postgresql/run] action create
– update content in file /opt/gitlab/sv/postgresql/run from 870bb6 to dc5689
— /opt/gitlab/sv/postgresql/run 2018-02-20 13:40:15.512243507 +0900
+++ /opt/gitlab/sv/postgresql/.chef-run20190416-27010-1xdis56 2019-04-16 12:22:36.435333111 +0900
@@ -1,5 +1,5 @@
#!/bin/sh
exec 2>&1

-exec chpst -P -U gitlab-psql -u gitlab-psql /opt/gitlab/embedded/bin/postgres -D /var/opt/gitlab/postgresql/data
+exec chpst -P -U gitlab-psql:gitlab-psql -u gitlab-psql:gitlab-psql /opt/gitlab/embedded/bin/postgres -D /var/opt/gitlab/postgresql/data
– restore selinux security context
* directory[/opt/gitlab/sv/postgresql/log] action create (up to date)
* directory[/opt/gitlab/sv/postgresql/log/main] action create (up to date)
* template[/opt/gitlab/sv/postgresql/log/run] action create (up to date)
* template[/var/log/gitlab/postgresql/config] action create (up to date)
* directory[/opt/gitlab/sv/postgresql/env] action create
– create new directory /opt/gitlab/sv/postgresql/env
– change mode from '' to '0755'
– change owner from '' to 'root'
– change group from '' to 'root'
– restore selinux security context
* ruby_block[Delete unmanaged env files for postgresql service] action run (skipped due to only_if)
* template[/opt/gitlab/sv/postgresql/check] action create (skipped due to only_if)
* template[/opt/gitlab/sv/postgresql/finish] action create (skipped due to only_if)
* directory[/opt/gitlab/sv/postgresql/control] action create (up to date)
* template[/opt/gitlab/sv/postgresql/control/t] action create (up to date)
* link[/opt/gitlab/init/postgresql] action create (up to date)
* file[/opt/gitlab/sv/postgresql/down] action delete (up to date)
* ruby_block[restart_service] action run (skipped due to only_if)
* directory[/opt/gitlab/service] action create (up to date)
* link[/opt/gitlab/service/postgresql] action create (up to date)
* ruby_block[wait for postgresql service socket] action run (skipped due to not_if)
* directory[/opt/gitlab/service/postgresql/supervise] action create (up to date)
* directory[/opt/gitlab/service/postgresql/log/supervise] action create (up to date)
* file[/opt/gitlab/sv/postgresql/supervise/ok] action touch (skipped due to only_if)
* file[/opt/gitlab/sv/postgresql/log/supervise/ok] action touch (skipped due to only_if)
* file[/opt/gitlab/sv/postgresql/supervise/status] action touch
– change owner from ‘root’ to ‘gitlab-psql’
– change group from ‘root’ to ‘gitlab-psql’
– restore selinux security context
– update utime on file /opt/gitlab/sv/postgresql/supervise/status
* file[/opt/gitlab/sv/postgresql/log/supervise/status] action touch
– change owner from ‘root’ to ‘gitlab-psql’
– change group from ‘root’ to ‘gitlab-psql’
– restore selinux security context
– update utime on file /opt/gitlab/sv/postgresql/log/supervise/status
* file[/opt/gitlab/sv/postgresql/supervise/control] action touch (skipped due to only_if)
* file[/opt/gitlab/sv/postgresql/log/supervise/control] action touch (skipped due to only_if)

Recipe: postgresql::bin
* ruby_block[Link postgresql bin files to the correct version] action run (skipped due to only_if)
Recipe: postgresql::enable
* template[/opt/gitlab/etc/gitlab-psql-rc] action create
– update content in file /opt/gitlab/etc/gitlab-psql-rc from 4fdb89 to b7b8fc
— /opt/gitlab/etc/gitlab-psql-rc 2019-04-16 12:16:25.860977392 +0900
+++ /opt/gitlab/etc/.chef-gitlab-psql-rc20190416-27010-6bejrh 2019-04-16 12:22:36.581328997 +0900
@@ -1,4 +1,5 @@
psql_user=’gitlab-psql’
+psql_group=’gitlab-psql’
psql_host=’/var/opt/gitlab/postgresql’
psql_port=’5432′
psql_dbname=’gitlabhq_production’
– restore selinux security context
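gitlab-psql-rc only gains a psql_group line; this small rc file is what the gitlab-psql wrapper reads to find the socket directory, port and database name. If a later step cannot reach the database, checking its contents is a cheap first step:

cat /opt/gitlab/etc/gitlab-psql-rc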
* postgresql_user[gitlab] action create
* execute[create gitlab postgresql user] action run (skipped due to not_if)
(up to date)
* execute[create gitlabhq_production database] action run (skipped due to not_if)
* postgresql_user[gitlab_replicator] action create
* execute[create gitlab_replicator postgresql user] action run (skipped due to not_if)
* execute[set options for gitlab_replicator postgresql user] action run (skipped due to not_if)
(up to date)
* postgresql_extension[pg_trgm] action enable
* postgresql_query[enable pg_trgm extension] action run (skipped due to only_if)
(up to date)
* ruby_block[warn pending postgresql restart] action run
– execute the ruby block warn pending postgresql restart
* execute[reload postgresql] action nothing (skipped due to action :nothing)
* execute[start postgresql] action nothing (skipped due to action :nothing)
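The warn pending postgresql restart block is there to flag that a reload (hup) is not always enough; some changes only take effect after a full PostgreSQL restart. Restarting it briefly interrupts GitLab, so if the warning applies it is usually done while still inside the upgrade window, roughly like this:

sudo gitlab-ctl restart postgresql
sudo gitlab-ctl status postgresql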
Recipe: gitlab::database_migrations
* bash[migrate gitlab-rails database] action run
[execute] Arel performing automatic type casting is deprecated, and will be removed in Arel 8.0. If you are seeing this, it is because you are manually passing a value to an Arel predicate, and the `Arel::Table` object was constructed manually. The easiest way to remove this warning is to use an `Arel::Table` object returned from calling `arel_table` on an ActiveRecord::Base subclass.

If you’re certain the value is already of the right type, change `attribute.eq(value)` to `attribute.eq(Arel::Nodes::Quoted.new(value))` (you will be able to remove that in Arel 8.0, it is only required to silence this deprecation warning).

You can also silence this warning globally by setting `$arel_silence_type_casting_deprecation` to `true`. (Do NOT do this if you are a library author)

If you are passing user input to a predicate, you must either give an appropriate type caster object to the `Arel::Table`, or manually cast the value before passing it to Arel.
-----------------------------------------------------
(the same Arel deprecation warning is printed many more times while the migrations run; repeats omitted)
-----------------------------------------------------
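The Arel messages above are only deprecation warnings coming from the Rails/Arel version bundled with 11.9.x; they do not abort the migration. What matters is that every migration below ends in 'migrated'. After the upgrade this can also be double-checked with the usual rake tasks (nothing host-specific assumed here):

sudo gitlab-rake db:migrate:status | grep -iw down    # any 'down' rows mean a migration did not run
sudo gitlab-rake gitlab:check SANITIZE=true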
== 20170925184228 AddFaviconToAppearances: migrating ==========================
— add_column(:appearances, :favicon, :string)
-> 0.0004s
== 20170925184228 AddFaviconToAppearances: migrated (0.0004s) =================

== 20180101160630 ChangeProjectIdForPrometheusMetrics: migrating ==============
— change_column_null(:prometheus_metrics, :project_id, true)
-> 0.0010s
== 20180101160630 ChangeProjectIdForPrometheusMetrics: migrated (0.0011s) =====

== 20180228172924 AddIncludePrivateContributionsToUsers: migrating ============
— add_column(:users, :include_private_contributions, :boolean)
-> 0.0012s
== 20180228172924 AddIncludePrivateContributionsToUsers: migrated (0.0012s) ===

== 20180308125206 AddUserInternalRegexToApplicationSetting: migrating =========
— add_column(:application_settings, :user_default_internal_regex, :string, {:null=>true})
-> 0.0013s
== 20180308125206 AddUserInternalRegexToApplicationSetting: migrated (0.0014s)

== 20180320142552 CreatePrometheusAlerts: migrating ===========================
— create_table(:prometheus_alerts, {})
-> 0.0986s
== 20180320142552 CreatePrometheusAlerts: migrated (0.0987s) ==================

== 20180408143354 RenameUsersRssTokenToFeedToken: migrating ===================
— transaction_open?()
-> 0.0000s
— columns(:users)
-> 0.0033s
— add_column(:users, :feed_token, :string, {:limit=>nil, :precision=>nil, :scale=>nil})
-> 0.0123s
— quote_table_name(:users)
-> 0.0000s
— quote_column_name(:rss_token)
-> 0.0000s
— quote_column_name(:feed_token)
-> 0.0000s
— execute(“CREATE OR REPLACE FUNCTION trigger_7dc952250ffd()\nRETURNS trigger AS\n$BODY$\nBEGIN\n NEW.\”feed_token\” := NEW.\”rss_token\”;\n RETURN NEW;\nEND;\n$BODY$\nLANGUAGE ‘plpgsql’\nVOLATILE\n”)
-> 0.0081s
— execute(“CREATE TRIGGER trigger_7dc952250ffd\nBEFORE INSERT OR UPDATE\nON \”users\”\nFOR EACH ROW\nEXECUTE PROCEDURE trigger_7dc952250ffd()\n”)
-> 0.0083s
— transaction_open?()
-> 0.0000s
— exec_query(“SELECT COUNT(*) AS count FROM \”users\””)
-> 0.0152s
— exec_query(“SELECT \”users\”.\”id\” FROM \”users\” ORDER BY \”users\”.\”id\” ASC LIMIT 1″)
-> 0.0012s
— exec_query(“SELECT \”users\”.\”id\” FROM \”users\” WHERE \”users\”.\”id\” >= 1 ORDER BY \”users\”.\”id\” ASC LIMIT 1 OFFSET 1″)
-> 0.0012s
— execute(“UPDATE \”users\” SET \”feed_token\” = \”users\”.\”rss_token\” WHERE \”users\”.\”id\” >= 1 AND \”users\”.\”id\” < 2")
-> 0.0138s
— exec_query(“SELECT \”users\”.\”id\” FROM \”users\” WHERE \”users\”.\”id\” >= 2 ORDER BY \”users\”.\”id\” ASC LIMIT 1 OFFSET 1″)
-> 0.0011s
— execute(“UPDATE \”users\” SET \”feed_token\” = \”users\”.\”rss_token\” WHERE \”users\”.\”id\” >= 2″)
-> 0.0060s
— indexes(:users)
-> 0.0171s
— transaction_open?()
-> 0.0000s
— index_exists?(:users, [“feed_token”], {:unique=>false, :name=>”index_users_on_feed_token”, :length=>[], :order=>{}, :using=>:btree, :algorithm=>:concurrently})
-> 0.0149s
— execute(“SET statement_timeout TO 0”)
-> 0.0004s
— add_index(:users, [“feed_token”], {:unique=>false, :name=>”index_users_on_feed_token”, :length=>[], :order=>{}, :using=>:btree, :algorithm=>:concurrently})
-> 0.0263s
— execute(“RESET ALL”)
-> 0.0004s
— foreign_keys(:users)
-> 0.0069s
== 20180408143354 RenameUsersRssTokenToFeedToken: migrated (0.2081s) ==========

== 20180408143355 CleanupUsersRssTokenRename: migrating =======================
— execute(“DROP TRIGGER IF EXISTS trigger_7dc952250ffd ON users”)
-> 0.0044s
— execute(“DROP FUNCTION IF EXISTS trigger_7dc952250ffd()”)
-> 0.0082s
— remove_column(:users, :rss_token)
-> 0.0398s
== 20180408143355 CleanupUsersRssTokenRename: migrated (0.0974s) ==============
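The pair RenameUsersRssTokenToFeedToken / CleanupUsersRssTokenRename above is GitLab's online column-rename pattern: add the new column, keep it in sync with a trigger, backfill in id ranges, create the index concurrently, then drop the trigger and the old column. One way to confirm the end state, assuming the default gitlabhq_production database, is:

sudo gitlab-psql -c "SELECT column_name FROM information_schema.columns WHERE table_name = 'users' AND column_name IN ('rss_token', 'feed_token');"

Only feed_token should come back once the cleanup migration has run.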

== 20180417102933 DropRepositoryStorageEventsForGeoEvents: migrating ==========
— transaction()
— remove_column(:geo_hashed_storage_migrated_events, :repository_storage_path)
-> 0.0009s
— remove_column(:geo_repository_created_events, :repository_storage_path)
-> 0.0007s
— remove_column(:geo_repository_deleted_events, :repository_storage_path)
-> 0.0006s
— remove_column(:geo_repository_renamed_events, :repository_storage_path)
-> 0.0006s
-> 0.0075s
== 20180417102933 DropRepositoryStorageEventsForGeoEvents: migrated (0.0075s) =

== 20180423165301 AddNegativeMatchingCommitMessagePushRule: migrating =========
— add_column(:push_rules, :commit_message_negative_regex, :string, {:null=>true})
-> 0.0008s
== 20180423165301 AddNegativeMatchingCommitMessagePushRule: migrated (0.0009s)

== 20180423204600 AddPagesAccessLevelToProjectFeature: migrating ==============
— transaction_open?()
-> 0.0000s
— execute(“SET statement_timeout TO 0”)
-> 0.0003s
— transaction()
— add_column(:project_features, :pages_access_level, :integer, {:default=>nil})
-> 0.0008s
— change_column_default(:project_features, :pages_access_level, 30)
-> 0.0027s
-> 0.0099s
— transaction_open?()
-> 0.0000s
— exec_query(“SELECT COUNT(*) AS count FROM \”project_features\””)
-> 0.0014s
— exec_query(“SELECT \”project_features\”.\”id\” FROM \”project_features\” ORDER BY \”project_features\”.\”id\” ASC LIMIT 1″)
-> 0.0008s
— exec_query(“SELECT \”project_features\”.\”id\” FROM \”project_features\” WHERE \”project_features\”.\”id\” >= 1 ORDER BY \”project_features\”.\”id\” ASC LIMIT 1 OFFSET 2″)
-> 0.0008s
— execute(“UPDATE \”project_features\” SET \”pages_access_level\” = 30 WHERE \”project_features\”.\”id\” >= 1 AND \”project_features\”.\”id\” < 5")
-> 0.0165s
— exec_query(“SELECT \”project_features\”.\”id\” FROM \”project_features\” WHERE \”project_features\”.\”id\” >= 5 ORDER BY \”project_features\”.\”id\” ASC LIMIT 1 OFFSET 2″)
-> 0.0009s
— execute(“UPDATE \”project_features\” SET \”pages_access_level\” = 30 WHERE \”project_features\”.\”id\” >= 5 AND \”project_features\”.\”id\” < 7")
-> 0.0069s
— exec_query(“SELECT \”project_features\”.\”id\” FROM \”project_features\” WHERE \”project_features\”.\”id\” >= 7 ORDER BY \”project_features\”.\”id\” ASC LIMIT 1 OFFSET 2″)
-> 0.0009s
— execute(“UPDATE \”project_features\” SET \”pages_access_level\” = 30 WHERE \”project_features\”.\”id\” >= 7 AND \”project_features\”.\”id\” < 9")
-> 0.0058s
— exec_query(“SELECT \”project_features\”.\”id\” FROM \”project_features\” WHERE \”project_features\”.\”id\” >= 9 ORDER BY \”project_features\”.\”id\” ASC LIMIT 1 OFFSET 2″)
-> 0.0009s
— execute(“UPDATE \”project_features\” SET \”pages_access_level\” = 30 WHERE \”project_features\”.\”id\” >= 9 AND \”project_features\”.\”id\” < 11")
-> 0.0057s
— exec_query(“SELECT \”project_features\”.\”id\” FROM \”project_features\” WHERE \”project_features\”.\”id\” >= 11 ORDER BY \”project_features\”.\”id\” ASC LIMIT 1 OFFSET 2″)
-> 0.0008s
— execute(“UPDATE \”project_features\” SET \”pages_access_level\” = 30 WHERE \”project_features\”.\”id\” >= 11 AND \”project_features\”.\”id\” < 13")
-> 0.0065s
— exec_query(“SELECT \”project_features\”.\”id\” FROM \”project_features\” WHERE \”project_features\”.\”id\” >= 13 ORDER BY \”project_features\”.\”id\” ASC LIMIT 1 OFFSET 2″)
-> 0.0009s
— execute(“UPDATE \”project_features\” SET \”pages_access_level\” = 30 WHERE \”project_features\”.\”id\” >= 13 AND \”project_features\”.\”id\” < 15")
-> 0.0056s
— exec_query(“SELECT \”project_features\”.\”id\” FROM \”project_features\” WHERE \”project_features\”.\”id\” >= 15 ORDER BY \”project_features\”.\”id\” ASC LIMIT 1 OFFSET 2″)
-> 0.0008s
— execute(“UPDATE \”project_features\” SET \”pages_access_level\” = 30 WHERE \”project_features\”.\”id\” >= 15 AND \”project_features\”.\”id\” < 17")
-> 0.0064s
— exec_query(“SELECT \”project_features\”.\”id\” FROM \”project_features\” WHERE \”project_features\”.\”id\” >= 17 ORDER BY \”project_features\”.\”id\” ASC LIMIT 1 OFFSET 2″)
-> 0.0009s
— execute(“UPDATE \”project_features\” SET \”pages_access_level\” = 30 WHERE \”project_features\”.\”id\” >= 17 AND \”project_features\”.\”id\” < 19")
-> 0.0058s
— exec_query(“SELECT \”project_features\”.\”id\” FROM \”project_features\” WHERE \”project_features\”.\”id\” >= 19 ORDER BY \”project_features\”.\”id\” ASC LIMIT 1 OFFSET 2″)
-> 0.0007s
Arel performing automatic type casting is deprecated, and will be removed in Arel 8.0. If you are seeing this, it is because you are manually passing a value to an Arel predicate, and the `Arel::Table` object was constructed manually. The easiest way to remove this warning is to use an `Arel::Table` object returned from calling `arel_table` on an ActiveRecord::Base subclass.

If you’re certain the value is already of the right type, change `attribute.eq(value)` to `attribute.eq(Arel::Nodes::Quoted.new(value))` (you will be able to remove that in Arel 8.0, it is only required to silence this deprecation warning).

You can also silence this warning globally by setting `$arel_silence_type_casting_deprecation` to `true`. (Do NOT do this if you are a library author)

If you are passing user input to a predicate, you must either give an appropriate type caster object to the `Arel::Table`, or manually cast the value before passing it to Arel.

(The same Arel deprecation warning is printed many more times at this point; the repeats are omitted. It is only a Rails/Arel deprecation notice and the migration keeps running.)

— execute(“UPDATE \”project_features\” SET \”pages_access_level\” = 30 WHERE \”project_features\”.\”id\” >= 19 AND \”project_features\”.\”id\” < 21")
-> 0.0063s
— exec_query(“SELECT \”project_features\”.\”id\” FROM \”project_features\” WHERE \”project_features\”.\”id\” >= 21 ORDER BY \”project_features\”.\”id\” ASC LIMIT 1 OFFSET 2″)
-> 0.0008s
— execute(“UPDATE \”project_features\” SET \”pages_access_level\” = 30 WHERE \”project_features\”.\”id\” >= 21 AND \”project_features\”.\”id\” < 23")
-> 0.0061s
— exec_query(“SELECT \”project_features\”.\”id\” FROM \”project_features\” WHERE \”project_features\”.\”id\” >= 23 ORDER BY \”project_features\”.\”id\” ASC LIMIT 1 OFFSET 2″)
-> 0.0008s
— execute(“UPDATE \”project_features\” SET \”pages_access_level\” = 30 WHERE \”project_features\”.\”id\” >= 23 AND \”project_features\”.\”id\” < 25")
-> 0.0063s
— exec_query(“SELECT \”project_features\”.\”id\” FROM \”project_features\” WHERE \”project_features\”.\”id\” >= 25 ORDER BY \”project_features\”.\”id\” ASC LIMIT 1 OFFSET 2″)
-> 0.0008s
— execute(“UPDATE \”project_features\” SET \”pages_access_level\” = 30 WHERE \”project_features\”.\”id\” >= 25 AND \”project_features\”.\”id\” < 27")
-> 0.0064s
— exec_query(“SELECT \”project_features\”.\”id\” FROM \”project_features\” WHERE \”project_features\”.\”id\” >= 27 ORDER BY \”project_features\”.\”id\” ASC LIMIT 1 OFFSET 2″)
-> 0.0009s
— execute(“UPDATE \”project_features\” SET \”pages_access_level\” = 30 WHERE \”project_features\”.\”id\” >= 27 AND \”project_features\”.\”id\” < 29")
-> 0.0058s
— exec_query(“SELECT \”project_features\”.\”id\” FROM \”project_features\” WHERE \”project_features\”.\”id\” >= 29 ORDER BY \”project_features\”.\”id\” ASC LIMIT 1 OFFSET 2″)
-> 0.0008s
— execute(“UPDATE \”project_features\” SET \”pages_access_level\” = 30 WHERE \”project_features\”.\”id\” >= 29″)
-> 0.0066s
— change_column_null(:project_features, :pages_access_level, false)
-> 0.0083s
— execute(“RESET ALL”)
-> 0.0005s
— change_column_default(:project_features, :pages_access_level, 20)
-> 0.0159s
== 20180423204600 AddPagesAccessLevelToProjectFeature: migrated (0.1747s) =====
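
The AddPagesAccessLevelToProjectFeature steps above show the batched-backfill pattern: the column is added without a value, existing rows are updated in small id-range batches (so a single huge UPDATE never holds a long lock), and only then is the column made NOT NULL. Roughly the SQL behind that log, reconstructed from the output above rather than the exact statements the Rails helper generates:

ALTER TABLE project_features ADD COLUMN pages_access_level integer;
ALTER TABLE project_features ALTER COLUMN pages_access_level SET DEFAULT 30;

-- backfill existing rows a few ids at a time instead of one big UPDATE
UPDATE project_features SET pages_access_level = 30 WHERE id >= 1 AND id < 5;
UPDATE project_features SET pages_access_level = 30 WHERE id >= 5 AND id < 7;
-- ... one statement per id range, as logged above ...
UPDATE project_features SET pages_access_level = 30 WHERE id >= 29;

-- only after the backfill: enforce NOT NULL and switch to the real default
ALTER TABLE project_features ALTER COLUMN pages_access_level SET NOT NULL;
ALTER TABLE project_features ALTER COLUMN pages_access_level SET DEFAULT 20;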

== 20180424151928 FillFileStore: migrating ====================================
== 20180424151928 FillFileStore: migrated (0.0429s) ===========================

== 20180424160449 AddPipelineIidToCiPipelines: migrating ======================
— add_column(:ci_pipelines, :iid, :integer)
-> 0.0009s
== 20180424160449 AddPipelineIidToCiPipelines: migrated (0.0010s) =============

== 20180425205249 AddIndexConstraintsToPipelineIid: migrating =================
— transaction_open?()
-> 0.0000s
— index_name(:ci_pipelines, {:column=>[“project_id”, “iid”]})
-> 0.0000s
— index_exists?(:ci_pipelines, [:project_id, :iid], {:unique=>true, :where=>”iid IS NOT NULL”, :algorithm=>:concurrently, :name=>”index_ci_pipelines_on_project_id_and_iid”})
-> 0.0076s
— execute(“SET statement_timeout TO 0″)
-> 0.0004s
— add_index(:ci_pipelines, [:project_id, :iid], {:unique=>true, :where=>”iid IS NOT NULL”, :algorithm=>:concurrently, :name=>”index_ci_pipelines_on_project_id_and_iid”})
-> 0.0407s
— execute(“RESET ALL”)
-> 0.0006s
== 20180425205249 AddIndexConstraintsToPipelineIid: migrated (0.0499s) ========
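
Each “SET statement_timeout TO 0” / add_index(..., :algorithm=>:concurrently) / “RESET ALL” triple in this log is GitLab’s concurrent-index helper. On PostgreSQL it comes down to CREATE INDEX CONCURRENTLY, which builds the index without blocking writes to the table. For the pipeline iid index just above, roughly:

SET statement_timeout TO 0;   -- index builds can exceed the default statement timeout
CREATE UNIQUE INDEX CONCURRENTLY index_ci_pipelines_on_project_id_and_iid
  ON ci_pipelines (project_id, iid)
  WHERE iid IS NOT NULL;      -- partial unique index, matching the add_index options above
RESET ALL;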

== 20180504195842 ProjectNameLowerIndex: migrating ============================
— execute(“SET statement_timeout TO 0”)
-> 0.0004s
— execute(“CREATE INDEX CONCURRENTLY index_projects_on_lower_name ON projects (LOWER(name))”)
-> 0.0240s
— execute(“RESET ALL”)
-> 0.0005s
== 20180504195842 ProjectNameLowerIndex: migrated (0.0252s) ===================

== 20180507083701 SetMinimalProjectBuildTimeout: migrating ====================
— transaction_open?()
-> 0.0000s
— exec_query(“SELECT COUNT(*) AS count FROM \”projects\” WHERE \”projects\”.\”build_timeout\” < 600")
-> 0.0026s
== 20180507083701 SetMinimalProjectBuildTimeout: migrated (0.0033s) ===========

== 20180508135515 SetRunnerTypeNotNull: migrating =============================
— change_column_null(:ci_runners, :runner_type, false)
-> 0.0009s
== 20180508135515 SetRunnerTypeNotNull: migrated (0.0010s) ====================

== 20180511090724 AddIndexOnCiRunnersRunnerType: migrating ====================
— transaction_open?()
-> 0.0000s
— index_name(:ci_runners, {:column=>[“runner_type”]})
-> 0.0000s
— index_exists?(:ci_runners, :runner_type, {:algorithm=>:concurrently, :name=>”index_ci_runners_on_runner_type”})
-> 0.0037s
— execute(“SET statement_timeout TO 0″)
-> 0.0002s
— add_index(:ci_runners, :runner_type, {:algorithm=>:concurrently, :name=>”index_ci_runners_on_runner_type”})
-> 0.0283s
— execute(“RESET ALL”)
-> 0.0003s
== 20180511090724 AddIndexOnCiRunnersRunnerType: migrated (0.0328s) ===========

== 20180511131058 CreateClustersApplicationsJupyter: migrating ================
— create_table(:clusters_applications_jupyter, {})
-> 0.0435s
== 20180511131058 CreateClustersApplicationsJupyter: migrated (0.0436s) =======

== 20180511174224 AddUniqueConstraintToProjectFeaturesProjectId: migrating ====
— transaction_open?()
-> 0.0000s
— index_exists?(:project_features, :project_id, {:unique=>true, :name=>”index_project_features_on_project_id_unique”, :algorithm=>:concurrently})
-> 0.0028s
— execute(“SET statement_timeout TO 0″)
-> 0.0005s
— add_index(:project_features, :project_id, {:unique=>true, :name=>”index_project_features_on_project_id_unique”, :algorithm=>:concurrently})
-> 0.0235s
— execute(“RESET ALL”)
-> 0.0004s
— transaction_open?()
-> 0.0000s
— select_one(“SELECT current_setting(‘server_version_num’) AS v”)
-> 0.0008s
— indexes(:project_features)
-> 0.0034s
— execute(“SET statement_timeout TO 0″)
-> 0.0003s
— index_name(:project_features, {:algorithm=>:concurrently, :name=>”index_project_features_on_project_id”})
-> 0.0000s
— index_name_exists?(:project_features, “index_project_features_on_project_id”, true)
-> 0.0019s
— remove_index(:project_features, {:algorithm=>:concurrently, :name=>”index_project_features_on_project_id”})
-> 0.0230s
— execute(“RESET ALL”)
-> 0.0004s
— rename_index(:project_features, “index_project_features_on_project_id_unique”, “index_project_features_on_project_id”)
-> 0.0031s
== 20180511174224 AddUniqueConstraintToProjectFeaturesProjectId: migrated (0.0643s)

== 20180512061621 AddNotNullConstraintToProjectFeaturesProjectId: migrating ===
— change_column_null(:project_features, :project_id, false)
-> 0.0006s
== 20180512061621 AddNotNullConstraintToProjectFeaturesProjectId: migrated (0.0024s)

== 20180514161336 RemoveGemnasiumService: migrating ===========================
— transaction_open?()
-> 0.0000s
— execute(“SET LOCAL statement_timeout TO 0”)
-> 0.0003s
— execute(“DELETE FROM services WHERE type=’GemnasiumService’;”)
-> 0.0208s
== 20180514161336 RemoveGemnasiumService: migrated (0.0213s) ==================

== 20180515005612 AddSquashToMergeRequests: migrating =========================
— column_exists?(:merge_requests, :squash)
-> 0.0048s
== 20180515005612 AddSquashToMergeRequests: migrated (0.0049s) ================

== 20180515121227 CreateNotesDiffFiles: migrating =============================
— create_table(:note_diff_files, {})
-> 0.0746s
— add_foreign_key(:note_diff_files, :notes, {:column=>:diff_note_id, :on_delete=>:cascade})
-> 0.0080s
== 20180515121227 CreateNotesDiffFiles: migrated (0.0828s) ====================

== 20180517082340 AddNotNullConstraintsToProjectAuthorizations: migrating =====
— execute(“ALTER TABLE project_authorizations\n ALTER COLUMN user_id SET NOT NULL,\n ALTER COLUMN project_id SET NOT NULL,\n ALTER COLUMN access_level SET NOT NULL\n”)
-> 0.0010s
== 20180517082340 AddNotNullConstraintsToProjectAuthorizations: migrated (0.0011s)

== 20180520211048 AddDiscoveryTokenToNamespaces: migrating ====================
— add_column(:namespaces, :saml_discovery_token, :string)
-> 0.0011s
== 20180520211048 AddDiscoveryTokenToNamespaces: migrated (0.0012s) ===========

== 20180521162137 MigrateRemainingMrMetricsPopulatingBackgroundMigration: migrating
Arel performing automatic type casting is deprecated, and will be removed in Arel 8.0. If you are seeing this, it is because you are manually passing a value to an Arel predicate, and the `Arel::Table` object was constructed manually. The easiest way to remove this warning is to use an `Arel::Table` object returned from calling `arel_table` on an ActiveRecord::Base subclass.

If you’re certain the value is already of the right type, change `attribute.eq(value)` to `attribute.eq(Arel::Nodes::Quoted.new(value))` (you will be able to remove that in Arel 8.0, it is only required to silence this deprecation warning).

You can also silence this warning globally by setting `$arel_silence_type_casting_deprecation` to `true`. (Do NOT do this if you are a library author)

If you are passing user input to a predicate, you must either give an appropriate type caster object to the `Arel::Table`, or manually cast the value before passing it to Arel.

(The same Arel deprecation warning repeats several more times here; the repeats are omitted.)

== 20180521162137 MigrateRemainingMrMetricsPopulatingBackgroundMigration: migrated (0.0146s)

== 20180521171529 IncreaseMysqlTextLimitForGpgKeys: migrating =================
== 20180521171529 IncreaseMysqlTextLimitForGpgKeys: migrated (0.0000s) ========

== 20180523042841 RenameMergeRequestsAllowMaintainerToPush: migrating =========
== 20180523042841 RenameMergeRequestsAllowMaintainerToPush: migrated (0.0000s)

== 20180523125103 CleanupMergeRequestsAllowMaintainerToPushRename: migrating ==
== 20180523125103 CleanupMergeRequestsAllowMaintainerToPushRename: migrated (0.0000s)

== 20180524115107 AddLastUpdateStartedAtToApplicationsPrometheus: migrating ===
— add_column(:clusters_applications_prometheus, :last_update_started_at, :datetime_with_timezone)
-> 0.0012s
== 20180524115107 AddLastUpdateStartedAtToApplicationsPrometheus: migrated (0.0013s)

== 20180524132016 MergeRequestsTargetIdIidStatePartialIndex: migrating ========
— transaction_open?()
-> 0.0000s
— index_exists?(:merge_requests, [:target_project_id, :iid], {:where=>”state = ‘opened'”, :name=>”index_merge_requests_on_target_project_id_and_iid_opened”, :algorithm=>:concurrently})
-> 0.0162s
— execute(“SET statement_timeout TO 0″)
-> 0.0003s
— add_index(:merge_requests, [:target_project_id, :iid], {:where=>”state = ‘opened'”, :name=>”index_merge_requests_on_target_project_id_and_iid_opened”, :algorithm=>:concurrently})
-> 0.0313s
— execute(“RESET ALL”)
-> 0.0004s
== 20180524132016 MergeRequestsTargetIdIidStatePartialIndex: migrated (0.0487s)

== 20180529152628 ScheduleToArchiveLegacyTraces: migrating ====================
== 20180529152628 ScheduleToArchiveLegacyTraces: migrated (0.0150s) ===========

== 20180530135500 AddIndexToStagesPosition: migrating =========================
— transaction_open?()
-> 0.0000s
— index_name(:ci_stages, {:column=>[“pipeline_id”, “position”]})
-> 0.0000s
— index_exists?(:ci_stages, [:pipeline_id, :position], {:algorithm=>:concurrently, :name=>”index_ci_stages_on_pipeline_id_and_position”})
-> 0.0046s
— execute(“SET statement_timeout TO 0″)
-> 0.0003s
— add_index(:ci_stages, [:pipeline_id, :position], {:algorithm=>:concurrently, :name=>”index_ci_stages_on_pipeline_id_and_position”})
-> 0.0259s
— execute(“RESET ALL”)
-> 0.0005s
== 20180530135500 AddIndexToStagesPosition: migrated (0.0319s) ================

== 20180531031410 AddIndexForActiveUsers: migrating ===========================
— transaction_open?()
-> 0.0000s
— index_exists?(:users, :state, {:name=>”index_users_on_state_and_internal_attrs”, :where=>”ghost <> true AND support_bot <> true”, :algorithm=>:concurrently})
-> 0.0159s
— execute(“SET statement_timeout TO 0″)
-> 0.0004s
— add_index(:users, :state, {:name=>”index_users_on_state_and_internal_attrs”, :where=>”ghost <> true AND support_bot <> true”, :algorithm=>:concurrently})
-> 0.0233s
— execute(“RESET ALL”)
-> 0.0004s
== 20180531031410 AddIndexForActiveUsers: migrated (0.0405s) ==================

== 20180531185349 AddRepositoryLanguages: migrating ===========================
— create_table(:programming_languages, {})
-> 0.0382s
— create_table(:repository_languages, {:id=>false})
-> 0.0011s
— add_index(:programming_languages, :name, {:unique=>true})
-> 0.0154s
— add_index(:repository_languages, [:project_id, :programming_language_id], {:unique=>true, :name=>”index_repository_languages_on_project_and_languages_id”})
-> 0.0285s
== 20180531185349 AddRepositoryLanguages: migrated (0.0834s) ==================

== 20180531220618 ChangeDefaultValueForDsaKeyRestriction: migrating ===========
— change_column(:application_settings, :dsa_key_restriction, :integer, {:null=>false, :default=>-1})
-> 0.0602s
— execute(“UPDATE application_settings SET dsa_key_restriction = -1”)
-> 0.0005s
== 20180531220618 ChangeDefaultValueForDsaKeyRestriction: migrated (0.0608s) ==

== 20180531221734 AddPseudonymizerEnabledToApplicationSettings: migrating =====
— add_column(:application_settings, :pseudonymizer_enabled, :boolean, {:null=>false, :default=>false})
-> 0.0745s
== 20180531221734 AddPseudonymizerEnabledToApplicationSettings: migrated (0.0746s)

== 20180601213245 AddDeployStrategyToProjectAutoDevops: migrating =============
— transaction_open?()
-> 0.0000s
— execute(“SET statement_timeout TO 0”)
-> 0.0005s
— transaction()
— add_column(:project_auto_devops, :deploy_strategy, :integer, {:default=>nil})
-> 0.0010s
— change_column_default(:project_auto_devops, :deploy_strategy, 0)
-> 0.0034s
-> 0.0218s
— transaction_open?()
-> 0.0000s
— exec_query(“SELECT COUNT(*) AS count FROM \”project_auto_devops\””)
-> 0.0161s
— exec_query(“SELECT \”project_auto_devops\”.\”id\” FROM \”project_auto_devops\” ORDER BY \”project_auto_devops\”.\”id\” ASC LIMIT 1″)
-> 0.0005s
— exec_query(“SELECT \”project_auto_devops\”.\”id\” FROM \”project_auto_devops\” WHERE \”project_auto_devops\”.\”id\” >= 1 ORDER BY \”project_auto_devops\”.\”id\” ASC LIMIT 1 OFFSET 1″)
-> 0.0004s
— execute(“UPDATE \”project_auto_devops\” SET \”deploy_strategy\” = 0 WHERE \”project_auto_devops\”.\”id\” >= 1 AND \”project_auto_devops\”.\”id\” < 2")
-> 0.0140s
— exec_query(“SELECT \”project_auto_devops\”.\”id\” FROM \”project_auto_devops\” WHERE \”project_auto_devops\”.\”id\” >= 2 ORDER BY \”project_auto_devops\”.\”id\” ASC LIMIT 1 OFFSET 1″)
-> 0.0009s
— execute(“UPDATE \”project_auto_devops\” SET \”deploy_strategy\” = 0 WHERE \”project_auto_devops\”.\”id\” >= 2″)
-> 0.0287s
— change_column_null(:project_auto_devops, :deploy_strategy, false)
-> 0.0189s
— execute(“RESET ALL”)
-> 0.0004s
== 20180601213245 AddDeployStrategyToProjectAutoDevops: migrated (0.1066s) ====

== 20180603190921 MigrateObjectStorageUploadSidekiqQueue: migrating ===========
== 20180603190921 MigrateObjectStorageUploadSidekiqQueue: migrated (0.0004s) ==

== 20180604123514 CleanupStagesPositionMigration: migrating ===================
— execute(“SET statement_timeout TO 0″)
-> 0.0002s
— indexes(:ci_stages)
-> 0.0026s
— transaction_open?()
-> 0.0000s
— index_exists?(:ci_stages, :id, {:where=>”position IS NULL”, :name=>”tmp_id_stage_position_partial_null_index”, :algorithm=>:concurrently})
-> 0.0020s
— execute(“SET statement_timeout TO 0″)
-> 0.0001s
— add_index(:ci_stages, :id, {:where=>”position IS NULL”, :name=>”tmp_id_stage_position_partial_null_index”, :algorithm=>:concurrently})
-> 0.0317s
— execute(“RESET ALL”)
-> 0.0004s
— transaction_open?()
-> 0.0000s
— select_one(“SELECT current_setting(‘server_version_num’) AS v”)
-> 0.0006s
— indexes(:ci_stages)
-> 0.0062s
— execute(“SET statement_timeout TO 0″)
-> 0.0005s
— index_name(:ci_stages, {:algorithm=>:concurrently, :name=>”tmp_id_stage_position_partial_null_index”})
-> 0.0000s
— index_name_exists?(:ci_stages, “tmp_id_stage_position_partial_null_index”, true)
-> 0.0019s
— remove_index(:ci_stages, {:algorithm=>:concurrently, :name=>”tmp_id_stage_position_partial_null_index”})
-> 0.0130s
— execute(“RESET ALL”)
-> 0.0005s
— execute(“RESET ALL”)
-> 0.0005s
== 20180604123514 CleanupStagesPositionMigration: migrated (0.0713s) ==========
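
CleanupStagesPositionMigration above also shows the temporary-index pattern: a partial index is created concurrently just to make the cleanup fast, then dropped again (also concurrently) within the same migration. Roughly, on PostgreSQL:

SET statement_timeout TO 0;
CREATE INDEX CONCURRENTLY tmp_id_stage_position_partial_null_index
  ON ci_stages (id) WHERE position IS NULL;   -- only rows still missing a position
RESET ALL;

-- ... the cleanup work runs between these two steps ...

SET statement_timeout TO 0;
DROP INDEX CONCURRENTLY tmp_id_stage_position_partial_null_index;
RESET ALL;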

== 20180605213516 FixPartialIndexToProjectRepositoryStatesChecksumColumns: migrating
— transaction_open?()
-> 0.0000s
— select_one(“SELECT current_setting(‘server_version_num’) AS v”)
-> 0.0007s
— indexes(:project_repository_states)
-> 0.0055s
— execute(“SET statement_timeout TO 0″)
-> 0.0003s
— index_name(:project_repository_states, {:algorithm=>:concurrently, :name=>”idx_repository_states_on_checksums_partial”})
-> 0.0000s
— index_name_exists?(:project_repository_states, “idx_repository_states_on_checksums_partial”, true)
-> 0.0020s
— remove_index(:project_repository_states, {:algorithm=>:concurrently, :name=>”idx_repository_states_on_checksums_partial”})
-> 0.0110s
— execute(“RESET ALL”)
-> 0.0005s
— transaction_open?()
-> 0.0000s
— index_exists?(:project_repository_states, :project_id, {:name=>”idx_repository_states_outdated_checksums”, :where=>”(repository_verification_checksum IS NULL AND last_repository_verification_failure is NULL) OR (wiki_verification_checksum IS NULL AND last_wiki_verification_failure IS NULL)”, :algorithm=>:concurrently})
-> 0.0043s
— execute(“SET statement_timeout TO 0″)
-> 0.0004s
— add_index(:project_repository_states, :project_id, {:name=>”idx_repository_states_outdated_checksums”, :where=>”(repository_verification_checksum IS NULL AND last_repository_verification_failure is NULL) OR (wiki_verification_checksum IS NULL AND last_wiki_verification_failure IS NULL)”, :algorithm=>:concurrently})
-> 0.0297s
— execute(“RESET ALL”)
-> 0.0003s
== 20180605213516 FixPartialIndexToProjectRepositoryStatesChecksumColumns: migrated (0.0558s)

== 20180607071808 AddPushEventsBranchFilterToWebHooks: migrating ==============
— add_column(:web_hooks, :push_events_branch_filter, :text)
-> 0.0010s
== 20180607071808 AddPushEventsBranchFilterToWebHooks: migrated (0.0011s) =====

== 20180607154422 AddUserToList: migrating ====================================
— add_column(:lists, :user_id, :integer)
-> 0.0011s
== 20180607154422 AddUserToList: migrated (0.0012s) ===========================

== 20180607154516 AddUserIndexToList: migrating ===============================
— transaction_open?()
-> 0.0000s
— index_name(:lists, {:column=>[“user_id”]})
-> 0.0000s
— index_exists?(:lists, :user_id, {:algorithm=>:concurrently, :name=>”index_lists_on_user_id”})
-> 0.0039s
— execute(“SET statement_timeout TO 0″)
-> 0.0004s
— add_index(:lists, :user_id, {:algorithm=>:concurrently, :name=>”index_lists_on_user_id”})
-> 0.0258s
— execute(“RESET ALL”)
-> 0.0005s
== 20180607154516 AddUserIndexToList: migrated (0.0310s) ======================

== 20180607154645 AddUserFkToList: migrating ==================================
— transaction_open?()
-> 0.0000s
— foreign_keys(:lists)
-> 0.0062s
— execute(“ALTER TABLE lists\nADD CONSTRAINT fk_d6cf4279f7\nFOREIGN KEY (user_id)\nREFERENCES users (id)\nON DELETE CASCADE\nNOT VALID;\n”)
-> 0.0096s
— execute(“SET statement_timeout TO 0”)
-> 0.0006s
— execute(“ALTER TABLE lists VALIDATE CONSTRAINT fk_d6cf4279f7;”)
-> 0.0240s
— execute(“RESET ALL”)
-> 0.0004s
== 20180607154645 AddUserFkToList: migrated (0.0415s) =========================
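
The foreign keys in this log (here on lists.user_id) are added in two steps: first with NOT VALID, which skips checking existing rows and therefore needs only a very short lock, then validated as a separate statement. This is the same SQL shown escaped in the log above:

ALTER TABLE lists
  ADD CONSTRAINT fk_d6cf4279f7
  FOREIGN KEY (user_id)
  REFERENCES users (id)
  ON DELETE CASCADE
  NOT VALID;                                   -- existing rows are not checked yet

SET statement_timeout TO 0;
ALTER TABLE lists VALIDATE CONSTRAINT fk_d6cf4279f7;  -- checks existing rows without blocking normal reads/writes
RESET ALL;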

== 20180608091413 AddGroupToTodos: migrating ==================================
— column_exists?(:todos, :group_id)
-> 0.0025s
— add_column(:todos, :group_id, :integer)
-> 0.0037s
— transaction_open?()
-> 0.0000s
— foreign_keys(:todos)
-> 0.0036s
— execute(“ALTER TABLE todos\nADD CONSTRAINT fk_a27c483435\nFOREIGN KEY (group_id)\nREFERENCES namespaces (id)\nON DELETE CASCADE\nNOT VALID;\n”)
-> 0.0045s
— execute(“SET statement_timeout TO 0”)
-> 0.0002s
— execute(“ALTER TABLE todos VALIDATE CONSTRAINT fk_a27c483435;”)
-> 0.0080s
— execute(“RESET ALL”)
-> 0.0002s
— transaction_open?()
-> 0.0000s
— index_name(:todos, {:column=>[“group_id”]})
-> 0.0000s
— index_exists?(:todos, :group_id, {:algorithm=>:concurrently, :name=>”index_todos_on_group_id”})
-> 0.0032s
— execute(“SET statement_timeout TO 0″)
-> 0.0001s
— add_index(:todos, :group_id, {:algorithm=>:concurrently, :name=>”index_todos_on_group_id”})
-> 0.0191s
— execute(“RESET ALL”)
-> 0.0003s
— change_column_null(:todos, :project_id, true)
-> 0.0106s
== 20180608091413 AddGroupToTodos: migrated (0.0567s) =========================

== 20180608110058 RenameMergeRequestsAllowCollaboration: migrating ============
— column_exists?(:merge_requests, :allow_collaboration)
-> 0.0046s
== 20180608110058 RenameMergeRequestsAllowCollaboration: migrated (0.0046s) ===

== 20180608150653 AddIndexToProjectsOnRepositoryStorageLastRepositoryUpdatedAt: migrating
— transaction_open?()
-> 0.0000s
— index_exists?(:projects, [:id, :repository_storage, :last_repository_updated_at], {:name=>”idx_projects_on_repository_storage_last_repository_updated_at”, :algorithm=>:concurrently})
-> 0.0192s
— execute(“SET statement_timeout TO 0″)
-> 0.0004s
— add_index(:projects, [:id, :repository_storage, :last_repository_updated_at], {:name=>”idx_projects_on_repository_storage_last_repository_updated_at”, :algorithm=>:concurrently})
-> 0.0353s
— execute(“RESET ALL”)
-> 0.0005s
== 20180608150653 AddIndexToProjectsOnRepositoryStorageLastRepositoryUpdatedAt: migrated (0.0559s)

== 20180608201435 CleanupMergeRequestsAllowCollaborationRename: migrating =====
— column_exists?(:merge_requests, :allow_collaboration)
-> 0.0048s
== 20180608201435 CleanupMergeRequestsAllowCollaborationRename: migrated (0.0049s)

== 20180612103626 AddColumnsForHelmTillerCertificates: migrating ==============
— add_column(:clusters_applications_helm, :encrypted_ca_key, :text)
-> 0.0011s
— add_column(:clusters_applications_helm, :encrypted_ca_key_iv, :text)
-> 0.0008s
— add_column(:clusters_applications_helm, :ca_cert, :text)
-> 0.0009s
== 20180612103626 AddColumnsForHelmTillerCertificates: migrated (0.0030s) =====

== 20180612175636 AddGeoNodesVerificationMaxCapacity: migrating ===============
— add_column(:geo_nodes, :verification_max_capacity, :integer, {:default=>100, :null=>false})
-> 0.1142s
== 20180612175636 AddGeoNodesVerificationMaxCapacity: migrated (0.1144s) ======

== 20180613081317 CreateCiBuildsRunnerSession: migrating ======================
— create_table(:ci_builds_runner_session, {:id=>:bigserial})
-> 0.0620s
== 20180613081317 CreateCiBuildsRunnerSession: migrated (0.0621s) =============

== 20180615152524 AddProjectToApplicationSettings: migrating ==================
— add_column(:application_settings, :file_template_project_id, :integer)
-> 0.0067s
— transaction_open?()
-> 0.0000s
— foreign_keys(:application_settings)
-> 0.0066s
— execute(“ALTER TABLE application_settings\nADD CONSTRAINT fk_ec757bd087\nFOREIGN KEY (file_template_project_id)\nREFERENCES projects (id)\nON DELETE SET NULL\nNOT VALID;\n”)
-> 0.0181s
— execute(“SET statement_timeout TO 0”)
-> 0.0004s
— execute(“ALTER TABLE application_settings VALIDATE CONSTRAINT fk_ec757bd087;”)
-> 0.0080s
— execute(“RESET ALL”)
-> 0.0005s
== 20180615152524 AddProjectToApplicationSettings: migrated (0.0410s) =========

== 20180618193715 SchedulePruneOrphanedGeoEvents: migrating ===================
== 20180618193715 SchedulePruneOrphanedGeoEvents: migrated (0.0024s) ==========

== 20180619121030 EnqueueDeleteDiffFilesWorkers: migrating ====================
— indexes(:merge_request_diffs)
-> 0.0029s
— transaction_open?()
-> 0.0000s
— index_exists?(:merge_request_diffs, :id, {:where=>”(state NOT IN (‘without_files’, ’empty’))”, :name=>”tmp_partial_diff_id_with_files_index”, :algorithm=>:concurrently})
-> 0.0028s
— execute(“SET statement_timeout TO 0″)
-> 0.0005s
— add_index(:merge_request_diffs, :id, {:where=>”(state NOT IN (‘without_files’, ’empty’))”, :name=>”tmp_partial_diff_id_with_files_index”, :algorithm=>:concurrently})
-> 0.0308s
— execute(“RESET ALL”)
-> 0.0005s
== 20180619121030 EnqueueDeleteDiffFilesWorkers: migrated (0.0405s) ===========

== 20180621100024 CreateSoftwareLicenses: migrating ===========================
— create_table(:software_licenses, {})
-> 0.0823s
== 20180621100024 CreateSoftwareLicenses: migrated (0.0824s) ==================

== 20180621100025 CreateSoftwareLicensePolicies: migrating ====================
— create_table(:software_license_policies, {})
-> 0.0324s
— transaction_open?()
-> 0.0000s
— index_exists?(:software_license_policies, [:project_id, :software_license_id], {:unique=>true, :name=>”index_software_license_policies_unique_per_project”, :algorithm=>:concurrently})
-> 0.0014s
— execute(“SET statement_timeout TO 0″)
-> 0.0002s
— add_index(:software_license_policies, [:project_id, :software_license_id], {:unique=>true, :name=>”index_software_license_policies_unique_per_project”, :algorithm=>:concurrently})
-> 0.0290s
— execute(“RESET ALL”)
-> 0.0004s
== 20180621100025 CreateSoftwareLicensePolicies: migrated (0.0638s) ===========

== 20180623053658 AddTrialEndsOnToNamespaces: migrating =======================
— add_column(:namespaces, :trial_ends_on, :datetime_with_timezone)
-> 0.0010s
== 20180623053658 AddTrialEndsOnToNamespaces: migrated (0.0011s) ==============

== 20180625113853 CreateImportExportUploads: migrating ========================
Arel performing automatic type casting is deprecated, and will be removed in Arel 8.0. If you are seeing this, it is because you are manually passing a value to an Arel predicate, and the `Arel::Table` object was constructed manually. The easiest way to remove this warning is to use an `Arel::Table` object returned from calling `arel_table` on an ActiveRecord::Base subclass.

If you’re certain the value is already of the right type, change `attribute.eq(value)` to `attribute.eq(Arel::Nodes::Quoted.new(value))` (you will be able to remove that in Arel 8.0, it is only required to silence this deprecation warning).

You can also silence this warning globally by setting `$arel_silence_type_casting_deprecation` to `true`. (Do NOT do this if you are a library author)

If you are passing user input to a predicate, you must either give an appropriate type caster object to the `Arel::Table`, or manually cast the value before passing it to Arel.

(The same warning is printed once more here; the repeat is omitted.)

— create_table(:import_export_uploads, {})
-> 0.0796s
— add_index(:import_export_uploads, :updated_at)
-> 0.0167s
== 20180625113853 CreateImportExportUploads: migrated (0.0964s) ===============

== 20180626125654 AddIndexOnDeployableForDeployments: migrating ===============
— transaction_open?()
-> 0.0000s
— index_name(:deployments, {:column=>[“deployable_type”, “deployable_id”]})
-> 0.0000s
— index_exists?(:deployments, [:deployable_type, :deployable_id], {:algorithm=>:concurrently, :name=>”index_deployments_on_deployable_type_and_deployable_id”})
-> 0.0062s
— execute(“SET statement_timeout TO 0″)
-> 0.0005s
— add_index(:deployments, [:deployable_type, :deployable_id], {:algorithm=>:concurrently, :name=>”index_deployments_on_deployable_type_and_deployable_id”})
-> 0.0316s
— execute(“RESET ALL”)
-> 0.0005s
== 20180626125654 AddIndexOnDeployableForDeployments: migrated (0.0395s) ======

== 20180626171125 AddFeatureFlagsToProjects: migrating ========================
— create_table(:operations_feature_flags, {:id=>:bigserial})
-> 0.0710s
— create_table(:operations_feature_flags_clients, {:id=>:bigserial})
-> 0.0500s
== 20180626171125 AddFeatureFlagsToProjects: migrated (0.1212s) ===============

== 20180628124813 AlterWebHookLogsIndexes: migrating ==========================
— transaction_open?()
-> 0.0000s
— index_name(:web_hook_logs, {:column=>[“created_at”, “web_hook_id”]})
-> 0.0000s
— index_exists?(:web_hook_logs, [:created_at, :web_hook_id], {:algorithm=>:concurrently, :name=>”index_web_hook_logs_on_created_at_and_web_hook_id”})
-> 0.0030s
— execute(“SET statement_timeout TO 0″)
-> 0.0005s
— add_index(:web_hook_logs, [:created_at, :web_hook_id], {:algorithm=>:concurrently, :name=>”index_web_hook_logs_on_created_at_and_web_hook_id”})
-> 0.0263s
— execute(“RESET ALL”)
-> 0.0005s
== 20180628124813 AlterWebHookLogsIndexes: migrated (0.0309s) =================

== 20180629153018 CreateSiteStatistics: migrating =============================
— create_table(:site_statistics, {})
-> 0.0215s
— execute(“INSERT INTO site_statistics (id) VALUES(1)”)
-> 0.0010s
== 20180629153018 CreateSiteStatistics: migrated (0.0228s) ====================

== 20180629191052 AddPartialIndexToProjectsForLastRepositoryCheckAt: migrating
— transaction_open?()
-> 0.0000s
— index_exists?(:projects, :last_repository_check_at, {:where=>”last_repository_check_at IS NOT NULL”, :name=>”index_projects_on_last_repository_check_at”, :algorithm=>:concurrently})
-> 0.0212s
— execute(“SET statement_timeout TO 0″)
-> 0.0004s
— add_index(:projects, :last_repository_check_at, {:where=>”last_repository_check_at IS NOT NULL”, :name=>”index_projects_on_last_repository_check_at”, :algorithm=>:concurrently})
-> 0.0339s
— execute(“RESET ALL”)
-> 0.0005s
== 20180629191052 AddPartialIndexToProjectsForLastRepositoryCheckAt: migrated (0.0565s)

== 20180702114215 ScheduleWeightSystemNoteCommaCleanup: migrating =============
— transaction_open?()
-> 0.0000s
— index_name(:system_note_metadata, {:column=>[“action”]})
-> 0.0000s
— index_exists?(:system_note_metadata, :action, {:where=>”action = ‘weight'”, :algorithm=>:concurrently, :name=>”index_system_note_metadata_on_action”})
-> 0.0033s
— execute(“SET statement_timeout TO 0″)
-> 0.0005s
— add_index(:system_note_metadata, :action, {:where=>”action = ‘weight'”, :algorithm=>:concurrently, :name=>”index_system_note_metadata_on_action”})
-> 0.0250s
— execute(“RESET ALL”)
-> 0.0005s
— transaction_open?()
-> 0.0000s
— select_one(“SELECT current_setting(‘server_version_num’) AS v”)
-> 0.0007s
— index_name(:system_note_metadata, {:column=>[“action”]})
-> 0.0000s
— index_exists?(:system_note_metadata, :action, {:where=>”action = ‘weight'”, :algorithm=>:concurrently, :name=>”index_system_note_metadata_on_action”})
-> 0.0040s
— execute(“SET statement_timeout TO 0″)
-> 0.0003s
— index_name(:system_note_metadata, {:where=>”action = ‘weight'”, :algorithm=>:concurrently, :name=>”index_system_note_metadata_on_action”, :column=>:action})
-> 0.0000s
— index_name_exists?(:system_note_metadata, “index_system_note_metadata_on_action”, true)
-> 0.0019s
— remove_index(:system_note_metadata, {:where=>”action = ‘weight'”, :algorithm=>:concurrently, :name=>”index_system_note_metadata_on_action”, :column=>:action})
-> 0.0165s
— execute(“RESET ALL”)
-> 0.0005s
== 20180702114215 ScheduleWeightSystemNoteCommaCleanup: migrated (0.0779s) ====

== 20180702120647 EnqueueFixCrossProjectLabelLinks: migrating =================
== 20180702120647 EnqueueFixCrossProjectLabelLinks: migrated (0.0227s) ========

== 20180702124358 RemoveOrphanedRoutes: migrating =============================
— execute(“SET statement_timeout TO 0”)
-> 0.0004s
— execute(“RESET ALL”)
-> 0.0005s
== 20180702124358 RemoveOrphanedRoutes: migrated (0.0113s) ====================

== 20180702134423 GenerateMissingRoutes: migrating ============================
== 20180702134423 GenerateMissingRoutes: migrated (0.0127s) ===================

== 20180702181530 AddRetryFieldsToProjectRepositoryStates: migrating ==========
— add_column(:project_repository_states, :repository_retry_at, :datetime_with_timezone)
-> 0.0009s
— add_column(:project_repository_states, :wiki_retry_at, :datetime_with_timezone)
-> 0.0007s
— add_column(:project_repository_states, :repository_retry_count, :integer)
-> 0.0007s
— add_column(:project_repository_states, :wiki_retry_count, :integer)
-> 0.0008s
== 20180702181530 AddRetryFieldsToProjectRepositoryStates: migrated (0.0034s) =

== 20180704145007 UpdateProjectIndexes: migrating =============================
— transaction_open?()
-> 0.0000s
— index_exists?(:projects, [:repository_storage, :created_at], {:name=>”idx_project_repository_check_partial”, :where=>”last_repository_check_at IS NULL”, :algorithm=>:concurrently})
-> 0.0209s
— execute(“SET statement_timeout TO 0″)
-> 0.0004s
— add_index(:projects, [:repository_storage, :created_at], {:name=>”idx_project_repository_check_partial”, :where=>”last_repository_check_at IS NULL”, :algorithm=>:concurrently})
-> 0.0252s
— execute(“RESET ALL”)
-> 0.0004s
== 20180704145007 UpdateProjectIndexes: migrated (0.0474s) ====================

== 20180704204006 AddHideThirdPartyOffersToApplicationSettings: migrating =====
— transaction_open?()
-> 0.0000s
— execute(“SET statement_timeout TO 0”)
-> 0.0004s
— transaction()
— add_column(:application_settings, :hide_third_party_offers, :boolean, {:default=>nil})
-> 0.0012s
— change_column_default(:application_settings, :hide_third_party_offers, false)
-> 0.0361s
-> 0.0485s
— transaction_open?()
-> 0.0000s
— exec_query(“SELECT COUNT(*) AS count FROM \”application_settings\””)
-> 0.0014s
— exec_query(“SELECT \”application_settings\”.\”id\” FROM \”application_settings\” ORDER BY \”application_settings\”.\”id\” ASC LIMIT 1″)
-> 0.0009s
— exec_query(“SELECT \”application_settings\”.\”id\” FROM \”application_settings\” WHERE \”application_settings\”.\”id\” >= 1 ORDER BY \”application_settings\”.\”id\” ASC LIMIT 1 OFFSET 1″)
-> 0.0008s
— execute(“UPDATE \”application_settings\” SET \”hide_third_party_offers\” = ‘f’ WHERE \”application_settings\”.\”id\” >= 1″)
-> 0.0119s
— change_column_null(:application_settings, :hide_third_party_offers, false)
-> 0.0083s
— execute(“RESET ALL”)
-> 0.0005s
== 20180704204006 AddHideThirdPartyOffersToApplicationSettings: migrated (0.0749s)

== 20180705160945 AddFileFormatToCiJobArtifacts: migrating ====================
— add_column(:ci_job_artifacts, :file_format, :integer, {:limit=>2})
-> 0.0008s
== 20180705160945 AddFileFormatToCiJobArtifacts: migrated (0.0010s) ===========

== 20180706223200 PopulateSiteStatistics: migrating ===========================
— transaction()
— execute(“SET LOCAL statement_timeout TO 0”)
-> 0.0003s
— execute(“UPDATE site_statistics SET repositories_count = (SELECT COUNT(*) FROM projects)”)
-> 0.0039s
-> 0.0075s
— transaction()
— execute(“SET LOCAL statement_timeout TO 0”)
-> 0.0005s
-- execute("UPDATE site_statistics SET wikis_count = (SELECT COUNT(*) FROM project_features WHERE wiki_access_level != 0)")
(same Arel type casting deprecation warning as above, printed twice; omitted)
-> 0.0010s
-> 0.0084s
== 20180706223200 PopulateSiteStatistics: migrated (0.0161s) ==================

== 20180709153607 AddCustomProjectTemplatesGroupIdToApplicationSettings: migrating
— add_column(:application_settings, :custom_project_templates_group_id, :integer)
-> 0.0014s
— add_foreign_key(:application_settings, :namespaces, {:column=>:custom_project_templates_group_id, :on_delete=>:nullify})
-> 0.0037s
== 20180709153607 AddCustomProjectTemplatesGroupIdToApplicationSettings: migrated (0.0053s)

== 20180709183353 AddProtectedEnvironmentsTable: migrating ====================
— create_table(:protected_environments, {})
-> 0.0629s
— add_index(:protected_environments, [:project_id, :name], {:unique=>true})
-> 0.0250s
== 20180709183353 AddProtectedEnvironmentsTable: migrated (0.0882s) ===========

== 20180709184533 AddProtectedEnvironmentDeployAccessLevelTable: migrating ====
— create_table(:protected_environment_deploy_access_levels, {})
-> 0.1045s
== 20180709184533 AddProtectedEnvironmentDeployAccessLevelTable: migrated (0.1046s)

== 20180710162338 AddForeignKeyFromNotificationSettingsToUsers: migrating =====
— transaction_open?()
-> 0.0000s
— foreign_keys(:notification_settings)
-> 0.0041s
— execute(“ALTER TABLE notification_settings\nADD CONSTRAINT fk_0c95e91db7\nFOREIGN KEY (user_id)\nREFERENCES users (id)\nON DELETE CASCADE\nNOT VALID;\n”)
-> 0.0119s
— execute(“SET statement_timeout TO 0”)
-> 0.0008s
— execute(“ALTER TABLE notification_settings VALIDATE CONSTRAINT fk_0c95e91db7;”)
-> 0.0074s
— execute(“RESET ALL”)
-> 0.0005s
== 20180710162338 AddForeignKeyFromNotificationSettingsToUsers: migrated (0.0656s)

== 20180711014025 AddDateColumnsToEpics: migrating ============================
— change_table(:epics, {})
-> 0.0056s
== 20180711014025 AddDateColumnsToEpics: migrated (0.0058s) ===================

== 20180711014026 UpdateDateColumnsOnEpics: migrating =========================
== 20180711014026 UpdateDateColumnsOnEpics: migrated (0.0147s) ================

== 20180711103851 DropDuplicateProtectedTags: migrating =======================
== 20180711103851 DropDuplicateProtectedTags: migrated (0.0541s) ==============

== 20180711103922 AddProtectedTagsIndex: migrating ============================
— transaction_open?()
-> 0.0000s
— index_name(:protected_tags, {:column=>[“project_id”, “name”]})
-> 0.0000s
— index_exists?(:protected_tags, [:project_id, :name], {:unique=>true, :algorithm=>:concurrently, :name=>”index_protected_tags_on_project_id_and_name”})
-> 0.0033s
— execute(“SET statement_timeout TO 0″)
-> 0.0004s
— add_index(:protected_tags, [:project_id, :name], {:unique=>true, :algorithm=>:concurrently, :name=>”index_protected_tags_on_project_id_and_name”})
-> 0.0346s
— execute(“RESET ALL”)
-> 0.0005s
== 20180711103922 AddProtectedTagsIndex: migrated (0.0394s) ===================

== 20180713092803 CreateUserStatuses: migrating ===============================
— create_table(:user_statuses, {:id=>false, :primary_key=>:user_id})
-> 0.0393s
== 20180713092803 CreateUserStatuses: migrated (0.0394s) ======================

== 20180713171825 UpdateEpicDatesFromMilestones: migrating ====================
== 20180713171825 UpdateEpicDatesFromMilestones: migrated (0.0378s) ===========

== 20180717125853 RemoveRestrictedTodos: migrating ============================
== 20180717125853 RemoveRestrictedTodos: migrated (0.0306s) ===================

== 20180718005113 AddInstanceStatisticsVisibilityToApplicationSetting: migrating
— transaction_open?()
-> 0.0000s
— execute(“SET statement_timeout TO 0”)
-> 0.0002s
— transaction()
— add_column(:application_settings, :instance_statistics_visibility_private, :boolean, {:default=>nil})
-> 0.0004s
— change_column_default(:application_settings, :instance_statistics_visibility_private, false)
-> 0.0092s
-> 0.0164s
— transaction_open?()
-> 0.0000s
— exec_query(“SELECT COUNT(*) AS count FROM \”application_settings\””)
-> 0.0006s
— exec_query(“SELECT \”application_settings\”.\”id\” FROM \”application_settings\” ORDER BY \”application_settings\”.\”id\” ASC LIMIT 1″)
-> 0.0003s
— exec_query(“SELECT \”application_settings\”.\”id\” FROM \”application_settings\” WHERE \”application_settings\”.\”id\” >= 1 ORDER BY \”application_settings\”.\”id\” ASC LIMIT 1 OFFSET 1″)
-> 0.0003s
— execute(“UPDATE \”application_settings\” SET \”instance_statistics_visibility_private\” = ‘f’ WHERE \”application_settings\”.\”id\” >= 1″)
-> 0.0065s
— change_column_null(:application_settings, :instance_statistics_visibility_private, false)
-> 0.0083s
— execute(“RESET ALL”)
-> 0.0003s
== 20180718005113 AddInstanceStatisticsVisibilityToApplicationSetting: migrated (0.0336s)

== 20180718100455 CleanUpFromWeightSystemNoteCommaMigration: migrating ========
== 20180718100455 CleanUpFromWeightSystemNoteCommaMigration: migrated (0.0035s)

== 20180719161844 AddStorageConfigurationDigest: migrating ====================
— add_column(:geo_node_statuses, :storage_configuration_digest, :binary)
-> 0.0006s
== 20180719161844 AddStorageConfigurationDigest: migrated (0.0006s) ===========

== 20180720023512 AddReceiveMaxInputSizeToApplicationSettings: migrating ======
— add_column(:application_settings, :receive_max_input_size, :integer)
-> 0.0012s
== 20180720023512 AddReceiveMaxInputSizeToApplicationSettings: migrated (0.0013s)

== 20180720082636 AddNameIndexToCiBuilds: migrating ===========================
— transaction_open?()
-> 0.0000s
— index_exists?(:ci_builds, [:name], {:name=>”index_ci_builds_on_name_for_security_products_values”, :where=>”name IN (‘container_scanning’,’dast’,’dependency_scanning’,’license_management’,’sast’)”, :algorithm=>:concurrently})
-> 0.0149s
— execute(“SET statement_timeout TO 0″)
-> 0.0003s
— add_index(:ci_builds, [:name], {:name=>”index_ci_builds_on_name_for_security_products_values”, :where=>”name IN (‘container_scanning’,’dast’,’dependency_scanning’,’license_management’,’sast’)”, :algorithm=>:concurrently})
-> 0.0236s
— execute(“RESET ALL”)
-> 0.0005s
== 20180720082636 AddNameIndexToCiBuilds: migrated (0.0397s) ==================

== 20180720120716 CreatePackagesPackages: migrating ===========================
— create_table(:packages_packages, {:id=>:bigserial})
-> 0.1240s
== 20180720120716 CreatePackagesPackages: migrated (0.1241s) ==================

== 20180720120726 CreatePackagesPackageFiles: migrating =======================
— create_table(:packages_package_files, {:id=>:bigserial})
-> 0.0488s
— transaction_open?()
-> 0.0000s
— index_name(:packages_package_files, {:column=>[“package_id”, “file_name”]})
-> 0.0000s
— index_exists?(:packages_package_files, [:package_id, :file_name], {:algorithm=>:concurrently, :name=>”index_packages_package_files_on_package_id_and_file_name”})
-> 0.0023s
— execute(“SET statement_timeout TO 0″)
-> 0.0005s
— add_index(:packages_package_files, [:package_id, :file_name], {:algorithm=>:concurrently, :name=>”index_packages_package_files_on_package_id_and_file_name”})
-> 0.0366s
— execute(“RESET ALL”)
-> 0.0005s
— transaction_open?()
-> 0.0000s
— foreign_keys(:packages_package_files)
-> 0.0068s
— execute(“ALTER TABLE packages_package_files\nADD CONSTRAINT fk_86f0f182f8\nFOREIGN KEY (package_id)\nREFERENCES packages_packages (id)\nON DELETE CASCADE\nNOT VALID;\n”)
-> 0.0104s
— execute(“SET statement_timeout TO 0”)
-> 0.0006s
— execute(“ALTER TABLE packages_package_files VALIDATE CONSTRAINT fk_86f0f182f8;”)
-> 0.0076s
— execute(“RESET ALL”)
-> 0.0005s
== 20180720120726 CreatePackagesPackageFiles: migrated (0.1162s) ==============

== 20180720121404 CreatePackagesMavenMetadata: migrating ======================
— create_table(:packages_maven_metadata, {:id=>:bigserial})
-> 0.0488s
— transaction_open?()
-> 0.0000s
— index_name(:packages_maven_metadata, {:column=>[“package_id”, “path”]})
-> 0.0000s
— index_exists?(:packages_maven_metadata, [:package_id, :path], {:algorithm=>:concurrently, :name=>”index_packages_maven_metadata_on_package_id_and_path”})
-> 0.0023s
(same Arel type casting deprecation warning as above, printed four times; omitted)
— execute(“SET statement_timeout TO 0″)
-> 0.0005s
— add_index(:packages_maven_metadata, [:package_id, :path], {:algorithm=>:concurrently, :name=>”index_packages_maven_metadata_on_package_id_and_path”})
-> 0.0281s
— execute(“RESET ALL”)
-> 0.0005s
— transaction_open?()
-> 0.0000s
— foreign_keys(:packages_maven_metadata)
-> 0.0064s
— execute(“ALTER TABLE packages_maven_metadata\nADD CONSTRAINT fk_be88aed360\nFOREIGN KEY (package_id)\nREFERENCES packages_packages (id)\nON DELETE CASCADE\nNOT VALID;\n”)
-> 0.0529s
— execute(“SET statement_timeout TO 0”)
-> 0.0004s
— execute(“ALTER TABLE packages_maven_metadata VALIDATE CONSTRAINT fk_be88aed360;”)
-> 0.0078s
— execute(“RESET ALL”)
-> 0.0005s
== 20180720121404 CreatePackagesMavenMetadata: migrated (0.1495s) =============

== 20180722103201 AddPrivateProfileToUsers: migrating =========================
— add_column(:users, :private_profile, :boolean)
-> 0.0012s
== 20180722103201 AddPrivateProfileToUsers: migrated (0.0013s) ================

== 20180723023517 AddNewEpicToNotificationSettings: migrating =================
— add_column(:notification_settings, :new_epic, :boolean)
-> 0.0007s
== 20180723023517 AddNewEpicToNotificationSettings: migrated (0.0009s) ========

== 20180723081631 AddRoadmapLayoutToUsers: migrating ==========================
— add_column(:users, :roadmap_layout, :integer, {:limit=>2})
-> 0.0011s
== 20180723081631 AddRoadmapLayoutToUsers: migrated (0.0012s) =================

== 20180723130817 DeleteInconsistentInternalIdRecords: migrating ==============
— execute(“SET statement_timeout TO 0”)
-> 0.0002s
— execute(“RESET ALL”)
-> 0.0001s
== 20180723130817 DeleteInconsistentInternalIdRecords: migrated (0.0162s) =====

== 20180723134433 AddBasicSnowplowAttributesToApplicationSettings: migrating ==
— add_column(:application_settings, :snowplow_enabled, :boolean, {:default=>false, :null=>false})
-> 0.0586s
— add_column(:application_settings, :snowplow_collector_uri, :string)
-> 0.0011s
— add_column(:application_settings, :snowplow_site_id, :string)
-> 0.0011s
— add_column(:application_settings, :snowplow_cookie_domain, :string)
-> 0.0012s
== 20180723134433 AddBasicSnowplowAttributesToApplicationSettings: migrated (0.0622s)

== 20180723135214 AddWebIdeClientSidePreviewEnabledToApplicationSettings: migrating
— transaction_open?()
-> 0.0000s
— execute(“SET statement_timeout TO 0”)
-> 0.0006s
— transaction()
— add_column(:application_settings, :web_ide_clientside_preview_enabled, :boolean, {:default=>nil})
-> 0.0014s
— change_column_default(:application_settings, :web_ide_clientside_preview_enabled, false)
-> 0.0313s
-> 0.0389s
— transaction_open?()
-> 0.0000s
— exec_query(“SELECT COUNT(*) AS count FROM \”application_settings\””)
-> 0.0014s
— exec_query(“SELECT \”application_settings\”.\”id\” FROM \”application_settings\” ORDER BY \”application_settings\”.\”id\” ASC LIMIT 1″)
-> 0.0007s
— exec_query(“SELECT \”application_settings\”.\”id\” FROM \”application_settings\” WHERE \”application_settings\”.\”id\” >= 1 ORDER BY \”application_settings\”.\”id\” ASC LIMIT 1 OFFSET 1″)
-> 0.0009s
— execute(“UPDATE \”application_settings\” SET \”web_ide_clientside_preview_enabled\” = ‘f’ WHERE \”application_settings\”.\”id\” >= 1″)
-> 0.0034s
— change_column_null(:application_settings, :web_ide_clientside_preview_enabled, false)
-> 0.0085s
— execute(“RESET ALL”)
-> 0.0006s
== 20180723135214 AddWebIdeClientSidePreviewEnabledToApplicationSettings: migrated (0.0569s)

== 20180724161450 AddMilestoneToLists: migrating ==============================
— add_reference(:lists, :milestone, {:index=>true, :foreign_key=>{:on_delete=>:cascade}})
-> 0.0253s
== 20180724161450 AddMilestoneToLists: migrated (0.0254s) =====================

== 20180726172057 CreateResourceLabelEvents: migrating ========================
— create_table(:resource_label_events, {:id=>:bigserial})
-> 0.1137s
== 20180726172057 CreateResourceLabelEvents: migrated (0.1138s) ===============

== 20180803001726 AddVerificationRetryCountsToGeoNodeStatuses: migrating ======
— add_column(:geo_node_statuses, :repositories_retrying_verification_count, :integer)
-> 0.0011s
— add_column(:geo_node_statuses, :wikis_retrying_verification_count, :integer)
-> 0.0008s
== 20180803001726 AddVerificationRetryCountsToGeoNodeStatuses: migrated (0.0021s)

== 20180803041220 AddProjectsCountToGeoNodeStatuses: migrating ================
— add_column(:geo_node_statuses, :projects_count, :integer)
-> 0.0009s
== 20180803041220 AddProjectsCountToGeoNodeStatuses: migrated (0.0010s) =======

== 20180806145747 AddIndexToEnvironmentNameForLike: migrating =================
— index_exists?(:environments, :name, {:name=>”index_environments_on_name_varchar_pattern_ops”})
-> 0.0040s
— execute(“CREATE INDEX CONCURRENTLY index_environments_on_name_varchar_pattern_ops ON environments (name varchar_pattern_ops);”)
-> 0.0269s
== 20180806145747 AddIndexToEnvironmentNameForLike: migrated (0.0312s) ========

== 20180807153545 RemoveRedundantStatusIndexOnCiBuilds: migrating =============
— transaction_open?()
-> 0.0000s
— select_one(“SELECT current_setting(‘server_version_num’) AS v”)
-> 0.0007s
— index_name(:ci_builds, {:column=>[“status”]})
-> 0.0000s
— index_exists?(:ci_builds, :status, {:algorithm=>:concurrently, :name=>”index_ci_builds_on_status”})
-> 0.0174s
— execute(“SET statement_timeout TO 0″)
-> 0.0004s
— index_name(:ci_builds, {:algorithm=>:concurrently, :name=>”index_ci_builds_on_status”, :column=>:status})
-> 0.0001s
— index_name_exists?(:ci_builds, “index_ci_builds_on_status”, true)
-> 0.0019s
— remove_index(:ci_builds, {:algorithm=>:concurrently, :name=>”index_ci_builds_on_status”, :column=>:status})
-> 0.0242s
— execute(“RESET ALL”)
-> 0.0003s
== 20180807153545 RemoveRedundantStatusIndexOnCiBuilds: migrated (0.0457s) ====

== 20180808162000 AddUserShowAddSshKeyMessageToApplicationSettings: migrating =
— transaction_open?()
-> 0.0000s
— execute(“SET statement_timeout TO 0”)
-> 0.0004s
— transaction()
— add_column(:application_settings, :user_show_add_ssh_key_message, :boolean, {:default=>nil})
-> 0.0013s
— change_column_default(:application_settings, :user_show_add_ssh_key_message, true)
-> 0.0344s
-> 0.0401s
— transaction_open?()
-> 0.0000s
— exec_query(“SELECT COUNT(*) AS count FROM \”application_settings\””)
-> 0.0007s
— exec_query(“SELECT \”application_settings\”.\”id\” FROM \”application_settings\” ORDER BY \”application_settings\”.\”id\” ASC LIMIT 1″)
-> 0.0004s
— exec_query(“SELECT \”application_settings\”.\”id\” FROM \”application_settings\” WHERE \”application_settings\”.\”id\” >= 1 ORDER BY \”application_settings\”.\”id\” ASC LIMIT 1 OFFSET 1″)
-> 0.0005s
— execute(“UPDATE \”application_settings\” SET \”user_show_add_ssh_key_message\” = ‘t’ WHERE \”application_settings\”.\”id\” >= 1″)
-> 0.0059s
— change_column_null(:application_settings, :user_show_add_ssh_key_message, false)
-> 0.0085s
— execute(“RESET ALL”)
-> 0.0002s
== 20180808162000 AddUserShowAddSshKeyMessageToApplicationSettings: migrated (0.0580s)

== 20180809195358 MigrateNullWikiAccessLevels: migrating ======================
— transaction()
— execute(“SET LOCAL statement_timeout TO 0”)
-> 0.0002s
— execute(“UPDATE site_statistics SET wikis_count = (SELECT COUNT(*) FROM project_features WHERE wiki_access_level != 0)”)
-> 0.0004s
-> 0.0043s
== 20180809195358 MigrateNullWikiAccessLevels: migrated (0.0072s) =============

== 20180813101999 ChangeDefaultOfAutoDevopsInstanceWide: migrating ============
— change_column_default(:application_settings, :auto_devops_enabled, true)
-> 0.0345s
== 20180813101999 ChangeDefaultOfAutoDevopsInstanceWide: migrated (0.0346s) ===

== 20180813102000 EnableAutoDevopsInstanceWideForEveryone: migrating ==========
— execute(“UPDATE application_settings SET auto_devops_enabled = true”)
-> 0.0009s
== 20180813102000 EnableAutoDevopsInstanceWideForEveryone: migrated (0.0010s) =

== 20180814153625 AddCommitEmailToUsers: migrating ============================
— add_column(:users, :commit_email, :string)
-> 0.0008s
== 20180814153625 AddCommitEmailToUsers: migrated (0.0009s) ===================

== 20180815040323 AddAuthorizationTypeToClusterPlatformsKubernetes: migrating =
— add_column(:cluster_platforms_kubernetes, :authorization_type, :integer, {:limit=>2})
-> 0.0007s
== 20180815040323 AddAuthorizationTypeToClusterPlatformsKubernetes: migrated (0.0007s)

== 20180815043102 RemoveWikisCountAndRepositoriesCountFromGeoNodeStatuses: migrating
— remove_column(:geo_node_statuses, :wikis_count, :integer)
-> 0.0008s
— remove_column(:geo_node_statuses, :repositories_count, :integer)
-> 0.0007s
== 20180815043102 RemoveWikisCountAndRepositoriesCountFromGeoNodeStatuses: migrated (0.0016s)

== 20180815160409 AddFileLocationToCiJobArtifacts: migrating ==================
— add_column(:ci_job_artifacts, :file_location, :integer, {:limit=>2})
-> 0.0010s
== 20180815160409 AddFileLocationToCiJobArtifacts: migrated (0.0011s) =========

== 20180815170510 AddPartialIndexToCiBuildsArtifactsFile: migrating ===========
— transaction_open?()
-> 0.0000s
— index_exists?(:ci_builds, :id, {:where=>”artifacts_file <> ””, :name=>”partial_index_ci_builds_on_id_with_legacy_artifacts”, :algorithm=>:concurrently})
-> 0.0157s
— execute(“SET statement_timeout TO 0”)
-> 0.0004s
— add_index(:ci_builds, :id, {:where=>”artifacts_file <> ””, :name=>”partial_index_ci_builds_on_id_with_legacy_artifacts”, :algorithm=>:concurrently})
-> 0.0396s
— execute(“RESET ALL”)
-> 0.0005s
== 20180815170510 AddPartialIndexToCiBuildsArtifactsFile: migrated (0.0567s) ==

== 20180815175440 AddIndexOnListType: migrating ===============================
— transaction_open?()
-> 0.0000s
— index_name(:lists, {:column=>[“list_type”]})
-> 0.0000s
— index_exists?(:lists, :list_type, {:algorithm=>:concurrently, :name=>”index_lists_on_list_type”})
-> 0.0062s
— execute(“SET statement_timeout TO 0″)
-> 0.0005s
— add_index(:lists, :list_type, {:algorithm=>:concurrently, :name=>”index_lists_on_list_type”})
-> 0.0237s
— execute(“RESET ALL”)
-> 0.0005s
== 20180815175440 AddIndexOnListType: migrated (0.0314s) ======================

== 20180816161409 MigrateLegacyArtifactsToJobArtifacts: migrating =============
== 20180816161409 MigrateLegacyArtifactsToJobArtifacts: migrated (0.0054s) ====

== 20180816193530 RenameLoginRootNamespaces: migrating ========================
— execute(“SET statement_timeout TO 0”)
-> 0.0004s
— execute(“RESET ALL”)
-> 0.0003s
== 20180816193530 RenameLoginRootNamespaces: migrated (0.0283s) ===============

== 20180823132905 AddPackagesEnabledToProject: migrating ======================
— add_column(:projects, :packages_enabled, :boolean)
-> 0.0007s
== 20180823132905 AddPackagesEnabledToProject: migrated (0.0008s) =============

== 20180826111825 RecalculateSiteStatistics: migrating ========================
— transaction()
— execute(“SET LOCAL statement_timeout TO 0”)
-> 0.0003s
— execute(“UPDATE site_statistics SET repositories_count = (SELECT COUNT(*) FROM projects)”)
-> 0.0008s
-> 0.0080s
— transaction()
— execute(“SET LOCAL statement_timeout TO 0”)
-> 0.0003s
— execute(“UPDATE site_statistics SET wikis_count = (SELECT COUNT(*) FROM project_features WHERE wiki_access_level != 0)”)
-> 0.0005s
-> 0.0084s
== 20180826111825 RecalculateSiteStatistics: migrated (0.0165s) ===============

== 20180831134049 AllowManyPrometheusAlerts: migrating ========================
— transaction_open?()
-> 0.0000s
— select_one(“SELECT current_setting(‘server_version_num’) AS v”)
-> 0.0007s
— index_name(:prometheus_alerts, {:column=>[“prometheus_metric_id”]})
-> 0.0001s
— index_exists?(:prometheus_alerts, :prometheus_metric_id, {:unique=>true, :algorithm=>:concurrently, :name=>”index_prometheus_alerts_on_prometheus_metric_id”})
-> 0.0042s
— execute(“SET statement_timeout TO 0″)
-> 0.0004s
— index_name(:prometheus_alerts, {:unique=>true, :algorithm=>:concurrently, :name=>”index_prometheus_alerts_on_prometheus_metric_id”, :column=>:prometheus_metric_id})
-> 0.0000s
— index_name_exists?(:prometheus_alerts, “index_prometheus_alerts_on_prometheus_metric_id”, true)
-> 0.0021s
— remove_index(:prometheus_alerts, {:unique=>true, :algorithm=>:concurrently, :name=>”index_prometheus_alerts_on_prometheus_metric_id”, :column=>:prometheus_metric_id})
-> 0.0199s
— execute(“RESET ALL”)
-> 0.0004s
— transaction_open?()
-> 0.0000s
— index_name(:prometheus_alerts, {:column=>[“prometheus_metric_id”]})
-> 0.0000s
— index_exists?(:prometheus_alerts, :prometheus_metric_id, {:algorithm=>:concurrently, :name=>”index_prometheus_alerts_on_prometheus_metric_id”})
-> 0.0028s
— execute(“SET statement_timeout TO 0″)
-> 0.0005s
— add_index(:prometheus_alerts, :prometheus_metric_id, {:algorithm=>:concurrently, :name=>”index_prometheus_alerts_on_prometheus_metric_id”})
-> 0.0228s
— execute(“RESET ALL”)
-> 0.0004s
— transaction_open?()
-> 0.0000s
— index_name(:prometheus_alerts, {:column=>[“project_id”, “prometheus_metric_id”]})
-> 0.0000s
— index_exists?(:prometheus_alerts, [:project_id, :prometheus_metric_id], {:unique=>true, :algorithm=>:concurrently, :name=>”index_prometheus_alerts_on_project_id_and_prometheus_metric_id”})
-> 0.0025s
— execute(“SET statement_timeout TO 0″)
-> 0.0003s
— add_index(:prometheus_alerts, [:project_id, :prometheus_metric_id], {:unique=>true, :algorithm=>:concurrently, :name=>”index_prometheus_alerts_on_project_id_and_prometheus_metric_id”})
-> 0.0303s
— execute(“RESET ALL”)
-> 0.0005s
== 20180831134049 AllowManyPrometheusAlerts: migrated (0.0897s) ===============

== 20180831152625 AddMergeRequestsAuthorApprovalToProjects: migrating =========
— add_column(:projects, :merge_requests_author_approval, :boolean)
-> 0.0012s
== 20180831152625 AddMergeRequestsAuthorApprovalToProjects: migrated (0.0014s)

== 20180831164904 FixPrometheusMetricQueryLimits: migrating ===================
== 20180831164904 FixPrometheusMetricQueryLimits: migrated (0.0000s) ==========

== 20180831164905 AddCommonToPrometheusMetrics: migrating =====================
— transaction_open?()
-> 0.0000s
— execute(“SET statement_timeout TO 0”)
-> 0.0004s
— transaction()
— add_column(:prometheus_metrics, :common, :boolean, {:default=>nil})
-> 0.0008s
— change_column_default(:prometheus_metrics, :common, false)
-> 0.0036s
-> 0.0154s
— transaction_open?()
-> 0.0000s
— exec_query(“SELECT COUNT(*) AS count FROM \”prometheus_metrics\””)
-> 0.0013s
— change_column_null(:prometheus_metrics, :common, false)
-> 0.0063s
— execute(“RESET ALL”)
-> 0.0003s
== 20180831164905 AddCommonToPrometheusMetrics: migrated (0.0245s) ============

== 20180831164907 AddIndexOnCommonForPrometheusMetrics: migrating =============
— transaction_open?()
-> 0.0000s
— index_name(:prometheus_metrics, {:column=>[“common”]})
-> 0.0000s
— index_exists?(:prometheus_metrics, :common, {:algorithm=>:concurrently, :name=>”index_prometheus_metrics_on_common”})
-> 0.0035s
— execute(“SET statement_timeout TO 0″)
-> 0.0004s
— add_index(:prometheus_metrics, :common, {:algorithm=>:concurrently, :name=>”index_prometheus_metrics_on_common”})
-> 0.0263s
— execute(“RESET ALL”)
-> 0.0002s
== 20180831164907 AddIndexOnCommonForPrometheusMetrics: migrated (0.0309s) ====

== 20180831164908 AddIdentifierToPrometheusMetric: migrating ==================
— add_column(:prometheus_metrics, :identifier, :string)
-> 0.0006s
== 20180831164908 AddIdentifierToPrometheusMetric: migrated (0.0006s) =========

== 20180831164909 AddIndexForIdentifierToPrometheusMetric: migrating ==========
— transaction_open?()
-> 0.0000s
— index_name(:prometheus_metrics, {:column=>[“identifier”]})
-> 0.0000s
— index_exists?(:prometheus_metrics, :identifier, {:unique=>true, :algorithm=>:concurrently, :name=>”index_prometheus_metrics_on_identifier”})
-> 0.0025s
— execute(“SET statement_timeout TO 0″)
-> 0.0002s
— add_index(:prometheus_metrics, :identifier, {:unique=>true, :algorithm=>:concurrently, :name=>”index_prometheus_metrics_on_identifier”})
-> 0.0201s
— execute(“RESET ALL”)
-> 0.0004s
== 20180831164909 AddIndexForIdentifierToPrometheusMetric: migrated (0.0235s) =

== 20180831164910 ImportCommonMetrics: migrating ==============================
== 20180831164910 ImportCommonMetrics: migrated (0.0387s) =====================

== 20180901171833 AddProjectConfigSourceStatusIndexToPipeline: migrating ======
— transaction_open?()
-> 0.0000s
— index_name(:ci_pipelines, {:column=>[“project_id”, “status”, “config_source”]})
-> 0.0000s
— index_exists?(:ci_pipelines, [:project_id, :status, :config_source], {:algorithm=>:concurrently, :name=>”index_ci_pipelines_on_project_id_and_status_and_config_source”})
-> 0.0059s
— execute(“SET statement_timeout TO 0″)
-> 0.0003s
— add_index(:ci_pipelines, [:project_id, :status, :config_source], {:algorithm=>:concurrently, :name=>”index_ci_pipelines_on_project_id_and_status_and_config_source”})
-> 0.0248s
— execute(“RESET ALL”)
-> 0.0005s
== 20180901171833 AddProjectConfigSourceStatusIndexToPipeline: migrated (0.0320s)

== 20180901200537 AddResourceLabelEventReferenceFields: migrating =============
— add_column(:resource_label_events, :cached_markdown_version, :integer)
-> 0.0009s
— add_column(:resource_label_events, :reference, :text)
-> 0.0218s
— add_column(:resource_label_events, :reference_html, :text)
-> 0.0009s
== 20180901200537 AddResourceLabelEventReferenceFields: migrated (0.0238s) ====

== 20180906051323 RemoveOrphanedLabelLinks: migrating =========================
— execute(“SET statement_timeout TO 0”)
-> 0.0005s
— execute(“RESET ALL”)
-> 0.0005s
— transaction_open?()
-> 0.0000s
— foreign_keys(:label_links)
-> 0.0070s
— execute(“ALTER TABLE label_links\nADD CONSTRAINT fk_d97dd08678\nFOREIGN KEY (label_id)\nREFERENCES labels (id)\nON DELETE CASCADE\nNOT VALID;\n”)
-> 0.0149s
— execute(“SET statement_timeout TO 0”)
-> 0.0005s
— execute(“ALTER TABLE label_links VALIDATE CONSTRAINT fk_d97dd08678;”)
-> 0.0080s
— execute(“RESET ALL”)
-> 0.0006s
== 20180906051323 RemoveOrphanedLabelLinks: migrated (0.0569s) ================

== 20180906101639 AddUserPingConsentToApplicationSettings: migrating ==========
— add_column(:application_settings, :usage_stats_set_by_user_id, :integer)
-> 0.0071s
— transaction_open?()
-> 0.0000s
— foreign_keys(:application_settings)
-> 0.0074s
— execute(“ALTER TABLE application_settings\nADD CONSTRAINT fk_964370041d\nFOREIGN KEY (usage_stats_set_by_user_id)\nREFERENCES users (id)\nON DELETE SET NULL\nNOT VALID;\n”)
-> 0.0089s
— execute(“SET statement_timeout TO 0”)
-> 0.0007s
— execute(“ALTER TABLE application_settings VALIDATE CONSTRAINT fk_964370041d;”)
-> 0.0075s
— execute(“RESET ALL”)
-> 0.0005s
== 20180906101639 AddUserPingConsentToApplicationSettings: migrated (0.0329s) =

== 20180907015926 AddLegacyAbacToClusterProvidersGcp: migrating ===============
— transaction_open?()
-> 0.0000s
— execute(“SET statement_timeout TO 0”)
-> 0.0006s
— transaction()
— add_column(:cluster_providers_gcp, :legacy_abac, :boolean, {:default=>nil})
-> 0.0011s
— change_column_default(:cluster_providers_gcp, :legacy_abac, true)
-> 0.0043s
-> 0.0148s
— transaction_open?()
-> 0.0000s
— exec_query(“SELECT COUNT(*) AS count FROM \”cluster_providers_gcp\””)
-> 0.0022s
— change_column_null(:cluster_providers_gcp, :legacy_abac, false)
-> 0.0055s
— execute(“RESET ALL”)
-> 0.0005s
== 20180907015926 AddLegacyAbacToClusterProvidersGcp: migrated (0.0244s) ======

== 20180910104020 AddClosedColumnsToEpic: migrating ===========================
— add_reference(:epics, :closed_by, {:index=>true})
-> 0.0220s
— add_column(:epics, :closed_at, :datetime_with_timezone)
-> 0.0009s
== 20180910104020 AddClosedColumnsToEpic: migrated (0.0231s) ==================

== 20180910105100 AddStateToEpic: migrating ===================================
— transaction_open?()
-> 0.0000s
— execute(“SET statement_timeout TO 0”)
-> 0.0005s
— transaction()
— add_column(:epics, :state, :integer, {:default=>nil, :limit=>2})
-> 0.0011s
— change_column_default(:epics, :state, 1)
-> 0.0052s
-> 0.0150s
— transaction_open?()
-> 0.0000s
— exec_query(“SELECT COUNT(*) AS count FROM \”epics\””)
-> 0.0015s
— change_column_null(:epics, :state, false)
-> 0.0061s
— execute(“RESET ALL”)
-> 0.0005s
== 20180910105100 AddStateToEpic: migrated (0.0244s) ==========================

== 20180910115836 AddAttrEncryptedColumnsToWebHook: migrating =================
— add_column(:web_hooks, :encrypted_token, :string)
-> 0.0009s
— add_column(:web_hooks, :encrypted_token_iv, :string)
-> 0.0007s
— add_column(:web_hooks, :encrypted_url, :string)
-> 0.0008s
— add_column(:web_hooks, :encrypted_url_iv, :string)
-> 0.0007s
== 20180910115836 AddAttrEncryptedColumnsToWebHook: migrated (0.0034s) ========

== 20180910153412 AddTokenDigestToPersonalAccessTokens: migrating =============
— change_column(:personal_access_tokens, :token, :string, {:null=>true})
-> 0.0024s
— add_column(:personal_access_tokens, :token_digest, :string)
-> 0.0007s
== 20180910153412 AddTokenDigestToPersonalAccessTokens: migrated (0.0033s) ====

== 20180910153413 AddIndexToTokenDigestOnPersonalAccessTokens: migrating ======
— transaction_open?()
-> 0.0000s
— index_name(:personal_access_tokens, {:column=>[“token_digest”]})
-> 0.0000s
— index_exists?(:personal_access_tokens, :token_digest, {:unique=>true, :algorithm=>:concurrently, :name=>”index_personal_access_tokens_on_token_digest”})
-> 0.0038s
— execute(“SET statement_timeout TO 0″)
-> 0.0003s
— add_index(:personal_access_tokens, :token_digest, {:unique=>true, :algorithm=>:concurrently, :name=>”index_personal_access_tokens_on_token_digest”})
-> 0.0349s
— execute(“RESET ALL”)
-> 0.0004s
== 20180910153413 AddIndexToTokenDigestOnPersonalAccessTokens: migrated (0.0400s)

== 20180912111628 AddKnativeApplication: migrating ============================
— create_table(“clusters_applications_knative”, {})
-> 0.0401s
== 20180912111628 AddKnativeApplication: migrated (0.0402s) ===================

== 20180912113336 AllowPrometheusAlertsPerEnvironment: migrating ==============
— transaction_open?()
-> 0.0000s
— select_one(“SELECT current_setting(‘server_version_num’) AS v”)
-> 0.0008s
— index_exists?(:prometheus_alerts, “index_prometheus_alerts_metric_environment”, {:algorithm=>:concurrently})
-> 0.0047s
— transaction_open?()
-> 0.0000s
— index_exists?(:prometheus_alerts, [:project_id, :prometheus_metric_id, :environment_id], {:name=>”index_prometheus_alerts_metric_environment”, :unique=>true, :algorithm=>:concurrently})
-> 0.0048s
— execute(“SET statement_timeout TO 0″)
-> 0.0005s
— add_index(:prometheus_alerts, [:project_id, :prometheus_metric_id, :environment_id], {:name=>”index_prometheus_alerts_metric_environment”, :unique=>true, :algorithm=>:concurrently})
-> 0.0264s
— execute(“RESET ALL”)
-> 0.0005s
— transaction_open?()
-> 0.0000s
— select_one(“SELECT current_setting(‘server_version_num’) AS v”)
-> 0.0008s
— index_exists?(:prometheus_alerts, [:project_id, :prometheus_metric_id], {:algorithm=>:concurrently})
-> 0.0056s
— execute(“SET statement_timeout TO 0”)
-> 0.0004s
— remove_index(:prometheus_alerts, {:algorithm=>:concurrently, :column=>[:project_id, :prometheus_metric_id]})
-> 0.0230s
— execute(“RESET ALL”)
-> 0.0004s
== 20180912113336 AllowPrometheusAlertsPerEnvironment: migrated (0.0691s) =====

== 20180913051323 ConsumeRemainingDiffFilesDeletionJobs: migrating ============
— transaction_open?()
-> 0.0000s
— select_one(“SELECT current_setting(‘server_version_num’) AS v”)
-> 0.0007s
— indexes(:merge_request_diffs)
-> 0.0034s
— execute(“SET statement_timeout TO 0″)
-> 0.0003s
— index_name(:merge_request_diffs, {:algorithm=>:concurrently, :name=>”tmp_partial_diff_id_with_files_index”})
-> 0.0000s
— index_name_exists?(:merge_request_diffs, “tmp_partial_diff_id_with_files_index”, true)
-> 0.0019s
-- remove_index(:merge_request_diffs, {:algorithm=>:concurrently, :name=>"tmp_partial_diff_id_with_files_index"})
(same Arel type casting deprecation warning as above, printed twice; omitted)
-> 0.0137s
— execute(“RESET ALL”)
-> 0.0005s
== 20180913051323 ConsumeRemainingDiffFilesDeletionJobs: migrated (0.0476s) ===

== 20180913142237 ScheduleDigestPersonalAccessTokens: migrating ===============
== 20180913142237 ScheduleDigestPersonalAccessTokens: migrated (0.0292s) ======

== 20180914162043 EncryptWebHooksColumns: migrating ===========================
== 20180914162043 EncryptWebHooksColumns: migrated (0.0011s) ==================

== 20180914195058 ScheduleRepositoryChecksumCleanup: migrating ================
== 20180914195058 ScheduleRepositoryChecksumCleanup: migrated (0.0022s) =======

== 20180914201132 RemoveSidekiqThrottlingFromApplicationSettings: migrating ===
— remove_column(:application_settings, :sidekiq_throttling_enabled, :boolean, {:default=>false})
-> 0.0006s
— remove_column(:application_settings, :sidekiq_throttling_queues, :string)
-> 0.0003s
— remove_column(:application_settings, :sidekiq_throttling_factor, :decimal)
-> 0.0003s
== 20180914201132 RemoveSidekiqThrottlingFromApplicationSettings: migrated (0.0015s)

== 20180916011959 AddIndexPipelinesProjectIdSource: migrating =================
— transaction_open?()
-> 0.0000s
— index_name(:ci_pipelines, {:column=>[“project_id”, “source”]})
-> 0.0000s
— index_exists?(:ci_pipelines, [:project_id, :source], {:algorithm=>:concurrently, :name=>”index_ci_pipelines_on_project_id_and_source”})
-> 0.0040s
— execute(“SET statement_timeout TO 0″)
-> 0.0002s
— add_index(:ci_pipelines, [:project_id, :source], {:algorithm=>:concurrently, :name=>”index_ci_pipelines_on_project_id_and_source”})
-> 0.0272s
— execute(“RESET ALL”)
-> 0.0005s
== 20180916011959 AddIndexPipelinesProjectIdSource: migrated (0.0323s) ========

== 20180916014356 PopulateExternalPipelineSource: migrating ===================
== 20180916014356 PopulateExternalPipelineSource: migrated (0.0063s) ==========

== 20180917145556 CreateDraftNotes: migrating =================================
— create_table(:draft_notes, {:id=>:bigserial})
-> 0.0973s
— add_index(:draft_notes, :discussion_id)
-> 0.0331s
== 20180917145556 CreateDraftNotes: migrated (0.1307s) ========================

== 20180917171038 CreateVulnerabilityScanners: migrating ======================
— create_table(:vulnerability_scanners, {:id=>:bigserial})
-> 0.0654s
== 20180917171038 CreateVulnerabilityScanners: migrated (0.0655s) =============

== 20180917171533 CreateVulnerabilityOccurrences: migrating ===================
— create_table(:vulnerability_occurrences, {:id=>:bigserial})
-> 0.1152s
== 20180917171533 CreateVulnerabilityOccurrences: migrated (0.1153s) ==========

== 20180917171534 CreateVulnerabilityIdentifiers: migrating ===================
— create_table(:vulnerability_identifiers, {:id=>:bigserial})
-> 0.0638s
== 20180917171534 CreateVulnerabilityIdentifiers: migrated (0.0639s) ==========

== 20180917171535 CreateVulnerabilityOccurrenceIdentifiers: migrating =========
— create_table(:vulnerability_occurrence_identifiers, {:id=>:bigserial})
-> 0.0727s
== 20180917171535 CreateVulnerabilityOccurrenceIdentifiers: migrated (0.0728s)

== 20180917172041 RemoveWikisCountFromSiteStatistics: migrating ===============
— remove_column(:site_statistics, :wikis_count, :integer)
-> 0.0010s
== 20180917172041 RemoveWikisCountFromSiteStatistics: migrated (0.0011s) ======

== 20180917213751 CreateGeoResetChecksumEvents: migrating =====================
— create_table(:geo_reset_checksum_events, {:id=>:bigserial})
-> 0.0483s
— add_column(:geo_event_log, :reset_checksum_event_id, :integer, {:limit=>8})
-> 0.0012s
== 20180917213751 CreateGeoResetChecksumEvents: migrated (0.0498s) ============

== 20180917214204 AddGeoResetChecksumEventsForeignKey: migrating ==============
— transaction_open?()
-> 0.0000s
— foreign_keys(:geo_event_log)
-> 0.0066s
— execute(“ALTER TABLE geo_event_log\nADD CONSTRAINT fk_cff7185ad2\nFOREIGN KEY (reset_checksum_event_id)\nREFERENCES geo_reset_checksum_events (id)\nON DELETE CASCADE\nNOT VALID;\n”)
-> 0.0088s
— execute(“SET statement_timeout TO 0”)
-> 0.0002s
— execute(“ALTER TABLE geo_event_log VALIDATE CONSTRAINT fk_cff7185ad2;”)
-> 0.0083s
— execute(“RESET ALL”)
-> 0.0003s
— transaction_open?()
-> 0.0000s
— index_name(:geo_event_log, {:column=>[“reset_checksum_event_id”]})
-> 0.0000s
— index_exists?(:geo_event_log, :reset_checksum_event_id, {:algorithm=>:concurrently, :name=>”index_geo_event_log_on_reset_checksum_event_id”})
-> 0.0031s
— execute(“SET statement_timeout TO 0″)
-> 0.0002s
— add_index(:geo_event_log, :reset_checksum_event_id, {:algorithm=>:concurrently, :name=>”index_geo_event_log_on_reset_checksum_event_id”})
-> 0.0375s
— execute(“RESET ALL”)
-> 0.0004s
== 20180917214204 AddGeoResetChecksumEventsForeignKey: migrated (0.0661s) =====

== 20180920043317 AddForeignKeyToEpics: migrating =============================
— transaction_open?()
-> 0.0000s
— foreign_keys(:epics)
-> 0.0068s
— execute(“ALTER TABLE epics\nADD CONSTRAINT fk_aa5798e761\nFOREIGN KEY (closed_by_id)\nREFERENCES users (id)\nON DELETE SET NULL\nNOT VALID;\n”)
-> 0.0088s
— execute(“SET statement_timeout TO 0”)
-> 0.0006s
— execute(“ALTER TABLE epics VALIDATE CONSTRAINT fk_aa5798e761;”)
-> 0.0073s
— execute(“RESET ALL”)
-> 0.0004s
== 20180920043317 AddForeignKeyToEpics: migrated (0.0246s) ====================

== 20180924070647 AddLabelEventEpicColumn: migrating ==========================
— column_exists?(:resource_label_events, :epic_id)
-> 0.0031s
== 20180924070647 AddLabelEventEpicColumn: migrated (0.0032s) =================

== 20180924141949 AddDiffMaxPatchBytesToApplicationSettings: migrating ========
— transaction_open?()
-> 0.0000s
— execute(“SET statement_timeout TO 0”)
-> 0.0005s
— transaction()
— add_column(:application_settings, :diff_max_patch_bytes, :integer, {:default=>nil})
-> 0.0011s
— change_column_default(:application_settings, :diff_max_patch_bytes, 102400)
-> 0.0179s
-> 0.0233s
— transaction_open?()
-> 0.0000s
— exec_query(“SELECT COUNT(*) AS count FROM \”application_settings\””)
-> 0.0004s
— exec_query(“SELECT \”application_settings\”.\”id\” FROM \”application_settings\” ORDER BY \”application_settings\”.\”id\” ASC LIMIT 1″)
-> 0.0003s
— exec_query(“SELECT \”application_settings\”.\”id\” FROM \”application_settings\” WHERE \”application_settings\”.\”id\” >= 1 ORDER BY \”application_settings\”.\”id\” ASC LIMIT 1 OFFSET 1″)
-> 0.0003s
— execute(“UPDATE \”application_settings\” SET \”diff_max_patch_bytes\” = 102400 WHERE \”application_settings\”.\”id\” >= 1″)
-> 0.0068s
— change_column_null(:application_settings, :diff_max_patch_bytes, false)
-> 0.0085s
— execute(“RESET ALL”)
-> 0.0002s
== 20180924141949 AddDiffMaxPatchBytesToApplicationSettings: migrated (0.0411s)

== 20180924190739 AddScheduledAtToCiBuilds: migrating =========================
— add_column(:ci_builds, :scheduled_at, :datetime_with_timezone)
-> 0.0003s
== 20180924190739 AddScheduledAtToCiBuilds: migrated (0.0003s) ================

== 20180924201039 AddPartialIndexToScheduledAt: migrating =====================
— transaction_open?()
-> 0.0000s
— index_exists?(:ci_builds, :scheduled_at, {:where=>”scheduled_at IS NOT NULL AND type = ‘Ci::Build’ AND status = ‘scheduled'”, :name=>”partial_index_ci_builds_on_scheduled_at_with_scheduled_jobs”, :algorithm=>:concurrently})
-> 0.0036s
— execute(“SET statement_timeout TO 0″)
-> 0.0001s
— add_index(:ci_builds, :scheduled_at, {:where=>”scheduled_at IS NOT NULL AND type = ‘Ci::Build’ AND status = ‘scheduled'”, :name=>”partial_index_ci_builds_on_scheduled_at_with_scheduled_jobs”, :algorithm=>:concurrently})
-> 0.0205s
— execute(“RESET ALL”)
-> 0.0003s
== 20180924201039 AddPartialIndexToScheduledAt: migrated (0.0246s) ============

== 20180925200829 CreateUserPreferences: migrating ============================
— create_table(:user_preferences, {})
-> 0.0481s
== 20180925200829 CreateUserPreferences: migrated (0.0483s) ===================

== 20180926101838 AddNamespaceFileTemplateProjectId: migrating ================
— add_column(:namespaces, :file_template_project_id, :integer)
-> 0.0070s
— transaction_open?()
-> 0.0000s
— foreign_keys(:namespaces)
-> 0.0069s
— execute(“ALTER TABLE namespaces\nADD CONSTRAINT fk_319256d87a\nFOREIGN KEY (file_template_project_id)\nREFERENCES projects (id)\nON DELETE SET NULL\nNOT VALID;\n”)
-> 0.0093s
— execute(“SET statement_timeout TO 0”)
-> 0.0005s
— execute(“ALTER TABLE namespaces VALIDATE CONSTRAINT fk_319256d87a;”)
-> 0.0078s
— execute(“RESET ALL”)
-> 0.0005s
== 20180926101838 AddNamespaceFileTemplateProjectId: migrated (0.0329s) =======

== 20180926140319 CreatePrometheusAlertEvents: migrating ======================
— create_table(:prometheus_alert_events, {:id=>:bigserial})
-> 0.0895s
== 20180926140319 CreatePrometheusAlertEvents: migrated (0.0896s) =============

== 20180927073410 AddIndexToProjectDeployTokensDeployTokenId: migrating =======
— transaction_open?()
-> 0.0000s
— index_name(:project_deploy_tokens, {:column=>[“deploy_token_id”]})
-> 0.0000s
— index_exists?(:project_deploy_tokens, :deploy_token_id, {:algorithm=>:concurrently, :name=>”index_project_deploy_tokens_on_deploy_token_id”})
-> 0.0030s
— execute(“SET statement_timeout TO 0″)
-> 0.0003s
— add_index(:project_deploy_tokens, :deploy_token_id, {:algorithm=>:concurrently, :name=>”index_project_deploy_tokens_on_deploy_token_id”})
-> 0.0299s
— execute(“RESET ALL”)
-> 0.0005s
== 20180927073410 AddIndexToProjectDeployTokensDeployTokenId: migrated (0.0343s)

== 20180930171532 RecreateVulnerabilityOccurrencesAndVulnerabilityOccurrenceIdentifiers: migrating
— drop_table(:vulnerability_occurrence_identifiers)
-> 0.0035s
— drop_table(:vulnerability_occurrences)
-> 0.0047s
— create_table(:vulnerability_occurrences, {:id=>:bigserial})
-> 0.1389s
— create_table(:vulnerability_occurrence_identifiers, {:id=>:bigserial})
-> 0.0918s
== 20180930171532 RecreateVulnerabilityOccurrencesAndVulnerabilityOccurrenceIdentifiers: migrated (0.2393s)

== 20181001172126 CreateGeoCacheInvalidationEvents: migrating =================
— create_table(:geo_cache_invalidation_events, {:id=>:bigserial})
-> 0.0416s
— add_column(:geo_event_log, :cache_invalidation_event_id, :integer, {:limit=>8})
-> 0.0011s
== 20181001172126 CreateGeoCacheInvalidationEvents: migrated (0.0429s) ========

== 20181001172651 AddGeoCacheInvalidationEventsForeignKey: migrating ==========
— transaction_open?()
-> 0.0000s
— foreign_keys(:geo_event_log)
-> 0.0061s
— execute(“ALTER TABLE geo_event_log\nADD CONSTRAINT fk_42c3b54bed\nFOREIGN KEY (cache_invalidation_event_id)\nREFERENCES geo_cache_invalidation_events (id)\nON DELETE CASCADE\nNOT VALID;\n”)
-> 0.0088s
— execute(“SET statement_timeout TO 0”)
-> 0.0003s
— execute(“ALTER TABLE geo_event_log VALIDATE CONSTRAINT fk_42c3b54bed;”)
-> 0.0081s
— execute(“RESET ALL”)
-> 0.0003s
— transaction_open?()
-> 0.0000s
— index_name(:geo_event_log, {:column=>[“cache_invalidation_event_id”]})
-> 0.0000s
— index_exists?(:geo_event_log, :cache_invalidation_event_id, {:algorithm=>:concurrently, :name=>”index_geo_event_log_on_cache_invalidation_event_id”})
-> 0.0040s
— execute(“SET statement_timeout TO 0″)
-> 0.0002s
— add_index(:geo_event_log, :cache_invalidation_event_id, {:algorithm=>:concurrently, :name=>”index_geo_event_log_on_cache_invalidation_event_id”})
-> 0.0286s
— execute(“RESET ALL”)
-> 0.0005s
== 20181001172651 AddGeoCacheInvalidationEventsForeignKey: migrated (0.0578s) =

== 20181002172433 RemoveRestrictedTodosWithCte: migrating =====================
== 20181002172433 RemoveRestrictedTodosWithCte: migrated (0.0373s) ============

== 20181004131020 ChangeVulnOccurrenceColumns: migrating ======================
— drop_table(:vulnerability_occurrence_identifiers)
-> 0.0010s
— drop_table(:vulnerability_occurrences)
-> 0.0014s
— create_table(:vulnerability_occurrences, {:id=>:bigserial})
-> 0.0973s
— create_table(:vulnerability_occurrence_identifiers, {:id=>:bigserial})
-> 0.0501s
== 20181004131020 ChangeVulnOccurrenceColumns: migrated (0.1500s) =============

== 20181004131025 AddVulnOccurrencePipelines: migrating =======================
— create_table(:vulnerability_occurrence_pipelines, {:id=>:bigserial})
-> 0.0554s
== 20181004131025 AddVulnOccurrencePipelines: migrated (0.0555s) ==============

== 20181005110927 AddIndexToLfsObjectsFileStore: migrating ====================
— transaction_open?()
-> 0.0000s
— index_name(:lfs_objects, {:column=>[“file_store”]})
-> 0.0000s
— index_exists?(:lfs_objects, :file_store, {:algorithm=>:concurrently, :name=>”index_lfs_objects_on_file_store”})
-> 0.0031s
— execute(“SET statement_timeout TO 0″)
-> 0.0004s
— add_index(:lfs_objects, :file_store, {:algorithm=>:concurrently, :name=>”index_lfs_objects_on_file_store”})
-> 0.0456s
— execute(“RESET ALL”)
-> 0.0005s
== 20181005110927 AddIndexToLfsObjectsFileStore: migrated (0.0503s) ===========

== 20181005125926 AddIndexToUploadsStore: migrating ===========================
— transaction_open?()
-> 0.0000s
— index_name(:uploads, {:column=>[“store”]})
-> 0.0000s
— index_exists?(:uploads, :store, {:algorithm=>:concurrently, :name=>”index_uploads_on_store”})
-> 0.0051s
— execute(“SET statement_timeout TO 0″)
-> 0.0005s
— add_index(:uploads, :store, {:algorithm=>:concurrently, :name=>”index_uploads_on_store”})
-> 0.0268s
— execute(“RESET ALL”)
-> 0.0005s
== 20181005125926 AddIndexToUploadsStore: migrated (0.0336s) ==================

== 20181006004100 ImportCommonMetricsNginxVts: migrating ======================
== 20181006004100 ImportCommonMetricsNginxVts: migrated (0.0568s) =============

== 20181008145341 StealEncryptColumns: migrating ==============================
== 20181008145341 StealEncryptColumns: migrated (0.0008s) =====================

== 20181008145359 RemoveWebHooksTokenAndUrl: migrating ========================
— remove_column(:web_hooks, :token, :string)
-> 0.0004s
— remove_column(:web_hooks, :url, :string, {:limit=>2000})
-> 0.0002s
== 20181008145359 RemoveWebHooksTokenAndUrl: migrated (0.0006s) ===============

== 20181008200441 RemoveCircuitBreaker: migrating =============================
— column_exists?(:application_settings, :circuitbreaker_failure_count_threshold)
-> 0.0105s
— remove_column(:application_settings, :circuitbreaker_failure_count_threshold)
-> 0.0059s
— column_exists?(:application_settings, :circuitbreaker_failure_reset_time)
-> 0.0125s
— remove_column(:application_settings, :circuitbreaker_failure_reset_time)
-> 0.0038s
— column_exists?(:application_settings, :circuitbreaker_storage_timeout)
-> 0.0122s
— remove_column(:application_settings, :circuitbreaker_storage_timeout)
-> 0.0045s
— column_exists?(:application_settings, :circuitbreaker_access_retries)
-> 0.0121s
— remove_column(:application_settings, :circuitbreaker_access_retries)
-> 0.0047s
— column_exists?(:application_settings, :circuitbreaker_check_interval)
-> 0.0119s
— remove_column(:application_settings, :circuitbreaker_check_interval)
-> 0.0047s
== 20181008200441 RemoveCircuitBreaker: migrated (0.0831s) ====================

== 20181009190428 CreateClustersKubernetesNamespaces: migrating ===============
— create_table(:clusters_kubernetes_namespaces, {:id=>:bigserial})
-> 0.0996s
== 20181009190428 CreateClustersKubernetesNamespaces: migrated (0.0997s) ======

== 20181010133639 BackfillStoreProjectFullPathInRepo: migrating ===============
== 20181010133639 BackfillStoreProjectFullPathInRepo: migrated (0.0489s) ======

== 20181010235606 CreateBoardProjectRecentVisits: migrating ===================
— create_table(:board_project_recent_visits, {:id=>:bigserial})
-> 0.0835s
— add_index(:board_project_recent_visits, [:user_id, :project_id, :board_id], {:unique=>true, :name=>”index_board_project_recent_visits_on_user_project_and_board”})
-> 0.0167s
== 20181010235606 CreateBoardProjectRecentVisits: migrated (0.1005s) ==========

== 20181012151642 CreateUsersOpsDashboardProjects: migrating ==================
— create_table(:users_ops_dashboard_projects, {:id=>:bigserial})
-> 0.0660s
== 20181012151642 CreateUsersOpsDashboardProjects: migrated (0.0662s) =========

== 20181013005024 RemoveKodingFromApplicationSettings: migrating ==============
— remove_column(:application_settings, :koding_enabled)
-> 0.0014s
— remove_column(:application_settings, :koding_url)
-> 0.0012s
== 20181013005024 RemoveKodingFromApplicationSettings: migrated (0.0241s) =====

== 20181014121030 EnqueueRedactLinks: migrating ===============================
— execute(“SET statement_timeout TO 0”)
-> 0.0006s
— execute(“RESET ALL”)
-> 0.0004s
== 20181014121030 EnqueueRedactLinks: migrated (0.0392s) ======================

== 20181014131030 EnqueueRedactLinksInEpics: migrating ========================
— execute(“SET statement_timeout TO 0”)
-> 0.0006s
— execute(“RESET ALL”)
-> 0.0004s
== 20181014131030 EnqueueRedactLinksInEpics: migrated (0.0053s) ===============

== 20181014203236 CreateClusterGroups: migrating ==============================
— create_table(:cluster_groups, {})
-> 0.0574s
== 20181014203236 CreateClusterGroups: migrated (0.0576s) =====================

== 20181015155839 AddFinishedAtToDeployments: migrating =======================
— add_column(:deployments, :finished_at, :datetime_with_timezone)
-> 0.0011s
== 20181015155839 AddFinishedAtToDeployments: migrated (0.0012s) ==============

== 20181016141739 AddStatusToDeployments: migrating ===========================
— transaction_open?()
-> 0.0000s
— execute(“SET statement_timeout TO 0”)
-> 0.0004s
— transaction()
— add_column(:deployments, :status, :integer, {:default=>nil, :limit=>2})
-> 0.0009s
— change_column_default(:deployments, :status, 2)
-> 0.0044s
-> 0.0151s
— transaction_open?()
-> 0.0000s
— exec_query(“SELECT COUNT(*) AS count FROM \”deployments\””)
-> 0.0011s
— change_column_null(:deployments, :status, false)
-> 0.0068s
— execute(“RESET ALL”)
-> 0.0004s
== 20181016141739 AddStatusToDeployments: migrated (0.0246s) ==================

== 20181016152238 CreateBoardGroupRecentVisits: migrating =====================
— create_table(:board_group_recent_visits, {:id=>:bigserial})
-> 0.7769s
— add_index(:board_group_recent_visits, [:user_id, :group_id, :board_id], {:unique=>true, :name=>”index_board_group_recent_visits_on_user_group_and_board”})
-> 0.3246s
== 20181016152238 CreateBoardGroupRecentVisits: migrated (1.1018s) ============

== 20181017001059 AddClusterTypeToClusters: migrating =========================
— transaction_open?()
-> 0.0000s
— execute(“SET statement_timeout TO 0”)
-> 0.0003s
— transaction()
— add_column(:clusters, :cluster_type, :smallint, {:default=>nil})
-> 0.0003s
— change_column_default(:clusters, :cluster_type, 3)
-> 0.0011s
-> 0.0620s
— transaction_open?()
-> 0.0000s
— exec_query(“SELECT COUNT(*) AS count FROM \”clusters\””)
-> 0.0013s
— change_column_null(:clusters, :cluster_type, false)
-> 0.0136s
— execute(“RESET ALL”)
-> 0.0006s
== 20181017001059 AddClusterTypeToClusters: migrated (0.0784s) ================

== 20181017131623 AddMissingGeoEvenLogIndexes: migrating ======================
— transaction_open?()
-> 0.0000s
— select_one(“SELECT current_setting(‘server_version_num’) AS v”)
-> 0.0007s
— index_name(:geo_event_log, {:column=>[“cache_invalidation_event_id”]})
-> 0.0000s
— index_exists?(:geo_event_log, :cache_invalidation_event_id, {:algorithm=>:concurrently, :name=>”index_geo_event_log_on_cache_invalidation_event_id”})
-> 0.0084s
— execute(“SET statement_timeout TO 0″)
-> 0.0004s
— index_name(:geo_event_log, {:algorithm=>:concurrently, :name=>”index_geo_event_log_on_cache_invalidation_event_id”, :column=>:cache_invalidation_event_id})
-> 0.0000s
— index_name_exists?(:geo_event_log, “index_geo_event_log_on_cache_invalidation_event_id”, true)
-> 0.0021s
— remove_index(:geo_event_log, {:algorithm=>:concurrently, :name=>”index_geo_event_log_on_cache_invalidation_event_id”, :column=>:cache_invalidation_event_id})
-> 0.0152s
— execute(“RESET ALL”)
-> 0.0004s
— transaction_open?()
-> 0.0000s
— index_name(:geo_event_log, {:column=>[“cache_invalidation_event_id”]})
-> 0.0000s
— index_exists?(:geo_event_log, :cache_invalidation_event_id, {:where=>”cache_invalidation_event_id IS NOT NULL”, :algorithm=>:concurrently, :name=>”index_geo_event_log_on_cache_invalidation_event_id”})
-> 0.0061s
— execute(“SET statement_timeout TO 0″)
-> 0.0003s
— add_index(:geo_event_log, :cache_invalidation_event_id, {:where=>”cache_invalidation_event_id IS NOT NULL”, :algorithm=>:concurrently, :name=>”index_geo_event_log_on_cache_invalidation_event_id”})
-> 0.0353s
— execute(“RESET ALL”)
-> 0.0005s
— transaction_open?()
-> 0.0000s
— select_one(“SELECT current_setting(‘server_version_num’) AS v”)
-> 0.0007s
— index_name(:geo_event_log, {:column=>[“repositories_changed_event_id”]})
-> 0.0000s
— index_exists?(:geo_event_log, :repositories_changed_event_id, {:algorithm=>:concurrently, :name=>”index_geo_event_log_on_repositories_changed_event_id”})
-> 0.0080s
— execute(“SET statement_timeout TO 0″)
-> 0.0004s
— index_name(:geo_event_log, {:algorithm=>:concurrently, :name=>”index_geo_event_log_on_repositories_changed_event_id”, :column=>:repositories_changed_event_id})
-> 0.0000s
— index_name_exists?(:geo_event_log, “index_geo_event_log_on_repositories_changed_event_id”, true)
-> 0.0020s
— remove_index(:geo_event_log, {:algorithm=>:concurrently, :name=>”index_geo_event_log_on_repositories_changed_event_id”, :column=>:repositories_changed_event_id})
-> 0.0286s
— execute(“RESET ALL”)
-> 0.0005s
— transaction_open?()
-> 0.0000s
— index_name(:geo_event_log, {:column=>[“repositories_changed_event_id”]})
-> 0.0000s
— index_exists?(:geo_event_log, :repositories_changed_event_id, {:where=>”repositories_changed_event_id IS NOT NULL”, :algorithm=>:concurrently, :name=>”index_geo_event_log_on_repositories_changed_event_id”})
-> 0.0072s
— execute(“SET statement_timeout TO 0″)
-> 0.0005s
— add_index(:geo_event_log, :repositories_changed_event_id, {:where=>”repositories_changed_event_id IS NOT NULL”, :algorithm=>:concurrently, :name=>”index_geo_event_log_on_repositories_changed_event_id”})
-> 0.0338s
— execute(“RESET ALL”)
-> 0.0005s
— transaction_open?()
-> 0.0000s
— select_one(“SELECT current_setting(‘server_version_num’) AS v”)
-> 0.0007s
— index_name(:geo_event_log, {:column=>[“repository_created_event_id”]})
-> 0.0000s
— index_exists?(:geo_event_log, :repository_created_event_id, {:algorithm=>:concurrently, :name=>”index_geo_event_log_on_repository_created_event_id”})
-> 0.0084s
— execute(“SET statement_timeout TO 0″)
-> 0.0004s
— index_name(:geo_event_log, {:algorithm=>:concurrently, :name=>”index_geo_event_log_on_repository_created_event_id”, :column=>:repository_created_event_id})
-> 0.0001s
— index_name_exists?(:geo_event_log, “index_geo_event_log_on_repository_created_event_id”, true)
-> 0.0020s
— remove_index(:geo_event_log, {:algorithm=>:concurrently, :name=>”index_geo_event_log_on_repository_created_event_id”, :column=>:repository_created_event_id})
-> 0.0195s
— execute(“RESET ALL”)
-> 0.0004s
— transaction_open?()
-> 0.0000s
— index_name(:geo_event_log, {:column=>[“repository_created_event_id”]})
-> 0.0000s
— index_exists?(:geo_event_log, :repository_created_event_id, {:where=>”repository_created_event_id IS NOT NULL”, :algorithm=>:concurrently, :name=>”index_geo_event_log_on_repository_created_event_id”})
-> 0.0068s
— execute(“SET statement_timeout TO 0″)
-> 0.0004s
— add_index(:geo_event_log, :repository_created_event_id, {:where=>”repository_created_event_id IS NOT NULL”, :algorithm=>:concurrently, :name=>”index_geo_event_log_on_repository_created_event_id”})
-> 0.0346s
— execute(“RESET ALL”)
-> 0.0004s
— transaction_open?()
-> 0.0000s
— select_one(“SELECT current_setting(‘server_version_num’) AS v”)
-> 0.0007s
— index_name(:geo_event_log, {:column=>[“repository_deleted_event_id”]})
-> 0.0000s
— index_exists?(:geo_event_log, :repository_deleted_event_id, {:algorithm=>:concurrently, :name=>”index_geo_event_log_on_repository_deleted_event_id”})
-> 0.0082s
— execute(“SET statement_timeout TO 0″)
-> 0.0004s
— index_name(:geo_event_log, {:algorithm=>:concurrently, :name=>”index_geo_event_log_on_repository_deleted_event_id”, :column=>:repository_deleted_event_id})
-> 0.0000s
— index_name_exists?(:geo_event_log, “index_geo_event_log_on_repository_deleted_event_id”, true)
-> 0.0021s
— remove_index(:geo_event_log, {:algorithm=>:concurrently, :name=>”index_geo_event_log_on_repository_deleted_event_id”, :column=>:repository_deleted_event_id})
-> 0.0195s
— execute(“RESET ALL”)
-> 0.0004s
— transaction_open?()
-> 0.0000s
— index_name(:geo_event_log, {:column=>[“repository_deleted_event_id”]})
-> 0.0000s
— index_exists?(:geo_event_log, :repository_deleted_event_id, {:where=>”repository_deleted_event_id IS NOT NULL”, :algorithm=>:concurrently, :name=>”index_geo_event_log_on_repository_deleted_event_id”})
-> 0.0069s
— execute(“SET statement_timeout TO 0″)
-> 0.0005s
— add_index(:geo_event_log, :repository_deleted_event_id, {:where=>”repository_deleted_event_id IS NOT NULL”, :algorithm=>:concurrently, :name=>”index_geo_event_log_on_repository_deleted_event_id”})
-> 0.0257s
— execute(“RESET ALL”)
-> 0.0005s
— transaction_open?()
-> 0.0000s
— select_one(“SELECT current_setting(‘server_version_num’) AS v”)
-> 0.0005s
— index_name(:geo_event_log, {:column=>[“repository_renamed_event_id”]})
-> 0.0000s
— index_exists?(:geo_event_log, :repository_renamed_event_id, {:algorithm=>:concurrently, :name=>”index_geo_event_log_on_repository_renamed_event_id”})
-> 0.0076s
— execute(“SET statement_timeout TO 0″)
-> 0.0003s
— index_name(:geo_event_log, {:algorithm=>:concurrently, :name=>”index_geo_event_log_on_repository_renamed_event_id”, :column=>:repository_renamed_event_id})
-> 0.0000s
— index_name_exists?(:geo_event_log, “index_geo_event_log_on_repository_renamed_event_id”, true)
-> 0.0018s
— remove_index(:geo_event_log, {:algorithm=>:concurrently, :name=>”index_geo_event_log_on_repository_renamed_event_id”, :column=>:repository_renamed_event_id})
-> 0.0215s
— execute(“RESET ALL”)
-> 0.0005s
— transaction_open?()
-> 0.0000s
— index_name(:geo_event_log, {:column=>[“repository_renamed_event_id”]})
-> 0.0000s
— index_exists?(:geo_event_log, :repository_renamed_event_id, {:where=>”repository_renamed_event_id IS NOT NULL”, :algorithm=>:concurrently, :name=>”index_geo_event_log_on_repository_renamed_event_id”})
-> 0.0065s
— execute(“SET statement_timeout TO 0″)
-> 0.0005s
— add_index(:geo_event_log, :repository_renamed_event_id, {:where=>”repository_renamed_event_id IS NOT NULL”, :algorithm=>:concurrently, :name=>”index_geo_event_log_on_repository_renamed_event_id”})
-> 0.0259s
— execute(“RESET ALL”)
-> 0.0004s
— transaction_open?()
-> 0.0000s
— select_one(“SELECT current_setting(‘server_version_num’) AS v”)
-> 0.0006s
— index_name(:geo_event_log, {:column=>[“repository_updated_event_id”]})
-> 0.0000s
— index_exists?(:geo_event_log, :repository_updated_event_id, {:algorithm=>:concurrently, :name=>”index_geo_event_log_on_repository_updated_event_id”})
-> 0.0074s
— execute(“SET statement_timeout TO 0″)
-> 0.0004s
— index_name(:geo_event_log, {:algorithm=>:concurrently, :name=>”index_geo_event_log_on_repository_updated_event_id”, :column=>:repository_updated_event_id})
-> 0.0000s
— index_name_exists?(:geo_event_log, “index_geo_event_log_on_repository_updated_event_id”, true)
-> 0.0018s
— remove_index(:geo_event_log, {:algorithm=>:concurrently, :name=>”index_geo_event_log_on_repository_updated_event_id”, :column=>:repository_updated_event_id})
-> 0.0214s
— execute(“RESET ALL”)
-> 0.0004s
— transaction_open?()
-> 0.0000s
— index_name(:geo_event_log, {:column=>[“repository_updated_event_id”]})
-> 0.0000s
— index_exists?(:geo_event_log, :repository_updated_event_id, {:where=>”repository_updated_event_id IS NOT NULL”, :algorithm=>:concurrently, :name=>”index_geo_event_log_on_repository_updated_event_id”})
-> 0.0075s
— execute(“SET statement_timeout TO 0″)
-> 0.0004s
— add_index(:geo_event_log, :repository_updated_event_id, {:where=>”repository_updated_event_id IS NOT NULL”, :algorithm=>:concurrently, :name=>”index_geo_event_log_on_repository_updated_event_id”})
-> 0.0586s
— execute(“RESET ALL”)
-> 0.0005s
— transaction_open?()
-> 0.0000s
— select_one(“SELECT current_setting(‘server_version_num’) AS v”)
-> 0.0007s
— index_name(:geo_event_log, {:column=>[“reset_checksum_event_id”]})
-> 0.0000s
— index_exists?(:geo_event_log, :reset_checksum_event_id, {:algorithm=>:concurrently, :name=>”index_geo_event_log_on_reset_checksum_event_id”})
-> 0.0085s
— execute(“SET statement_timeout TO 0″)
-> 0.0003s
— index_name(:geo_event_log, {:algorithm=>:concurrently, :name=>”index_geo_event_log_on_reset_checksum_event_id”, :column=>:reset_checksum_event_id})
-> 0.0000s
— index_name_exists?(:geo_event_log, “index_geo_event_log_on_reset_checksum_event_id”, true)
-> 0.0018s
— remove_index(:geo_event_log, {:algorithm=>:concurrently, :name=>”index_geo_event_log_on_reset_checksum_event_id”, :column=>:reset_checksum_event_id})
-> 0.0364s
— execute(“RESET ALL”)
-> 0.0005s
— transaction_open?()
-> 0.0000s
— index_name(:geo_event_log, {:column=>[“reset_checksum_event_id”]})
-> 0.0001s
— index_exists?(:geo_event_log, :reset_checksum_event_id, {:where=>”reset_checksum_event_id IS NOT NULL”, :algorithm=>:concurrently, :name=>”index_geo_event_log_on_reset_checksum_event_id”})
-> 0.0073s
— execute(“SET statement_timeout TO 0″)
-> 0.0004s
— add_index(:geo_event_log, :reset_checksum_event_id, {:where=>”reset_checksum_event_id IS NOT NULL”, :algorithm=>:concurrently, :name=>”index_geo_event_log_on_reset_checksum_event_id”})
-> 0.0674s
— execute(“RESET ALL”)
-> 0.0004s
— transaction_open?()
-> 0.0000s
— index_name(:geo_event_log, {:column=>[“hashed_storage_migrated_event_id”]})
-> 0.0000s
— index_exists?(:geo_event_log, :hashed_storage_migrated_event_id, {:where=>”hashed_storage_migrated_event_id IS NOT NULL”, :algorithm=>:concurrently, :name=>”index_geo_event_log_on_hashed_storage_migrated_event_id”})
-> 0.0085s
— execute(“SET statement_timeout TO 0″)
-> 0.0004s
— add_index(:geo_event_log, :hashed_storage_migrated_event_id, {:where=>”hashed_storage_migrated_event_id IS NOT NULL”, :algorithm=>:concurrently, :name=>”index_geo_event_log_on_hashed_storage_migrated_event_id”})
-> 0.0808s
— execute(“RESET ALL”)
-> 0.0002s
— transaction_open?()
-> 0.0000s
— index_name(:geo_event_log, {:column=>[“lfs_object_deleted_event_id”]})
-> 0.0000s
— index_exists?(:geo_event_log, :lfs_object_deleted_event_id, {:where=>”lfs_object_deleted_event_id IS NOT NULL”, :algorithm=>:concurrently, :name=>”index_geo_event_log_on_lfs_object_deleted_event_id”})
-> 0.0021s
— execute(“SET statement_timeout TO 0″)
-> 0.0001s
— add_index(:geo_event_log, :lfs_object_deleted_event_id, {:where=>”lfs_object_deleted_event_id IS NOT NULL”, :algorithm=>:concurrently, :name=>”index_geo_event_log_on_lfs_object_deleted_event_id”})
-> 0.0815s
— execute(“RESET ALL”)
-> 0.0005s
— transaction_open?()
-> 0.0000s
— index_name(:geo_event_log, {:column=>[“hashed_storage_attachments_event_id”]})
-> 0.0000s
— index_exists?(:geo_event_log, :hashed_storage_attachments_event_id, {:where=>”hashed_storage_attachments_event_id IS NOT NULL”, :algorithm=>:concurrently, :name=>”index_geo_event_log_on_hashed_storage_attachments_event_id”})
-> 0.0086s
— execute(“SET statement_timeout TO 0″)
-> 0.0004s
— add_index(:geo_event_log, :hashed_storage_attachments_event_id, {:where=>”hashed_storage_attachments_event_id IS NOT NULL”, :algorithm=>:concurrently, :name=>”index_geo_event_log_on_hashed_storage_attachments_event_id”})
-> 0.0760s
— execute(“RESET ALL”)
-> 0.0006s
— transaction_open?()
-> 0.0000s
— index_name(:geo_event_log, {:column=>[“job_artifact_deleted_event_id”]})
-> 0.0000s
— index_exists?(:geo_event_log, :job_artifact_deleted_event_id, {:where=>”job_artifact_deleted_event_id IS NOT NULL”, :algorithm=>:concurrently, :name=>”index_geo_event_log_on_job_artifact_deleted_event_id”})
Arel performing automatic type casting is deprecated, and will be removed in Arel 8.0. If you are seeing this, it is because you are manually passing a value to an Arel predicate, and the `Arel::Table` object was constructed manually. The easiest way to remove this warning is to use an `Arel::Table` object returned from calling `arel_table` on an ActiveRecord::Base subclass.

If you’re certain the value is already of the right type, change `attribute.eq(value)` to `attribute.eq(Arel::Nodes::Quoted.new(value))` (you will be able to remove that in Arel 8.0, it is only required to silence this deprecation warning).

You can also silence this warning globally by setting `$arel_silence_type_casting_deprecation` to `true`. (Do NOT do this if you are a library author)

If you are passing user input to a predicate, you must either give an appropriate type caster object to the `Arel::Table`, or manually cast the value before passing it to Arel.

-> 0.0113s
— execute(“SET statement_timeout TO 0″)
-> 0.0005s
— add_index(:geo_event_log, :job_artifact_deleted_event_id, {:where=>”job_artifact_deleted_event_id IS NOT NULL”, :algorithm=>:concurrently, :name=>”index_geo_event_log_on_job_artifact_deleted_event_id”})
-> 0.0679s
— execute(“RESET ALL”)
-> 0.0004s
— transaction_open?()
-> 0.0000s
— index_name(:geo_event_log, {:column=>[“upload_deleted_event_id”]})
-> 0.0000s
— index_exists?(:geo_event_log, :upload_deleted_event_id, {:where=>”upload_deleted_event_id IS NOT NULL”, :algorithm=>:concurrently, :name=>”index_geo_event_log_on_upload_deleted_event_id”})
-> 0.0104s
— execute(“SET statement_timeout TO 0″)
-> 0.0004s
— add_index(:geo_event_log, :upload_deleted_event_id, {:where=>”upload_deleted_event_id IS NOT NULL”, :algorithm=>:concurrently, :name=>”index_geo_event_log_on_upload_deleted_event_id”})
-> 0.0635s
— execute(“RESET ALL”)
-> 0.0006s
== 20181017131623 AddMissingGeoEvenLogIndexes: migrated (1.0051s) =============

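The Arel type-casting deprecation warnings printed during the migration above come from GitLab's own migration code and do not indicate a problem with the upgrade; they only matter if you maintain custom Ruby code that builds Arel predicates by hand. Below is a minimal sketch of the change the warning itself suggests. The `users` table and `User` model are hypothetical, and this assumes the arel gem bundled with GitLab 11.x and an ActiveRecord model being available.

# Hypothetical example only - illustrates the change suggested by the warning above.
require 'arel'

users = Arel::Table.new(:users)               # manually constructed table
users[:id].eq(1)                              # deprecated: Arel casts the value automatically

users[:id].eq(Arel::Nodes::Quoted.new(1))     # silences the warning, as the message suggests

User.arel_table[:id].eq(1)                    # preferred: use the table from an ActiveRecord model

$arel_silence_type_casting_deprecation = true # global opt-out (not for library authors, per the warning)
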
== 20181019032400 AddShardsTable: migrating ===================================
— create_table(:shards, {})
-> 0.0546s
== 20181019032400 AddShardsTable: migrated (0.0547s) ==========================

== 20181019032408 AddRepositoriesTable: migrating =============================
— create_table(:repositories, {:id=>:bigserial})
-> 0.0788s
— add_column(:projects, :pool_repository_id, :bigint)
-> 0.0016s
— add_index(:projects, :pool_repository_id, {:where=>”pool_repository_id IS NOT NULL”})
-> 0.0154s
== 20181019032408 AddRepositoriesTable: migrated (0.0962s) ====================

== 20181019105553 AddProjectsPoolRepositoryIdForeignKey: migrating ============
— transaction_open?()
-> 0.0000s
— foreign_keys(:projects)
-> 0.0070s
— execute(“ALTER TABLE projects\nADD CONSTRAINT fk_6e5c14658a\nFOREIGN KEY (pool_repository_id)\nREFERENCES repositories (id)\nON DELETE SET NULL\nNOT VALID;\n”)
-> 0.0081s
— execute(“SET statement_timeout TO 0”)
-> 0.0006s
— execute(“ALTER TABLE projects VALIDATE CONSTRAINT fk_6e5c14658a;”)
-> 0.0076s
— execute(“RESET ALL”)
-> 0.0005s
== 20181019105553 AddProjectsPoolRepositoryIdForeignKey: migrated (0.0247s) ===

== 20181022131445 AddIndexToNamespaceTrialEndsOn: migrating ===================
— transaction_open?()
-> 0.0000s
— index_name(:namespaces, {:column=>[“trial_ends_on”]})
-> 0.0000s
— index_exists?(:namespaces, :trial_ends_on, {:where=>”trial_ends_on IS NOT NULL”, :algorithm=>:concurrently, :name=>”index_namespaces_on_trial_ends_on”})
-> 0.0147s
— execute(“SET statement_timeout TO 0″)
-> 0.0004s
— add_index(:namespaces, :trial_ends_on, {:where=>”trial_ends_on IS NOT NULL”, :algorithm=>:concurrently, :name=>”index_namespaces_on_trial_ends_on”})
-> 0.0231s
— execute(“RESET ALL”)
-> 0.0005s
== 20181022131445 AddIndexToNamespaceTrialEndsOn: migrated (0.0392s) ==========

== 20181022135539 AddIndexOnStatusToDeployments: migrating ====================
— transaction_open?()
-> 0.0000s
— index_name(:deployments, {:column=>[“project_id”, “status”]})
-> 0.0001s
— index_exists?(:deployments, [:project_id, :status], {:algorithm=>:concurrently, :name=>”index_deployments_on_project_id_and_status”})
-> 0.0074s
— execute(“SET statement_timeout TO 0″)
-> 0.0005s
— add_index(:deployments, [:project_id, :status], {:algorithm=>:concurrently, :name=>”index_deployments_on_project_id_and_status”})
-> 0.0218s
— execute(“RESET ALL”)
-> 0.0004s
— transaction_open?()
-> 0.0000s
— index_name(:deployments, {:column=>[“environment_id”, “status”]})
-> 0.0000s
— index_exists?(:deployments, [:environment_id, :status], {:algorithm=>:concurrently, :name=>”index_deployments_on_environment_id_and_status”})
-> 0.0072s
— execute(“SET statement_timeout TO 0″)
-> 0.0004s
— add_index(:deployments, [:environment_id, :status], {:algorithm=>:concurrently, :name=>”index_deployments_on_environment_id_and_status”})
-> 0.0330s
— execute(“RESET ALL”)
-> 0.0004s
== 20181022135539 AddIndexOnStatusToDeployments: migrated (0.0724s) ===========

== 20181022173835 EnqueuePopulateClusterKubernetesNamespace: migrating ========
== 20181022173835 EnqueuePopulateClusterKubernetesNamespace: migrated (0.0012s)

== 20181023104858 AddArchiveBuildsDurationToApplicationSettings: migrating ====
— add_column(:application_settings, :archive_builds_in_seconds, :integer, {:allow_null=>true})
-> 0.0015s
== 20181023104858 AddArchiveBuildsDurationToApplicationSettings: migrated (0.0017s)

== 20181023144439 AddPartialIndexForLegacySuccessfulDeployments: migrating ====
— transaction_open?()
-> 0.0000s
— index_exists?(:deployments, :id, {:where=>”finished_at IS NULL AND status = 2″, :name=>”partial_index_deployments_for_legacy_successful_deployments”, :algorithm=>:concurrently})
-> 0.0094s
— execute(“SET statement_timeout TO 0″)
-> 0.0005s
— add_index(:deployments, :id, {:where=>”finished_at IS NULL AND status = 2″, :name=>”partial_index_deployments_for_legacy_successful_deployments”, :algorithm=>:concurrently})
-> 0.0366s
— execute(“RESET ALL”)
-> 0.0005s
== 20181023144439 AddPartialIndexForLegacySuccessfulDeployments: migrated (0.0475s)

== 20181025000427 AddTracingSettings: migrating ===============================
— create_table(:project_tracing_settings, {:id=>:bigserial})
-> 0.0630s
== 20181025000427 AddTracingSettings: migrated (0.0631s) ======================

== 20181025030732 CreateGitlabSubscriptions: migrating ========================
— create_table(:gitlab_subscriptions, {:id=>:bigserial})
-> 0.0795s
== 20181025030732 CreateGitlabSubscriptions: migrated (0.0796s) ===============

== 20181025115728 AddPrivateCommitEmailHostnameToApplicationSettings: migrating
— add_column(:application_settings, :commit_email_hostname, :string, {:null=>true})
-> 0.0016s
== 20181025115728 AddPrivateCommitEmailHostnameToApplicationSettings: migrated (0.0017s)

== 20181026085436 AddAlertManagerTokenToClustersApplicationPrometheus: migrating
— add_column(:clusters_applications_prometheus, :encrypted_alert_manager_token, :string)
-> 0.0010s
— add_column(:clusters_applications_prometheus, :encrypted_alert_manager_token_iv, :string)
-> 0.0009s
== 20181026085436 AddAlertManagerTokenToClustersApplicationPrometheus: migrated (0.0021s)

== 20181026091631 MigrateForbiddenRedirectUris: migrating =====================
— transaction_open?()
-> 0.0000s
— exec_query(“SELECT COUNT(*) AS count FROM \”oauth_applications\” WHERE ((\”oauth_applications\”.\”redirect_uri\” ILIKE ‘data://%’ OR \”oauth_applications\”.\”redirect_uri\” ILIKE ‘vbscript://%’) OR \”oauth_applications\”.\”redirect_uri\” ILIKE ‘javascript://%’)”)
-> 0.0024s
— transaction_open?()
-> 0.0000s
— exec_query(“SELECT COUNT(*) AS count FROM \”oauth_access_grants\” WHERE ((\”oauth_access_grants\”.\”redirect_uri\” ILIKE ‘data://%’ OR \”oauth_access_grants\”.\”redirect_uri\” ILIKE ‘vbscript://%’) OR \”oauth_access_grants\”.\”redirect_uri\” ILIKE ‘javascript://%’)”)
-> 0.0018s
== 20181026091631 MigrateForbiddenRedirectUris: migrated (0.0133s) ============

== 20181026143227 MigrateSnippetsAccessLevelDefaultValue: migrating ===========
— change_column_default(:project_features, :snippets_access_level, 20)
-> 0.0141s
— change_column_null(:project_features, :snippets_access_level, false)
-> 0.0052s
== 20181026143227 MigrateSnippetsAccessLevelDefaultValue: migrated (0.0226s) ==

== 20181027114222 AddFirstDayOfWeekToUserPreferences: migrating ===============
— add_column(:user_preferences, :first_day_of_week, :integer)
-> 0.0008s
== 20181027114222 AddFirstDayOfWeekToUserPreferences: migrated (0.0009s) ======

== 20181028092114 CreateSmartcardIdentities: migrating ========================
— create_table(:smartcard_identities, {:id=>:bigserial})
-> 0.0547s
== 20181028092114 CreateSmartcardIdentities: migrated (0.0548s) ===============

== 20181028092115 AddIndexToSmartcardIdentities: migrating ====================
— transaction_open?()
-> 0.0000s
— index_name(:smartcard_identities, {:column=>[“subject”, “issuer”]})
-> 0.0000s
— index_exists?(:smartcard_identities, [:subject, :issuer], {:unique=>true, :algorithm=>:concurrently, :name=>”index_smartcard_identities_on_subject_and_issuer”})
Arel performing automatic type casting is deprecated, and will be removed in Arel 8.0. If you are seeing this, it is because you are manually passing a value to an Arel predicate, and the `Arel::Table` object was constructed manually. The easiest way to remove this warning is to use an `Arel::Table` object returned from calling `arel_table` on an ActiveRecord::Base subclass.

If you’re certain the value is already of the right type, change `attribute.eq(value)` to `attribute.eq(Arel::Nodes::Quoted.new(value))` (you will be able to remove that in Arel 8.0, it is only required to silence this deprecation warning).

You can also silence this warning globally by setting `$arel_silence_type_casting_deprecation` to `true`. (Do NOT do this if you are a library author)

If you are passing user input to a predicate, you must either give an appropriate type caster object to the `Arel::Table`, or manually cast the value before passing it to Arel.

-> 0.0014s
— execute(“SET statement_timeout TO 0″)
-> 0.0001s
— add_index(:smartcard_identities, [:subject, :issuer], {:unique=>true, :algorithm=>:concurrently, :name=>”index_smartcard_identities_on_subject_and_issuer”})
-> 0.0214s
— execute(“RESET ALL”)
-> 0.0004s
== 20181028092115 AddIndexToSmartcardIdentities: migrated (0.0236s) ===========

== 20181028120717 AddFirstDayOfWeekToApplicationSettings: migrating ===========
— transaction_open?()
-> 0.0000s
— execute(“SET statement_timeout TO 0”)
-> 0.0004s
— transaction()
— add_column(:application_settings, :first_day_of_week, :integer, {:default=>nil})
-> 0.0014s
— change_column_default(:application_settings, :first_day_of_week, 0)
-> 0.0353s
-> 0.0404s
— transaction_open?()
-> 0.0000s
— exec_query(“SELECT COUNT(*) AS count FROM \”application_settings\””)
-> 0.0013s
— exec_query(“SELECT \”application_settings\”.\”id\” FROM \”application_settings\” ORDER BY \”application_settings\”.\”id\” ASC LIMIT 1″)
-> 0.0009s
— exec_query(“SELECT \”application_settings\”.\”id\” FROM \”application_settings\” WHERE \”application_settings\”.\”id\” >= 1 ORDER BY \”application_settings\”.\”id\” ASC LIMIT 1 OFFSET 1″)
-> 0.0009s
— execute(“UPDATE \”application_settings\” SET \”first_day_of_week\” = 0 WHERE \”application_settings\”.\”id\” >= 1″)
-> 0.0119s
— change_column_null(:application_settings, :first_day_of_week, false)
-> 0.0081s
— execute(“RESET ALL”)
-> 0.0005s
== 20181028120717 AddFirstDayOfWeekToApplicationSettings: migrated (0.0664s) ==

== 20181030135124 FillEmptyFinishedAtInDeployments: migrating =================
== 20181030135124 FillEmptyFinishedAtInDeployments: migrated (0.0070s) ========

== 20181030154446 AddMissingIndexesForForeignKeys: migrating ==================
— transaction_open?()
-> 0.0000s
— index_name(:application_settings, {:column=>[“usage_stats_set_by_user_id”]})
-> 0.0000s
— index_exists?(:application_settings, :usage_stats_set_by_user_id, {:algorithm=>:concurrently, :name=>”index_application_settings_on_usage_stats_set_by_user_id”})
-> 0.0005s
— execute(“SET statement_timeout TO 0″)
-> 0.0001s
— add_index(:application_settings, :usage_stats_set_by_user_id, {:algorithm=>:concurrently, :name=>”index_application_settings_on_usage_stats_set_by_user_id”})
-> 0.0221s
— execute(“RESET ALL”)
-> 0.0002s
— transaction_open?()
-> 0.0000s
— index_name(:ci_pipeline_schedules, {:column=>[“owner_id”]})
-> 0.0000s
— index_exists?(:ci_pipeline_schedules, :owner_id, {:algorithm=>:concurrently, :name=>”index_ci_pipeline_schedules_on_owner_id”})
-> 0.0011s
— execute(“SET statement_timeout TO 0″)
-> 0.0001s
— add_index(:ci_pipeline_schedules, :owner_id, {:algorithm=>:concurrently, :name=>”index_ci_pipeline_schedules_on_owner_id”})
-> 0.0319s
— execute(“RESET ALL”)
-> 0.0003s
— transaction_open?()
-> 0.0000s
— index_name(:ci_trigger_requests, {:column=>[“trigger_id”]})
-> 0.0000s
— index_exists?(:ci_trigger_requests, :trigger_id, {:algorithm=>:concurrently, :name=>”index_ci_trigger_requests_on_trigger_id”})
-> 0.0019s
— execute(“SET statement_timeout TO 0″)
-> 0.0002s
— add_index(:ci_trigger_requests, :trigger_id, {:algorithm=>:concurrently, :name=>”index_ci_trigger_requests_on_trigger_id”})
-> 0.0309s
— execute(“RESET ALL”)
-> 0.0004s
— transaction_open?()
-> 0.0000s
— index_name(:ci_triggers, {:column=>[“owner_id”]})
-> 0.0000s
— index_exists?(:ci_triggers, :owner_id, {:algorithm=>:concurrently, :name=>”index_ci_triggers_on_owner_id”})
-> 0.0032s
— execute(“SET statement_timeout TO 0″)
-> 0.0005s
— add_index(:ci_triggers, :owner_id, {:algorithm=>:concurrently, :name=>”index_ci_triggers_on_owner_id”})
-> 0.0371s
— execute(“RESET ALL”)
-> 0.0005s
— transaction_open?()
-> 0.0000s
— index_name(:clusters_applications_helm, {:column=>[“cluster_id”]})
-> 0.0000s
— index_exists?(:clusters_applications_helm, :cluster_id, {:unique=>true, :algorithm=>:concurrently, :name=>”index_clusters_applications_helm_on_cluster_id”})
-> 0.0020s
— execute(“SET statement_timeout TO 0″)
-> 0.0005s
— add_index(:clusters_applications_helm, :cluster_id, {:unique=>true, :algorithm=>:concurrently, :name=>”index_clusters_applications_helm_on_cluster_id”})
-> 0.0381s
— execute(“RESET ALL”)
-> 0.0004s
— transaction_open?()
-> 0.0000s
— index_name(:clusters_applications_ingress, {:column=>[“cluster_id”]})
-> 0.0000s
— index_exists?(:clusters_applications_ingress, :cluster_id, {:unique=>true, :algorithm=>:concurrently, :name=>”index_clusters_applications_ingress_on_cluster_id”})
-> 0.0020s
— execute(“SET statement_timeout TO 0″)
-> 0.0004s
— add_index(:clusters_applications_ingress, :cluster_id, {:unique=>true, :algorithm=>:concurrently, :name=>”index_clusters_applications_ingress_on_cluster_id”})
-> 0.0385s
— execute(“RESET ALL”)
-> 0.0005s
— transaction_open?()
-> 0.0000s
— index_name(:clusters_applications_jupyter, {:column=>[“cluster_id”]})
-> 0.0000s
— index_exists?(:clusters_applications_jupyter, :cluster_id, {:unique=>true, :algorithm=>:concurrently, :name=>”index_clusters_applications_jupyter_on_cluster_id”})
-> 0.0024s
— execute(“SET statement_timeout TO 0″)
-> 0.0004s
— add_index(:clusters_applications_jupyter, :cluster_id, {:unique=>true, :algorithm=>:concurrently, :name=>”index_clusters_applications_jupyter_on_cluster_id”})
-> 0.0295s
— execute(“RESET ALL”)
-> 0.0006s
— transaction_open?()
-> 0.0000s
— index_name(:clusters_applications_jupyter, {:column=>[“oauth_application_id”]})
-> 0.0000s
— index_exists?(:clusters_applications_jupyter, :oauth_application_id, {:algorithm=>:concurrently, :name=>”index_clusters_applications_jupyter_on_oauth_application_id”})
-> 0.0033s
— execute(“SET statement_timeout TO 0″)
-> 0.0004s
— add_index(:clusters_applications_jupyter, :oauth_application_id, {:algorithm=>:concurrently, :name=>”index_clusters_applications_jupyter_on_oauth_application_id”})
-> 0.0286s
— execute(“RESET ALL”)
-> 0.0005s
— transaction_open?()
-> 0.0000s
— index_name(:clusters_applications_knative, {:column=>[“cluster_id”]})
-> 0.0000s
— index_exists?(:clusters_applications_knative, :cluster_id, {:unique=>true, :algorithm=>:concurrently, :name=>”index_clusters_applications_knative_on_cluster_id”})
-> 0.0021s
— execute(“SET statement_timeout TO 0″)
-> 0.0004s
— add_index(:clusters_applications_knative, :cluster_id, {:unique=>true, :algorithm=>:concurrently, :name=>”index_clusters_applications_knative_on_cluster_id”})
-> 0.0298s
— execute(“RESET ALL”)
-> 0.0005s
— transaction_open?()
-> 0.0000s
— index_name(:clusters_applications_prometheus, {:column=>[“cluster_id”]})
-> 0.0000s
— index_exists?(:clusters_applications_prometheus, :cluster_id, {:unique=>true, :algorithm=>:concurrently, :name=>”index_clusters_applications_prometheus_on_cluster_id”})
-> 0.0021s
— execute(“SET statement_timeout TO 0″)
-> 0.0004s
— add_index(:clusters_applications_prometheus, :cluster_id, {:unique=>true, :algorithm=>:concurrently, :name=>”index_clusters_applications_prometheus_on_cluster_id”})
-> 0.0299s
— execute(“RESET ALL”)
-> 0.0005s
— transaction_open?()
-> 0.0000s
— index_name(:fork_network_members, {:column=>[“forked_from_project_id”]})
-> 0.0000s
— index_exists?(:fork_network_members, :forked_from_project_id, {:algorithm=>:concurrently, :name=>”index_fork_network_members_on_forked_from_project_id”})
-> 0.0041s
— execute(“SET statement_timeout TO 0″)
-> 0.0005s
— add_index(:fork_network_members, :forked_from_project_id, {:algorithm=>:concurrently, :name=>”index_fork_network_members_on_forked_from_project_id”})
-> 0.0276s
— execute(“RESET ALL”)
-> 0.0004s
— transaction_open?()
-> 0.0000s
— index_name(:internal_ids, {:column=>[“namespace_id”]})
-> 0.0000s
— index_exists?(:internal_ids, :namespace_id, {:algorithm=>:concurrently, :name=>”index_internal_ids_on_namespace_id”})
-> 0.0040s
— execute(“SET statement_timeout TO 0″)
-> 0.0005s
— add_index(:internal_ids, :namespace_id, {:algorithm=>:concurrently, :name=>”index_internal_ids_on_namespace_id”})
Arel performing automatic type casting is deprecated, and will be removed in Arel 8.0. If you are seeing this, it is because you are manually passing a value to an Arel predicate, and the `Arel::Table` object was constructed manually. The easiest way to remove this warning is to use an `Arel::Table` object returned from calling `arel_table` on an ActiveRecord::Base subclass.

If you’re certain the value is already of the right type, change `attribute.eq(value)` to `attribute.eq(Arel::Nodes::Quoted.new(value))` (you will be able to remove that in Arel 8.0, it is only required to silence this deprecation warning).

You can also silence this warning globally by setting `$arel_silence_type_casting_deprecation` to `true`. (Do NOT do this if you are a library author)

If you are passing user input to a predicate, you must either give an appropriate type caster object to the `Arel::Table`, or manually cast the value before passing it to Arel.

-> 0.0279s
— execute(“RESET ALL”)
-> 0.0006s
— transaction_open?()
-> 0.0000s
— index_name(:internal_ids, {:column=>[“project_id”]})
-> 0.0000s
— index_exists?(:internal_ids, :project_id, {:algorithm=>:concurrently, :name=>”index_internal_ids_on_project_id”})
-> 0.0052s
— execute(“SET statement_timeout TO 0″)
-> 0.0005s
— add_index(:internal_ids, :project_id, {:algorithm=>:concurrently, :name=>”index_internal_ids_on_project_id”})
-> 0.0266s
— execute(“RESET ALL”)
-> 0.0005s
— transaction_open?()
-> 0.0000s
— index_name(:issues, {:column=>[“closed_by_id”]})
-> 0.0001s
— index_exists?(:issues, :closed_by_id, {:algorithm=>:concurrently, :name=>”index_issues_on_closed_by_id”})
-> 0.0157s
— execute(“SET statement_timeout TO 0″)
-> 0.0004s
— add_index(:issues, :closed_by_id, {:algorithm=>:concurrently, :name=>”index_issues_on_closed_by_id”})
-> 0.0330s
— execute(“RESET ALL”)
-> 0.0005s
— transaction_open?()
-> 0.0000s
— index_name(:label_priorities, {:column=>[“label_id”]})
-> 0.0000s
— index_exists?(:label_priorities, :label_id, {:algorithm=>:concurrently, :name=>”index_label_priorities_on_label_id”})
-> 0.0039s
— execute(“SET statement_timeout TO 0″)
-> 0.0004s
— add_index(:label_priorities, :label_id, {:algorithm=>:concurrently, :name=>”index_label_priorities_on_label_id”})
-> 0.0361s
— execute(“RESET ALL”)
-> 0.0005s
— transaction_open?()
-> 0.0000s
— index_name(:merge_request_metrics, {:column=>[“merged_by_id”]})
-> 0.0000s
— index_exists?(:merge_request_metrics, :merged_by_id, {:algorithm=>:concurrently, :name=>”index_merge_request_metrics_on_merged_by_id”})
-> 0.0045s
— execute(“SET statement_timeout TO 0″)
-> 0.0005s
— add_index(:merge_request_metrics, :merged_by_id, {:algorithm=>:concurrently, :name=>”index_merge_request_metrics_on_merged_by_id”})
-> 0.0359s
— execute(“RESET ALL”)
-> 0.0005s
— transaction_open?()
-> 0.0000s
— index_name(:merge_request_metrics, {:column=>[“latest_closed_by_id”]})
-> 0.0000s
— index_exists?(:merge_request_metrics, :latest_closed_by_id, {:algorithm=>:concurrently, :name=>”index_merge_request_metrics_on_latest_closed_by_id”})
-> 0.0054s
— execute(“SET statement_timeout TO 0″)
-> 0.0004s
— add_index(:merge_request_metrics, :latest_closed_by_id, {:algorithm=>:concurrently, :name=>”index_merge_request_metrics_on_latest_closed_by_id”})
-> 0.0349s
— execute(“RESET ALL”)
-> 0.0005s
— transaction_open?()
-> 0.0000s
— index_name(:oauth_openid_requests, {:column=>[“access_grant_id”]})
-> 0.0000s
— index_exists?(:oauth_openid_requests, :access_grant_id, {:algorithm=>:concurrently, :name=>”index_oauth_openid_requests_on_access_grant_id”})
-> 0.0021s
— execute(“SET statement_timeout TO 0″)
-> 0.0004s
— add_index(:oauth_openid_requests, :access_grant_id, {:algorithm=>:concurrently, :name=>”index_oauth_openid_requests_on_access_grant_id”})
-> 0.0380s
— execute(“RESET ALL”)
-> 0.0005s
— transaction_open?()
-> 0.0000s
— index_name(:project_deploy_tokens, {:column=>[“deploy_token_id”]})
-> 0.0000s
— index_exists?(:project_deploy_tokens, :deploy_token_id, {:algorithm=>:concurrently, :name=>”index_project_deploy_tokens_on_deploy_token_id”})
-> 0.0040s
— transaction_open?()
-> 0.0000s
— index_name(:protected_tag_create_access_levels, {:column=>[“group_id”]})
-> 0.0000s
— index_exists?(:protected_tag_create_access_levels, :group_id, {:algorithm=>:concurrently, :name=>”index_protected_tag_create_access_levels_on_group_id”})
-> 0.0041s
— execute(“SET statement_timeout TO 0″)
-> 0.0005s
— add_index(:protected_tag_create_access_levels, :group_id, {:algorithm=>:concurrently, :name=>”index_protected_tag_create_access_levels_on_group_id”})
-> 0.0401s
— execute(“RESET ALL”)
-> 0.0004s
— transaction_open?()
-> 0.0000s
— index_name(:subscriptions, {:column=>[“project_id”]})
-> 0.0000s
— index_exists?(:subscriptions, :project_id, {:algorithm=>:concurrently, :name=>”index_subscriptions_on_project_id”})
-> 0.0038s
— execute(“SET statement_timeout TO 0″)
-> 0.0005s
— add_index(:subscriptions, :project_id, {:algorithm=>:concurrently, :name=>”index_subscriptions_on_project_id”})
-> 0.0365s
— execute(“RESET ALL”)
-> 0.0005s
— transaction_open?()
-> 0.0000s
— index_name(:user_statuses, {:column=>[“user_id”]})
-> 0.0000s
— index_exists?(:user_statuses, :user_id, {:algorithm=>:concurrently, :name=>”index_user_statuses_on_user_id”})
-> 0.0021s
— execute(“SET statement_timeout TO 0″)
-> 0.0005s
— add_index(:user_statuses, :user_id, {:algorithm=>:concurrently, :name=>”index_user_statuses_on_user_id”})
-> 0.0383s
— execute(“RESET ALL”)
-> 0.0005s
— transaction_open?()
-> 0.0000s
— index_name(:users, {:column=>[“accepted_term_id”]})
-> 0.0000s
— index_exists?(:users, :accepted_term_id, {:algorithm=>:concurrently, :name=>”index_users_on_accepted_term_id”})
-> 0.0177s
— execute(“SET statement_timeout TO 0″)
-> 0.0005s
— add_index(:users, :accepted_term_id, {:algorithm=>:concurrently, :name=>”index_users_on_accepted_term_id”})
-> 0.0393s
— execute(“RESET ALL”)
-> 0.0004s
== 20181030154446 AddMissingIndexesForForeignKeys: migrated (0.8589s) =========

== 20181031145139 AddProtectedCiVariablesToApplicationSettings: migrating =====
— transaction_open?()
-> 0.0000s
— execute(“SET statement_timeout TO 0”)
-> 0.0001s
— transaction()
— add_column(:application_settings, :protected_ci_variables, :boolean, {:default=>nil})
-> 0.0004s
— change_column_default(:application_settings, :protected_ci_variables, false)
-> 0.0106s
-> 0.0163s
— transaction_open?()
-> 0.0000s
— exec_query(“SELECT COUNT(*) AS count FROM \”application_settings\””)
-> 0.0005s
— exec_query(“SELECT \”application_settings\”.\”id\” FROM \”application_settings\” ORDER BY \”application_settings\”.\”id\” ASC LIMIT 1″)
-> 0.0003s
— exec_query(“SELECT \”application_settings\”.\”id\” FROM \”application_settings\” WHERE \”application_settings\”.\”id\” >= 1 ORDER BY \”application_settings\”.\”id\” ASC LIMIT 1 OFFSET 1″)
-> 0.0003s
— execute(“UPDATE \”application_settings\” SET \”protected_ci_variables\” = ‘f’ WHERE \”application_settings\”.\”id\” >= 1″)
-> 0.0066s
— change_column_null(:application_settings, :protected_ci_variables, false)
-> 0.0087s
— execute(“RESET ALL”)
-> 0.0006s
== 20181031145139 AddProtectedCiVariablesToApplicationSettings: migrated (0.0343s)

== 20181031190558 DropFkGcpClustersTable: migrating ===========================
— foreign_keys(:gcp_clusters)
-> 0.0063s
— remove_foreign_key(:gcp_clusters, {:column=>:project_id})
-> 0.0162s
— foreign_keys(:gcp_clusters)
-> 0.0066s
— remove_foreign_key(:gcp_clusters, {:column=>:user_id})
-> 0.0168s
— foreign_keys(:gcp_clusters)
-> 0.0064s
— remove_foreign_key(:gcp_clusters, {:column=>:service_id})
-> 0.0171s
== 20181031190558 DropFkGcpClustersTable: migrated (0.0742s) ==================

== 20181031190559 DropGcpClustersTable: migrating =============================
— drop_table(:gcp_clusters)
-> 0.0008s
== 20181031190559 DropGcpClustersTable: migrated (0.0008s) ====================

== 20181101091005 StealDigestColumn: migrating ================================
== 20181101091005 StealDigestColumn: migrated (0.0142s) =======================

== 20181101091124 RemoveTokenFromPersonalAccessTokens: migrating ==============
— remove_column(:personal_access_tokens, :token, :string)
-> 0.0010s
== 20181101091124 RemoveTokenFromPersonalAccessTokens: migrated (0.0011s) =====

== 20181101144347 AddIndexForStuckMrQuery: migrating ==========================
— transaction_open?()
-> 0.0000s
— index_name(:merge_requests, {:column=>[“id”, “merge_jid”]})
-> 0.0000s
— index_exists?(:merge_requests, [:id, :merge_jid], {:where=>”merge_jid IS NOT NULL and state = ‘locked'”, :algorithm=>:concurrently, :name=>”index_merge_requests_on_id_and_merge_jid”})
-> 0.0175s
— execute(“SET statement_timeout TO 0″)
-> 0.0004s
— add_index(:merge_requests, [:id, :merge_jid], {:where=>”merge_jid IS NOT NULL and state = ‘locked'”, :algorithm=>:concurrently, :name=>”index_merge_requests_on_id_and_merge_jid”})
Arel performing automatic type casting is deprecated, and will be removed in Arel 8.0. If you are seeing this, it is because you are manually passing a value to an Arel predicate, and the `Arel::Table` object was constructed manually. The easiest way to remove this warning is to use an `Arel::Table` object returned from calling `arel_table` on an ActiveRecord::Base subclass.

If you’re certain the value is already of the right type, change `attribute.eq(value)` to `attribute.eq(Arel::Nodes::Quoted.new(value))` (you will be able to remove that in Arel 8.0, it is only required to silence this deprecation warning).

You can also silence this warning globally by setting `$arel_silence_type_casting_deprecation` to `true`. (Do NOT do this if you are a library author)

If you are passing user input to a predicate, you must either give an appropriate type caster object to the `Arel::Table`, or manually cast the value before passing it to Arel.

-> 0.0338s
— execute(“RESET ALL”)
-> 0.0005s
== 20181101144347 AddIndexForStuckMrQuery: migrated (0.0528s) =================

== 20181101191341 CreateClustersApplicationsCertManager: migrating ============
— create_table(:clusters_applications_cert_managers, {})
-> 0.0643s
== 20181101191341 CreateClustersApplicationsCertManager: migrated (0.0644s) ===

== 20181105122803 AddMissingIndexesForForeignKeysEE: migrating ================
— transaction_open?()
-> 0.0000s
— index_name(:application_settings, {:column=>[“file_template_project_id”]})
-> 0.0000s
— index_exists?(:application_settings, :file_template_project_id, {:algorithm=>:concurrently, :name=>”index_application_settings_on_file_template_project_id”})
-> 0.0039s
— execute(“SET statement_timeout TO 0″)
-> 0.0004s
— add_index(:application_settings, :file_template_project_id, {:algorithm=>:concurrently, :name=>”index_application_settings_on_file_template_project_id”})
-> 0.0354s
— execute(“RESET ALL”)
-> 0.0006s
— transaction_open?()
-> 0.0000s
— index_name(:application_settings, {:column=>[“custom_project_templates_group_id”]})
-> 0.0000s
— index_exists?(:application_settings, :custom_project_templates_group_id, {:algorithm=>:concurrently, :name=>”index_application_settings_on_custom_project_templates_group_id”})
-> 0.0047s
— execute(“SET statement_timeout TO 0″)
-> 0.0003s
— add_index(:application_settings, :custom_project_templates_group_id, {:algorithm=>:concurrently, :name=>”index_application_settings_on_custom_project_templates_group_id”})
-> 0.0357s
— execute(“RESET ALL”)
-> 0.0005s
— transaction_open?()
-> 0.0000s
— index_name(:board_assignees, {:column=>[“assignee_id”]})
-> 0.0000s
— index_exists?(:board_assignees, :assignee_id, {:algorithm=>:concurrently, :name=>”index_board_assignees_on_assignee_id”})
-> 0.0032s
— execute(“SET statement_timeout TO 0″)
-> 0.0004s
— add_index(:board_assignees, :assignee_id, {:algorithm=>:concurrently, :name=>”index_board_assignees_on_assignee_id”})
-> 0.0363s
— execute(“RESET ALL”)
-> 0.0005s
— transaction_open?()
-> 0.0000s
— index_name(:board_labels, {:column=>[“label_id”]})
-> 0.0001s
— index_exists?(:board_labels, :label_id, {:algorithm=>:concurrently, :name=>”index_board_labels_on_label_id”})
-> 0.0032s
— execute(“SET statement_timeout TO 0″)
-> 0.0004s
— add_index(:board_labels, :label_id, {:algorithm=>:concurrently, :name=>”index_board_labels_on_label_id”})
-> 0.0371s
— execute(“RESET ALL”)
-> 0.0005s
— index_name(:ci_pipeline_chat_data, {:column=>[“chat_name_id”]})
-> 0.0000s
— index_exists?(:ci_pipeline_chat_data, :chat_name_id, {:name=>”index_ci_pipeline_chat_data_on_chat_name_id”})
-> 0.0032s
— transaction_open?()
-> 0.0000s
— index_name(:ci_pipeline_chat_data, {:column=>[“chat_name_id”]})
-> 0.0000s
— index_exists?(:ci_pipeline_chat_data, :chat_name_id, {:algorithm=>:concurrently, :name=>”index_ci_pipeline_chat_data_on_chat_name_id”})
-> 0.0029s
— execute(“SET statement_timeout TO 0″)
-> 0.0005s
— add_index(:ci_pipeline_chat_data, :chat_name_id, {:algorithm=>:concurrently, :name=>”index_ci_pipeline_chat_data_on_chat_name_id”})
-> 0.0338s
— execute(“RESET ALL”)
-> 0.0006s
— transaction_open?()
-> 0.0000s
— index_name(:geo_node_namespace_links, {:column=>[“namespace_id”]})
-> 0.0000s
— index_exists?(:geo_node_namespace_links, :namespace_id, {:algorithm=>:concurrently, :name=>”index_geo_node_namespace_links_on_namespace_id”})
-> 0.0043s
— execute(“SET statement_timeout TO 0″)
-> 0.0005s
— add_index(:geo_node_namespace_links, :namespace_id, {:algorithm=>:concurrently, :name=>”index_geo_node_namespace_links_on_namespace_id”})
-> 0.0359s
— execute(“RESET ALL”)
-> 0.0005s
— transaction_open?()
-> 0.0000s
— index_name(:namespaces, {:column=>[“file_template_project_id”]})
-> 0.0000s
— index_exists?(:namespaces, :file_template_project_id, {:algorithm=>:concurrently, :name=>”index_namespaces_on_file_template_project_id”})
-> 0.0150s
— execute(“SET statement_timeout TO 0″)
-> 0.0005s
— add_index(:namespaces, :file_template_project_id, {:algorithm=>:concurrently, :name=>”index_namespaces_on_file_template_project_id”})
-> 0.0419s
— execute(“RESET ALL”)
-> 0.0005s
— transaction_open?()
-> 0.0000s
— index_name(:protected_branch_merge_access_levels, {:column=>[“group_id”]})
-> 0.0001s
— index_exists?(:protected_branch_merge_access_levels, :group_id, {:algorithm=>:concurrently, :name=>”index_protected_branch_merge_access_levels_on_group_id”})
-> 0.0038s
— execute(“SET statement_timeout TO 0″)
-> 0.0005s
— add_index(:protected_branch_merge_access_levels, :group_id, {:algorithm=>:concurrently, :name=>”index_protected_branch_merge_access_levels_on_group_id”})
-> 0.0362s
— execute(“RESET ALL”)
-> 0.0004s
— transaction_open?()
-> 0.0001s
— index_name(:protected_branch_push_access_levels, {:column=>[“group_id”]})
-> 0.0000s
— index_exists?(:protected_branch_push_access_levels, :group_id, {:algorithm=>:concurrently, :name=>”index_protected_branch_push_access_levels_on_group_id”})
-> 0.0035s
— execute(“SET statement_timeout TO 0″)
-> 0.0005s
— add_index(:protected_branch_push_access_levels, :group_id, {:algorithm=>:concurrently, :name=>”index_protected_branch_push_access_levels_on_group_id”})
-> 0.0451s
— execute(“RESET ALL”)
-> 0.0005s
— transaction_open?()
-> 0.0000s
— index_name(:software_license_policies, {:column=>[“software_license_id”]})
-> 0.0000s
— index_exists?(:software_license_policies, :software_license_id, {:algorithm=>:concurrently, :name=>”index_software_license_policies_on_software_license_id”})
-> 0.0031s
— execute(“SET statement_timeout TO 0″)
-> 0.0005s
— add_index(:software_license_policies, :software_license_id, {:algorithm=>:concurrently, :name=>”index_software_license_policies_on_software_license_id”})
-> 0.0371s
— execute(“RESET ALL”)
-> 0.0004s
== 20181105122803 AddMissingIndexesForForeignKeysEE: migrated (0.4409s) =======
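
Every index added above follows the same sequence in the log: index_exists?, SET statement_timeout TO 0, add_index with :algorithm=>:concurrently, RESET ALL. In GitLab migrations that sequence is normally produced by the add_concurrent_index helper; roughly, a migration of this kind looks like the sketch below (table and index names taken from the log, class name and the rest illustrative):

class AddMissingIndexExample < ActiveRecord::Migration[5.0]
  include Gitlab::Database::MigrationHelpers

  # CREATE INDEX CONCURRENTLY cannot run inside a transaction block
  disable_ddl_transaction!

  def up
    add_concurrent_index :board_assignees, :assignee_id,
                         name: 'index_board_assignees_on_assignee_id'
  end

  def down
    remove_concurrent_index :board_assignees, :assignee_id,
                            name: 'index_board_assignees_on_assignee_id'
  end
end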

== 20181105201455 StealFillStoreUpload: migrating =============================
== 20181105201455 StealFillStoreUpload: migrated (0.0061s) ====================

== 20181106135939 AddIndexToDeployments: migrating ============================
— transaction_open?()
-> 0.0000s
— index_name(:deployments, {:column=>[“project_id”, “status”, “created_at”]})
-> 0.0000s
— index_exists?(:deployments, [:project_id, :status, :created_at], {:algorithm=>:concurrently, :name=>”index_deployments_on_project_id_and_status_and_created_at”})
-> 0.0086s
— execute(“SET statement_timeout TO 0″)
-> 0.0003s
— add_index(:deployments, [:project_id, :status, :created_at], {:algorithm=>:concurrently, :name=>”index_deployments_on_project_id_and_status_and_created_at”})
-> 0.0568s
— execute(“RESET ALL”)
-> 0.0005s
== 20181106135939 AddIndexToDeployments: migrated (0.0668s) ===================

== 20181107054254 RemoveRestrictedTodosAgain: migrating =======================
== 20181107054254 RemoveRestrictedTodosAgain: migrated (0.0587s) ==============

== 20181108091549 CleanupEnvironmentsExternalUrl: migrating ===================
— transaction_open?()
-> 0.0000s
— exec_query(“SELECT COUNT(*) AS count FROM \”environments\” WHERE \”environments\”.\”external_url\” ILIKE ‘javascript://%'”)
-> 0.0062s
== 20181108091549 CleanupEnvironmentsExternalUrl: migrated (0.0076s) ==========

== 20181112103239 DropDefaultValueOnStatusDeployments: migrating ==============
— change_column_default(:deployments, :status, nil)
-> 0.0017s
== 20181112103239 DropDefaultValueOnStatusDeployments: migrated (0.0018s) =====

== 20181114163403 AddEpicsSortToUserPreference: migrating =====================
— add_column(:user_preferences, :epics_sort, :string)
-> 0.0238s
== 20181114163403 AddEpicsSortToUserPreference: migrated (0.0238s) ============

== 20181115140140 AddEncryptedRunnersTokenToSettings: migrating ===============
— add_column(:application_settings, :runners_registration_token_encrypted, :string)
-> 0.0005s
== 20181115140140 AddEncryptedRunnersTokenToSettings: migrated (0.0005s) ======

== 20181115140251 EnqueuePrometheusUpdates: migrating =========================
== 20181115140251 EnqueuePrometheusUpdates: migrated (0.0025s) ================

== 20181116050532 KnativeExternalIp: migrating ================================
— add_column(:clusters_applications_knative, :external_ip, :string)
-> 0.0004s
== 20181116050532 KnativeExternalIp: migrated (0.0004s) =======================

== 20181116100917 SanitizeTracingExternalUrl: migrating =======================
== 20181116100917 SanitizeTracingExternalUrl: migrated (0.0033s) ==============

== 20181116141415 AddEncryptedRunnersTokenToNamespaces: migrating =============
— add_column(:namespaces, :runners_token_encrypted, :string)
-> 0.0007s
== 20181116141415 AddEncryptedRunnersTokenToNamespaces: migrated (0.0007s) ====

== 20181116141504 AddEncryptedRunnersTokenToProjects: migrating ===============
— add_column(:projects, :runners_token_encrypted, :string)
-> 0.0006s
== 20181116141504 AddEncryptedRunnersTokenToProjects: migrated (0.0007s) ======

== 20181119081539 AddMergeRequestIdToCiPipelines: migrating ===================
— add_column(:ci_pipelines, :merge_request_id, :integer)
-> 0.0011s
== 20181119081539 AddMergeRequestIdToCiPipelines: migrated (0.0012s) ==========

== 20181119132520 AddIndexesToCiBuildsAndPipelines: migrating =================
— transaction_open?()
-> 0.0000s
— index_exists?(:ci_pipelines, [:project_id, :ref, :id], {:order=>{:id=>:desc}, :name=>”index_ci_pipelines_on_project_idandrefandiddesc”, :algorithm=>:concurrently})
-> 0.0103s
— execute(“SET statement_timeout TO 0″)
-> 0.0005s
— add_index(:ci_pipelines, [:project_id, :ref, :id], {:order=>{:id=>:desc}, :name=>”index_ci_pipelines_on_project_idandrefandiddesc”, :algorithm=>:concurrently})
-> 0.0382s
— execute(“RESET ALL”)
-> 0.0004s
— transaction_open?()
-> 0.0000s
— index_exists?(:ci_builds, [:commit_id, :artifacts_expire_at, :id], {:where=>”type::text = ‘Ci::Build’::text AND (retried = false OR retried IS NULL) AND (name::text = ANY (ARRAY[‘sast’::character varying, ‘dependency_scanning’::character varying, ‘sast:container’::character varying, ‘container_scanning’::character varying, ‘dast’::character varying]::text[]))”, :name=>”index_ci_builds_on_commit_id_and_artifacts_expireatandidpartial”, :algorithm=>:concurrently})
-> 0.0089s
— execute(“SET statement_timeout TO 0″)
-> 0.0001s
— add_index(:ci_builds, [:commit_id, :artifacts_expire_at, :id], {:where=>”type::text = ‘Ci::Build’::text AND (retried = false OR retried IS NULL) AND (name::text = ANY (ARRAY[‘sast’::character varying, ‘dependency_scanning’::character varying, ‘sast:container’::character varying, ‘container_scanning’::character varying, ‘dast’::character varying]::text[]))”, :name=>”index_ci_builds_on_commit_id_and_artifacts_expireatandidpartial”, :algorithm=>:concurrently})
-> 0.0315s
— execute(“RESET ALL”)
-> 0.0002s
== 20181119132520 AddIndexesToCiBuildsAndPipelines: migrated (0.0909s) ========

== 20181120082911 RenameRepositoriesPoolRepositories: migrating ===============
— rename_table(:repositories, :pool_repositories)
-> 0.0046s
== 20181120082911 RenameRepositoriesPoolRepositories: migrated (0.0046s) ======

== 20181120091639 AddForeignKeyToCiPipelinesMergeRequests: migrating ==========
— transaction_open?()
-> 0.0000s
— index_name(:ci_pipelines, {:column=>[“merge_request_id”]})
-> 0.0000s
— index_exists?(:ci_pipelines, :merge_request_id, {:where=>”merge_request_id IS NOT NULL”, :algorithm=>:concurrently, :name=>”index_ci_pipelines_on_merge_request_id”})
-> 0.0120s
— execute(“SET statement_timeout TO 0″)
-> 0.0003s
— add_index(:ci_pipelines, :merge_request_id, {:where=>”merge_request_id IS NOT NULL”, :algorithm=>:concurrently, :name=>”index_ci_pipelines_on_merge_request_id”})
-> 0.0364s
— execute(“RESET ALL”)
-> 0.0004s
— transaction_open?()
-> 0.0000s
— foreign_keys(:ci_pipelines)
-> 0.0067s
— execute(“ALTER TABLE ci_pipelines\nADD CONSTRAINT fk_a23be95014\nFOREIGN KEY (merge_request_id)\nREFERENCES merge_requests (id)\nON DELETE CASCADE\nNOT VALID;\n”)
-> 0.0087s
— execute(“SET statement_timeout TO 0”)
-> 0.0005s
— execute(“ALTER TABLE ci_pipelines VALIDATE CONSTRAINT fk_a23be95014;”)
-> 0.0079s
— execute(“RESET ALL”)
-> 0.0005s
== 20181120091639 AddForeignKeyToCiPipelinesMergeRequests: migrated (0.0747s) =
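
The foreign key above is added in two phases on PostgreSQL: ALTER TABLE ... ADD CONSTRAINT ... NOT VALID first (cheap, no full-table scan while the lock is held), then a separate VALIDATE CONSTRAINT. GitLab wraps the pair as add_concurrent_foreign_key; a rough sketch of the equivalent migration (names from the log, details illustrative):

class AddForeignKeyToCiPipelinesMergeRequestsExample < ActiveRecord::Migration[5.0]
  include Gitlab::Database::MigrationHelpers

  disable_ddl_transaction!

  def up
    # Emits the NOT VALID constraint and then VALIDATE CONSTRAINT,
    # matching the two execute() calls in the log above
    add_concurrent_foreign_key :ci_pipelines, :merge_requests,
                               column: :merge_request_id,
                               on_delete: :cascade
  end

  def down
    remove_foreign_key :ci_pipelines, column: :merge_request_id
  end
end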

== 20181120151656 AddTokenEncryptedToCiRunners: migrating =====================
— add_column(:ci_runners, :token_encrypted, :string)
-> 0.0010s
== 20181120151656 AddTokenEncryptedToCiRunners: migrated (0.0011s) ============

== 20181121101842 AddCiBuildsPartialIndexOnProjectIdAndStatus: migrating ======
— transaction_open?()
-> 0.0000s
— index_exists?(:ci_builds, [:project_id, :status], {:name=>”index_ci_builds_project_id_and_status_for_live_jobs_partial2″, :where=>”(((type)::text = ‘Ci::Build’::text) AND ((status)::text = ANY (ARRAY[(‘running’::character varying)::text, (‘pending’::character varying)::text, (‘created’::character varying)::text])))”, :algorithm=>:concurrently})
-> 0.0189s
— execute(“SET statement_timeout TO 0″)
-> 0.0003s
— add_index(:ci_builds, [:project_id, :status], {:name=>”index_ci_builds_project_id_and_status_for_live_jobs_partial2″, :where=>”(((type)::text = ‘Ci::Build’::text) AND ((status)::text = ANY (ARRAY[(‘running’::character varying)::text, (‘pending’::character varying)::text, (‘created’::character varying)::text])))”, :algorithm=>:concurrently})
-> 0.0385s
— execute(“RESET ALL”)
-> 0.0004s
== 20181121101842 AddCiBuildsPartialIndexOnProjectIdAndStatus: migrated (0.0587s)

== 20181121101843 RemoveRedundantCiBuildsPartialIndex: migrating ==============
— transaction_open?()
-> 0.0000s
— select_one(“SELECT current_setting(‘server_version_num’) AS v”)
-> 0.0007s
— index_exists?(:ci_builds, [:project_id, :status], {:name=>”index_ci_builds_project_id_and_status_for_live_jobs_partial”, :where=>”((status)::text = ANY (ARRAY[(‘running’::character varying)::text, (‘pending’::character varying)::text, (‘created’::character varying)::text]))”, :algorithm=>:concurrently})
-> 0.0211s
== 20181121101843 RemoveRedundantCiBuildsPartialIndex: migrated (0.0222s) =====

== 20181121111200 ScheduleRunnersTokenEncryption: migrating ===================
== 20181121111200 ScheduleRunnersTokenEncryption: migrated (0.1134s) ==========

== 20181121174028 AddLastVerificationColumnsToProjectRepositoryStates: migrating
— add_column(:project_repository_states, :last_repository_verification_ran_at, :datetime_with_timezone)
-> 0.0003s
— add_column(:project_repository_states, :last_wiki_verification_ran_at, :datetime_with_timezone)
-> 0.0002s
== 20181121174028 AddLastVerificationColumnsToProjectRepositoryStates: migrated (0.0006s)

== 20181121175359 AddIndexToLastVerificationColumnsOnProjectRepositoryStates: migrating
— transaction_open?()
-> 0.0000s
— index_exists?(:project_repository_states, [:project_id, :last_repository_verification_ran_at], {:name=>”idx_repository_states_on_last_repository_verification_ran_at”, :where=>”repository_verification_checksum IS NOT NULL AND last_repository_verification_failure IS NULL”, :algorithm=>:concurrently})
-> 0.0059s
— execute(“SET statement_timeout TO 0″)
-> 0.0005s
— add_index(:project_repository_states, [:project_id, :last_repository_verification_ran_at], {:name=>”idx_repository_states_on_last_repository_verification_ran_at”, :where=>”repository_verification_checksum IS NOT NULL AND last_repository_verification_failure IS NULL”, :algorithm=>:concurrently})
-> 0.0423s
— execute(“RESET ALL”)
-> 0.0005s
— transaction_open?()
-> 0.0000s
— index_exists?(:project_repository_states, [:project_id, :last_wiki_verification_ran_at], {:name=>”idx_repository_states_on_last_wiki_verification_ran_at”, :where=>”wiki_verification_checksum IS NOT NULL AND last_wiki_verification_failure IS NULL”, :algorithm=>:concurrently})
-> 0.0064s
— execute(“SET statement_timeout TO 0″)
-> 0.0005s
— add_index(:project_repository_states, [:project_id, :last_wiki_verification_ran_at], {:name=>”idx_repository_states_on_last_wiki_verification_ran_at”, :where=>”wiki_verification_checksum IS NOT NULL AND last_wiki_verification_failure IS NULL”, :algorithm=>:concurrently})
-> 0.0423s
— execute(“RESET ALL”)
-> 0.0005s
== 20181121175359 AddIndexToLastVerificationColumnsOnProjectRepositoryStates: migrated (0.0999s)

== 20181122160027 CreateProjectRepositories: migrating ========================
— create_table(:project_repositories, {:id=>:bigserial})
-> 0.1154s
== 20181122160027 CreateProjectRepositories: migrated (0.1155s) ===============

== 20181123042307 DropSiteStatistics: migrating ===============================
— drop_table(:site_statistics)
-> 0.0016s
== 20181123042307 DropSiteStatistics: migrated (0.0017s) ======================

== 20181123090058 AddParentToEpic: migrating ==================================
— add_column(:epics, :parent_id, :integer)
-> 0.0009s
== 20181123090058 AddParentToEpic: migrated (0.0011s) =========================

== 20181123100058 AddParentEpicFk: migrating ==================================
— transaction_open?()
-> 0.0000s
— foreign_keys(:epics)
-> 0.0069s
— execute(“ALTER TABLE epics\nADD CONSTRAINT fk_25b99c1be3\nFOREIGN KEY (parent_id)\nREFERENCES epics (id)\nON DELETE CASCADE\nNOT VALID;\n”)
-> 0.0087s
— execute(“SET statement_timeout TO 0”)
-> 0.0005s
— execute(“ALTER TABLE epics VALIDATE CONSTRAINT fk_25b99c1be3;”)
-> 0.0247s
— execute(“RESET ALL”)
-> 0.0006s
— transaction_open?()
-> 0.0000s
— index_exists?(:epics, :parent_id, {:algorithm=>:concurrently})
-> 0.0093s
— execute(“SET statement_timeout TO 0”)
-> 0.0004s
— add_index(:epics, :parent_id, {:algorithm=>:concurrently})
-> 0.0400s
— execute(“RESET ALL”)
-> 0.0004s
== 20181123100058 AddParentEpicFk: migrated (0.0926s) =========================

== 20181123135036 DropNotNullConstraintPoolRepositoryDiskPath: migrating ======
— change_column_null(:pool_repositories, :disk_path, true)
-> 0.0008s
== 20181123135036 DropNotNullConstraintPoolRepositoryDiskPath: migrated (0.0009s)

== 20181123144235 CreateSuggestions: migrating ================================
— create_table(:suggestions, {:id=>:bigserial})
-> 0.0986s
== 20181123144235 CreateSuggestions: migrated (0.0987s) =======================

== 20181126125616 RemoveProjectsIndexOnMirrorAndMirrorAt: migrating ===========
— transaction_open?()
-> 0.0000s
— select_one(“SELECT current_setting(‘server_version_num’) AS v”)
-> 0.0003s
— index_exists?(:projects, [:mirror, :mirror_last_update_at], {:name=>”index_projects_on_mirror_and_mirror_last_update_at”, :where=>”mirror”, :algorithm=>:concurrently})
-> 0.0097s
== 20181126125616 RemoveProjectsIndexOnMirrorAndMirrorAt: migrated (0.0102s) ==

== 20181126150622 AddEventsIndexOnProjectIdAndCreatedAt: migrating ============
— transaction_open?()
-> 0.0000s
— index_exists?(:events, [:project_id, :created_at], {:name=>”index_events_on_project_id_and_created_at”, :algorithm=>:concurrently})
-> 0.0023s
— execute(“SET statement_timeout TO 0″)
-> 0.0001s
— add_index(:events, [:project_id, :created_at], {:name=>”index_events_on_project_id_and_created_at”, :algorithm=>:concurrently})
-> 0.0237s
— execute(“RESET ALL”)
-> 0.0004s
== 20181126150622 AddEventsIndexOnProjectIdAndCreatedAt: migrated (0.0268s) ===

== 20181126153547 RemoveNotesIndexOnUpdatedAt: migrating ======================
— transaction_open?()
-> 0.0000s
— select_one(“SELECT current_setting(‘server_version_num’) AS v”)
-> 0.0007s
— index_exists?(:notes, [:updated_at], {:name=>”index_notes_on_updated_at”, :algorithm=>:concurrently})
-> 0.0110s
— execute(“SET statement_timeout TO 0″)
-> 0.0005s
— remove_index(:notes, {:name=>”index_notes_on_updated_at”, :algorithm=>:concurrently, :column=>[:updated_at]})
-> 0.0240s
— execute(“RESET ALL”)
-> 0.0004s
== 20181126153547 RemoveNotesIndexOnUpdatedAt: migrated (0.0371s) =============

== 20181127130125 CreateReviews: migrating ====================================
— create_table(:reviews, {:id=>:bigserial})
-> 0.0996s
— add_foreign_key(:reviews, :users, {:column=>:author_id, :on_delete=>:nullify})
-> 0.0030s
== 20181127130125 CreateReviews: migrated (0.1027s) ===========================

== 20181127133629 AddReviewIdToNotes: migrating ===============================
— add_column(:notes, :review_id, :bigint)
-> 0.0011s
== 20181127133629 AddReviewIdToNotes: migrated (0.0012s) ======================

== 20181127203117 AddMinimumReverificationIntervalToGeoNodes: migrating =======
— transaction_open?()
-> 0.0000s
— execute(“SET statement_timeout TO 0”)
-> 0.0004s
— transaction()
— add_column(:geo_nodes, :minimum_reverification_interval, :integer, {:default=>nil})
-> 0.0010s
— change_column_default(:geo_nodes, :minimum_reverification_interval, 7)
-> 0.0044s
-> 0.0153s
— transaction_open?()
-> 0.0000s
— exec_query(“SELECT COUNT(*) AS count FROM \”geo_nodes\””)
-> 0.0016s
— change_column_null(:geo_nodes, :minimum_reverification_interval, false)
-> 0.0061s
— execute(“RESET ALL”)
-> 0.0004s
== 20181127203117 AddMinimumReverificationIntervalToGeoNodes: migrated (0.0247s)
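
Adding a NOT NULL column with a default to a populated table is also split up: the column is added with a nil default, the default is changed, existing rows are backfilled, and only then is the NOT NULL constraint applied. That is the sequence GitLab's add_column_with_default helper produces; a minimal sketch under that assumption:

class AddMinimumReverificationIntervalExample < ActiveRecord::Migration[5.0]
  include Gitlab::Database::MigrationHelpers

  disable_ddl_transaction!

  def up
    # add_column + change_column_default + backfill + change_column_null,
    # matching the steps printed above
    add_column_with_default :geo_nodes, :minimum_reverification_interval,
                            :integer, default: 7
  end

  def down
    remove_column :geo_nodes, :minimum_reverification_interval
  end
end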

== 20181128123704 AddStateToPoolRepository: migrating =========================
— add_column(:pool_repositories, :state, :string, {:null=>true})
-> 0.0011s
— add_column(:pool_repositories, :source_project_id, :integer)
-> 0.0007s
— add_index(:pool_repositories, :source_project_id, {:unique=>true})
-> 0.0224s
— add_foreign_key(:pool_repositories, :projects, {:column=>:source_project_id, :on_delete=>:nullify})
-> 0.0042s
== 20181128123704 AddStateToPoolRepository: migrated (0.0288s) ================

== 20181129104854 AddTokenEncryptedToCiBuilds: migrating ======================
— add_column(:ci_builds, :token_encrypted, :string)
-> 0.0011s
== 20181129104854 AddTokenEncryptedToCiBuilds: migrated (0.0012s) =============

== 20181129104944 AddIndexToCiBuildsTokenEncrypted: migrating =================
— transaction_open?()
-> 0.0000s
— index_exists?(:ci_builds, :token_encrypted, {:unique=>true, :where=>”token_encrypted IS NOT NULL”, :algorithm=>:concurrently})
-> 0.0220s
— execute(“SET statement_timeout TO 0″)
-> 0.0004s
— add_index(:ci_builds, :token_encrypted, {:unique=>true, :where=>”token_encrypted IS NOT NULL”, :algorithm=>:concurrently})
-> 0.0277s
— execute(“RESET ALL”)
-> 0.0005s
== 20181129104944 AddIndexToCiBuildsTokenEncrypted: migrated (0.0511s) ========

== 20181130102132 BackfillHashedProjectRepositories: migrating ================
== 20181130102132 BackfillHashedProjectRepositories: migrated (0.0576s) =======

== 20181201151856 AddEpicsStateToUserPreferences: migrating ===================
— add_column(:user_preferences, :roadmap_epics_state, :integer)
-> 0.0003s
== 20181201151856 AddEpicsStateToUserPreferences: migrated (0.0004s) ==========

== 20181203002526 AddProjectBfgObjectMapColumn: migrating =====================
— add_column(:projects, :bfg_object_map, :string)
-> 0.0004s
== 20181203002526 AddProjectBfgObjectMapColumn: migrated (0.0005s) ============

== 20181203154104 RemoveRedundantIndicesForProjectMirrorDataAndPushRules: migrating
— transaction_open?()
-> 0.0000s
— select_one(“SELECT current_setting(‘server_version_num’) AS v”)
-> 0.0003s
— index_exists?(:project_mirror_data, [:next_execution_timestamp], {:name=>”index_project_mirror_data_on_next_execution_timestamp”, :algorithm=>:concurrently})
-> 0.0021s
— transaction_open?()
-> 0.0000s
— select_one(“SELECT current_setting(‘server_version_num’) AS v”)
-> 0.0002s
— index_exists?(:push_rules, [:is_sample], {:name=>”index_push_rules_is_sample”, :algorithm=>:concurrently})
-> 0.0013s
== 20181203154104 RemoveRedundantIndicesForProjectMirrorDataAndPushRules: migrated (0.0042s)

== 20181204031328 CreateApprovalRules: migrating ==============================
— create_table(:approval_project_rules, {:id=>:bigserial})
-> 0.0839s
— create_table(:approval_merge_request_rules, {:id=>:bigserial})
-> 0.0745s
== 20181204031328 CreateApprovalRules: migrated (0.1586s) =====================

== 20181204031329 CreateApprovalRulesApprovals: migrating =====================
— create_table(:approval_merge_request_rules_approved_approvers, {:id=>:bigserial})
-> 0.0747s
== 20181204031329 CreateApprovalRulesApprovals: migrated (0.0748s) ============

== 20181204031330 CreateApprovalRuleMembers: migrating ========================
— create_table(“approval_merge_request_rules_users”, {:id=>:bigserial})
-> 0.0869s
— create_table(“approval_merge_request_rules_groups”, {:id=>:bigserial})
-> 0.0504s
— create_table(“approval_project_rules_users”, {:id=>:bigserial})
-> 0.0499s
— create_table(“approval_project_rules_groups”, {:id=>:bigserial})
-> 0.0498s
== 20181204031330 CreateApprovalRuleMembers: migrated (0.2374s) ===============

== 20181204031331 CreateApprovalMergeRequestRulesApprovalProjectRules: migrating
— create_table(:approval_merge_request_rule_sources, {:id=>:bigserial})
-> 0.0505s
== 20181204031331 CreateApprovalMergeRequestRulesApprovalProjectRules: migrated (0.0507s)

== 20181204040404 MigrateProjectApprovers: migrating ==========================
== 20181204040404 MigrateProjectApprovers: migrated (0.0191s) =================

== 20181204135519 AddCustomProjectTemplatesGroupIdToNamespaces: migrating =====
— add_column(:namespaces, :custom_project_templates_group_id, :integer)
-> 0.0013s
== 20181204135519 AddCustomProjectTemplatesGroupIdToNamespaces: migrated (0.0014s)

== 20181204135932 AddIndexAndForeignKeyForCustomProjectTemplatesGroupIdOnNamespaces: migrating
— transaction_open?()
-> 0.0000s
— index_exists?(:namespaces, [:custom_project_templates_group_id, :type], {:where=>”custom_project_templates_group_id IS NOT NULL”, :algorithm=>:concurrently})
-> 0.0153s
— execute(“SET statement_timeout TO 0″)
-> 0.0003s
— add_index(:namespaces, [:custom_project_templates_group_id, :type], {:where=>”custom_project_templates_group_id IS NOT NULL”, :algorithm=>:concurrently})
-> 0.0178s
— execute(“RESET ALL”)
-> 0.0005s
— transaction_open?()
-> 0.0000s
— foreign_keys(:namespaces)
-> 0.0060s
— execute(“ALTER TABLE namespaces\nADD CONSTRAINT fk_e7a0b20a6b\nFOREIGN KEY (custom_project_templates_group_id)\nREFERENCES namespaces (id)\nON DELETE SET NULL\nNOT VALID;\n”)
-> 0.0161s
— execute(“SET statement_timeout TO 0”)
-> 0.0002s
— execute(“ALTER TABLE namespaces VALIDATE CONSTRAINT fk_e7a0b20a6b;”)
-> 0.0082s
— execute(“RESET ALL”)
-> 0.0003s
== 20181204135932 AddIndexAndForeignKeyForCustomProjectTemplatesGroupIdOnNamespaces: migrated (0.0657s)

== 20181204154019 PopulateMrMetricsWithEventsData: migrating ==================
— Scheduling `PopulateMergeRequestMetricsWithEventsData` jobs
== 20181204154019 PopulateMrMetricsWithEventsData: migrated (0.0017s) =========

== 20181205093951 AddReviewForeignKeyToNotes: migrating =======================
— transaction_open?()
-> 0.0000s
— foreign_keys(:notes)
-> 0.0023s
— execute(“ALTER TABLE notes\nADD CONSTRAINT fk_2e82291620\nFOREIGN KEY (review_id)\nREFERENCES reviews (id)\nON DELETE SET NULL\nNOT VALID;\n”)
-> 0.0054s
— execute(“SET statement_timeout TO 0”)
-> 0.0003s
— execute(“ALTER TABLE notes VALIDATE CONSTRAINT fk_2e82291620;”)
-> 0.0082s
— execute(“RESET ALL”)
-> 0.0003s
— transaction_open?()
-> 0.0000s
— index_exists?(:notes, :review_id, {:algorithm=>:concurrently})
-> 0.0052s
— execute(“SET statement_timeout TO 0”)
-> 0.0003s
— add_index(:notes, :review_id, {:algorithm=>:concurrently})
-> 0.0212s
— execute(“RESET ALL”)
-> 0.0004s
== 20181205093951 AddReviewForeignKeyToNotes: migrated (0.0442s) ==============

== 20181205171941 CreateProjectDailyStatistics: migrating =====================
— create_table(:project_daily_statistics, {:id=>:bigserial})
-> 0.0333s
== 20181205171941 CreateProjectDailyStatistics: migrated (0.0334s) ============

== 20181206121338 AddHostedPlanIdFkToGitlabSubscriptions: migrating ===========
— transaction_open?()
-> 0.0000s
— foreign_keys(:gitlab_subscriptions)
-> 0.0070s
— execute(“ALTER TABLE gitlab_subscriptions\nADD CONSTRAINT fk_bd0c4019c3\nFOREIGN KEY (hosted_plan_id)\nREFERENCES plans (id)\nON DELETE CASCADE\nNOT VALID;\n”)
-> 0.0081s
— execute(“SET statement_timeout TO 0”)
-> 0.0005s
— execute(“ALTER TABLE gitlab_subscriptions VALIDATE CONSTRAINT fk_bd0c4019c3;”)
-> 0.0078s
— execute(“RESET ALL”)
-> 0.0005s
== 20181206121338 AddHostedPlanIdFkToGitlabSubscriptions: migrated (0.0246s) ==

== 20181206121340 GenerateGitlabComSubscriptionsFromPlanId: migrating =========
== 20181206121340 GenerateGitlabComSubscriptionsFromPlanId: migrated (0.0001s)

== 20181211092510 AddNameAuthorIdAndShaToReleases: migrating ==================
— add_column(:releases, :author_id, :integer)
-> 0.0011s
— add_column(:releases, :name, :string)
-> 0.0007s
— add_column(:releases, :sha, :string)
-> 0.0008s
== 20181211092510 AddNameAuthorIdAndShaToReleases: migrated (0.0029s) =========

== 20181211092514 AddAuthorIdIndexAndFkToReleases: migrating ==================
— transaction_open?()
-> 0.0000s
— index_exists?(:releases, :author_id, {:algorithm=>:concurrently})
-> 0.0040s
— execute(“SET statement_timeout TO 0”)
-> 0.0003s
— add_index(:releases, :author_id, {:algorithm=>:concurrently})
-> 0.0301s
— execute(“RESET ALL”)
-> 0.0005s
— transaction_open?()
-> 0.0000s
— foreign_keys(:releases)
-> 0.0076s
— execute(“ALTER TABLE releases\nADD CONSTRAINT fk_8e4456f90f\nFOREIGN KEY (author_id)\nREFERENCES users (id)\nON DELETE SET NULL\nNOT VALID;\n”)
-> 0.0142s
— execute(“SET statement_timeout TO 0”)
-> 0.0004s
— execute(“ALTER TABLE releases VALIDATE CONSTRAINT fk_8e4456f90f;”)
-> 0.0244s
— execute(“RESET ALL”)
-> 0.0002s
== 20181211092514 AddAuthorIdIndexAndFkToReleases: migrated (0.0830s) =========

== 20181212104941 BackfillReleasesNameWithTagName: migrating ==================
— transaction_open?()
-> 0.0000s
— exec_query(“SELECT COUNT(*) AS count FROM \”releases\””)
-> 0.0007s
== 20181212104941 BackfillReleasesNameWithTagName: migrated (0.0075s) =========

== 20181212171634 CreateErrorTrackingSettings: migrating ======================
— create_table(:project_error_tracking_settings, {:id=>:int, :primary_key=>:project_id, :default=>nil})
-> 0.0360s
== 20181212171634 CreateErrorTrackingSettings: migrated (0.0362s) =============

== 20181215161939 AddOnDeleteCascadeToNamespaceIdFkOnGitlabSubscriptions: migrating
— remove_foreign_key(:gitlab_subscriptions, {:column=>:namespace_id})
-> 0.0153s
— transaction_open?()
-> 0.0000s
— foreign_keys(:gitlab_subscriptions)
-> 0.0075s
— execute(“ALTER TABLE gitlab_subscriptions\nADD CONSTRAINT fk_e2595d00a1\nFOREIGN KEY (namespace_id)\nREFERENCES namespaces (id)\nON DELETE CASCADE\nNOT VALID;\n”)
-> 0.0085s
— execute(“SET statement_timeout TO 0”)
-> 0.0006s
— execute(“ALTER TABLE gitlab_subscriptions VALIDATE CONSTRAINT fk_e2595d00a1;”)
-> 0.0074s
— execute(“RESET ALL”)
-> 0.0005s
== 20181215161939 AddOnDeleteCascadeToNamespaceIdFkOnGitlabSubscriptions: migrated (0.0409s)

== 20181219130552 UpdateProjectImportVisibilityLevel: migrating ===============
— Updating project visibility to 0 on gitlab_project imports.
-> 0.0152s
— Updating project visibility to 10 on gitlab_project imports.
-> 0.0040s
== 20181219130552 UpdateProjectImportVisibilityLevel: migrated (0.0194s) ======

== 20181219145520 MigrateClusterConfigureWorkerSidekiqQueue: migrating ========
== 20181219145520 MigrateClusterConfigureWorkerSidekiqQueue: migrated (0.0005s)

== 20181219145521 AddOptionsToBuildMetadata: migrating ========================
— add_column(:ci_builds_metadata, :config_options, :jsonb)
-> 0.0176s
— add_column(:ci_builds_metadata, :config_variables, :jsonb)
-> 0.0010s
== 20181219145521 AddOptionsToBuildMetadata: migrated (0.0189s) ===============

== 20181220163029 AddPackageTypeToPackages: migrating =========================
— add_column(:packages_packages, :package_type, :integer, {:limit=>2})
-> 0.0009s
== 20181220163029 AddPackageTypeToPackages: migrated (0.0011s) ================

== 20181220165848 UpdatePackageType: migrating ================================
— transaction_open?()
-> 0.0000s
— exec_query(“SELECT COUNT(*) AS count FROM \”packages_packages\” WHERE \”packages_packages\”.\”package_type\” IS NULL”)
-> 0.0012s
— change_column_null(:packages_packages, :package_type, false)
-> 0.0061s
== 20181220165848 UpdatePackageType: migrated (0.0079s) =======================

== 20181221135205 CreateProjectFeatureUsage: migrating ========================
— create_table(:project_feature_usages, {:id=>false, :primary_key=>:project_id})
-> 0.0674s
== 20181221135205 CreateProjectFeatureUsage: migrated (0.0675s) ===============

== 20181228140935 AddEpicNotesFilterToUserPreference: migrating ===============
— transaction_open?()
-> 0.0000s
— execute(“SET statement_timeout TO 0”)
-> 0.0006s
— transaction()
— add_column(:user_preferences, :epic_notes_filter, :integer, {:default=>nil, :limit=>2})
-> 0.0011s
— change_column_default(:user_preferences, :epic_notes_filter, 0)
-> 0.0043s
-> 0.0238s
— transaction_open?()
-> 0.0000s
— exec_query(“SELECT COUNT(*) AS count FROM \”user_preferences\””)
-> 0.0016s
— change_column_null(:user_preferences, :epic_notes_filter, false)
-> 0.0225s
— execute(“RESET ALL”)
-> 0.0005s
== 20181228140935 AddEpicNotesFilterToUserPreference: migrated (0.0499s) ======

== 20181228175414 CreateReleasesLinkTable: migrating ==========================
— create_table(:release_links, {:id=>:bigserial})
-> 0.0677s
== 20181228175414 CreateReleasesLinkTable: migrated (0.0678s) =================

== 20190102152410 DeleteInconsistentInternalIdRecords2: migrating =============
— execute(“SET statement_timeout TO 0”)
-> 0.0008s
— execute(“RESET ALL”)
-> 0.0004s
== 20190102152410 DeleteInconsistentInternalIdRecords2: migrated (0.0122s) ====

== 20190103140724 MakeLegacyFalseDefault: migrating ===========================
— change_column_default(:cluster_providers_gcp, :legacy_abac, {:from=>true, :to=>false})
-> 0.0041s
== 20190103140724 MakeLegacyFalseDefault: migrated (0.0042s) ==================

== 20190104182041 CleanupLegacyArtifactMigration: migrating ===================
== 20190104182041 CleanupLegacyArtifactMigration: migrated (0.0061s) ==========

== 20190107151020 AddServicesTypeIndex: migrating =============================
— index_exists?(:services, :type)
-> 0.0027s
— transaction_open?()
-> 0.0000s
— index_exists?(:services, :type, {:algorithm=>:concurrently})
-> 0.0026s
— execute(“SET statement_timeout TO 0”)
-> 0.0003s
— add_index(:services, :type, {:algorithm=>:concurrently})
-> 0.0301s
— execute(“RESET ALL”)
-> 0.0006s
== 20190107151020 AddServicesTypeIndex: migrated (0.0367s) ====================

== 20190108192941 RemovePartialIndexFromCiBuildsArtifactsFile: migrating ======
— transaction_open?()
-> 0.0000s
— select_one(“SELECT current_setting(‘server_version_num’) AS v”)
-> 0.0006s
— indexes(:ci_builds)
-> 0.0207s
— execute(“SET statement_timeout TO 0″)
-> 0.0004s
— remove_index(:ci_builds, {:algorithm=>:concurrently, :name=>”partial_index_ci_builds_on_id_with_legacy_artifacts”})
-> 0.0149s
— execute(“RESET ALL”)
-> 0.0005s
== 20190108192941 RemovePartialIndexFromCiBuildsArtifactsFile: migrated (0.0377s)

== 20190109153125 AddMergeRequestExternalDiffs: migrating =====================
— add_column(:merge_request_diffs, :external_diff, :string)
-> 0.0011s
— add_column(:merge_request_diffs, :external_diff_store, :integer)
-> 0.0007s
— add_column(:merge_request_diffs, :stored_externally, :boolean)
-> 0.0007s
— add_column(:merge_request_diff_files, :external_diff_offset, :integer)
-> 0.0006s
— add_column(:merge_request_diff_files, :external_diff_size, :integer)
-> 0.0007s
— change_column_null(:merge_request_diff_files, :diff, true)
-> 0.0007s
== 20190109153125 AddMergeRequestExternalDiffs: migrated (0.0049s) ============

== 20190110200434 CreateFeatureFlagScopes: migrating ==========================
— create_table(:operations_feature_flag_scopes, {:id=>:bigserial})
-> 0.0586s
== 20190110200434 CreateFeatureFlagScopes: migrated (0.0587s) =================

== 20190111183834 CreateDefaultScopeToFeatureFlags: migrating =================
— execute(“INSERT INTO operations_feature_flag_scopes (feature_flag_id, environment_scope, active, created_at, updated_at)\nSELECT id, ‘*’, active, created_at, updated_at\nFROM operations_feature_flags\nWHERE NOT EXISTS (\n SELECT 1\n FROM operations_feature_flag_scopes\n WHERE operations_feature_flags.id = operations_feature_flag_scopes.feature_flag_id AND\n environment_scope = ‘*’\n);\n”)
-> 0.0007s
== 20190111183834 CreateDefaultScopeToFeatureFlags: migrated (0.0007s) ========

== 20190111231855 FixImportDataAuthMethodForMirrors: migrating ================
== 20190111231855 FixImportDataAuthMethodForMirrors: migrated (0.0028s) =======

== 20190114040404 CorrectApprovalsRequired: migrating =========================
== 20190114040404 CorrectApprovalsRequired: migrated (0.0481s) ================

== 20190114040405 ConsumeRemainingMigrateApproverToApprovalRulesInBatchJobs: migrating
== 20190114040405 ConsumeRemainingMigrateApproverToApprovalRulesInBatchJobs: migrated (0.0054s)

== 20190114172110 AddDomainToCluster: migrating ===============================
— add_column(:clusters, :domain, :string)
-> 0.0007s
== 20190114172110 AddDomainToCluster: migrated (0.0008s) ======================

== 20190115054215 MigrateDeleteContainerRepositoryWorker: migrating ===========
== 20190115054215 MigrateDeleteContainerRepositoryWorker: migrated (0.0004s) ==

== 20190115054216 AddErrorNotificationSentToRemoteMirrors: migrating ==========
— add_column(:remote_mirrors, :error_notification_sent, :boolean)
-> 0.0012s
== 20190115054216 AddErrorNotificationSentToRemoteMirrors: migrated (0.0014s) =

== 20190115092821 AddColumnsProjectErrorTrackingSettings: migrating ===========
— add_column(:project_error_tracking_settings, :project_name, :string)
-> 0.0010s
— add_column(:project_error_tracking_settings, :organization_name, :string)
-> 0.0010s
— change_column_default(:project_error_tracking_settings, :enabled, {:from=>true, :to=>false})
-> 0.0038s
— change_column_null(:project_error_tracking_settings, :api_url, true)
-> 0.0007s
== 20190115092821 AddColumnsProjectErrorTrackingSettings: migrated (0.0068s) ==

== 20190116234221 AddSortingFieldsToUserPreference: migrating =================
— add_column(:user_preferences, :issues_sort, :string)
-> 0.0009s
— add_column(:user_preferences, :merge_requests_sort, :string)
-> 0.0007s
== 20190116234221 AddSortingFieldsToUserPreference: migrated (0.0018s) ========

== 20190121140418 AddEnforcedSsoToSamlProvider: migrating =====================
— transaction_open?()
-> 0.0000s
— execute(“SET statement_timeout TO 0”)
-> 0.0004s
— transaction()
— add_column(:saml_providers, :enforced_sso, :boolean, {:default=>nil})
-> 0.0009s
— change_column_default(:saml_providers, :enforced_sso, false)
-> 0.0033s
-> 0.0156s
— transaction_open?()
-> 0.0000s
— exec_query(“SELECT COUNT(*) AS count FROM \”saml_providers\””)
-> 0.0012s
— change_column_null(:saml_providers, :enforced_sso, false)
-> 0.0065s
— execute(“RESET ALL”)
-> 0.0005s
== 20190121140418 AddEnforcedSsoToSamlProvider: migrated (0.0251s) ============

== 20190121140658 CreateProjectAlertingSettings: migrating ====================
— create_table(:project_alerting_settings, {:id=>:int, :primary_key=>:project_id})
-> 0.0529s
== 20190121140658 CreateProjectAlertingSettings: migrated (0.0530s) ===========

== 20190122101816 AddGroupViewToUsers: migrating ==============================
— add_column(:users, :group_view, :integer)
-> 0.0014s
== 20190122101816 AddGroupViewToUsers: migrated (0.0016s) =====================

== 20190123211816 AddRoadmapSortToUserPreferences: migrating ==================
— add_column(:user_preferences, :roadmaps_sort, :string, {:null=>true})
-> 0.0010s
== 20190123211816 AddRoadmapSortToUserPreferences: migrated (0.0011s) =========

== 20190124200344 MigrateStorageMigratorSidekiqQueue: migrating ===============
== 20190124200344 MigrateStorageMigratorSidekiqQueue: migrated (0.0004s) ======

== 20190128104236 AddRelativePositionToEpics: migrating =======================
— add_column(:epics, :relative_position, :integer)

Arel performing automatic type casting is deprecated, and will be removed in Arel 8.0. If you are seeing this, it is because you are manually passing a value to an Arel predicate, and the `Arel::Table` object was constructed manually. The easiest way to remove this warning is to use an `Arel::Table` object returned from calling `arel_table` on an ActiveRecord::Base subclass.

If you’re certain the value is already of the right type, change `attribute.eq(value)` to `attribute.eq(Arel::Nodes::Quoted.new(value))` (you will be able to remove that in Arel 8.0, it is only required to silence this deprecation warning).

You can also silence this warning globally by setting `$arel_silence_type_casting_deprecation` to `true`. (Do NOT do this if you are a library author)

If you are passing user input to a predicate, you must either give an appropriate type caster object to the `Arel::Table`, or manually cast the value before passing it to Arel.

-> 0.0012s
== 20190128104236 AddRelativePositionToEpics: migrated (0.0013s) ==============

== 20190128172533 AddIndexToNameOnApprovalMergeRequestRules: migrating ========
— transaction_open?()
-> 0.0000s
— index_exists?(:approval_merge_request_rules, [:merge_request_id, :code_owner, :name], {:unique=>true, :where=>”code_owner = ‘t'”, :name=>”approval_rule_name_index_for_code_owners”, :algorithm=>:concurrently})
-> 0.0032s
— execute(“SET statement_timeout TO 0″)
-> 0.0004s
— add_index(:approval_merge_request_rules, [:merge_request_id, :code_owner, :name], {:unique=>true, :where=>”code_owner = ‘t'”, :name=>”approval_rule_name_index_for_code_owners”, :algorithm=>:concurrently})
-> 0.0318s
— execute(“RESET ALL”)
-> 0.0005s
== 20190128172533 AddIndexToNameOnApprovalMergeRequestRules: migrated (0.0364s)

== 20190129013538 AddMergeRequestIdToVulnerabilityFeedback: migrating =========
— add_column(:vulnerability_feedback, :merge_request_id, :integer, {:null=>true})
-> 0.0013s
== 20190129013538 AddMergeRequestIdToVulnerabilityFeedback: migrated (0.0014s)

== 20190130091630 AddLocalCachedMarkdownVersion: migrating ====================
— add_column(:application_settings, :local_markdown_version, :integer, {:default=>0, :null=>false})
-> 0.1366s
== 20190130091630 AddLocalCachedMarkdownVersion: migrated (0.1367s) ===========

== 20190130164903 AddGroupViewIndexToUsers: migrating =========================
— transaction_open?()
-> 0.0000s
— index_exists?(:users, :group_view, {:algorithm=>:concurrently})
-> 0.0184s
— execute(“SET statement_timeout TO 0”)
-> 0.0005s
— add_index(:users, :group_view, {:algorithm=>:concurrently})
-> 0.0326s
— execute(“RESET ALL”)
-> 0.0006s
== 20190130164903 AddGroupViewIndexToUsers: migrated (0.0526s) ================

== 20190131122559 FixNullTypeLabels: migrating ================================
— transaction_open?()
-> 0.0000s
— exec_query(“SELECT COUNT(*) AS count FROM \”labels\” WHERE \”labels\”.\”project_id\” IS NOT NULL AND \”labels\”.\”template\” = ‘f’ AND \”labels\”.\”type\” IS NULL”)
-> 0.0013s
== 20190131122559 FixNullTypeLabels: migrated (0.0022s) =======================

== 20190204115450 MigrateAutoDevOpsDomainToClusterDomain: migrating ===========
— execute(“UPDATE clusters\nSET domain = project_auto_devops.domain\nFROM cluster_projects, project_auto_devops\nWHERE\n cluster_projects.cluster_id = clusters.id\n AND project_auto_devops.project_id = cluster_projects.project_id\n AND project_auto_devops.domain != ”\n”)
-> 0.0037s
== 20190204115450 MigrateAutoDevOpsDomainToClusterDomain: migrated (0.0039s) ==

== 20190206193120 AddIndexToTags: migrating ===================================
— transaction_open?()
-> 0.0000s
— index_exists?(:tags, :name, {:name=>”index_tags_on_name_trigram”, :using=>:gin, :opclasses=>{:name=>:gin_trgm_ops}, :algorithm=>:concurrently})
-> 0.0033s
— execute(“SET statement_timeout TO 0″)
-> 0.0004s
— add_index(:tags, :name, {:name=>”index_tags_on_name_trigram”, :using=>:gin, :opclasses=>{:name=>:gin_trgm_ops}, :algorithm=>:concurrently})
-> 0.0225s
— execute(“RESET ALL”)
-> 0.0005s
== 20190206193120 AddIndexToTags: migrated (0.0273s) ==========================
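
The tags index above is a trigram (pg_trgm) GIN index, which is what keeps pattern searches (LIKE/ILIKE) on tag names usable. Expressed as a GitLab-style migration it would look roughly like this (same arguments as the add_index call in the log, the concurrent wrapper is assumed):

class AddIndexToTagsExample < ActiveRecord::Migration[5.0]
  include Gitlab::Database::MigrationHelpers

  disable_ddl_transaction!

  def up
    # GIN index with the gin_trgm_ops operator class, as shown above
    add_concurrent_index :tags, :name,
                         name: 'index_tags_on_name_trigram',
                         using: :gin,
                         opclasses: { name: :gin_trgm_ops }
  end

  def down
    remove_concurrent_index :tags, :name, name: 'index_tags_on_name_trigram'
  end
end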

== 20190215154930 AddMergePipelinesEnabledToCiCdSettings: migrating ===========
— add_column(:project_ci_cd_settings, :merge_pipelines_enabled, :boolean)
-> 0.0011s
== 20190215154930 AddMergePipelinesEnabledToCiCdSettings: migrated (0.0012s) ==

== 20190218031401 SetDefaultPositionForChildEpics: migrating ==================
== 20190218031401 SetDefaultPositionForChildEpics: migrated (0.0016s) =========

== 20190218134158 AddMaskedToCiVariables: migrating ===========================
— transaction_open?()
-> 0.0000s
— execute(“SET statement_timeout TO 0”)
-> 0.0004s
— transaction()
— add_column(:ci_variables, :masked, :boolean, {:default=>nil})
-> 0.0011s
— change_column_default(:ci_variables, :masked, false)
-> 0.0041s
-> 0.0155s
— transaction_open?()
-> 0.0000s
— exec_query(“SELECT COUNT(*) AS count FROM \”ci_variables\””)
-> 0.0027s
— change_column_null(:ci_variables, :masked, false)
-> 0.0047s
— execute(“RESET ALL”)
-> 0.0005s
== 20190218134158 AddMaskedToCiVariables: migrated (0.0248s) ==================

== 20190218134209 AddMaskedToCiGroupVariables: migrating ======================
— transaction_open?()
-> 0.0000s
— execute(“SET statement_timeout TO 0”)
-> 0.0004s
— transaction()
— add_column(:ci_group_variables, :masked, :boolean, {:default=>nil})
-> 0.0009s
— change_column_default(:ci_group_variables, :masked, false)
-> 0.0040s
-> 0.0151s
— transaction_open?()
-> 0.0000s
— exec_query(“SELECT COUNT(*) AS count FROM \”ci_group_variables\””)
-> 0.0025s
— change_column_null(:ci_group_variables, :masked, false)
-> 0.0055s
— execute(“RESET ALL”)
-> 0.0003s
== 20190218134209 AddMaskedToCiGroupVariables: migrated (0.0247s) =============

== 20190218144405 CreateJiraConnectInstallations: migrating ===================
— create_table(:jira_connect_installations, {:id=>:bigserial})
-> 0.0342s
— add_index(:jira_connect_installations, :client_key, {:unique=>true})
-> 0.0165s
== 20190218144405 CreateJiraConnectInstallations: migrated (0.0509s) ==========

== 20190219134239 AddMergeRequestsRequireCodeownerApprovalToProjects: migrating
— add_column(:projects, :merge_requests_require_code_owner_approval, :boolean)
-> 0.0006s
== 20190219134239 AddMergeRequestsRequireCodeownerApprovalToProjects: migrated (0.0007s)

== 20190220112238 AddSamlProviderGroupManagedAccountsFlag: migrating ==========
— transaction_open?()
-> 0.0000s
— execute(“SET statement_timeout TO 0”)
-> 0.0002s
— transaction()
— add_column(:saml_providers, :enforced_group_managed_accounts, :boolean, {:default=>nil})
-> 0.0006s
— change_column_default(:saml_providers, :enforced_group_managed_accounts, false)
-> 0.0025s
-> 0.0075s
— transaction_open?()
-> 0.0000s
— exec_query(“SELECT COUNT(*) AS count FROM \”saml_providers\””)
-> 0.0010s
— change_column_null(:saml_providers, :enforced_group_managed_accounts, false)
-> 0.0073s
— execute(“RESET ALL”)
-> 0.0006s
== 20190220112238 AddSamlProviderGroupManagedAccountsFlag: migrated (0.0171s) =

== 20190220142344 AddEmailHeaderAndFooterEnabledFlagToAppearancesTable: migrating
— transaction_open?()
-> 0.0000s
— execute(“SET statement_timeout TO 0”)
-> 0.0004s
— transaction()
— add_column(:appearances, :email_header_and_footer_enabled, :boolean, {:default=>nil})
-> 0.0008s
— change_column_default(:appearances, :email_header_and_footer_enabled, false)
-> 0.0044s
-> 0.0152s
— transaction_open?()
-> 0.0000s
— exec_query(“SELECT COUNT(*) AS count FROM \”appearances\””)
-> 0.0015s
— change_column_null(:appearances, :email_header_and_footer_enabled, false)
-> 0.0060s
— execute(“RESET ALL”)
-> 0.0005s
== 20190220142344 AddEmailHeaderAndFooterEnabledFlagToAppearancesTable: migrated (0.0243s)

== 20190220150130 AddExtraShasToCiPipelines: migrating ========================
— add_column(:ci_pipelines, :source_sha, :binary)
-> 0.0010s
— add_column(:ci_pipelines, :target_sha, :binary)
-> 0.0008s
== 20190220150130 AddExtraShasToCiPipelines: migrated (0.0019s) ===============

== 20190222105948 AddUserManagingGroupRelation: migrating =====================
— add_column(:users, :managing_group_id, :integer)
-> 0.0011s
== 20190222105948 AddUserManagingGroupRelation: migrated (0.0011s) ============

== 20190222110418 AddUserManagingGroupRelationFk: migrating ===================
— transaction_open?()
-> 0.0000s
— index_exists?(:users, :managing_group_id, {:algorithm=>:concurrently})
-> 0.0193s
— execute(“SET statement_timeout TO 0”)
-> 0.0006s
— add_index(:users, :managing_group_id, {:algorithm=>:concurrently})
-> 0.0324s
— execute(“RESET ALL”)
-> 0.0004s
— transaction_open?()
-> 0.0000s
— foreign_keys(:users)
-> 0.0072s
— execute("ALTER TABLE users\nADD CONSTRAINT fk_a4b8fefe3e\nFOREIGN KEY (managing_group_id)\nREFERENCES namespaces (id)\nON DELETE SET NULL\nNOT VALID;\n")

/opt/gitlab/embedded/service/gitlab-rails/db/post_migrate/20190301081611_migrate_project_migrate_sidekiq_queue.rb:8: warning: already initialized constant MigrateProjectMigrateSidekiqQueue::DOWNTIME
/opt/gitlab/embedded/service/gitlab-rails/db/post_migrate/20190301081611_migrate_project_migrate_sidekiq_queue.rb:6: warning: previous definition of DOWNTIME was here

-> 0.0135s
— execute(“SET statement_timeout TO 0”)
-> 0.0004s
— execute(“ALTER TABLE users VALIDATE CONSTRAINT fk_a4b8fefe3e;”)
-> 0.0077s
— execute(“RESET ALL”)
-> 0.0002s
== 20190222110418 AddUserManagingGroupRelationFk: migrated (0.0826s) ==========
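
The "already initialized constant MigrateProjectMigrateSidekiqQueue::DOWNTIME" lines mixed into the output above are ordinary Ruby warnings, most likely from the DOWNTIME constant being assigned twice while that migration file is loaded; a trivial standalone repro (not GitLab code):

class MigrateProjectMigrateSidekiqQueue
  DOWNTIME = false  # first assignment
  DOWNTIME = false  # warning: already initialized constant ...::DOWNTIME
end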

== 20190225160300 StealEncryptRunnersTokens: migrating ========================
== 20190225160300 StealEncryptRunnersTokens: migrated (0.3192s) ===============

== 20190225160301 AddRunnerTokensIndexes: migrating ===========================
— transaction_open?()
-> 0.0000s
— index_exists?(:ci_runners, :token_encrypted, {:algorithm=>:concurrently})
-> 0.0028s
— execute(“SET statement_timeout TO 0”)
-> 0.0001s
— add_index(:ci_runners, :token_encrypted, {:algorithm=>:concurrently})
-> 0.0235s
— execute(“RESET ALL”)
-> 0.0002s
— transaction_open?()
-> 0.0000s
— index_exists?(:projects, :runners_token_encrypted, {:algorithm=>:concurrently})
-> 0.0111s
— execute(“SET statement_timeout TO 0”)
-> 0.0002s
— add_index(:projects, :runners_token_encrypted, {:algorithm=>:concurrently})
-> 0.0302s
— execute(“RESET ALL”)
-> 0.0004s
— transaction_open?()
-> 0.0000s
— index_exists?(:namespaces, :runners_token_encrypted, {:unique=>true, :algorithm=>:concurrently})
-> 0.0164s
— execute(“SET statement_timeout TO 0”)
-> 0.0004s
— add_index(:namespaces, :runners_token_encrypted, {:unique=>true, :algorithm=>:concurrently})
-> 0.0326s
— execute(“RESET ALL”)
-> 0.0004s
== 20190225160301 AddRunnerTokensIndexes: migrated (0.1194s) ==================

== 20190225173106 CreateInsights: migrating ===================================
— create_table(:insights)
-> 0.0840s
== 20190225173106 CreateInsights: migrated (0.0841s) ==========================

== 20190226154144 CreateProjectIncidentManagementSettings: migrating ==========
— create_table(:project_incident_management_settings, {:id=>:int, :primary_key=>:project_id})
-> 0.0438s
== 20190226154144 CreateProjectIncidentManagementSettings: migrated (0.0440s) =

== 20190301081611 MigrateProjectMigrateSidekiqQueue: migrating ================
== 20190301081611 MigrateProjectMigrateSidekiqQueue: migrated (0.0004s) =======

== 20190301182031 AddMergeRequestIdIndexOnVulnerabilityFeedback: migrating ====
— transaction_open?()
-> 0.0000s
— index_exists?(:vulnerability_feedback, :merge_request_id, {:algorithm=>:concurrently})
-> 0.0061s
— execute(“SET statement_timeout TO 0”)
-> 0.0003s
— add_index(:vulnerability_feedback, :merge_request_id, {:algorithm=>:concurrently})
-> 0.0360s
— execute(“RESET ALL”)
-> 0.0005s
— transaction_open?()
-> 0.0000s
— foreign_keys(:vulnerability_feedback)
-> 0.0074s
— execute(“ALTER TABLE vulnerability_feedback\nADD CONSTRAINT fk_563ff1912e\nFOREIGN KEY (merge_request_id)\nREFERENCES merge_requests (id)\nON DELETE SET NULL\nNOT VALID;\n”)
-> 0.0140s
— execute(“SET statement_timeout TO 0”)
-> 0.0006s
— execute(“ALTER TABLE vulnerability_feedback VALIDATE CONSTRAINT fk_563ff1912e;”)
-> 0.0075s
— execute(“RESET ALL”)
-> 0.0006s
== 20190301182031 AddMergeRequestIdIndexOnVulnerabilityFeedback: migrated (0.0748s)

== 20190301182457 AddExternalHostnameToIngressAndKnative: migrating ===========
— add_column(:clusters_applications_ingress, :external_hostname, :string)
-> 0.0011s
— add_column(:clusters_applications_knative, :external_hostname, :string)
-> 0.0010s
== 20190301182457 AddExternalHostnameToIngressAndKnative: migrated (0.0023s) ==

== 20190312071108 AddDetectedRepositoryLanguagesToProjects: migrating =========
— add_column(:projects, :detected_repository_languages, :boolean)
-> 0.0015s
== 20190312071108 AddDetectedRepositoryLanguagesToProjects: migrated (0.0016s)

– execute "bash" "/tmp/chef-script20190416-27010-wpaf5x"
Recipe: gitlab::gitlab-rails
* execute[clear the gitlab-rails cache] action run
– execute /opt/gitlab/bin/gitlab-rake cache:clear
Recipe: gitlab::logrotate_folders_and_configs
* directory[/var/opt/gitlab/logrotate] action create (up to date)
* directory[/var/opt/gitlab/logrotate/logrotate.d] action create (up to date)
* directory[/var/log/gitlab/logrotate] action create (up to date)
* template[/var/opt/gitlab/logrotate/logrotate.conf] action create (up to date)
* template[/var/opt/gitlab/logrotate/logrotate.d/nginx] action create (up to date)
* template[/var/opt/gitlab/logrotate/logrotate.d/unicorn] action create (up to date)
* template[/var/opt/gitlab/logrotate/logrotate.d/gitlab-rails] action create (up to date)
* template[/var/opt/gitlab/logrotate/logrotate.d/gitlab-shell] action create (up to date)
* template[/var/opt/gitlab/logrotate/logrotate.d/gitlab-workhorse] action create (up to date)
* template[/var/opt/gitlab/logrotate/logrotate.d/gitlab-pages] action create (up to date)
Recipe: gitlab::unicorn
* directory[/var/log/gitlab/unicorn] action create (up to date)
* directory[/opt/gitlab/var/unicorn] action create (up to date)
* directory[/var/opt/gitlab/gitlab-rails/sockets] action create (up to date)
* directory[/var/opt/gitlab/gitlab-rails/etc] action create (up to date)
* template[/var/opt/gitlab/gitlab-rails/etc/unicorn.rb] action create
– update content in file /var/opt/gitlab/gitlab-rails/etc/unicorn.rb from 1401d1 to 632514
--- /var/opt/gitlab/gitlab-rails/etc/unicorn.rb 2018-03-09 14:15:22.816578222 +0900
+++ /var/opt/gitlab/gitlab-rails/etc/.chef-unicorn20190416-27010-5l1fzy.rb 2019-04-16 12:23:19.070124738 +0900
@@ -18,27 +18,29 @@
 # How many worker processes
 worker_processes 5

-# What to do before we fork a worker
-before_fork do |server, worker|
-  old_pid = "#{server.config[:pid]}.oldbin"
-  if old_pid != server.pid
-    begin
-      sig = (worker.nr + 1) >= server.worker_processes ? :QUIT : :TTOU
-      Process.kill(sig, File.read(old_pid).to_i)
-    rescue Errno::ENOENT, Errno::ESRCH
-    end
-  end
+# Load the Gitlab::Cluster::LifecycleEvents module
+# to notify the application of unicorn events
+require_relative "/opt/gitlab/embedded/service/gitlab-rails/lib/gitlab/cluster/lifecycle_events"

-  ActiveRecord::Base.connection.disconnect! if defined?(ActiveRecord::Base)
+before_exec do |server|
+  Gitlab::Cluster::LifecycleEvents.do_master_restart
+end

+before_fork do |server, worker|
+  Gitlab::Cluster::LifecycleEvents.do_before_fork
+
+  old_pid = "#{server.config[:pid]}.oldbin"
+  if old_pid != server.pid
+    begin
+      sig = (worker.nr + 1) >= server.worker_processes ? :QUIT : :TTOU
+      Process.kill(sig, File.read(old_pid).to_i)
+    rescue Errno::ENOENT, Errno::ESRCH
+    end
+  end
 end

-# What to do after we fork a worker
 after_fork do |server, worker|
-  ActiveRecord::Base.establish_connection if defined?(ActiveRecord::Base)
-  defined?(::Prometheus::Client.reinitialize_on_pid_change) &&
-    Prometheus::Client.reinitialize_on_pid_change

+  Gitlab::Cluster::LifecycleEvents.do_worker_start
 end

 # Where to drop a pidfile
– restore selinux security context
Recipe:
* service[unicorn] action nothing (skipped due to action :nothing)
Recipe: gitlab::unicorn
* runit_service[unicorn] action enable
* ruby_block[restart_service] action nothing (skipped due to action :nothing)
* ruby_block[restart_log_service] action nothing (skipped due to action :nothing)
* ruby_block[reload_log_service] action nothing (skipped due to action :nothing)
* directory[/opt/gitlab/sv/unicorn] action create (up to date)
* template[/opt/gitlab/sv/unicorn/run] action create
– update content in file /opt/gitlab/sv/unicorn/run from dab388 to b7c020
--- /opt/gitlab/sv/unicorn/run 2018-02-20 13:41:07.501168190 +0900
+++ /opt/gitlab/sv/unicorn/.chef-run20190416-27010-5o4h0f 2019-04-16 12:23:19.110123561 +0900
@@ -5,7 +5,6 @@

# Setup run directory.
mkdir -p /run/gitlab/unicorn
-rm /run/gitlab/unicorn/*.db 2> /dev/null
chmod 0700 /run/gitlab/unicorn
chown git /run/gitlab/unicorn
export prometheus_run_dir='/run/gitlab/unicorn'
@@ -13,11 +12,12 @@

-exec chpst -P -u git \
+exec chpst -P -u git:git -U git:git \
/usr/bin/env \
current_pidfile=/opt/gitlab/var/unicorn/unicorn.pid \
rails_app=gitlab-rails \
user=git \
+ group=git \
environment=production \
unicorn_rb=/var/opt/gitlab/gitlab-rails/etc/unicorn.rb \
prometheus_multiproc_dir="${prometheus_run_dir}" \
– restore selinux security context
* directory[/opt/gitlab/sv/unicorn/log] action create (up to date)
* directory[/opt/gitlab/sv/unicorn/log/main] action create (up to date)
* template[/opt/gitlab/sv/unicorn/log/run] action create (up to date)
* template[/var/log/gitlab/unicorn/config] action create (up to date)
* directory[/opt/gitlab/sv/unicorn/env] action create
– create new directory /opt/gitlab/sv/unicorn/env
– change mode from '' to '0755'
– change owner from '' to 'root'
– change group from '' to 'root'
– restore selinux security context
* ruby_block[Delete unmanaged env files for unicorn service] action run (skipped due to only_if)
* template[/opt/gitlab/sv/unicorn/check] action create (skipped due to only_if)
* template[/opt/gitlab/sv/unicorn/finish] action create (skipped due to only_if)
* directory[/opt/gitlab/sv/unicorn/control] action create (up to date)
* template[/opt/gitlab/sv/unicorn/control/t] action create (up to date)
* link[/opt/gitlab/init/unicorn] action create (up to date)
* file[/opt/gitlab/sv/unicorn/down] action delete (up to date)
* ruby_block[restart_service] action run (skipped due to only_if)
* directory[/opt/gitlab/service] action create (up to date)
* link[/opt/gitlab/service/unicorn] action create (up to date)
* ruby_block[wait for unicorn service socket] action run (skipped due to not_if)

* sysctl[net.core.somaxconn] action create
* directory[create /etc/sysctl.d for net.core.somaxconn] action create (up to date)
* file[create /opt/gitlab/embedded/etc/90-omnibus-gitlab-net.core.somaxconn.conf net.core.somaxconn] action create (up to date)
* link[/etc/sysctl.d/90-omnibus-gitlab-net.core.somaxconn.conf] action create (up to date)
* file[delete /etc/sysctl.d/90-postgresql.conf net.core.somaxconn] action delete (skipped due to only_if)
* file[delete /etc/sysctl.d/90-unicorn.conf net.core.somaxconn] action delete (skipped due to only_if)
* file[delete /opt/gitlab/embedded/etc/90-omnibus-gitlab.conf net.core.somaxconn] action delete (skipped due to only_if)
* file[delete /etc/sysctl.d/90-omnibus-gitlab.conf net.core.somaxconn] action delete (skipped due to only_if)
* execute[load sysctl conf net.core.somaxconn] action nothing (skipped due to action :nothing)
(up to date)
Recipe:
* service[puma] action nothing (skipped due to action :nothing)
Recipe: gitlab::puma_disable
* runit_service[puma] action disable
* ruby_block[disable puma] action run (skipped due to only_if)
(up to date)
Recipe: gitlab::sidekiq
* directory[/var/log/gitlab/sidekiq] action create (up to date)
Recipe:
* service[sidekiq] action nothing (skipped due to action :nothing)
Recipe: gitlab::sidekiq
* runit_service[sidekiq] action enable
* ruby_block[restart_service] action nothing (skipped due to action :nothing)
* ruby_block[restart_log_service] action nothing (skipped due to action :nothing)
* ruby_block[reload_log_service] action nothing (skipped due to action :nothing)
* directory[/opt/gitlab/sv/sidekiq] action create (up to date)
* template[/opt/gitlab/sv/sidekiq/run] action create
– update content in file /opt/gitlab/sv/sidekiq/run from f49a74 to 09a7f9
— /opt/gitlab/sv/sidekiq/run 2018-02-20 13:41:09.222131947 +0900
+++ /opt/gitlab/sv/sidekiq/.chef-run20190416-27010-2ff2xu 2019-04-16 12:23:19.212120559 +0900
@@ -13,7 +13,8 @@

exec chpst -e /opt/gitlab/etc/gitlab-rails/env -P \
– -U git -u git \
+ -U git:git \
+ -u git:git \
/usr/bin/env \
prometheus_multiproc_dir=”${prometheus_run_dir}” \
/opt/gitlab/embedded/bin/bundle exec sidekiq \
– restore selinux security context
* directory[/opt/gitlab/sv/sidekiq/log] action create (up to date)
* directory[/opt/gitlab/sv/sidekiq/log/main] action create (up to date)
* template[/opt/gitlab/sv/sidekiq/log/run] action create (up to date)
* template[/var/log/gitlab/sidekiq/config] action create (up to date)
* directory[/opt/gitlab/sv/sidekiq/env] action create
– create new directory /opt/gitlab/sv/sidekiq/env
– change mode from ” to ‘0755’
– change owner from ” to ‘root’
– change group from ” to ‘root’
– restore selinux security context
* ruby_block[Delete unmanaged env files for sidekiq service] action run (skipped due to only_if)
* template[/opt/gitlab/sv/sidekiq/check] action create (skipped due to only_if)
* template[/opt/gitlab/sv/sidekiq/finish] action create (skipped due to only_if)
* directory[/opt/gitlab/sv/sidekiq/control] action create
– create new directory /opt/gitlab/sv/sidekiq/control
– change mode from ” to ‘0755’
– change owner from ” to ‘root’
– change group from ” to ‘root’
– restore selinux security context
* link[/opt/gitlab/init/sidekiq] action create (up to date)
* file[/opt/gitlab/sv/sidekiq/down] action delete (up to date)
* ruby_block[restart_service] action run (skipped due to only_if)
* directory[/opt/gitlab/service] action create (up to date)
* link[/opt/gitlab/service/sidekiq] action create (up to date)
* ruby_block[wait for sidekiq service socket] action run (skipped due to not_if)

Recipe: gitlab::gitlab-workhorse
* directory[/var/opt/gitlab/gitlab-workhorse] action create (up to date)
* directory[/var/log/gitlab/gitlab-workhorse] action create (up to date)
* directory[/opt/gitlab/etc/gitlab-workhorse] action create (up to date)
* env_dir[/opt/gitlab/etc/gitlab-workhorse/env] action create
* directory[/opt/gitlab/etc/gitlab-workhorse/env] action create (up to date)
* file[/opt/gitlab/etc/gitlab-workhorse/env/PATH] action create (up to date)
* file[/opt/gitlab/etc/gitlab-workhorse/env/HOME] action create (up to date)
* file[/opt/gitlab/etc/gitlab-workhorse/env/SSL_CERT_DIR] action create
– create new file /opt/gitlab/etc/gitlab-workhorse/env/SSL_CERT_DIR
– update content in file /opt/gitlab/etc/gitlab-workhorse/env/SSL_CERT_DIR from none to 4f45cf
— /opt/gitlab/etc/gitlab-workhorse/env/SSL_CERT_DIR 2019-04-16 12:23:19.330117086 +0900
+++ /opt/gitlab/etc/gitlab-workhorse/env/.chef-SSL_CERT_DIR20190416-27010-ypgxgp 2019-04-16 12:23:19.330117086 +0900
@@ -1 +1,2 @@
+/opt/gitlab/embedded/ssl/certs/
– restore selinux security context

Recipe:
* service[gitlab-workhorse] action nothing (skipped due to action :nothing)
Recipe: gitlab::gitlab-workhorse
* runit_service[gitlab-workhorse] action enable
* ruby_block[restart_service] action nothing (skipped due to action :nothing)
* ruby_block[restart_log_service] action nothing (skipped due to action :nothing)
* ruby_block[reload_log_service] action nothing (skipped due to action :nothing)
* directory[/opt/gitlab/sv/gitlab-workhorse] action create (up to date)
* template[/opt/gitlab/sv/gitlab-workhorse/run] action create
– update content in file /opt/gitlab/sv/gitlab-workhorse/run from 275116 to f289d2
— /opt/gitlab/sv/gitlab-workhorse/run 2018-02-20 13:41:11.791077718 +0900
+++ /opt/gitlab/sv/gitlab-workhorse/.chef-run20190416-27010-lkm15q 2019-04-16 12:23:19.372115849 +0900
@@ -9,8 +9,8 @@
cd /var/opt/gitlab/gitlab-workhorse

exec chpst -e /opt/gitlab/etc/gitlab-workhorse/env -P \
– -U git \
– -u git \
+ -U git:git \
+ -u git:git \
/opt/gitlab/embedded/bin/gitlab-workhorse \
-listenNetwork unix \
-listenUmask 0 \
– restore selinux security context
* directory[/opt/gitlab/sv/gitlab-workhorse/log] action create (up to date)
* directory[/opt/gitlab/sv/gitlab-workhorse/log/main] action create (up to date)
* template[/opt/gitlab/sv/gitlab-workhorse/log/run] action create (up to date)
* template[/var/log/gitlab/gitlab-workhorse/config] action create (up to date)
* directory[/opt/gitlab/sv/gitlab-workhorse/env] action create
– create new directory /opt/gitlab/sv/gitlab-workhorse/env
– change mode from ” to ‘0755’
– change owner from ” to ‘root’
– change group from ” to ‘root’
– restore selinux security context
* ruby_block[Delete unmanaged env files for gitlab-workhorse service] action run (skipped due to only_if)
* template[/opt/gitlab/sv/gitlab-workhorse/check] action create (skipped due to only_if)
* template[/opt/gitlab/sv/gitlab-workhorse/finish] action create (skipped due to only_if)
* directory[/opt/gitlab/sv/gitlab-workhorse/control] action create
– create new directory /opt/gitlab/sv/gitlab-workhorse/control
– change mode from ” to ‘0755’
– change owner from ” to ‘root’
– change group from ” to ‘root’
– restore selinux security context
* link[/opt/gitlab/init/gitlab-workhorse] action create (up to date)
* file[/opt/gitlab/sv/gitlab-workhorse/down] action delete (up to date)
* ruby_block[restart_service] action run (skipped due to only_if)
* directory[/opt/gitlab/service] action create (up to date)
* link[/opt/gitlab/service/gitlab-workhorse] action create (up to date)
* ruby_block[wait for gitlab-workhorse service socket] action run (skipped due to not_if)

* file[/var/opt/gitlab/gitlab-workhorse/VERSION] action create
– update content in file /var/opt/gitlab/gitlab-workhorse/VERSION from 28a20c to ecdafa
— /var/opt/gitlab/gitlab-workhorse/VERSION 2019-04-16 12:17:05.432809785 +0900
+++ /var/opt/gitlab/gitlab-workhorse/.chef-VERSION20190416-27010-1lk4de5 2019-04-16 12:23:19.497112170 +0900
@@ -1,2 +1,2 @@
-gitlab-workhorse v4.2.1-20180711.082039
+gitlab-workhorse v8.3.3-20190404.174123
– restore selinux security context
* template[/var/opt/gitlab/gitlab-workhorse/config.toml] action create
– change mode from ‘0644’ to ‘0640’
– change owner from ‘git’ to ‘root’
– change group from ‘root’ to ‘git’
– restore selinux security context
Recipe:
* service[mailroom] action nothing (skipped due to action :nothing)
Recipe: gitlab::mailroom_disable
* runit_service[mailroom] action disable
* ruby_block[disable mailroom] action run (skipped due to only_if)
(up to date)
Recipe: gitlab::nginx
* directory[/var/opt/gitlab/nginx] action create (up to date)
* directory[/var/opt/gitlab/nginx/conf] action create (up to date)
* directory[/var/log/gitlab/nginx] action create (up to date)
* link[/var/opt/gitlab/nginx/logs] action create (up to date)
* template[/var/opt/gitlab/nginx/conf/gitlab-http.conf] action create
– update content in file /var/opt/gitlab/nginx/conf/gitlab-http.conf from 3e6762 to a359f1
— /var/opt/gitlab/nginx/conf/gitlab-http.conf 2019-04-16 12:17:05.494807956 +0900
+++ /var/opt/gitlab/nginx/conf/.chef-gitlab-http20190416-27010-1ia0flm.conf 2019-04-16 12:23:19.575109875 +0900
@@ -30,11 +30,7 @@
## configuration ##
###################################

-upstream gitlab-workhorse {
– server unix:/var/opt/gitlab/gitlab-workhorse/socket;
-}


server {
listen *:2443 ssl http2;

@@ -105,7 +101,7 @@
proxy_set_header X-Forwarded-Proto https;
proxy_set_header X-Forwarded-Ssl on;

– location ~ (\.git/gitlab-lfs/objects|\.git/info/lfs/objects/batch$) {
+ location ~ (.git/git-receive-pack$|.git/info/refs?service=git-receive-pack$|.git/gitlab-lfs/objects|.git/info/lfs/objects/batch$) {
proxy_cache off;
proxy_pass http://gitlab-workhorse;
proxy_request_buffering off;
@@ -122,10 +118,9 @@
}

error_page 404 /404.html;
– error_page 422 /422.html;
error_page 500 /500.html;
error_page 502 /502.html;
– location ~ ^/(404|422|500|502)(-custom)?\.html$ {
+ location ~ ^/(404|500|502)(-custom)?\.html$ {
root /opt/gitlab/embedded/service/gitlab-rails/public;
internal;
}
– restore selinux security context
* template[/var/opt/gitlab/nginx/conf/gitlab-smartcard-http.conf] action delete (up to date)
* template[/var/opt/gitlab/nginx/conf/gitlab-pages.conf] action delete (up to date)
* template[/var/opt/gitlab/nginx/conf/gitlab-registry.conf] action delete (up to date)
* template[/var/opt/gitlab/nginx/conf/gitlab-mattermost-http.conf] action delete (up to date)
* template[/var/opt/gitlab/nginx/conf/nginx-status.conf] action create
– update content in file /var/opt/gitlab/nginx/conf/nginx-status.conf from bee808 to 27c186
— /var/opt/gitlab/nginx/conf/nginx-status.conf 2018-02-20 13:41:17.033966586 +0900
+++ /var/opt/gitlab/nginx/conf/.chef-nginx-status20190416-27010-n9tb38.conf 2019-04-16 12:23:19.613108756 +0900
@@ -2,7 +2,15 @@
listen *:8060;
server_name localhost;
location /nginx_status {
– stub_status on;
+ stub_status;
+ server_tokens off;
+ access_log off;
+ allow 127.0.0.1;
+ deny all;
+ }
+ location /metrics {
+ vhost_traffic_status_display;
+ vhost_traffic_status_display_format prometheus;
server_tokens off;
access_log off;
allow 127.0.0.1;
– restore selinux security context
* template[/var/opt/gitlab/nginx/conf/nginx.conf] action create
– update content in file /var/opt/gitlab/nginx/conf/nginx.conf from db89cc to 8a7cab
— /var/opt/gitlab/nginx/conf/nginx.conf 2018-02-20 13:41:17.056966097 +0900
+++ /var/opt/gitlab/nginx/conf/.chef-nginx20190416-27010-16i3mgn.conf 2019-04-16 12:23:19.673106990 +0900
@@ -71,7 +71,15 @@
~^(?.*)\? $temp;
}

+ # Enable vts status module.
+ vhost_traffic_status_zone;
+
+ upstream gitlab-workhorse {
+ server unix:/var/opt/gitlab/gitlab-workhorse/socket;
+ }
+
include /var/opt/gitlab/nginx/conf/gitlab-http.conf;
+

– restore selinux security context
Recipe:
* service[nginx] action nothing (skipped due to action :nothing)
Recipe: nginx::enable
* runit_service[nginx] action enable
* ruby_block[restart_service] action nothing (skipped due to action :nothing)
* ruby_block[restart_log_service] action nothing (skipped due to action :nothing)
* ruby_block[reload_log_service] action nothing (skipped due to action :nothing)
* directory[/opt/gitlab/sv/nginx] action create (up to date)
* template[/opt/gitlab/sv/nginx/run] action create (up to date)
* directory[/opt/gitlab/sv/nginx/log] action create (up to date)
* directory[/opt/gitlab/sv/nginx/log/main] action create (up to date)
* template[/opt/gitlab/sv/nginx/log/run] action create (up to date)
* template[/var/log/gitlab/nginx/config] action create (up to date)
* directory[/opt/gitlab/sv/nginx/env] action create
– create new directory /opt/gitlab/sv/nginx/env
– change mode from ” to ‘0755’
– change owner from ” to ‘root’
– change group from ” to ‘root’
– restore selinux security context
* ruby_block[Delete unmanaged env files for nginx service] action run (skipped due to only_if)
* template[/opt/gitlab/sv/nginx/check] action create (skipped due to only_if)
* template[/opt/gitlab/sv/nginx/finish] action create (skipped due to only_if)
* directory[/opt/gitlab/sv/nginx/control] action create
– create new directory /opt/gitlab/sv/nginx/control
– change mode from ” to ‘0755’
– change owner from ” to ‘root’
– change group from ” to ‘root’
– restore selinux security context
* link[/opt/gitlab/init/nginx] action create (up to date)
* file[/opt/gitlab/sv/nginx/down] action delete (up to date)
* directory[/opt/gitlab/service] action create (up to date)
* link[/opt/gitlab/service/nginx] action create (up to date)
* ruby_block[wait for nginx service socket] action run (skipped due to not_if)

* execute[reload nginx] action nothing (skipped due to action :nothing)
Recipe:
* service[remote-syslog] action nothing (skipped due to action :nothing)
Recipe: gitlab::remote-syslog_disable
* runit_service[remote-syslog] action disable
* ruby_block[disable remote-syslog] action run (skipped due to only_if)
(up to date)
Recipe:
* service[logrotate] action nothing (skipped due to action :nothing)
Recipe: gitlab::logrotate
* runit_service[logrotate] action enable
* ruby_block[restart_service] action nothing (skipped due to action :nothing)
* ruby_block[restart_log_service] action nothing (skipped due to action :nothing)
* ruby_block[reload_log_service] action nothing (skipped due to action :nothing)
* directory[/opt/gitlab/sv/logrotate] action create (up to date)
* template[/opt/gitlab/sv/logrotate/run] action create (up to date)
* directory[/opt/gitlab/sv/logrotate/log] action create (up to date)
* directory[/opt/gitlab/sv/logrotate/log/main] action create (up to date)
* template[/opt/gitlab/sv/logrotate/log/run] action create (up to date)
* template[/var/log/gitlab/logrotate/config] action create (up to date)
* directory[/opt/gitlab/sv/logrotate/env] action create
– create new directory /opt/gitlab/sv/logrotate/env
– change mode from ” to ‘0755’
– change owner from ” to ‘root’
– change group from ” to ‘root’
– restore selinux security context
* ruby_block[Delete unmanaged env files for logrotate service] action run (skipped due to only_if)
* template[/opt/gitlab/sv/logrotate/check] action create (skipped due to only_if)
* template[/opt/gitlab/sv/logrotate/finish] action create (skipped due to only_if)
* directory[/opt/gitlab/sv/logrotate/control] action create (up to date)
* template[/opt/gitlab/sv/logrotate/control/t] action create (up to date)
* link[/opt/gitlab/init/logrotate] action create (up to date)
* file[/opt/gitlab/sv/logrotate/down] action delete (up to date)
* directory[/opt/gitlab/service] action create (up to date)
* link[/opt/gitlab/service/logrotate] action create (up to date)
* ruby_block[wait for logrotate service socket] action run (skipped due to not_if)

Recipe:
* service[gitlab-pages] action nothing (skipped due to action :nothing)
Recipe: gitlab::gitlab-pages_disable
* runit_service[gitlab-pages] action disable
* ruby_block[disable gitlab-pages] action run (skipped due to only_if)
(up to date)
Recipe:
* service[storage-check] action nothing (skipped due to action :nothing)
Recipe: gitlab::storage-check_disable
* runit_service[storage-check] action disable
* ruby_block[disable storage-check] action run (skipped due to only_if)
(up to date)
Recipe:
* service[registry] action nothing (skipped due to action :nothing)
Recipe: registry::disable
* runit_service[registry] action disable
* ruby_block[disable registry] action run (skipped due to only_if)
(up to date)
Recipe:
* service[mattermost] action nothing (skipped due to action :nothing)
Recipe: mattermost::disable
* runit_service[mattermost] action disable
* ruby_block[disable mattermost] action run (skipped due to only_if)
(up to date)
Recipe: gitlab::gitlab-healthcheck
* template[/opt/gitlab/etc/gitlab-healthcheck-rc] action create (up to date)
Recipe: gitlab::prometheus_user
* account[Prometheus user and group] action create
* group[Prometheus user and group] action create (up to date)
* linux_user[Prometheus user and group] action create (up to date)
(up to date)
Recipe: gitlab::node-exporter
* directory[/var/log/gitlab/node-exporter] action create (up to date)
* directory[/opt/gitlab/etc/node-exporter/env] action create
– create new directory /opt/gitlab/etc/node-exporter/env
– change mode from ” to ‘0700’
– change owner from ” to ‘gitlab-prometheus’
– restore selinux security context
* env_dir[/opt/gitlab/etc/node-exporter/env] action create
* directory[/opt/gitlab/etc/node-exporter/env] action create (up to date)
* file[/opt/gitlab/etc/node-exporter/env/SSL_CERT_DIR] action create
– create new file /opt/gitlab/etc/node-exporter/env/SSL_CERT_DIR
– update content in file /opt/gitlab/etc/node-exporter/env/SSL_CERT_DIR from none to 4f45cf
— /opt/gitlab/etc/node-exporter/env/SSL_CERT_DIR 2019-04-16 12:23:19.895100456 +0900
+++ /opt/gitlab/etc/node-exporter/env/.chef-SSL_CERT_DIR20190416-27010-sa7a5y 2019-04-16 12:23:19.895100456 +0900
@@ -1 +1,2 @@
+/opt/gitlab/embedded/ssl/certs/
– restore selinux security context

* directory[/var/opt/gitlab/node-exporter/textfile_collector] action create (up to date)
Recipe:
* service[node-exporter] action nothing (skipped due to action :nothing)
Recipe: gitlab::node-exporter
* runit_service[node-exporter] action enable
* ruby_block[restart_service] action nothing (skipped due to action :nothing)
* ruby_block[restart_log_service] action nothing (skipped due to action :nothing)
* ruby_block[reload_log_service] action nothing (skipped due to action :nothing)
* directory[/opt/gitlab/sv/node-exporter] action create (up to date)
* template[/opt/gitlab/sv/node-exporter/run] action create
– update content in file /opt/gitlab/sv/node-exporter/run from e2ca0b to fcf4b4
— /opt/gitlab/sv/node-exporter/run 2018-02-20 13:41:28.112729833 +0900
+++ /opt/gitlab/sv/node-exporter/.chef-run20190416-27010-1i8xqkf 2019-04-16 12:23:19.930099426 +0900
@@ -2,5 +2,8 @@
exec 2>&1

umask 077
-exec chpst -P -U gitlab-prometheus -u gitlab-prometheus /opt/gitlab/embedded/bin/node_exporter –web.listen-address=localhost:9100 –collector.textfile.directory=/var/opt/gitlab/node-exporter/textfile_collector
+exec chpst -P -e /opt/gitlab/etc/node-exporter/env \
+ -U gitlab-prometheus:gitlab-prometheus \
+ -u gitlab-prometheus:gitlab-prometheus \
+ /opt/gitlab/embedded/bin/node_exporter –web.listen-address=localhost:9100 –collector.textfile.directory=/var/opt/gitlab/node-exporter/textfile_collector
– restore selinux security context
* directory[/opt/gitlab/sv/node-exporter/log] action create (up to date)
* directory[/opt/gitlab/sv/node-exporter/log/main] action create (up to date)
* template[/opt/gitlab/sv/node-exporter/log/run] action create (up to date)
* template[/var/log/gitlab/node-exporter/config] action create (up to date)
* directory[/opt/gitlab/sv/node-exporter/env] action create
– create new directory /opt/gitlab/sv/node-exporter/env
– change mode from ” to ‘0755’
– change owner from ” to ‘root’
– change group from ” to ‘root’
– restore selinux security context
* ruby_block[Delete unmanaged env files for node-exporter service] action run (skipped due to only_if)
* template[/opt/gitlab/sv/node-exporter/check] action create (skipped due to only_if)
* template[/opt/gitlab/sv/node-exporter/finish] action create (skipped due to only_if)
* directory[/opt/gitlab/sv/node-exporter/control] action create
– create new directory /opt/gitlab/sv/node-exporter/control
– change mode from ” to ‘0755’
– change owner from ” to ‘root’
– change group from ” to ‘root’
– restore selinux security context
* link[/opt/gitlab/init/node-exporter] action create (up to date)
* file[/opt/gitlab/sv/node-exporter/down] action delete (up to date)
* ruby_block[restart_service] action run (skipped due to only_if)
* directory[/opt/gitlab/service] action create (up to date)
* link[/opt/gitlab/service/node-exporter] action create (up to date)
* ruby_block[wait for node-exporter service socket] action run (skipped due to not_if)

Recipe: gitlab::gitlab-monitor
* directory[/var/opt/gitlab/gitlab-monitor] action create (up to date)
* directory[/var/log/gitlab/gitlab-monitor] action create (up to date)
* template[/var/opt/gitlab/gitlab-monitor/gitlab-monitor.yml] action create
– update content in file /var/opt/gitlab/gitlab-monitor/gitlab-monitor.yml from 595ab9 to ea37d4
— /var/opt/gitlab/gitlab-monitor/gitlab-monitor.yml 2018-02-20 13:41:33.728608881 +0900
+++ /var/opt/gitlab/gitlab-monitor/.chef-gitlab-monitor20190416-27010-1iraye5.yml 2019-04-16 12:23:20.039096218 +0900
@@ -11,7 +11,7 @@

# Probes config
probes:
– git_process:
+ git_process: &git_process
class_name: GitProcessProber # `class_name` is redundant here
methods:
– probe_git
@@ -32,21 +32,21 @@
class_name: Database::RowCountProber
<<: *db_common
- process:
+ process: &process
methods:
- - probe_memory
- - probe_age
+ - probe_stat
- probe_count
+ - probe_smaps
opts:
- pid_or_pattern: "sidekiq .* \\[.*?\\]"
name: sidekiq
- - pid_or_pattern: "unicorn worker\\[.*?\\]"
+ - pid_or_pattern: "unicorn.* worker\\[.*?\\]"
name: unicorn
- pid_or_pattern: "git-upload-pack --stateless-rpc"
name: git_upload_pack
quantiles: true
- sidekiq:
+ sidekiq: &sidekiq
methods:
- probe_queues
- probe_jobs
@@ -55,4 +55,23 @@
- probe_dead
opts:
redis_url: "unix:/var/opt/gitlab/redis/redis.socket"
+ redis_enable_client: true
+
+ metrics:
+ multiple: true
+ git_process:
+ <<: *git_process
+ process:
+ <<: *process
+ sidekiq:
+ <<: *sidekiq
+ ci_builds:
+ class_name: Database::CiBuildsProber
+ <<: *db_common
+ tuple_stats:
+ class_name: Database::TuplesProber
+ <<: *db_common
+ rows_count:
+ class_name: Database::RowCountProber
+ <<: *db_common
- change mode from '0644' to '0600'
- restore selinux security context
Recipe:
* service[gitlab-monitor] action nothing (skipped due to action :nothing)
Recipe: gitlab::gitlab-monitor
* runit_service[gitlab-monitor] action enable
* ruby_block[restart_service] action nothing (skipped due to action :nothing)
* ruby_block[restart_log_service] action nothing (skipped due to action :nothing)
* ruby_block[reload_log_service] action nothing (skipped due to action :nothing)
* directory[/opt/gitlab/sv/gitlab-monitor] action create (up to date)
* template[/opt/gitlab/sv/gitlab-monitor/run] action create
– update content in file /opt/gitlab/sv/gitlab-monitor/run from 2f480f to 0f4ff9
— /opt/gitlab/sv/gitlab-monitor/run 2018-02-20 13:41:33.814607025 +0900
+++ /opt/gitlab/sv/gitlab-monitor/.chef-run20190416-27010-rqm9hv 2019-04-16 12:23:20.102094363 +0900
@@ -2,5 +2,9 @@
exec 2>&1

umask 077
-exec chpst -P -U git -u git /opt/gitlab/embedded/bin/gitlab-mon web -c /var/opt/gitlab/gitlab-monitor/gitlab-monitor.yml
+exec chpst -P \
+ -U git:git \
+ -u git:git \
+ /opt/gitlab/embedded/bin/gitlab-mon web \
+ -c /var/opt/gitlab/gitlab-monitor/gitlab-monitor.yml
– restore selinux security context
* directory[/opt/gitlab/sv/gitlab-monitor/log] action create (up to date)
* directory[/opt/gitlab/sv/gitlab-monitor/log/main] action create (up to date)
* template[/opt/gitlab/sv/gitlab-monitor/log/run] action create (up to date)
* template[/var/log/gitlab/gitlab-monitor/config] action create (up to date)
* directory[/opt/gitlab/sv/gitlab-monitor/env] action create
– create new directory /opt/gitlab/sv/gitlab-monitor/env
– change mode from ” to ‘0755’
– change owner from ” to ‘root’
– change group from ” to ‘root’
– restore selinux security context
* ruby_block[Delete unmanaged env files for gitlab-monitor service] action run (skipped due to only_if)
* template[/opt/gitlab/sv/gitlab-monitor/check] action create (skipped due to only_if)
* template[/opt/gitlab/sv/gitlab-monitor/finish] action create (skipped due to only_if)
* directory[/opt/gitlab/sv/gitlab-monitor/control] action create
– create new directory /opt/gitlab/sv/gitlab-monitor/control
– change mode from ” to ‘0755’
– change owner from ” to ‘root’
– change group from ” to ‘root’
– restore selinux security context
* link[/opt/gitlab/init/gitlab-monitor] action create (up to date)
* file[/opt/gitlab/sv/gitlab-monitor/down] action delete (up to date)
* ruby_block[restart_service] action run (skipped due to only_if)
* directory[/opt/gitlab/service] action create (up to date)
* link[/opt/gitlab/service/gitlab-monitor] action create (up to date)
* ruby_block[wait for gitlab-monitor service socket] action run (skipped due to not_if)

Recipe: gitlab::redis-exporter
* directory[/var/log/gitlab/redis-exporter] action create (up to date)
* directory[/opt/gitlab/etc/redis-exporter/env] action create
– create new directory /opt/gitlab/etc/redis-exporter/env
– change mode from ” to ‘0700’
– change owner from ” to ‘gitlab-redis’
– restore selinux security context
* env_dir[/opt/gitlab/etc/redis-exporter/env] action create
* directory[/opt/gitlab/etc/redis-exporter/env] action create (up to date)
* file[/opt/gitlab/etc/redis-exporter/env/SSL_CERT_DIR] action create
– create new file /opt/gitlab/etc/redis-exporter/env/SSL_CERT_DIR
– update content in file /opt/gitlab/etc/redis-exporter/env/SSL_CERT_DIR from none to 4f45cf
— /opt/gitlab/etc/redis-exporter/env/SSL_CERT_DIR 2019-04-16 12:23:20.229090625 +0900
+++ /opt/gitlab/etc/redis-exporter/env/.chef-SSL_CERT_DIR20190416-27010-x0qh80 2019-04-16 12:23:20.229090625 +0900
@@ -1 +1,2 @@
+/opt/gitlab/embedded/ssl/certs/
– restore selinux security context

Recipe:
* service[redis-exporter] action nothing (skipped due to action :nothing)
Recipe: gitlab::redis-exporter
* runit_service[redis-exporter] action enable
* ruby_block[restart_service] action nothing (skipped due to action :nothing)
* ruby_block[restart_log_service] action nothing (skipped due to action :nothing)
* ruby_block[reload_log_service] action nothing (skipped due to action :nothing)
* directory[/opt/gitlab/sv/redis-exporter] action create (up to date)
* template[/opt/gitlab/sv/redis-exporter/run] action create
– update content in file /opt/gitlab/sv/redis-exporter/run from fa3e94 to 84f53a
— /opt/gitlab/sv/redis-exporter/run 2018-02-20 13:41:39.382486510 +0900
+++ /opt/gitlab/sv/redis-exporter/.chef-run20190416-27010-meow5u 2019-04-16 12:23:20.271089389 +0900
@@ -2,5 +2,8 @@
exec 2>&1

umask 077
-exec chpst -P -U gitlab-redis:git -u gitlab-redis:git /opt/gitlab/embedded/bin/redis_exporter -web.listen-address=localhost:9121 -redis.addr=unix:///var/opt/gitlab/redis/redis.socket
+exec chpst -P -e /opt/gitlab/etc/redis-exporter/env \
+ -U gitlab-redis:git \
+ -u gitlab-redis:git \
+ /opt/gitlab/embedded/bin/redis_exporter -web.listen-address=localhost:9121 -redis.addr=unix:///var/opt/gitlab/redis/redis.socket
– restore selinux security context
* directory[/opt/gitlab/sv/redis-exporter/log] action create (up to date)
* directory[/opt/gitlab/sv/redis-exporter/log/main] action create (up to date)
* template[/opt/gitlab/sv/redis-exporter/log/run] action create (up to date)
* template[/var/log/gitlab/redis-exporter/config] action create (up to date)
* directory[/opt/gitlab/sv/redis-exporter/env] action create
– create new directory /opt/gitlab/sv/redis-exporter/env
– change mode from ” to ‘0755’
– change owner from ” to ‘root’
– change group from ” to ‘root’
– restore selinux security context
* ruby_block[Delete unmanaged env files for redis-exporter service] action run (skipped due to only_if)
* template[/opt/gitlab/sv/redis-exporter/check] action create (skipped due to only_if)
* template[/opt/gitlab/sv/redis-exporter/finish] action create (skipped due to only_if)
* directory[/opt/gitlab/sv/redis-exporter/control] action create
– create new directory /opt/gitlab/sv/redis-exporter/control
– change mode from ” to ‘0755’
– change owner from ” to ‘root’
– change group from ” to ‘root’
– restore selinux security context
* link[/opt/gitlab/init/redis-exporter] action create (up to date)
* file[/opt/gitlab/sv/redis-exporter/down] action delete (up to date)
* ruby_block[restart_service] action run (skipped due to only_if)
* directory[/opt/gitlab/service] action create (up to date)
* link[/opt/gitlab/service/redis-exporter] action create (up to date)
* ruby_block[wait for redis-exporter service socket] action run (skipped due to not_if)

Recipe: gitlab::prometheus
* directory[/var/opt/gitlab/prometheus] action create (up to date)
* directory[/var/opt/gitlab/prometheus/rules] action create
– create new directory /var/opt/gitlab/prometheus/rules
– change mode from ” to ‘0750’
– change owner from ” to ‘gitlab-prometheus’
– restore selinux security context
* directory[/var/log/gitlab/prometheus] action create (up to date)
* directory[/opt/gitlab/etc/prometheus/env] action create
– create new directory /opt/gitlab/etc/prometheus/env
– change mode from ” to ‘0700’
– change owner from ” to ‘gitlab-prometheus’
– restore selinux security context
* env_dir[/opt/gitlab/etc/prometheus/env] action create
* directory[/opt/gitlab/etc/prometheus/env] action create (up to date)
* file[/opt/gitlab/etc/prometheus/env/SSL_CERT_DIR] action create
– create new file /opt/gitlab/etc/prometheus/env/SSL_CERT_DIR
– update content in file /opt/gitlab/etc/prometheus/env/SSL_CERT_DIR from none to 4f45cf
— /opt/gitlab/etc/prometheus/env/SSL_CERT_DIR 2019-04-16 12:23:20.423084915 +0900
+++ /opt/gitlab/etc/prometheus/env/.chef-SSL_CERT_DIR20190416-27010-12z2khb 2019-04-16 12:23:20.423084915 +0900
@@ -1 +1,2 @@
+/opt/gitlab/embedded/ssl/certs/
– restore selinux security context

* link[Link prometheus executable to correct binary] action create
– create symlink at /opt/gitlab/embedded/bin/prometheus to /opt/gitlab/embedded/bin/prometheus1
* execute[reload prometheus] action nothing (skipped due to action :nothing)
* file[Prometheus config] action create
– update content in file /var/opt/gitlab/prometheus/prometheus.yml from 9d7b10 to 8ab258
— /var/opt/gitlab/prometheus/prometheus.yml 2018-03-09 14:15:23.172567959 +0900
+++ /var/opt/gitlab/prometheus/.chef-prometheus20190416-27010-1ez0c2r.yml 2019-04-16 12:23:20.452084062 +0900
@@ -2,11 +2,19 @@
global:
scrape_interval: 15s
scrape_timeout: 15s
+remote_read: []
+remote_write: []
+rule_files:
+- “/var/opt/gitlab/prometheus/rules/*.rules”
scrape_configs:
– job_name: prometheus
static_configs:
– targets:
– localhost:9090
+- job_name: nginx
+ static_configs:
+ – targets:
+ – localhost:8060
– job_name: redis
static_configs:
– targets:
@@ -28,10 +36,22 @@
static_configs:
– targets:
– 127.0.0.1:7070
+ relabel_configs:
+ – source_labels:
+ – __address__
+ regex: 127.0.0.1:(.*)
+ replacement: localhost:$1
+ target_label: instance
– job_name: gitlab-sidekiq
static_configs:
– targets:
– 127.0.0.1:8082
+ relabel_configs:
+ – source_labels:
+ – __address__
+ regex: 127.0.0.1:(.*)
+ replacement: localhost:$1
+ target_label: instance
– job_name: gitlab_monitor_database
metrics_path: “/database”
static_configs:
@@ -143,4 +163,9 @@
– __meta_kubernetes_pod_name
action: replace
target_label: kubernetes_pod_name
+alerting:
+ alertmanagers:
+ – static_configs:
+ – targets:
+ – localhost:9093
– restore selinux security context
Recipe:
* service[prometheus] action nothing (skipped due to action :nothing)
Recipe: gitlab::prometheus
* runit_service[prometheus] action enable
* ruby_block[restart_service] action nothing (skipped due to action :nothing)
* ruby_block[restart_log_service] action nothing (skipped due to action :nothing)
* ruby_block[reload_log_service] action nothing (skipped due to action :nothing)
* directory[/opt/gitlab/sv/prometheus] action create (up to date)
* template[/opt/gitlab/sv/prometheus/run] action create
– update content in file /opt/gitlab/sv/prometheus/run from d15715 to 32ddcc
— /opt/gitlab/sv/prometheus/run 2019-04-16 12:17:05.876796685 +0900
+++ /opt/gitlab/sv/prometheus/.chef-run20190416-27010-f4w4md 2019-04-16 12:23:20.509082384 +0900
@@ -2,6 +2,8 @@
exec 2>&1

umask 077
-exec chpst -P -U gitlab-prometheus -u gitlab-prometheus \
+exec chpst -P -e /opt/gitlab/etc/prometheus/env \
+ -U gitlab-prometheus:gitlab-prometheus \
+ -u gitlab-prometheus:gitlab-prometheus \
/opt/gitlab/embedded/bin/prometheus -web.listen-address=localhost:9090 -storage.local.path=/var/opt/gitlab/prometheus/data -storage.local.chunk-encoding-version=2 -storage.local.target-heap-size=358414909 -config.file=/var/opt/gitlab/prometheus/prometheus.yml
– restore selinux security context
* directory[/opt/gitlab/sv/prometheus/log] action create (up to date)
* directory[/opt/gitlab/sv/prometheus/log/main] action create (up to date)
* template[/opt/gitlab/sv/prometheus/log/run] action create (up to date)
* template[/var/log/gitlab/prometheus/config] action create (up to date)
* directory[/opt/gitlab/sv/prometheus/env] action create
– create new directory /opt/gitlab/sv/prometheus/env
– change mode from ” to ‘0755’
– change owner from ” to ‘root’
– change group from ” to ‘root’
– restore selinux security context
* ruby_block[Delete unmanaged env files for prometheus service] action run (skipped due to only_if)
* template[/opt/gitlab/sv/prometheus/check] action create (skipped due to only_if)
* template[/opt/gitlab/sv/prometheus/finish] action create (skipped due to only_if)
* directory[/opt/gitlab/sv/prometheus/control] action create
– create new directory /opt/gitlab/sv/prometheus/control
– change mode from ” to ‘0755’
– change owner from ” to ‘root’
– change group from ” to ‘root’
– restore selinux security context
* link[/opt/gitlab/init/prometheus] action create (up to date)
* file[/opt/gitlab/sv/prometheus/down] action delete (up to date)
* ruby_block[restart_service] action run (skipped due to only_if)
* directory[/opt/gitlab/service] action create (up to date)
* link[/opt/gitlab/service/prometheus] action create (up to date)
* ruby_block[wait for prometheus service socket] action run (skipped due to not_if)

* template[/var/opt/gitlab/prometheus/rules/gitlab.rules] action create
– create new file /var/opt/gitlab/prometheus/rules/gitlab.rules
– update content in file /var/opt/gitlab/prometheus/rules/gitlab.rules from none to 41cced
— /var/opt/gitlab/prometheus/rules/gitlab.rules 2019-04-16 12:23:20.609079441 +0900
+++ /var/opt/gitlab/prometheus/rules/.chef-gitlab20190416-27010-f4iq3i.rules 2019-04-16 12:23:20.609079441 +0900
@@ -1 +1,83 @@
+# Prometheus alerting rules for GitLab components.
+#
+
+ALERT ServiceDown
+ IF avg_over_time(up[5m]) * 100 < 50
+ ANNOTATIONS {
+ summary = "The service {{ $labels.job }} is not responding",
+ description = "The service {{ $labels.job }} instance {{ $labels.instance }} is not responding for more than 50% of the time for 5 minutes.",
+ }
+
+ALERT RedisDown
+ IF avg_over_time(redis_up[5m]) * 100 < 50
+ ANNOTATIONS {
+ summary = "The Redis service {{ $labels.job }} is not responding",
+ description = "The Redis service {{ $labels.job }} instance {{ $labels.instance }} is not responding for more than 50% of the time for 5 minutes.",
+ }
+
+ALERT PostgresDown
+ IF avg_over_time(pg_up[5m]) * 100 < 50
+ ANNOTATIONS {
+ summary = "The Postgres service {{ $labels.job }} is not responding",
+ description = "The Postgres service {{ $labels.job }} instance {{ $labels.instance }} is not responding for more than 50% of the time for 5 minutes.",
+ }
+
+ALERT UnicornQueueing
+ IF avg_over_time(unicorn_queued_connections[30m]) > 1
+ ANNOTATIONS {
+ summary = “Unicorn is queueing requests”,
+ description = ‘Unicorn instance {{ $labels.instance }} is queueing requests with an average of {{ $value | printf “%.1f” }} over the last 30 minutes.’,
+ }
+
+instance:unicorn_utilization:ratio = sum by (instance) (unicorn_active_connections) / count by (instance) (ruby_memory_bytes)
+
+ALERT HighUnicornUtilization
+ IF instance:unicorn_utilization:ratio * 100 > 90
+ FOR 60m
+ ANNOTATIONS {
+ summary = ‘Unicorn is has high utilization’,
+ description = ‘Unicorn instance {{ $labels.instance }} has more than 90% worker utilization ({{ $value | printf “%.1f” }}%) over the last 60 minutes.’,
+ }
+
+ALERT SidekiqJobsQueuing
+ IF sum by (name) (sidekiq_queue_size) > 0
+ FOR 60m
+ ANNOTATIONS {
+ summary = ‘Sidekiq has jobs queued’,
+ description = ‘Sidekiq queue {{ $labels.name }} has {{ $value }} jobs queued for 60 minutes.’,
+ }
+
+job_grpc:grpc_server_handled_total:rate5m = sum by (job, grpc_code, grpc_method, grpc_service, grpc_type) (rate(grpc_server_handled_total[5m]))
+
+ALERT HighgRPCResourceExhaustedRate
+ IF sum without (grpc_code) (job_grpc:grpc_server_handled_total:rate5m{grpc_code=”ResourceExhausted”}) / sum without (grpc_code) (job_grpc:grpc_server_handled_total:rate5m) * 100 > 1
+ FOR 60m
+ ANNOTATIONS {
+ summary = ‘High gRPC ResourceExhausted error rate’,
+ description = ‘gRPC is returning more than 1% ({{ $value | printf “%.1f” }}%) ResourceExhausted errors over the last 60 minutes.’,
+ }
+
+ALERT PostgresDatabaseDeadlocks
+ IF increase(pg_stat_database_deadlocks[5m]) > 0
+ ANNOTATIONS {
+ summary = ‘Postgres database has deadlocks’,
+ description = ‘Postgres database {{ $labels.instance }} had {{ $value | printf “%d” }} deadlocks in the last 5 minutes.’,
+ }
+
+ALERT PostgresDatabaseDeadlockCancels
+ IF increase(pg_stat_database_conflicts_confl_deadlock[5m]) > 0
+ ANNOTATIONS {
+ summary = ‘Postgres database has queries canceled due to deadlocks’,
+ description = ‘Postgres database {{ $labels.instance }} had {{ $value | printf “%d” }} queries canceled due to deadlocks in the last 5 minutes.’,
+ }
+
+job_route_method_code:gitlab_workhorse_http_request_duration_seconds_count:rate5m = sum by (job, route, method, code) (rate(gitlab_workhorse_http_request_duration_seconds_count[5m]))
+
+ALERT WorkhorseHighErrorRate
+ IF sum without (job, code) (job_route_method_code:gitlab_workhorse_http_request_duration_seconds_count:rate5m{code=~”5..”}) / sum without (job,code) (job_route_method_code:gitlab_workhorse_http_request_duration_seconds_count:rate5m) * 100 > 1
+ FOR 60m
+ ANNOTATIONS {
+ summary = ‘Workhorse has high error rates’,
+ description = ‘Workhorse route {{ $labels.route }} method {{ $labels.method }} has more than 1% errors ({{ $value | printf “%.1f” }}%) for the last 60 minutes.’,
+ }
– change mode from ” to ‘0644’
– change owner from ” to ‘gitlab-prometheus’
– restore selinux security context
* template[/var/opt/gitlab/prometheus/rules/node.rules] action create
– create new file /var/opt/gitlab/prometheus/rules/node.rules
– update content in file /var/opt/gitlab/prometheus/rules/node.rules from none to 46259b
— /var/opt/gitlab/prometheus/rules/node.rules 2019-04-16 12:23:20.639078558 +0900
+++ /var/opt/gitlab/prometheus/rules/.chef-node20190416-27010-1eo72d8.rules 2019-04-16 12:23:20.639078558 +0900
@@ -1 +1,34 @@
+# The count of CPUs per node, useful for getting CPU time as a percent of total.
+instance:node_cpus:count = count(node_cpu{mode=”idle”}) without (cpu,mode)
+instance:node_cpus:count = count(node_cpu_seconds_total{mode=”idle”}) without (cpu,mode)
+
+# CPU in use by CPU.
+instance_cpu:node_cpu_seconds_not_idle:rate5m = sum(rate(node_cpu{mode!=”idle”}[5m])) without (mode)
+instance_cpu:node_cpu_seconds_not_idle:rate5m = sum(rate(node_cpu_seconds_total{mode!=”idle”}[5m])) without (mode)
+
+# CPU in use by mode.
+instance_mode:node_cpu_seconds:rate5m = sum(rate(node_cpu[5m])) without (cpu)
+instance_mode:node_cpu_seconds:rate5m = sum(rate(node_cpu_seconds_total[5m])) without (cpu)
+
+# CPU in use ratio.
+instance:node_cpu_utilization:ratio = sum(instance_mode:node_cpu_seconds:rate5m{mode!=”idle”}) without (mode) / instance:node_cpus:count
+
+# Filesystem available ratio.
+instance:node_filesystem_avail:ratio = node_filesystem_avail_bytes / (node_filesystem_size_bytes > 0)
+
+ALERT FilesystemAlmostFull
+ IF instance:node_filesystem_avail:ratio * 100 < 5
+ FOR 10m
+ ANNOTATIONS {
+ summary = "The filesystem {{ $labels.device }}:{{ $labels.mountpoint }} is almost full",
+ description = 'The filesystem {{ $labels.device }}:{{ $labels.mountpoint }} on {{ $labels.instance }} has {{ $value | printf "%.2f" }}% space available.',
+ }
+
+ALERT FilesystemFullIn1Day
+ IF predict_linear(node_filesystem_avail_bytes[6h], 24 * 3600) < 0
+ FOR 30m
+ ANNOTATIONS {
+ summary = "The filesystem {{ $labels.device }}:{{ $labels.mountpoint }} will be full within 24 hours",
+ description = "The filesystem {{ $labels.device }}:{{ $labels.mountpoint }} on {{ $labels.instance }} will be full in the next 24 hours.",
+ }
- change mode from '' to '0644'
- change owner from '' to 'gitlab-prometheus'
- restore selinux security context
Recipe: gitlab::alertmanager
* directory[/var/opt/gitlab/alertmanager] action create (up to date)
* directory[/var/log/gitlab/alertmanager] action create (up to date)
* directory[/opt/gitlab/etc/alertmanager/env] action create
- create new directory /opt/gitlab/etc/alertmanager/env
- change mode from '' to '0700'
- change owner from '' to 'gitlab-prometheus'
- restore selinux security context
* env_dir[/opt/gitlab/etc/alertmanager/env] action create
* directory[/opt/gitlab/etc/alertmanager/env] action create (up to date)
* file[/opt/gitlab/etc/alertmanager/env/SSL_CERT_DIR] action create
- create new file /opt/gitlab/etc/alertmanager/env/SSL_CERT_DIR
- update content in file /opt/gitlab/etc/alertmanager/env/SSL_CERT_DIR from none to 4f45cf
--- /opt/gitlab/etc/alertmanager/env/SSL_CERT_DIR 2019-04-16 12:23:20.701076733 +0900
+++ /opt/gitlab/etc/alertmanager/env/.chef-SSL_CERT_DIR20190416-27010-1mk8o4l 2019-04-16 12:23:20.701076733 +0900
@@ -1 +1,2 @@
+/opt/gitlab/embedded/ssl/certs/
- restore selinux security context
* file[Alertmanager config] action create (up to date)
Recipe:
* service[alertmanager] action nothing (skipped due to action :nothing)
Recipe: gitlab::alertmanager
* runit_service[alertmanager] action enable
* ruby_block[restart_service] action nothing (skipped due to action :nothing)
* ruby_block[restart_log_service] action nothing (skipped due to action :nothing)
* ruby_block[reload_log_service] action nothing (skipped due to action :nothing)
* directory[/opt/gitlab/sv/alertmanager] action create (up to date)
* template[/opt/gitlab/sv/alertmanager/run] action create
– update content in file /opt/gitlab/sv/alertmanager/run from ac906d to 36da8b
— /opt/gitlab/sv/alertmanager/run 2019-04-16 12:17:06.121789456 +0900
+++ /opt/gitlab/sv/alertmanager/.chef-run20190416-27010-1mds7h5 2019-04-16 12:23:20.754075173 +0900
@@ -2,6 +2,8 @@
exec 2>&1

umask 077
-exec chpst -P -U gitlab-prometheus -u gitlab-prometheus \
+exec chpst -P -e /opt/gitlab/etc/alertmanager/env \
+ -U gitlab-prometheus:gitlab-prometheus \
+ -u gitlab-prometheus:gitlab-prometheus \
/opt/gitlab/embedded/bin/alertmanager –web.listen-address=localhost:9093 –storage.path=/var/opt/gitlab/alertmanager/data –config.file=/var/opt/gitlab/alertmanager/alertmanager.yml
– restore selinux security context
* directory[/opt/gitlab/sv/alertmanager/log] action create (up to date)
* directory[/opt/gitlab/sv/alertmanager/log/main] action create (up to date)
* template[/opt/gitlab/sv/alertmanager/log/run] action create (up to date)
* template[/var/log/gitlab/alertmanager/config] action create (up to date)
* directory[/opt/gitlab/sv/alertmanager/env] action create
– create new directory /opt/gitlab/sv/alertmanager/env
– change mode from ” to ‘0755’
– change owner from ” to ‘root’
– change group from ” to ‘root’
– restore selinux security context
* ruby_block[Delete unmanaged env files for alertmanager service] action run (skipped due to only_if)
* template[/opt/gitlab/sv/alertmanager/check] action create (skipped due to only_if)
* template[/opt/gitlab/sv/alertmanager/finish] action create (skipped due to only_if)
* directory[/opt/gitlab/sv/alertmanager/control] action create
– create new directory /opt/gitlab/sv/alertmanager/control
– change mode from ” to ‘0755’
– change owner from ” to ‘root’
– change group from ” to ‘root’
– restore selinux security context
* link[/opt/gitlab/init/alertmanager] action create (up to date)
* file[/opt/gitlab/sv/alertmanager/down] action delete (up to date)
* ruby_block[restart_service] action run (skipped due to only_if)
* directory[/opt/gitlab/service] action create (up to date)
* link[/opt/gitlab/service/alertmanager] action create (up to date)
* ruby_block[wait for alertmanager service socket] action run (skipped due to not_if)

Recipe: gitlab::postgres-exporter
* directory[/var/log/gitlab/postgres-exporter] action create (up to date)
* directory[/var/opt/gitlab/postgres-exporter] action create (up to date)
* env_dir[/opt/gitlab/etc/postgres-exporter/env] action create
* directory[/opt/gitlab/etc/postgres-exporter/env] action create (up to date)
* file[/opt/gitlab/etc/postgres-exporter/env/SSL_CERT_DIR] action create
– create new file /opt/gitlab/etc/postgres-exporter/env/SSL_CERT_DIR
– update content in file /opt/gitlab/etc/postgres-exporter/env/SSL_CERT_DIR from none to 4f45cf
— /opt/gitlab/etc/postgres-exporter/env/SSL_CERT_DIR 2019-04-16 12:23:20.863071965 +0900
+++ /opt/gitlab/etc/postgres-exporter/env/.chef-SSL_CERT_DIR20190416-27010-qf76kz 2019-04-16 12:23:20.863071965 +0900
@@ -1 +1,2 @@
+/opt/gitlab/embedded/ssl/certs/
– restore selinux security context
* file[/opt/gitlab/etc/postgres-exporter/env/DATA_SOURCE_NAME] action create (up to date)

Recipe:
* service[postgres-exporter] action nothing (skipped due to action :nothing)
Recipe: gitlab::postgres-exporter
* runit_service[postgres-exporter] action enable
* ruby_block[restart_service] action nothing (skipped due to action :nothing)
* ruby_block[restart_log_service] action nothing (skipped due to action :nothing)
* ruby_block[reload_log_service] action nothing (skipped due to action :nothing)
* directory[/opt/gitlab/sv/postgres-exporter] action create (up to date)
* template[/opt/gitlab/sv/postgres-exporter/run] action create (up to date)
* directory[/opt/gitlab/sv/postgres-exporter/log] action create (up to date)
* directory[/opt/gitlab/sv/postgres-exporter/log/main] action create (up to date)
* template[/opt/gitlab/sv/postgres-exporter/log/run] action create (up to date)
* template[/var/log/gitlab/postgres-exporter/config] action create (up to date)
* directory[/opt/gitlab/sv/postgres-exporter/env] action create
– create new directory /opt/gitlab/sv/postgres-exporter/env
– change mode from ” to ‘0755’
– change owner from ” to ‘root’
– change group from ” to ‘root’
– restore selinux security context
* ruby_block[Delete unmanaged env files for postgres-exporter service] action run (skipped due to only_if)
* template[/opt/gitlab/sv/postgres-exporter/check] action create (skipped due to only_if)
* template[/opt/gitlab/sv/postgres-exporter/finish] action create (skipped due to only_if)
* directory[/opt/gitlab/sv/postgres-exporter/control] action create
– create new directory /opt/gitlab/sv/postgres-exporter/control
– change mode from ” to ‘0755’
– change owner from ” to ‘root’
– change group from ” to ‘root’
– restore selinux security context
* link[/opt/gitlab/init/postgres-exporter] action create (up to date)
* file[/opt/gitlab/sv/postgres-exporter/down] action delete (up to date)
* directory[/opt/gitlab/service] action create (up to date)
* link[/opt/gitlab/service/postgres-exporter] action create (up to date)
* ruby_block[wait for postgres-exporter service socket] action run (skipped due to not_if)

* template[/var/opt/gitlab/postgres-exporter/queries.yaml] action create (up to date)
Recipe:
* service[grafana] action nothing (skipped due to action :nothing)
Recipe: gitlab::grafana_disable
* runit_service[grafana] action disable
* ruby_block[disable grafana] action run (skipped due to only_if)
(up to date)
Recipe: gitlab::deprecate-skip-auto-migrations
* file[/etc/gitlab/skip-auto-reconfigure] action create (skipped due to only_if)
* ruby_block[skip-auto-migrations deprecation] action run (skipped due to only_if)
Recipe: gitlab-ee::sentinel_disable
* account[user and group for sentinel] action create
* group[user and group for sentinel] action create (up to date)
* linux_user[user and group for sentinel] action create (up to date)
(up to date)
Recipe:
* service[sentinel] action nothing (skipped due to action :nothing)
Recipe: gitlab-ee::sentinel_disable
* runit_service[sentinel] action disable
* ruby_block[disable sentinel] action run (skipped due to only_if)
(up to date)
* file[/var/opt/gitlab/sentinel/sentinel.conf] action delete (up to date)
* directory[/var/opt/gitlab/sentinel] action delete (up to date)
Recipe:
* service[sidekiq-cluster] action nothing (skipped due to action :nothing)
Recipe: gitlab-ee::sidekiq-cluster_disable
* runit_service[sidekiq-cluster] action disable
* ruby_block[disable sidekiq-cluster] action run (skipped due to only_if)
(up to date)
Recipe:
* service[geo-postgresql] action nothing (skipped due to action :nothing)
Recipe: gitlab-ee::geo-postgresql_disable
* runit_service[geo-postgresql] action disable
* ruby_block[disable geo-postgresql] action run (skipped due to only_if)
(up to date)
Recipe:
* service[geo-logcursor] action nothing (skipped due to action :nothing)
Recipe: gitlab-ee::geo-logcursor_disable
* runit_service[geo-logcursor] action disable
* ruby_block[disable geo-logcursor] action run (skipped due to only_if)
(up to date)
Recipe:
* service[pgbouncer] action nothing (skipped due to action :nothing)
Recipe: gitlab-ee::pgbouncer_disable
* runit_service[pgbouncer] action disable
* ruby_block[disable pgbouncer] action run (skipped due to only_if)
(up to date)
Recipe:
* service[pgbouncer-exporter] action nothing (skipped due to action :nothing)
Recipe: gitlab-ee::pgbouncer-exporter_disable
* runit_service[pgbouncer-exporter] action disable
* ruby_block[disable pgbouncer-exporter] action run (skipped due to only_if)
(up to date)
Recipe:
* service[consul] action nothing (skipped due to action :nothing)
Recipe: consul::disable_daemon
* runit_service[consul] action disable
* ruby_block[disable consul] action run (skipped due to only_if)
(up to date)
Recipe:
* service[repmgrd] action nothing (skipped due to action :nothing)
Recipe: repmgr::repmgrd_disable
* runit_service[repmgrd] action disable
* ruby_block[disable repmgrd] action run (skipped due to only_if)
(up to date)
Recipe: gitlab-ee::geo-secondary_disable
* templatesymlink[Removes database_geo.yml symlink] action delete
* file[/var/opt/gitlab/gitlab-rails/etc/database_geo.yml] action delete (up to date)
* link[/opt/gitlab/embedded/service/gitlab-rails/config/database_geo.yml] action delete (up to date)
(up to date)
Recipe: gitlab::gitlab-rails
* execute[clear the gitlab-rails cache] action run
– execute /opt/gitlab/bin/gitlab-rake cache:clear
Recipe:
* service[gitaly] action restart
– restart service service[gitaly]
* service[gitlab-workhorse] action restart
– restart service service[gitlab-workhorse]
* service[node-exporter] action restart
– restart service service[node-exporter]
* service[gitlab-monitor] action restart
– restart service service[gitlab-monitor]
* service[redis-exporter] action restart
– restart service service[redis-exporter]
* service[prometheus] action restart
– restart service service[prometheus]
Recipe: gitlab::prometheus
* execute[reload prometheus] action run
– execute /opt/gitlab/bin/gitlab-ctl hup prometheus
Recipe:
* service[alertmanager] action restart
– restart service service[alertmanager]
* service[postgres-exporter] action restart
– restart service service[postgres-exporter]

Running handlers:
Running handlers complete
Chef Client finished, 141/733 resources updated in 01 minutes 49 seconds

Deprecations:
== Prometheus ==
Detected Prometheus version 1.x. Version 1.x has been deprecated and support will be removed in GitLab version 12.0.
To upgrade to Prometheus 2.x, use `gitlab-ctl prometheus-upgrade` command.
Running this command will migrate all your existing data to format supported by Prometheus 2.x.
This can be a time consuming operation. To skip migrating the data, and instead remove and start fresh, run `gitlab-ctl prometheus-upgrade --skip-data-migration`.
Check https://docs.gitlab.com/omnibus/update/gitlab_11_changes.html#114 for details.

Warnings:
The version of the running postgresql service is different than what is installed.
Please restart postgresql to start the new version.

sudo gitlab-ctl restart postgresql

gitlab Reconfigured!
Checking for an omnibus managed postgresql: OK
Checking for a newer version of PostgreSQL to install
No new version of PostgreSQL installed, nothing to upgrade to
Ensuring PostgreSQL is updated: OK
Restarting previously running GitLab services
ok: run: alertmanager: (pid 28792) 2s
ok: run: gitaly: (pid 28596) 5s
ok: run: gitlab-monitor: (pid 28733) 4s
ok: run: gitlab-workhorse: (pid 28630) 4s
ok: run: logrotate: (pid 28836) 0s
ok: run: nginx: (pid 28842) 1s
ok: run: node-exporter: (pid 28646) 5s
ok: run: postgres-exporter: (pid 28813) 2s
ok: run: postgresql: (pid 24768) 431s
ok: run: prometheus: (pid 28759) 4s
ok: run: redis: (pid 28016) 60s
ok: run: redis-exporter: (pid 28747) 4s
ok: run: sidekiq: (pid 28858) 0s
ok: run: unicorn: (pid 28866) 0s

_______ __ __ __
/ ____(_) /_/ / ____ _/ /_
/ / __/ / __/ / / __ `/ __ \
/ /_/ / / /_/ /___/ /_/ / /_/ /
\____/_/\__/_____/\__,_/_.___/

Upgrade complete! If your GitLab server is misbehaving try running
sudo gitlab-ctl restart
before anything else.
If you need to roll back to the previous version you can use the database
backup made during the upgrade (scroll up for the filename).

Verifying : gitlab-ee-11.9.8-ee.0.el7.x86_64 1/2
Verifying : gitlab-ee-10.8.7-ee.0.el7.x86_64 2/2

Updated:
gitlab-ee.x86_64 0:11.9.8-ee.0.el7

Complete!
[root@withkdev ~]#
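The reconfigure output above leaves two follow-ups. The warning notes that the running PostgreSQL is still the old binary (in the service list above, postgresql kept its old pid instead of being restarted), and the deprecation notice says Prometheus 1.x support will be removed in GitLab 12.0. Assuming the default Omnibus setup, the remaining steps taken straight from those messages would be:

sudo gitlab-ctl restart postgresql
sudo gitlab-ctl prometheus-upgrade

prometheus-upgrade migrates the existing metric data and can take a while; per the deprecation notice, --skip-data-migration can be added instead to drop the old data and start fresh.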

Done.
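As a quick sanity check afterwards (a suggestion, not part of the original procedure), the built-in checks can confirm the instance is healthy and now running 11.9.8:

sudo gitlab-ctl status
sudo gitlab-rake gitlab:check SANITIZE=true
sudo gitlab-rake gitlab:env:info

gitlab:check runs the application self-diagnostics, and gitlab:env:info prints the GitLab and component versions, which makes it easy to verify the jump from 10.8.7 to 11.9.8.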
