mysql-5.6

mysql-5.6 - Facebook's branch of the Oracle MySQL v5.6 database. This includes MyRocks. (GitHub: facebook/mysql-5.6)


How do I migrate a MySQL data directory in a Docker container?

I have a Docker container running MySQL 5.5 with a host data volume. I am upgrading my container to MySQL 5.6 and starting a new container with the same host volume. MySQL was crashing in the container because the mysql.user table was corrupted.

[ERROR] Fatal error: Can't open and lock privilege tables: 
Incorrect key file for table 'user'; try to repair it
160523 12:04:13 mysqld_safe mysqld from pid file  /var/run/mysqld/mysqld.pid ended

I tried the following to fix that:

root@container# mysqld --skip-grant-tables;
root@container# mysql -uroot -ppassword
mysql> repair table user USE_FRM;
+------------+--------+----------+--------------------------------------------------+
| Table      | Op     | Msg_type | Msg_text                                         |
+------------+--------+----------+--------------------------------------------------+
| mysql.user | repair | info     | Key 1 - Found wrong stored record at 0           |
| mysql.user | repair | info     | Found block that points outside data file at 184 |
| mysql.user | repair | info     | Found block that points outside data file at 292 |
| mysql.user | repair | info     | Found block that points outside data file at 296 |
| mysql.user | repair | info     | Found block that points outside data file at 300 |
| mysql.user | repair | info     | Found block that points outside data file at 304 |
| mysql.user | repair | info     | Found block that points outside data file at 308 |
| mysql.user | repair | info     | Found block that points outside data file at 312 |
| mysql.user | repair | info     | Found block that points outside data file at 316 |
| mysql.user | repair | info     | Found block that points outside data file at 340 |
| mysql.user | repair | info     | Found block that points outside data file at 392 |
| mysql.user | repair | info     | Found block that points outside data file at 396 |
| mysql.user | repair | info     | Found block that points outside data file at 400 |
| mysql.user | repair | info     | Found block that points outside data file at 404 |
| mysql.user | repair | info     | Found block that points outside data file at 408 |
| mysql.user | repair | info     | Found block that points outside data file at 412 |
| mysql.user | repair | info     | Found block that points outside data file at 416 |
| mysql.user | repair | info     | Found block that points outside data file at 420 |
| mysql.user | repair | info     | Found block that points outside data file at 448 |
| mysql.user | repair | info     | Found block that points outside data file at 452 |
| mysql.user | repair | info     | Found block that points outside data file at 456 |
| mysql.user | repair | info     | Found block that points outside data file at 460 |
| mysql.user | repair | info     | Found block that points outside data file at 464 |
| mysql.user | repair | info     | Found block that points outside data file at 468 |
| mysql.user | repair | info     | Found block that points outside data file at 472 |
| mysql.user | repair | info     | Found block that points outside data file at 500 |
| mysql.user | repair | info     | Found block that points outside data file at 504 |
| mysql.user | repair | info     | Found block that points outside data file at 508 |
| mysql.user | repair | info     | Found block that points outside data file at 512 |
| mysql.user | repair | info     | Found block that points outside data file at 516 |
| mysql.user | repair | info     | Found block that points outside data file at 520 |
| mysql.user | repair | info     | Found block that points outside data file at 524 |
| mysql.user | repair | info     | Found block that points outside data file at 528 |
| mysql.user | repair | info     | Found block that points outside data file at 556 |
| mysql.user | repair | info     | Found block that points outside data file at 560 |
| mysql.user | repair | info     | Found block that points outside data file at 564 |
| mysql.user | repair | info     | Found block that points outside data file at 568 |
| mysql.user | repair | info     | Found block that points outside data file at 572 |
| mysql.user | repair | info     | Found block that points outside data file at 576 |
| mysql.user | repair | info     | Found block that points outside data file at 580 |
| mysql.user | repair | info     | Found block that points outside data file at 604 |
| mysql.user | repair | info     | Found block that points outside data file at 608 |
| mysql.user | repair | info     | Found block that points outside data file at 612 |
| mysql.user | repair | info     | Found block that points outside data file at 616 |
| mysql.user | repair | info     | Found block that points outside data file at 620 |
| mysql.user | repair | info     | Found block that points outside data file at 624 |
| mysql.user | repair | info     | Found block that points outside data file at 628 |
| mysql.user | repair | info     | Found block that points outside data file at 632 |
| mysql.user | repair | status   | OK                                               |
+------------+--------+----------+--------------------------------------------------+
49 rows in set (0.00 sec)

Problem: I still cannot log in to the MySQL server using my username/password. Any idea how I should upgrade my MySQL data?

root@container# mysql -uroot -ppassword
Warning: Using a password on the command line interface can be insecure.
Welcome to the MySQL monitor.  Commands end with ; or \g.
Your MySQL connection id is 2
Server version: 5.6.30-0ubuntu0.14.04.1 (Ubuntu)

Copyright (c) 2000, 2016, Oracle and/or its affiliates. All rights reserved.

Oracle is a registered trademark of Oracle Corporation and/or its
affiliates. Other names may be trademarks of their respective
owners.

Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.

mysql> exit;
Bye
root@container# ps -ef|grep mysql              
root      1509 31492  0 12:15 ?        00:00:00 grep --color=auto mysql
mysql    31281  4823  0 12:06 ?        00:00:01 mysqld --skip-grant-tables --user=mysql
root@container# kill -9 31281
root@container# ps -ef|grep mysql
root      1698 31492  0 12:16 ?        00:00:00 grep --color=auto mysql
root@container# mysqld_safe &
[1] 1700
root@container# 160523 12:16:34 mysqld_safe Can't log to error log and syslog at the same time.  Remove all --log-error configuration     options for --syslog to take effect.
160523 12:16:34 mysqld_safe Logging to '/var/log/mysql/error.log'.
160523 12:16:34 mysqld_safe Starting mysqld daemon with databases from /var/lib/mysql
^C
root@container# mysql -uroot -ppassword
Warning: Using a password on the command line interface can be insecure.
ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)
root@container# 
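One avenue worth noting (a general sketch, not specific to this container image): when a newer mysqld is started on a data directory created by an older version, the documented step is to run mysql_upgrade, which checks and rebuilds the system tables (including mysql.user) for the new format. Something like:

```shell
# Start the 5.6 server against the old datadir with grant checks disabled,
# then let mysql_upgrade rewrite the system tables for the 5.6 format.
mysqld --skip-grant-tables --user=mysql &
mysql_upgrade -uroot
# Restart mysqld normally afterwards so the rebuilt grant tables are used.
```

The exact invocation depends on how the container's entrypoint starts mysqld.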

Reference:

MySQL Fatal error: Can't open and lock privilege tables: Incorrect file format 'user' https://bugs.mysql.com/bug.php?id=68385


Source: (StackOverflow)

Strange MySQL Popup "Mysql Installer is running community mode"

I recently installed a community version of MySQL from the MySQL site. The version is 5.6.x.

It was done using an installer, and I also chose the option to create a MySQL service on Windows so that I can simply start the service. The service was set not to start automatically.

Although I am not actively using MySQL, while I am working on something else on my PC, a strange pop-up appears saying "MySQL Installer is running in community mode", does something, prints to the screen, and closes:

[screenshot: MySQL Installer popup]

I am surprised to see this and wonder what it is doing. Is it connecting to a remote server, and to do what? Does anyone know the internals, and how can I prevent it from connecting to a remote server?

Could this be a security issue?


Source: (StackOverflow)

ERROR 1006 (HY000) Can't create database (errno: 13) MySQL 5.6.12

I am facing a problem creating a database; it fails with the following error.

mysql> show grants for 'admin'@'%';
+---------------------------------------------------------------------------------------------------------------------------------+
| Grants for admin@%                                                                                                              |
+---------------------------------------------------------------------------------------------------------------------------------+
| GRANT ALL PRIVILEGES ON *.* TO 'admin'@'%' IDENTIFIED BY PASSWORD '*4ACFE3202A5FF5CF467898FC58AAB1D615029441' WITH GRANT OPTION |
+---------------------------------------------------------------------------------------------------------------------------------+
1 row in set (0.00 sec)

mysql> create database abc;
ERROR 1006 (HY000): Can't create database 'abc' (errno: 13)

Here is my users table.

mysql> select host, user from mysql.user;
+-------------+-------+
| host        | user  |
+-------------+-------+
| %           | admin |
| 127.0.0.1   | root  |
| ::1         | root  |
| IVM-MMC-DGW | root  |
| localhost   | admin |
| localhost   | root  |
+-------------+-------+
6 rows in set (0.00 sec)


mysql> show grants;
+-----------------------------------------------------------------------------------------------------------------------------------------+
| Grants for admin@localhost                                                                                                              |
+-----------------------------------------------------------------------------------------------------------------------------------------+
| GRANT ALL PRIVILEGES ON *.* TO 'admin'@'localhost' IDENTIFIED BY PASSWORD '*4ACFE3202A5FF5CF467898FC58AAB1D615029441' WITH GRANT OPTION |
+-----------------------------------------------------------------------------------------------------------------------------------------+
1 row in set (0.00 sec)

I can create and remove tables in the existing database.

The data directory already has mysql:mysql ownership, and the logged-in user has the privilege to create a new database.

What configuration is missing here?
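Note that errno 13 is the OS-level EACCES ("Permission denied"), i.e. a filesystem permission problem rather than a missing SQL grant, which is why the GRANT ALL output above looks correct. A hedged check (paths are assumptions; adjust the datadir to your setup):

```shell
# errno 13 = EACCES: mysqld cannot create the database directory on disk.
perror 13                          # typically prints: OS error code  13:  Permission denied
ls -ld /var/lib/mysql              # the datadir must be writable by the mysql user
chown -R mysql:mysql /var/lib/mysql
```

It is also worth checking whether AppArmor or SELinux confines mysqld to a datadir other than the one configured.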


Source: (StackOverflow)

Using "TYPE = InnoDB" in MySQL throws exception

When I try to execute the following SQL in MySQL, I get an error:

SQL:

        SQL = "CREATE TABLE Ranges (";
        SQL += "ID varchar(20) NOT NULL, ";
        SQL += "Descriptions longtext NULL, ";
        SQL += "Version_Number int NULL, ";
        SQL += "Row_Updated bigint NULL, ";
        SQL += "Last_Updated datetime NULL, ";
        SQL += "XML longtext NULL, ";
        SQL += "PRIMARY KEY (ID)";
        SQL += ") " + "TYPE = InnoDB";

Error:

You have an error in your SQL syntax; check the manual that corresponds to your MySQL server version for the right syntax to use near "TYPE = InnoDB"

But if I remove "TYPE = InnoDB", the query works fine.

The query previously worked fine, i.e. in MySQL 5.0, but when I upgraded to MySQL 5.6 I started getting the above error.

Any suggestions or alternatives?
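For reference, the TYPE = engine syntax was deprecated long ago and removed entirely in MySQL 5.5, which is why a statement that ran on 5.0 fails on 5.6. The 5.6-compatible form uses ENGINE:

```sql
CREATE TABLE Ranges (
  ID varchar(20) NOT NULL,
  Descriptions longtext NULL,
  Version_Number int NULL,
  Row_Updated bigint NULL,
  Last_Updated datetime NULL,
  XML longtext NULL,
  PRIMARY KEY (ID)
) ENGINE = InnoDB;
```

In the string-building code above, only the last fragment needs to change, to ") ENGINE = InnoDB".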


Source: (StackOverflow)

Resetting ROOT password in MySQL 5.6

I have been following these instructions for resetting the root password for a local installation of MySQL 5.6 on a Windows 7 laptop.

I stopped the service, created init-file, and ran the following command (as Administrator):

"C:\Program Files\MySQL\MySQL Server 5.6\bin\mysqld" --defaults-file="C:\ProgramData\MySQL\MySQL Server 5.6\my.ini" --init-file=C:\\MySQL-misc\\mysql-init.txt
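For context, the init file in that procedure is just a text file of SQL statements executed at server startup; in the 5.6 documentation it contains a single statement along these lines (the password here is a placeholder):

```sql
SET PASSWORD FOR 'root'@'localhost' = PASSWORD('MyNewPass');
```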

I got the following warning:

2014-02-08 15:44:10 0 [Warning] TIMESTAMP with implicit DEFAULT value is deprecated. Please use --explicit_defaults_for_timestamp server option (see documentation for more details).

Since it's a warning I'm not sure whether I need to fix anything and then redo the process again.

Currently the command window is still on and does not accept any input. Should I force-close it or is there anything I can do to complete the process gracefully?

UPDATE

I killed the Command window and tried to restart the service. Got an error.

Restarted Windows and the service automatically started. The new root password seems to work. I was successfully able to use various functions of Workbench that require the password.

So, the warning was indeed just a warning.


Source: (StackOverflow)

Unable to enable logs in MySQL - error: Found option without preceding group in config file

I am trying to enable logs on my MySQL server, but with no success. Here is what I have done:

1) I created the log file with the right permissions:

touch /var/log/mysql/mysql.log   
chown mysql:mysql /var/log/mysql/mysql.log

2) I added the needed changes to the my.cnf file:

general_log_file        = /var/log/mysql/mysql.log
general_log             = 1

My my.cnf file is the following :

#
# The MySQL database server configuration file.
#
# You can copy this to one of:
# - "/etc/mysql/my.cnf" to set global options,
# - "~/.my.cnf" to set user-specific options.
#
# One can use all long options that the program supports.
# Run program with --help to get a list of available options and with
# --print-defaults to see which it would actually understand and use.
#
# For explanations see
# http://dev.mysql.com/doc/mysql/en/server-system-variables.html

#
# * IMPORTANT: Additional settings that can override those from this file!
#   The files must end with '.cnf', otherwise they'll be ignored.
#

!includedir /etc/mysql/conf.d/
!includedir /etc/mysql/mysql.conf.d/

general_log_file        = /var/log/mysql/mysql.log
general_log             = 1

I am unable to restart MySQL, and when I try to log in I get this error:

error: Found option without preceding group in config file: /etc/mysql/my.cnf at line: 23 Fatal error in defaults handling. Program aborted

What does this mean, and how can I fix it?

I am using Ubuntu 15.04 with MySQL 5.6.24-0ubuntu2
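The error means exactly what it says: the two general_log lines were appended after the !includedir lines, outside any [group] header, and every option in a MySQL option file must belong to a group. Moving them under a [mysqld] section should fix it:

```ini
[mysqld]
general_log_file = /var/log/mysql/mysql.log
general_log      = 1
```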


Source: (StackOverflow)

Compiling PHP from source with MySQL version 5.6

I am trying to compile PHP from source to get ZTS working. I have the PHP source, and I try to install the dependencies with

apt-get build-dep php5

I also have MySQL 5.6 installed for Ubuntu 14.04LTS and I am getting this error after the command:

The following packages have unmet dependencies:
 mysql-server : Depends: mysql-server-5.5 but it is not going to be installed
E: Build-dependencies for php5 could not be satisfied.

Is there any way to bypass this dependency? MySQL 5.6 works fine with the prebuilt PHP 5 packages, but I cannot start compiling because the build dependency cannot be satisfied. Thanks!
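One way to sidestep the libmysqlclient build dependency entirely (a suggestion, not verified against this exact setup) is to build PHP with the bundled mysqlnd driver, which needs no external MySQL client library:

```shell
# mysqlnd is compiled into PHP itself, so apt's mysql-server-5.5
# build-dependency is not needed for the MySQL extensions.
./configure --enable-maintainer-zts \
            --with-mysqli=mysqlnd \
            --with-pdo-mysql=mysqlnd
make
```

Any other build dependencies (libxml2-dev, etc.) would still have to be installed individually rather than via build-dep.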


Source: (StackOverflow)

Performance issue with MySQL 5.7

I've migrated my MySQL database from version 5.6 to 5.7. Now I have a performance issue.

The same statements were executed in both versions, and I noticed that 5.7 is significantly slower, especially with large amounts of data.

Even a simple SELECT statement is much slower:

5.6: 2.948 sec
5.7: 6.536 sec

In my application I've got more complex statements that take far more time to execute, so it becomes unusable.

I've also noticed that the column count of my table matters in 5.7 but not in 5.6.

The configuration values in the my.ini didn't change.
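One 5.6-to-5.7 difference that fits this symptom (an assumption worth testing, not a confirmed diagnosis): the GROUP BY ... ORDER BY below needs an on-disk temporary table, and 5.7 introduced internal_tmp_disk_storage_engine, which defaults to InnoDB, whereas 5.6 always used MyISAM for on-disk temp tables. With wide rows this can be markedly slower, which would also explain why the column count suddenly matters in 5.7:

```sql
-- Check the current setting (5.7 only; the variable does not exist in 5.6)
SHOW VARIABLES LIKE 'internal_tmp_disk_storage_engine';
-- Revert to the 5.6 behaviour for comparison
SET GLOBAL internal_tmp_disk_storage_engine = MYISAM;
```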

To test this issue you can use the following code:

#Create Tables
DROP TABLE IF EXISTS aTest;
DROP TABLE IF EXISTS sTest;
CREATE TABLE `aTest` (
    `T_BETREFF` CHAR(40) NOT NULL DEFAULT '',
    `T_DATUM` DATETIME NOT NULL DEFAULT '0000-00-00 00:00:00',
    `T_ZEIT` CHAR(5) NOT NULL DEFAULT '',
    `T_SACHBEARB` CHAR(2) NOT NULL DEFAULT '',
    `SYS_UID` INT NOT NULL AUTO_INCREMENT,
    `SYS_DATE` TIMESTAMP NOT NULL DEFAULT CURRENT_TIMESTAMP ON UPDATE CURRENT_TIMESTAMP,
    UNIQUE INDEX `TODO01` (`SYS_UID`),
    INDEX `TODO02` (`T_SACHBEARB`)
)
COLLATE='latin1_swedish_ci'
ENGINE=MyISAM;

CREATE TABLE `sTest` (
    `V_NR` CHAR(2) NOT NULL DEFAULT '',
    `V_NAME` CHAR(30) NOT NULL DEFAULT '',
    `SYS_UID` INT NOT NULL AUTO_INCREMENT,
    `SYS_DATE` TIMESTAMP NOT NULL DEFAULT CURRENT_TIMESTAMP ON UPDATE CURRENT_TIMESTAMP,
    UNIQUE INDEX `S_VERK01` (`SYS_UID`),
    UNIQUE INDEX `S_VERK02` (`V_NR`)
)
COLLATE='latin1_swedish_ci'
ENGINE=MyISAM;

#Fill Tables
DROP PROCEDURE IF EXISTS load_foo_atest_data;
DELIMITER //
CREATE PROCEDURE load_foo_atest_data()
BEGIN
    DECLARE v_max int unsigned default 1000000;
    DECLARE v_counter INT UNSIGNED DEFAULT 0;

    WHILE v_counter < v_max DO
        INSERT INTO aTest (T_BETREFF, T_DATUM, T_ZEIT, T_SACHBEARB) VALUES('TEST 1234,', NOW(), '00:00', v_counter DIV 100000 * 10);
        IF (v_counter MOD 100000 = 0) THEN
            INSERT INTO sTest (V_NR, V_NAME) VALUES(v_counter DIV 100000 * 10, CONCAT('TEST', v_counter));
        END IF;
        SET v_counter=v_counter+1;
    END WHILE;
END //
DELIMITER ;

call load_foo_atest_data();

Here is my statement:

FLUSH TABLES;
SELECT * FROM atest
LEFT JOIN sTEST ON v_nr = t_sachbearb
GROUP BY atest.sys_uid
ORDER BY t_datum DESC, t_zeit DESC

EDIT

EXPLAIN results:

5.6:

+----+-------------+-------+--------+---------------+----------+---------+-------------------------------------------+---------+---------------------------------+
| id | select_type | table |  type  | possible_keys |   key    | key_len |                    ref                    |  rows   |              Extra              |
+----+-------------+-------+--------+---------------+----------+---------+-------------------------------------------+---------+---------------------------------+
|  1 | SIMPLE      | atest | ALL    | TODO01        | NULL     | NULL    | NULL                                      | 1000000 | Using temporary; Using filesort |
|  1 | SIMPLE      | sTEST | eq_ref | S_VERK02      | S_VERK02 | 2       | obs_mysql_bergau_57test.atest.T_SACHBEARB |       1 | NULL                            |
+----+-------------+-------+--------+---------------+----------+---------+-------------------------------------------+---------+---------------------------------+

5.7

+----+-------------+-------+------------+--------+---------------+----------+---------+-------------------------------------------+---------+----------+---------------------------------+
| id | select_type | table | partitions |  type  | possible_keys |   key    | key_len |                    ref                    |  rows   | filtered |              Extra              |
+----+-------------+-------+------------+--------+---------------+----------+---------+-------------------------------------------+---------+----------+---------------------------------+
|  1 | SIMPLE      | atest | NULL       | ALL    | TODO01        | NULL     | NULL    | NULL                                      | 1000000 | 100.00   | Using temporary; Using filesort |
|  1 | SIMPLE      | sTEST | NULL       | eq_ref | S_VERK02      | S_VERK02 | 2       | obs_mysql_bergau_57test.atest.T_SACHBEARB |       1 | 100.00   | NULL                            |
+----+-------------+-------+------------+--------+---------------+----------+---------+-------------------------------------------+---------+----------+---------------------------------+

My.ini:

# Other default tuning values
# MySQL Server Instance Configuration File
# ----------------------------------------------------------------------
# Generated by the MySQL Server Instance Configuration Wizard
#
#
# Installation Instructions
# ----------------------------------------------------------------------
#
# On Linux you can copy this file to /etc/my.cnf to set global options,
# mysql-data-dir/my.cnf to set server-specific options
# (@localstatedir@ for this installation) or to
# ~/.my.cnf to set user-specific options.
#
# On Windows you should keep this file in the installation directory 
# of your server (e.g. C:\Program Files\MySQL\MySQL Server X.Y). To
# make sure the server reads the config file use the startup option 
# "--defaults-file". 
#
# To run run the server from the command line, execute this in a 
# command line shell, e.g.
# mysqld --defaults-file="C:\Program Files\MySQL\MySQL Server X.Y\my.ini"
#
# To install the server as a Windows service manually, execute this in a 
# command line shell, e.g.
# mysqld --install MySQLXY --defaults-file="C:\Program Files\MySQL\MySQL Server X.Y\my.ini"
#
# And then execute this in a command line shell to start the server, e.g.
# net start MySQLXY
#
#
# Guildlines for editing this file
# ----------------------------------------------------------------------
#
# In this file, you can use all long options that the program supports.
# If you want to know the options a program supports, start the program
# with the "--help" option.
#
# More detailed information about the individual options can also be
# found in the manual.
#
# For advice on how to change settings please see
# http://dev.mysql.com/doc/refman/5.7/en/server-configuration-defaults.html
#
#
# CLIENT SECTION
# ----------------------------------------------------------------------
#
# The following options will be read by MySQL client applications.
# Note that only client applications shipped by MySQL are guaranteed
# to read this section. If you want your own MySQL client program to
# honor these values, you need to specify it as an option during the
# MySQL client library initialization.
[client]
no-beep

# pipe
# socket=0.0
port=3306

[mysql]

default-character-set=utf8


# SERVER SECTION
# ----------------------------------------------------------------------
#
# The following options will be read by the MySQL Server. Make sure that
# you have installed the server correctly (see above) so it reads this 
# file.
#
# server_type=1
[mysqld]

# The next three options are mutually exclusive to SERVER_PORT below.
# skip-networking

# enable-named-pipe

# shared-memory

# shared-memory-base-name=MYSQL

# The Pipe the MySQL Server will use
# socket=MYSQL

# The TCP/IP Port the MySQL Server will listen on
port=3306

# Path to installation directory. All paths are usually resolved relative to this.
# basedir="C:/Program Files/MySQL/MySQL Server 5.7/"

# Path to the database root
datadir=C:/ProgramData/MySQL/MySQL Server 5.7\Data

# The default character set that will be used when a new schema or table is
# created and no character set is defined
character-set-server = latin1

# The default storage engine that will be used when create new tables when
default-storage-engine = MYISAM

# Set the SQL mode to strict
sql-mode="STRICT_TRANS_TABLES,NO_AUTO_CREATE_USER,NO_ENGINE_SUBSTITUTION"

# Enable Windows Authentication
# plugin-load=authentication_windows.dll

# General and Slow logging.
log-output=FILE
general-log=1
general_log_file="PC-THIEL.log"
slow-query-log=1
slow_query_log_file="PC-THIEL-slow.log"
long_query_time=10

# Binary Logging.
# log-bin

# Error Logging.
log-error="PC-THIEL.err"

# Server Id.
server-id=1

# Secure File Priv.
secure-file-priv="C:/ProgramData/MySQL/MySQL Server 5.7/Uploads"

# The maximum amount of concurrent sessions the MySQL server will
# allow. One of these connections will be reserved for a user with
# SUPER privileges to allow the administrator to login even if the
# connection limit has been reached.
max_connections=1000

# Query cache is used to cache SELECT results and later return them
# without actual executing the same query once again. Having the query
# cache enabled may result in significant speed improvements, if your
# have a lot of identical queries and rarely changing tables. See the
# "Qcache_lowmem_prunes" status variable to check if the current value
# is high enough for your load.
# Note: In case your tables change very often or if your queries are
# textually different every time, the query cache may result in a
# slowdown instead of a performance improvement.
query_cache_size = 2G

# The number of open tables for all threads. Increasing this value
# increases the number of file descriptors that mysqld requires.
# Therefore you have to make sure to set the amount of open files
# allowed to at least 4096 in the variable "open-files-limit" in
# section [mysqld_safe]
table_open_cache=2000

# Maximum size for internal (in-memory) temporary tables. If a table
# grows larger than this value, it is automatically converted to disk
# based table This limitation is for a single table. There can be many
# of them.
tmp_table_size=3G

# How many threads we should keep in a cache for reuse. When a client
# disconnects, the client's threads are put in the cache if there aren't
# more than thread_cache_size threads from before.  This greatly reduces
# the amount of thread creations needed if you have a lot of new
# connections. (Normally this doesn't give a notable performance
# improvement if you have a good thread implementation.)
thread_cache_size=10

#*** MyISAM Specific options
# The maximum size of the temporary file MySQL is allowed to use while
# recreating the index (during REPAIR, ALTER TABLE or LOAD DATA INFILE.
# If the file-size would be bigger than this, the index will be created
# through the key cache (which is slower).
myisam_max_sort_file_size=100G

# If the temporary file used for fast index creation would be bigger
# than using the key cache by the amount specified here, then prefer the
# key cache method.  This is mainly used to force long character keys in
# large tables to use the slower key cache method to create the index.
myisam_sort_buffer_size=6G

# Size of the Key Buffer, used to cache index blocks for MyISAM tables.
# Do not set it larger than 30% of your available memory, as some memory
# is also required by the OS to cache rows. Even if you're not using
# MyISAM tables, you should still set it to 8-64M as it will also be
# used for internal temporary disk tables.
key_buffer_size = 8G

# Size of the buffer used for doing full table scans of MyISAM tables.
# Allocated per thread, if a full scan is needed.
read_buffer_size=64K
read_rnd_buffer_size=256K

#*** INNODB Specific options ***
# innodb_data_home_dir=0.0

# Use this option if you have a MySQL server with InnoDB support enabled
# but you do not plan to use it. This will save memory and disk space
# and speed up some things.
# skip-innodb

# If set to 1, InnoDB will flush (fsync) the transaction logs to the
# disk at each commit, which offers full ACID behavior. If you are
# willing to compromise this safety, and you are running small
# transactions, you may set this to 0 or 2 to reduce disk I/O to the
# logs. Value 0 means that the log is only written to the log file and
# the log file flushed to disk approximately once per second. Value 2
# means the log is written to the log file at each commit, but the log
# file is only flushed to disk approximately once per second.
innodb_flush_log_at_trx_commit=1

# The size of the buffer InnoDB uses for buffering log data. As soon as
# it is full, InnoDB will have to flush it to disk. As it is flushed
# once per second anyway, it does not make sense to have it very large
# (even with long transactions).
innodb_log_buffer_size=1M

# InnoDB, unlike MyISAM, uses a buffer pool to cache both indexes and
# row data. The bigger you set this the less disk I/O is needed to
# access data in tables. On a dedicated database server you may set this
# parameter up to 80% of the machine physical memory size. Do not set it
# too large, though, because competition of the physical memory may
# cause paging in the operating system.  Note that on 32bit systems you
# might be limited to 2-3.5G of user level memory per process, so do not
# set it too high.
innodb_buffer_pool_size=8M

# Size of each log file in a log group. You should set the combined size
# of log files to about 25%-100% of your buffer pool size to avoid
# unneeded buffer pool flush activity on log file overwrite. However,
# note that a larger logfile size will increase the time needed for the
# recovery process.
innodb_log_file_size=48M

# Number of threads allowed inside the InnoDB kernel. The optimal value
# depends highly on the application, hardware as well as the OS
# scheduler properties. A too high value may lead to thread thrashing.
innodb_thread_concurrency=17

# The increment size (in MB) for extending the size of an auto-extend
# InnoDB system tablespace file when it becomes full.
innodb_autoextend_increment=64

# The number of regions that the InnoDB buffer pool is divided into.
# For systems with buffer pools in the multi-gigabyte range, dividing
# the buffer pool into separate instances can improve concurrency,
# by reducing contention as different threads read and write to cached pages.
innodb_buffer_pool_instances=8

# Determines the number of threads that can enter InnoDB concurrently.
innodb_concurrency_tickets=5000

# Specifies how long in milliseconds (ms) a block inserted into the old
# sublist must stay there after its first access before
# it can be moved to the new sublist.
innodb_old_blocks_time=1000

# It specifies the maximum number of .ibd files that MySQL can keep
# open at one time. The minimum value is 10.
innodb_open_files=300

# When this variable is enabled, InnoDB updates statistics during
# metadata statements.
innodb_stats_on_metadata=0

# When innodb_file_per_table is enabled (the default in 5.6.6 and
# higher), InnoDB stores the data and indexes for each newly created table
# in a separate .ibd file, rather than in the system tablespace.
innodb_file_per_table=1

# Use the following list of values: 0 for crc32, 1 for strict_crc32,
# 2 for innodb, 3 for strict_innodb, 4 for none, 5 for strict_none.
innodb_checksum_algorithm=0

# The number of outstanding connection requests MySQL can have.
# This option is useful when the main MySQL thread gets many connection requests in a very short time.
# It then takes some time (although very little) for the main thread to check the connection and start a new thread.
# The back_log value indicates how many requests can be stacked during this short time before MySQL momentarily
# stops answering new requests.
# You need to increase this only if you expect a large number of connections in a short period of time.
back_log=80

# If this is set to a nonzero value, all tables are closed every flush_time seconds to free up resources and
# synchronize unflushed data to disk.
# This option is best used only on systems with minimal resources.
flush_time=0

# The minimum size of the buffer that is used for plain index scans, range index scans, and joins that do not use
# indexes and thus perform full table scans.
join_buffer_size=256K

# The maximum size of one packet or any generated or intermediate string, or any parameter sent by the
# mysql_stmt_send_long_data() C API function.
max_allowed_packet=4M

# If more than this many successive connection requests from a host are interrupted without a successful connection,
# the server blocks that host from performing further connections.
max_connect_errors=1000

# Changes the number of file descriptors available to mysqld.
# You should try increasing the value of this option if mysqld gives you the error "Too many open files".
open_files_limit=4161

# Set the query cache type. 0 for OFF, 1 for ON and 2 for DEMAND.
query_cache_type = 1

# If you see many sort_merge_passes per second in SHOW GLOBAL STATUS output, you can consider increasing the
# sort_buffer_size value to speed up ORDER BY or GROUP BY operations that cannot be improved with query optimization
# or improved indexing.
sort_buffer_size=256K

# The number of table definitions (from .frm files) that can be stored in the definition cache.
# If you use a large number of tables, you can create a large table definition cache to speed up opening of tables.
# The table definition cache takes less space and does not use file descriptors, unlike the normal table cache.
# The minimum and default values are both 400. table_definition_cache=1400

# Specify the maximum size of a row-based binary log event, in bytes.
# Rows are grouped into events smaller than this size if possible. The value should be a multiple of 256. binlog_row_event_max_size=8K

# If the value of this variable is greater than 0, a replication slave synchronizes its master.info file to disk.
# (using fdatasync()) after every sync_master_info events. sync_master_info=10000

# If the value of this variable is greater than 0, the MySQL server synchronizes its relay log to disk.
# (using fdatasync()) after every sync_relay_log writes to the relay log. sync_relay_log=10000

# If the value of this variable is greater than 0, a replication slave synchronizes its relay-log.info file to disk.
# (using fdatasync()) after every sync_relay_log_info transactions. sync_relay_log_info=10000

query_cache_limit = 30M collation-server = latin1_swedish_ci default_tmp_storage_engine = MYISAM


myisam_use_mmap concurrent_insert = AUTO
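Once these options are in my.cnf, the running values can be checked from a client session, and the dynamic ones can be adjusted without a restart. A quick sketch, using the variable names from the listing above:

```sql
-- Confirm the server picked up the settings from my.cnf
SHOW VARIABLES LIKE 'join_buffer_size';
SHOW VARIABLES LIKE 'query_cache_type';

-- Dynamic variables can also be changed at runtime for testing,
-- e.g. the query cache type (0 = OFF, 1 = ON, 2 = DEMAND):
SET GLOBAL query_cache_type = 1;
```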

Source: (StackOverflow)

After DB version change, index won't be used automatically

We recently migrated our Magento application from a private host to AWS web services. We noticed that some of the internal functionality of Magento was taking an excessively long period of time to execute after the migration, so started to investigate.

One of the queries in question is a simple customer select query, with around 9-10 regular joins to attribute tables to get the attributes.

We have run some tests on the query and found that the difference between the old host and AWS is that on the old host, the MySQL optimizer appears to use the correct index, whereas in AWS it resorts to using filesort, ignoring the index.

Using FORCE INDEX(index_name) makes the query execute correctly in AWS, however we don't want to go down this road and would rather fix the issue in the database configuration than make manual hacks throughout our Magento application. To be clear, this is not an issue with our indexes, they are set up correctly.

For background:

  • I have copied all the MySQL parameters from the old host's my.cnf file to a parameter group in RDS, but nothing made any difference
  • The tables are all InnoDB
  • I've run analyze, repair and optimize queries etc
  • The query takes around 45 seconds to complete on RDS
  • The query took around 2 seconds to complete on the old host, or when I use FORCE INDEX() to force RDS to behave in the same way as the old host

The old MySQL server was running version 5.1.61, and the AWS RDS instance we are running is on 5.6.19. A consulting group suggested to us that we downgrade our RDS instance to 5.1.61, however again we don't want to do this as it is not a sustainable solution.

The query in question is below (shortened by removing fields from the select for the sake of space):

SELECT
    `e`.*
    -- various field names here, removed
FROM `customer_entity` AS `e`
LEFT JOIN `customer_entity_int` AS `at_default_billing` ON (`at_default_billing`.`entity_id` = `e`.`entity_id`) AND (`at_default_billing`.`attribute_id` = '13')
LEFT JOIN `customer_address_entity_varchar` AS `at_billing_postcode` ON (`at_billing_postcode`.`entity_id` = `at_default_billing`.`value`) AND (`at_billing_postcode`.`attribute_id` = '30')
LEFT JOIN `customer_address_entity_varchar` AS `at_billing_city` ON (`at_billing_city`.`entity_id` = `at_default_billing`.`value`) AND (`at_billing_city`.`attribute_id` = '26')
LEFT JOIN `customer_address_entity_varchar` AS `at_billing_telephone` ON (`at_billing_telephone`.`entity_id` = `at_default_billing`.`value`) AND (`at_billing_telephone`.`attribute_id` = '31')
LEFT JOIN `customer_address_entity_varchar` AS `at_billing_regione` ON (`at_billing_regione`.`entity_id` = `at_default_billing`.`value`) AND (`at_billing_regione`.`attribute_id` = '28')
LEFT JOIN `customer_address_entity_varchar` AS `at_billing_country_id` ON (`at_billing_country_id`.`entity_id` = `at_default_billing`.`value`) AND (`at_billing_country_id`.`attribute_id` = '27')
LEFT JOIN `core_store` AS `at_store_name` ON (at_store_name.`store_id`=e.store_id)
LEFT JOIN `customer_entity_varchar` AS `firstname` ON e.entity_id = firstname.entity_id AND firstname.attribute_id = 5
LEFT JOIN `customer_entity_varchar` AS `lastname` ON e.entity_id = lastname.entity_id AND lastname.attribute_id = 7 

WHERE (`e`.`entity_type_id` = '1') 

ORDER BY `e`.`entity_id`
DESC LIMIT 20;
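For reference, the FORCE INDEX workaround mentioned earlier is a one-line change on the driving table; a sketch (joins elided, index name taken from the EXPLAIN output):

```sql
SELECT
    `e`.*
FROM `customer_entity` AS `e`
    FORCE INDEX (IDX_CUSTOMER_ENTITY_ENTITY_TYPE_ID)
-- ... same LEFT JOINs as above ...
WHERE (`e`.`entity_type_id` = '1')
ORDER BY `e`.`entity_id` DESC
LIMIT 20;
```

This makes the 5.6 optimizer read rows in index order and stop after 20, instead of materializing and filesorting the whole result, but it would have to be repeated at every call site.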

A summary of the differences between servers when EXPLAINing this query is below:

RDS - MySQL v5.6.19:

*************************** 1. row ***************************
           id: 1
  select_type: SIMPLE
        table: e
         type: ref
possible_keys: IDX_CUSTOMER_ENTITY_ENTITY_TYPE_ID
          key: IDX_CUSTOMER_ENTITY_ENTITY_TYPE_ID
      key_len: 2
          ref: const
         rows: 653990
        Extra: Using temporary; Using filesort

Old host - MySQL v5.1.61, or when FORCE INDEX is used on RDS:

*************************** 1. row ***************************
           id: 1
  select_type: SIMPLE
        table: e
         type: ref
possible_keys: IDX_CUSTOMER_ENTITY_ENTITY_TYPE_ID
          key: IDX_CUSTOMER_ENTITY_ENTITY_TYPE_ID
      key_len: 2
          ref: const
         rows: 644775
        Extra: Using where

I'm aware that the query optimizer has very likely changed between these database versions. I don't know exactly what those differences are, but I'm looking for a solution that addresses them.


Edit: here's a comparison list from RDS of the parameters I've copied over from the old host vs the default parameters for this MySQL version. None of these parameters affected the result above, whether they were set or I used the standard RDS parameters:

[image: comparison of copied vs default RDS parameter values]


Source: (StackOverflow)

Updating millions of records on inner joined subquery - optimization techniques

I'm looking for some advice on how I might better optimize this query.

For each _piece_detail record that:

  1. Contains at least one matching _scan record on (zip, zip_4, zip_delivery_point, serial_number)
  2. Belongs to a company from mailing_groups (through a chain of relationships)
  3. Has either:
    1. first_scan_date_time that is greater than the MIN(scan_date_time) of the related _scan records
    2. latest_scan_date_time that is less than the MAX(scan_date_time) of the related _scan records

I will need to:

  1. Set _piece_detail.first_scan_date_time to MIN(_scan.scan_date_time)
  2. Set _piece_detail.latest_scan_date_time to MAX(_scan.scan_date_time)

Since I'm dealing with millions upon millions of records, I am trying to reduce the number of records that I actually have to search through. Here are some facts about the data:

  1. The _piece_details table is partitioned by job_id, so it seems to make the most sense to run through these checks in the order of _piece_detail.job_id, _piece_detail.piece_id.
  2. The scan records table contains over 100,000,000 records right now and is partitioned by (zip, zip_4, zip_delivery_point, serial_number, scan_date_time), which is the same key that is used to match a _scan with a _piece_detail (aside from scan_date_time).
  3. Only about 40% of the _piece_detail records belong to a mailing_group, but we don't know which ones these are until we run through the full relationship of joins.
  4. Only about 30% of the _scan records belong to a _piece_detail with a mailing_group.
  5. There are typically between 0 and 4 _scan records per _piece_detail.

Now, I am having a hell of a time finding a way to execute this in a decent way. I had originally started with something like this:

UPDATE _piece_detail
    INNER JOIN (
        SELECT _piece_detail.job_id, _piece_detail.piece_id, MIN(_scan.scan_date_time) as first_scan_date_time, MAX(_scan.scan_date_time) as latest_scan_date_time
        FROM _piece_detail
            INNER JOIN _container_quantity 
                ON _piece_detail.cqt_database_id = _container_quantity.cqt_database_id 
                AND _piece_detail.job_id = _container_quantity.job_id
            INNER JOIN _container_summary 
                ON _container_quantity.container_id = _container_summary.container_id 
                AND _container_summary.job_id = _container_quantity.job_id
            INNER JOIN _mail_piece_unit 
                ON _container_quantity.mpu_id = _mail_piece_unit.mpu_id 
                AND _container_quantity.job_id = _mail_piece_unit.job_id
            INNER JOIN _header 
                ON _header.job_id = _piece_detail.job_id
            INNER JOIN mailing_groups 
                ON _mail_piece_unit.mpu_company = mailing_groups.mpu_company
            INNER JOIN _scan
                ON _scan.zip = _piece_detail.zip 
                AND _scan.zip_4 = _piece_detail.zip_4 
                AND _scan.zip_delivery_point = _piece_detail.zip_delivery_point 
                AND _scan.serial_number = _piece_detail.serial_number 
        GROUP BY _piece_detail.job_id, _piece_detail.piece_id, _scan.zip, _scan.zip_4, _scan.zip_delivery_point, _scan.serial_number
    ) as t1 ON _piece_detail.job_id = t1.job_id AND _piece_detail.piece_id = t1.piece_id 
SET _piece_detail.first_scan_date_time = t1.first_scan_date_time, _piece_detail.latest_scan_date_time = t1.latest_scan_date_time
WHERE _piece_detail.first_scan_date_time < t1.first_scan_date_time 
    OR _piece_detail.latest_scan_date_time > t1.latest_scan_date_time;

I thought that this may have been trying to load too much into memory at once and might not be using the indexes properly.

Then I thought that I might be able to avoid that huge joined subquery by adding two LEFT JOIN subqueries to get the min/max, like so:

UPDATE _piece_detail
    INNER JOIN _container_quantity 
        ON _piece_detail.cqt_database_id = _container_quantity.cqt_database_id 
        AND _piece_detail.job_id = _container_quantity.job_id
    INNER JOIN _container_summary 
        ON _container_quantity.container_id = _container_summary.container_id 
        AND _container_summary.job_id = _container_quantity.job_id
    INNER JOIN _mail_piece_unit 
        ON _container_quantity.mpu_id = _mail_piece_unit.mpu_id 
        AND _container_quantity.job_id = _mail_piece_unit.job_id
    INNER JOIN _header 
        ON _header.job_id = _piece_detail.job_id
    INNER JOIN mailing_groups 
        ON _mail_piece_unit.mpu_company = mailing_groups.mpu_company
    LEFT JOIN _scan fs ON (fs.zip, fs.zip_4, fs.zip_delivery_point, fs.serial_number) = (
        SELECT zip, zip_4, zip_delivery_point, serial_number
        FROM _scan
        WHERE zip = _piece_detail.zip 
            AND zip_4 = _piece_detail.zip_4 
            AND zip_delivery_point = _piece_detail.zip_delivery_point 
            AND serial_number = _piece_detail.serial_number
        ORDER BY scan_date_time ASC
        LIMIT 1
        )
    LEFT JOIN _scan ls ON (ls.zip, ls.zip_4, ls.zip_delivery_point, ls.serial_number) = (
        SELECT zip, zip_4, zip_delivery_point, serial_number
        FROM _scan
        WHERE zip = _piece_detail.zip 
            AND zip_4 = _piece_detail.zip_4 
            AND zip_delivery_point = _piece_detail.zip_delivery_point 
            AND serial_number = _piece_detail.serial_number
        ORDER BY scan_date_time DESC
        LIMIT 1
        )
SET _piece_detail.first_scan_date_time = fs.scan_date_time, _piece_detail.latest_scan_date_time = ls.scan_date_time
WHERE _piece_detail.first_scan_date_time < fs.scan_date_time 
    OR _piece_detail.latest_scan_date_time > ls.scan_date_time

These are the explains when I convert them to SELECT statements:

+----+-------------+---------------------+--------+----------------------------------------------------+---------------+---------+------------------------------------------------------------------------------------------------------------------------+--------+----------------------------------------------+
| id | select_type | table               | type   | possible_keys                                      | key           | key_len | ref                                                                                                                    | rows   | Extra                                        |
+----+-------------+---------------------+--------+----------------------------------------------------+---------------+---------+------------------------------------------------------------------------------------------------------------------------+--------+----------------------------------------------+
|  1 | PRIMARY     | <derived2>          | ALL    | NULL                                               | NULL          | NULL    | NULL                                                                                                                   | 844161 | NULL                                         |
|  1 | PRIMARY     | _piece_detail       | eq_ref | PRIMARY,first_scan_date_time,latest_scan_date_time | PRIMARY       | 18      | t1.job_id,t1.piece_id                                                                                                  |      1 | Using where                                  |
|  2 | DERIVED     | _header             | index  | PRIMARY                                            | date_prepared | 3       | NULL                                                                                                                   |     87 | Using index; Using temporary; Using filesort |
|  2 | DERIVED     | _piece_detail       | ref    | PRIMARY,cqt_database_id,zip                        | PRIMARY       | 10      | odms._header.job_id                                                                                                    |   9703 | NULL                                         |
|  2 | DERIVED     | _container_quantity | eq_ref | unique,mpu_id,job_id,job_id_container_quantity     | unique        | 14      | odms._header.job_id,odms._piece_detail.cqt_database_id                                                                 |      1 | NULL                                         |
|  2 | DERIVED     | _mail_piece_unit    | eq_ref | PRIMARY,company,job_id_mail_piece_unit             | PRIMARY       | 14      | odms._container_quantity.mpu_id,odms._header.job_id                                                                    |      1 | Using where                                  |
|  2 | DERIVED     | mailing_groups      | eq_ref | PRIMARY                                            | PRIMARY       | 27      | odms._mail_piece_unit.mpu_company                                                                                      |      1 | Using index                                  |
|  2 | DERIVED     | _container_summary  | eq_ref | unique,container_id,job_id_container_summary       | unique        | 14      | odms._header.job_id,odms._container_quantity.container_id                                                              |      1 | Using index                                  |
|  2 | DERIVED     | _scan               | ref    | PRIMARY                                            | PRIMARY       | 28      | odms._piece_detail.zip,odms._piece_detail.zip_4,odms._piece_detail.zip_delivery_point,odms._piece_detail.serial_number |      1 | Using index                                  |
+----+-------------+---------------------+--------+----------------------------------------------------+---------------+---------+------------------------------------------------------------------------------------------------------------------------+--------+----------------------------------------------+

+----+--------------------+---------------------+--------+--------------------------------------------------------------------+---------------+---------+------------------------------------------------------------------------------------------------------------------------+-----------+-----------------------------------------------------------------+
| id | select_type        | table               | type   | possible_keys                                                      | key           | key_len | ref                                                                                                                    | rows      | Extra                                                           |
+----+--------------------+---------------------+--------+--------------------------------------------------------------------+---------------+---------+------------------------------------------------------------------------------------------------------------------------+-----------+-----------------------------------------------------------------+
|  1 | PRIMARY            | _header             | index  | PRIMARY                                                            | date_prepared | 3       | NULL                                                                                                                   |        87 | Using index                                                     |
|  1 | PRIMARY            | _piece_detail       | ref    | PRIMARY,cqt_database_id,first_scan_date_time,latest_scan_date_time | PRIMARY       | 10      | odms._header.job_id                                                                                                    |      9703 | NULL                                                            |
|  1 | PRIMARY            | _container_quantity | eq_ref | unique,mpu_id,job_id,job_id_container_quantity                     | unique        | 14      | odms._header.job_id,odms._piece_detail.cqt_database_id                                                                 |         1 | NULL                                                            |
|  1 | PRIMARY            | _mail_piece_unit    | eq_ref | PRIMARY,company,job_id_mail_piece_unit                             | PRIMARY       | 14      | odms._container_quantity.mpu_id,odms._header.job_id                                                                    |         1 | Using where                                                     |
|  1 | PRIMARY            | mailing_groups      | eq_ref | PRIMARY                                                            | PRIMARY       | 27      | odms._mail_piece_unit.mpu_company                                                                                      |         1 | Using index                                                     |
|  1 | PRIMARY            | _container_summary  | eq_ref | unique,container_id,job_id_container_summary                       | unique        | 14      | odms._header.job_id,odms._container_quantity.container_id                                                              |         1 | Using index                                                     |
|  1 | PRIMARY            | fs                  | index  | NULL                                                               | updated       | 1       | NULL                                                                                                                   | 102462928 | Using where; Using index; Using join buffer (Block Nested Loop) |
|  1 | PRIMARY            | ls                  | index  | NULL                                                               | updated       | 1       | NULL                                                                                                                   | 102462928 | Using where; Using index; Using join buffer (Block Nested Loop) |
|  3 | DEPENDENT SUBQUERY | _scan               | ref    | PRIMARY                                                            | PRIMARY       | 28      | odms._piece_detail.zip,odms._piece_detail.zip_4,odms._piece_detail.zip_delivery_point,odms._piece_detail.serial_number |         1 | Using where; Using index; Using filesort                        |
|  2 | DEPENDENT SUBQUERY | _scan               | ref    | PRIMARY                                                            | PRIMARY       | 28      | odms._piece_detail.zip,odms._piece_detail.zip_4,odms._piece_detail.zip_delivery_point,odms._piece_detail.serial_number |         1 | Using where; Using index; Using filesort                        |
+----+--------------------+---------------------+--------+--------------------------------------------------------------------+---------------+---------+------------------------------------------------------------------------------------------------------------------------+-----------+-----------------------------------------------------------------+

Now, looking at the explains generated by each, I really can't tell which is giving me the best bang for my buck. The first one shows fewer total rows when multiplying the rows column, but the second appears to execute a bit quicker.

Is there anything that I could do to achieve the same results while increasing performance through modifying the query structure?


Source: (StackOverflow)

2nd generation Google cloud SQL - App Engine

Delighted to see that you have rolled out 2nd generation cloud SQL in Beta. However, I see that it is not available yet to GAE projects. Can you estimate when this link will be enabled please? We are making some big infrastructural changes in our fancy system necessitated by the slowness of Generation 1 SQL, but will stick with cloud SQL if the change is not far off.


Source: (StackOverflow)

Installing MySQL 5.6 in Ubuntu 14.04 triggers removal of needed packages

So I was trying to make a fresh installation of MySQL 5.6 on Ubuntu 14.04.2 and for some reason it tried to remove needed packages, not without first complaining about some unmet dependencies. Here's what happened:

pedro@hope:~$ sudo apt-get install mysql-server-5.6
Reading package lists... Done
Building dependency tree       
Reading state information... Done
Some packages could not be installed. This may mean that you have
requested an impossible situation or if you are using the unstable
distribution that some required packages have not yet been created
or been moved out of Incoming.
The following information may help to resolve the situation:

The following packages have unmet dependencies:
 mysql-server-5.6 : Depends: mysql-client-5.6 (>= 5.6.19-0ubuntu0.14.04.1) but it is not going to be installed
                    Depends: mysql-server-core-5.6 (= 5.6.19-0ubuntu0.14.04.1) but it is not going to be installed
E: Unable to correct problems, you have held broken packages.
pedro@hope:~$ sudo apt-get install mysql-client-5.6
Reading package lists... Done
Building dependency tree       
Reading state information... Done
Some packages could not be installed. This may mean that you have
requested an impossible situation or if you are using the unstable
distribution that some required packages have not yet been created
or been moved out of Incoming.
The following information may help to resolve the situation:

The following packages have unmet dependencies:
 mysql-client-5.6 : Depends: mysql-client-core-5.6 but it is not going to be installed
E: Unable to correct problems, you have held broken packages.
pedro@hope:~$ sudo apt-get install mysql-client-core-5.6
Reading package lists... Done
Building dependency tree       
Reading state information... Done
The following packages were automatically installed and are no longer required:
  acl advancecomp amor analitza-common ark astrometry.net audiocd-kio
  avogadro-data blinken bomber bovo cantor cantor-backend-kalgebra cervisia
  chemical-mime-data dirmngr dnsmasq-base docbook-to-man dragonplayer edict
  filelight fonts-dustin freerdp-x11 freespacenotifier gnugo granatier
  gwenview iputils-arping juk kaccessible kalgebra kalgebra-common kalzium
  kalzium-data kamera kanagram kanjidic kapman kapptemplate kate katomic
  kblackbox kblocks kbounce kbreakout kbruch kcachegrind kcalc kcharselect
  kcolorchooser kcron kde-artwork-active kde-base-artwork kde-baseapps
  kde-config-cddb kde-config-cron kde-icons-mono kde-style-oxygen
  kde-wallpapers kde-wallpapers-default kde-window-manager
  kde-window-manager-common kde-workspace-data kde-workspace-kgreet-plugins
  kde-zeroconf kdeaccessibility kdeadmin kdeartwork-emoticons kdeartwork-style
  kdeartwork-theme-window kdeedu kdeedu-kvtml-data kdegames-card-data
  kdegames-data kdegames-mahjongg-data kdegraphics kdegraphics-mobipocket
  kdegraphics-strigi-analyzer kdegraphics-thumbnailers kdemultimedia
  kdenetwork-filesharing kdenetwork-strigi-analyzers kdepasswd
  kdepimlibs-kio-plugins kdesdk kdesdk-kio-plugins kdesdk-misc kdesdk-scripts
  kdesdk-strigi-plugins kdesdk-thumbnailers kdetoys kdewallpapers kdf kdiamond
  kdm kfilereplace kfourinline kgamma kgeography kgeography-data kget
  kgoldrunner khangman khelpcenter4 kig kigo killbots kimagemapeditor
  kinfocenter kio-audiocd kiriki kiten kjumpingcube klettres klettres-data
  klickety klines klipper kmag kmahjongg kmenuedit kmines kmix kmousetool
  kmplot kmtrace knavalbattle knetwalk kolf kollision kolourpaint4 kommander
  kompare konquest kpartloader kpat krdc kremotecontrol kreversi krfb
  krosspython kruler ksaneplugin kscd kshisen ksirk ksnakeduel ksnapshot
  kspaceduel ksplash-theme-active ksquares kstars kstars-data ksudoku
  ksysguard ksysguardd ksystemlog kteatime ktimer ktimetracker ktouch
  ktouch-data ktuberling kturtle ktux kubrick kuiviewer kuser kwalletmanager
  kwordquiz kwrite libaccounts-glib0 libaccounts-qt1 libakonadi-calendar4
  libakonadi-contact4 libakonadi-kabc4 libakonadi-kcal4 libakonadi-kde4
  libakonadi-kmime4 libakonadi-notes4 libakonadi-socialutils4
  libakonadiprotocolinternals1 libanalitza5abi1 libanalitzagui5abi1
  libanalitzaplot5abi1 libao-common libao4 libapr1 libaprutil1 libastro1
  libavogadro1 libbaloopim4 libblas3 libbluetooth3
  libboost-program-options1.54.0 libboost-python1.54.0 libboost-system1.54.0
  libboost-thread1.54.0 libcfitsio3 libcln6 libcomposereditorng4
  libdeclarative-multimedia libdiscid0 libdmtx0a libegl1-mesa
  libegl1-mesa-drivers libelfg0 libexif12 libexttextcat-data
  libfreerdp-plugins-standard libfreerdp1 libftdi1 libgadu3 libgbm1
  libgfortran3 libgl2ps0 libglamor0 libgle3 libgles2-mesa libglew1.10
  libglib2.0-bin libgpgme++2 libgpgme11 libgphoto2-6 libgphoto2-l10n
  libgphoto2-port10 libgps20 libgrantlee-core0 libgrantlee-gui0 libgsl0ldbl
  libical1 libieee1284-3 libindi-data libindi0c libindicate-qt1 libindicate5
  libjs-jquery libjs-sphinxdoc libjs-underscore libkabc4 libkalarmcal2
  libkasten2controllers2abi1 libkasten2core2 libkasten2gui2
  libkasten2okteta1controllers1abi1 libkasten2okteta1core1
  libkasten2okteta1gui1 libkateinterfaces4 libkblog4 libkcal4 libkcalcore4
  libkcalutils4 libkcddb4 libkcompactdisc4 libkdcraw-data libkdcraw23
  libkdecorations4abi1 libkdeedu-data libkdegames6 libkdgantt2-0
  libkeduvocdocument4 libkephal4abi1 libkfbapi1 libkgapi2-2 libkholidays4
  libkimap4 libkipi-data libkipi11 libkiten4abi1 libkldap4 libkleo4
  libkmahjongglib4 libkmanagesieve4 libkmbox4 libkmime4 libknewstuff2-4
  libkolab0 libkolabxml1 libkomparediff2-4 libkontactinterface4 libkpgp4
  libkpimidentities4 libkpimtextedit4 libkpimutils4 libkresources4 libkrossui4
  libksane-data libksane0 libkscreensaver5 libksgrd4 libksieve4
  libksignalplotter4 libktnef4 libktorrent5 libkunitconversion4
  libkwineffects1abi4 libkwinglesutils1 libkwinglutils1abi3 libkworkspace4abi2
  liblapack3 liblightdm-gobject-1-0 liblightdm-qt-3-0 liblircclient0
  libmailtransport4 libmarblewidget18 libmariadbclient18 libmbim-glib0
  libmeanwhile1 libmediastreamer-base3 libmicroblog4 libmm-glib0 libmms0
  libmnl0 libmodemmanagerqt1 libmsn0.3 libmtdev1 libmusicbrainz3-6
  libmusicbrainz5-0 libneon27-gnutls libnepomukdatamodel0
  libnetfilter-conntrack3 libnetworkmanagerqt1 libnm-glib-vpn1 libnm-glib4
  libnm-util2 libnova-0.14-0 liboath0 libokteta1core1 libokteta1gui1
  libopenal-data libopenal1 libopenbabel4 libopenconnect2 libopenvg1-mesa
  libortp9 libotr5 libplasma-geolocation-interface4 libplasmaclock4abi4
  libplasmagenericshell4 libprison0 libprocesscore4abi1 libprocessui4a
  libqaccessibilityclient0 libqalculate5 libqalculate5-data libqextserialport1
  libqgpgme1 libqhull6 libqmi-glib0 libqoauth1 libqrencode3 libqt4-help
  libqt4-scripttools libqt4-test libqtassistantclient4 libqtlocation1
  libqtmultimediakit1 libqtwebkit-qmlwebkitplugin libquazip0 libraw9 libsane
  libsane-common libsendlater4 libsensors4 libserf-1-1 libshp1 libsignon-qt1
  libsp1c2 libspandsp2 libspice-server1 libsrtp0 libsvn1 libswscale2
  libsyndication4 libtaskmanager4abi5 libtelepathy-qt4-2 libupnp6
  libvncserver0 libwayland-client0 libwayland-egl1-mesa libwayland-server0
  libwcs4 libweather-ion6 libwlocate0 libxatracker2 libxcb-composite0
  libxcb-damage0 libxcb-image0 libxcb-keysyms1 libxcb-xfixes0 libxcb-xtest0
  libxerces-c3.1 libxkbfile1 libxklavier16 libxvmc1 lightdm
  lightdm-kde-greeter lirc lokalize lskat marble marble-data marble-plugins
  mariadb-common mobile-broadband-provider-info modemmanager
  mysql-server-core-5.5 network-manager network-manager-pptp okteta optipng
  oxygen-cursor-theme p7zip-full palapeli palapeli-data parley parley-data
  picmi pkg-kde-tools plasma-active-data plasma-active-metadatamodels
  plasma-active-mobilecomponents plasma-active-settings
  plasma-active-webbrowser plasma-containments-addons
  plasma-dataengines-addons plasma-desktopthemes-artwork plasma-nm
  plasma-scriptengine-superkaramba plasma-widget-folderview
  plasma-widget-message-indicator plasma-widget-networkmanagement
  plasma-widgets-active poxml pptp-linux python-avogadro python-beautifulsoup
  python-dateutil python-dbus python-dbus-dev python-enchant python-gi
  python-iniparse python-levenshtein python-numpy python-pyfits python-qt4
  python-qt4-sql python-simplejson python-sip python-utidylib python-vobject
  qhull-bin qtdeclarative4-kqtquickcharts-1 rocs setserial share-like-connect
  share-like-connect-data sp startactive-data step subversion svgpart sweeper
  systemsettings texlive-latex-base texlive-latex-base-doc translate-toolkit
  ttf-sjfonts umbrello unzip update-notifier-kde usb-modeswitch
  usb-modeswitch-data valgrind vorbis-tools x11-xkb-utils xfonts-base
  xscreensaver-data xscreensaver-data-extra xscreensaver-gl
  xscreensaver-gl-extra xscreensaver-screensaver-bsod xserver-common
  xserver-xorg xserver-xorg-core xserver-xorg-input-all
  xserver-xorg-input-evdev xserver-xorg-input-mouse
  xserver-xorg-input-synaptics xserver-xorg-input-vmmouse
  xserver-xorg-input-wacom xserver-xorg-video-all xserver-xorg-video-ati
  xserver-xorg-video-cirrus xserver-xorg-video-fbdev
  xserver-xorg-video-glamoregl xserver-xorg-video-intel
  xserver-xorg-video-mach64 xserver-xorg-video-mga
  xserver-xorg-video-modesetting xserver-xorg-video-neomagic
  xserver-xorg-video-nouveau xserver-xorg-video-openchrome
  xserver-xorg-video-qxl xserver-xorg-video-r128 xserver-xorg-video-radeon
  xserver-xorg-video-s3 xserver-xorg-video-savage
  xserver-xorg-video-siliconmotion xserver-xorg-video-sis
  xserver-xorg-video-sisusb xserver-xorg-video-tdfx xserver-xorg-video-trident
  xserver-xorg-video-vesa xserver-xorg-video-vmware zip
Use 'apt-get autoremove' to remove them.
The following extra packages will be installed:
  libqt4-declarative-gestures
The following packages will be REMOVED:
  akonadi-backend-mysql akonadi-server akonadiconsole akregator blogilo
  kaddressbook kajongg kalarm kde-config-pimactivity kde-full
  kde-plasma-desktop kde-plasma-netbook kde-standard kde-workspace
  kde-workspace-bin kdeartwork kdegames kdenetwork kdepim kdepim-kresources
  kdepim-runtime kdeplasma-addons kdeutils kdewebdev kgpg kjots kleopatra
  klinkstatus kmail knode knotes konsolekalendar kontact kopete
  kopete-message-indicator korganizer kscreensaver kscreensaver-xsavers
  libcalendarsupport4 libeventviews4 libincidenceeditorsng4 libkdepim4
  libkdepimdbusinterfaces4 libkopete4 libksieveui4 libmailcommon4
  libmailimporter4 libmessagecomposer4 libmessagecore4 libmessagelist4
  libmessageviewer4 libnoteshared4 libpimactivity4 libpimcommon4
  libtemplateparser4 mysql-client-core-5.5 plasma-dataengines-workspace
  plasma-desktop plasma-netbook plasma-runners-addons plasma-wallpapers-addons
  plasma-widget-lancelot plasma-widgets-addons plasma-widgets-workspace
  python-kde4
The following NEW packages will be installed:
  libqt4-declarative-gestures mysql-client-core-5.6
0 upgraded, 2 newly installed, 65 to remove and 0 not upgraded.
Need to get 4,102 kB of archives.
After this operation, 125 MB disk space will be freed.
Do you want to continue? [Y/n] n
Abort.

Some extra information, in case it helps:

pedro@hope:~$ uname -a
Linux hope 3.13.0-32-generic #57-Ubuntu SMP Tue Jul 15 03:51:08 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux

KDE SC Version: 4.13.2

Am I missing something? This happened both with the MySQL 5.6 version in the official repositories and with the one in ppa:ondrej/mysql-5.6. I also followed this tutorial, but the result was the same (it tried to remove necessary packages).


Source: (StackOverflow)

SQL syntax: SELECT without FROM clause as subquery in SELECT (subselect)

While editing some queries to add alternatives for columns without values, I accidentally wrote something like this (here is the simplified version):

SELECT id, (SELECT name) FROM t

To my surprise, MySQL didn't throw any error, but completed the query and returned the results I expected (the name column values). I tried to find documentation about this behaviour, but without success.

Is this SQL standard or a MySQL specialty?
Can I be sure that the result of this syntax is really the column value from the same (outer) table? The extended version would be like this:

SELECT id, (SELECT name FROM t AS t1 where t1.id=t2.id) FROM t AS t2

but the EXPLAIN reports No tables used in the Extra column for the former version, which I think is very nice.

Here's a simple fiddle on SqlFiddle (it keeps timing out for me, I hope you have better luck).

Clarification: I know about subqueries, but I have always written subqueries (correlated or not) that named a table to select from, which causes an additional step in the execution plan; my question is about this syntax and the result it gives, which in MySQL seems to be the expected value without any table access.
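For what it's worth, this scoping behaviour is easy to reproduce outside MySQL. Here is a small sketch using Python's sqlite3 (SQLite also resolves an unqualified column in a FROM-less scalar subquery against the outer query); the table and data are illustrative, not from the question:

```python
import sqlite3

# In-memory database with a tiny illustrative table.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (id INTEGER PRIMARY KEY, name TEXT)")
conn.executemany("INSERT INTO t VALUES (?, ?)", [(1, "alice"), (2, "bob")])

# The FROM-less scalar subquery: `name` has no table in the inner
# query, so it is resolved as a correlated reference to the outer row.
rows = conn.execute("SELECT id, (SELECT name) FROM t ORDER BY id").fetchall()
print(rows)  # [(1, 'alice'), (2, 'bob')]
```

That is, `(SELECT name)` behaves like a correlated scalar subquery whose only available column source is the outer row, which matches the "No tables used" note in the EXPLAIN output.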


Source: (StackOverflow)

mysqld ignoring my.cnf (mysql 5.6 on Ubuntu 16.04 Server)

I installed MySQL 5.6 on Ubuntu Server 16.04 using this method: https://askubuntu.com/questions/762384/install-mysql-5-6-on-ubuntu-16-04

But I'm having trouble getting mysqld to read /etc/mysql/my.cnf.

Specifically, I'm trying to set it up for an Atlassian Confluence installation, where I have to add these settings:

[mysqld]
character-set-server=utf8
collation-server=utf8_bin
default-storage-engine=INNODB
max_allowed_packet=256M
innodb_log_file_size=2G

I've checked mysqld --help --verbose and my.cnf is in its search path. Here's the relevant output:

Default options are read from the following files in the given order:
/etc/my.cnf /etc/mysql/my.cnf /usr/etc/my.cnf ~/.my.cnf The following
groups are read: mysqld server mysqld-5.6

I know that mysql is reading the file: if I change [mysqld] to [mysql], the arguments are correctly printed when I run mysql --print-defaults, and if I put an invalid line in the file, mysql fails to start and tells me where the syntax error is. However, mysqld does not report any arguments for mysqld --print-defaults, nor does it complain about syntax errors in the file.

What else can I try?

Solved:

my.cnf was a symbolic link. I deleted it and replaced it with an actual file. Now mysqld --print-defaults works! So apparently mysql will follow the symlink but mysqld will not. Good to know.
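A quick way to check for this pitfall before digging further is to test whether the option file is a symlink. A minimal sketch (the path is the one from the question; the function name is my own):

```python
import os

def check_config(path="/etc/mysql/my.cnf"):
    """Report whether an option file is a symlink, a regular file,
    or missing. Per the finding above, mysqld may skip a symlinked
    my.cnf that the mysql client happily follows."""
    if os.path.islink(path):
        return f"{path} is a symlink -> {os.path.realpath(path)}"
    if os.path.isfile(path):
        return f"{path} is a regular file"
    return f"{path} does not exist"

print(check_config())
```

If it reports a symlink, replacing it with a real file (as in the fix above) is worth trying before touching the settings themselves.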


Source: (StackOverflow)

Upgrading Amazon RDS instance from MySQL 5.5 to MySQL 5.6

With a large data set and a real-time system, halting the applications that use RDS, taking a mysqldump from the 5.5 instance, and loading that data with the mysql command is impractical.

In the Amazon blog http://aws.typepad.com/aws/amazon-rds/page/2/ on July 1, 2013 Jeff Barr posts "Upgrading an existing database instance from MySQL 5.5 to MySQL 5.6 is not currently supported. However, we intend to provide this functionality in the near future."

Seven months later, I don't see any follow-up from them. I'm looking for options that allow a zero-outage, or at most a few-minute outage, upgrade from MySQL 5.5 to MySQL 5.6 in the RDS environment.

Normally, for an upgrade like this, I make a MySQL 5.6 replica of a MySQL 5.5 master, then shut down the applications, point the applications' MySQL connections at the 5.6 server, and restart the applications. However, replicating an RDS MySQL 5.5 instance to an RDS replica running a different version, or to an external replica, is not possible in RDS.

What options have worked for people?


Source: (StackOverflow)