Planet MySQL - http://www.planetmysql.org/

  • Shinguz: Avoid temporary disk tables with MySQL
    For processing SELECT queries, MySQL sometimes needs the help of temporary tables. These temporary tables can be created either in memory or on disk. The number of temporary tables created can be found with the following command:

      mysql> SHOW GLOBAL STATUS LIKE 'created_tmp%tables';
      +-------------------------+-------+
      | Variable_name           | Value |
      +-------------------------+-------+
      | Created_tmp_disk_tables | 4     |
      | Created_tmp_tables      | 36    |
      +-------------------------+-------+

    There are two different reasons why MySQL creates a temporary disk table instead of a temporary memory table:
    - The result is bigger than the smaller of the MySQL variables max_heap_table_size and tmp_table_size.
    - The result contains columns of type BLOB or TEXT.

    In the following example we can see how the temporary disk table can be avoided without changing the column types:

      mysql> CREATE TABLE test (
        id INT UNSIGNED NOT NULL AUTO_INCREMENT PRIMARY KEY
      , data TEXT
      , type TINYINT UNSIGNED
      );

      mysql> INSERT INTO test VALUES
        (NULL, 'State is green', 1), (NULL, 'State is green', 1)
      , (NULL, 'State is red', 3), (NULL, 'State is red', 3)
      , (NULL, 'State is red', 3), (NULL, 'State is orange', 2);

      mysql> EXPLAIN SELECT data, COUNT(*) FROM test GROUP BY data;
      +----+-------------+-------+------+---------------+------+---------+------+------+---------------------------------+
      | id | select_type | table | type | possible_keys | key  | key_len | ref  | rows | Extra                           |
      +----+-------------+-------+------+---------------+------+---------+------+------+---------------------------------+
      |  1 | SIMPLE      | test  | ALL  | NULL          | NULL | NULL    | NULL |    6 | Using temporary; Using filesort |
      +----+-------------+-------+------+---------------+------+---------+------+------+---------------------------------+

      mysql> SHOW SESSION STATUS LIKE 'created_tmp%tables';
      +-------------------------+-------+
      | Variable_name           | Value |
      +-------------------------+-------+
      | Created_tmp_disk_tables | 0     |
      | Created_tmp_tables      | 3     |
      +-------------------------+-------+

    Grouping on the TEXT column itself creates a temporary disk table:

      mysql> SELECT data, COUNT(*) FROM test GROUP BY data;
      +-----------------+----------+
      | data            | count(*) |
      +-----------------+----------+
      | State is green  |        2 |
      | State is orange |        1 |
      | State is red    |        3 |
      +-----------------+----------+

      mysql> SHOW SESSION STATUS LIKE 'created_tmp%tables';
      +-------------------------+-------+
      | Variable_name           | Value |
      +-------------------------+-------+
      | Created_tmp_disk_tables | 1     |
      | Created_tmp_tables      | 4     |
      +-------------------------+-------+

    Grouping on SUBSTR(data, 1, 32) instead avoids the disk table (Created_tmp_disk_tables stays at 1):

      mysql> SELECT SUBSTR(data, 1, 32), COUNT(*) FROM test GROUP BY SUBSTR(data, 1, 32);

      mysql> SHOW SESSION STATUS LIKE 'created_tmp%tables';
      +-------------------------+-------+
      | Variable_name           | Value |
      +-------------------------+-------+
      | Created_tmp_disk_tables | 1     |
      | Created_tmp_tables      | 5     |
      +-------------------------+-------+

    This method can be used if changing the column type from TEXT to VARCHAR or the use of a RAM disk are not possible solutions.
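The two conditions above amount to a simple decision rule. Here is a minimal Python sketch of that rule; the function name and the 16MB defaults for max_heap_table_size and tmp_table_size are illustrative assumptions, not MySQL internals:

```python
# Minimal sketch of MySQL's disk-vs-memory decision for temporary tables,
# as described in the post. Function name and the 16MB defaults are
# illustrative assumptions, not MySQL internals.

def needs_disk_tmp_table(result_bytes, column_types,
                         max_heap_table_size=16 * 1024 * 1024,
                         tmp_table_size=16 * 1024 * 1024):
    """Return True if the temporary table would have to go to disk."""
    # Reason 2 in the post: BLOB/TEXT columns always force a disk table.
    if any(t.upper() in ("BLOB", "TEXT") for t in column_types):
        return True
    # Reason 1: the result is bigger than the smaller of the two limits.
    return result_bytes > min(max_heap_table_size, tmp_table_size)

# A tiny result with a TEXT column still goes to disk ...
print(needs_disk_tmp_table(1024, ["INT", "TEXT"]))     # True
# ... while SUBSTR() yields a non-TEXT value that can stay in memory.
print(needs_disk_tmp_table(1024, ["INT", "VARCHAR"]))  # False
```

This is why the SUBSTR() trick works: it changes the result column's type, which flips the second condition without touching the table definition.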

  • Drop Table slow
    A few days back we were given a MySQL database hosted in EC2, with around 5TB of data, in order to move some of the tables to a new server. The intention was to share the tables between 2 EC2 instances. Since AWS had the option to take online snapshots, the plan was to take a snapshot, create a new machine from that snapshot and drop the unwanted tables.

    Everything went as planned until we created the new machine from the snapshot. The real challenge was dropping the unwanted tables. It took around 4 minutes to drop a table whose size was 20GB, and 20 minutes to drop a 100GB table. The time kept on increasing for larger tables. MySQL even went "defunct" when we killed the drop query, and at times crashed. To track down this issue we executed the drop table in one session and checked the processlist from another session, which gave the below output:

      mysql> show processlist \G
      *************************** 1. row ***************************
           Id: 2352
         User: dbuser
         Host: localhost
           db: db
      Command: Query
         Time: 2573
        State: checking permissions
         Info: DROP TABLE `test`

    I wondered whether it was a MySQL user permission or an OS-level one. Next I checked the InnoDB status, and found something interesting in the ROW OPERATIONS section:

      --------------
      ROW OPERATIONS
      --------------
      0 queries inside InnoDB, 0 queries in queue
      1 read views open inside InnoDB
      Main thread process no. 6117, id 47405873539392, state: doing insert buffer merge
      Number of rows inserted 167872705, updated 1018100, deleted 55251906, read 1953144114
      171.93 inserts/s, 0.00 updates/s, 0.00 deletes/s, 0.00 reads/s

    The InnoDB status output also showed lots of semaphore waits on the data dictionary.

    InnoDB change buffering: from the MySQL reference manual, "Index changes resulting from SQL statements, which could normally involve random I/O operations, are held back and performed periodically by a background thread. This sequence of operations can write the disk blocks for a series of index values more efficiently than if each value were written to disk immediately."

    So here is what had really happened. The given DB server was big in data, used to get frequent inserts, and didn't have enough RAM to hold all the pages in the InnoDB buffer pool. So it had to cache all the secondary index changes in the insert buffer. These cached changes are flushed to disk only when the pages are loaded into the buffer pool or when the server is idle. When we took a snapshot and mounted it on a new server, the database was idle, so the InnoDB main thread started merging all the changes cached in the insert buffer to disk. It was the InnoDB main thread that held the lock on those tables, and the drop statement we executed had to wait for the main thread to complete the insert buffer merge. So we waited for the buffer merge to complete and then executed the drop statements. All those tables got dropped in seconds; even a 700GB table got dropped in 5 to 10 seconds.

    Setting innodb_fast_shutdown=0 and shutting down MySQL also performs the buffer merge operation, though I suspect that buffer merging by keeping the server idle, or by restarting with innodb_fast_shutdown=0, takes the same amount of time either way. Also, the "checking permissions" state that we got in the processlist output is very misleading: it is natural for everybody to think that the drop statement is waiting for a MySQL user permission or an OS-level permission, but actually it was waiting on the InnoDB main thread. I still wonder why we need to merge buffered changes for a table that is about to be dropped; I need to think further.

    Note: there might be some other reasons for drop table slowness too, but this is the one I have faced to date.

    --Aravinth C
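As a rough illustration of how we spotted this, the main thread state can be pulled out of the SHOW ENGINE INNODB STATUS text with a small parser. This is a hypothetical helper of my own, not a MySQL API:

```python
# Hypothetical helper (my own, not a MySQL API): extract the InnoDB main
# thread state from SHOW ENGINE INNODB STATUS output, to detect an ongoing
# insert buffer merge like the one that blocked our DROP TABLE.
import re

def main_thread_state(innodb_status):
    """Return the 'state:' field of the main thread line, or None."""
    m = re.search(r"Main thread .*state: (.+)", innodb_status)
    return m.group(1).strip() if m else None

# Sample text mirroring the ROW OPERATIONS section quoted above.
sample = """\
--------------
ROW OPERATIONS
--------------
0 queries inside InnoDB, 0 queries in queue
Main thread process no. 6117, id 47405873539392, state: doing insert buffer merge
"""

state = main_thread_state(sample)
print(state)  # doing insert buffer merge
```

Waiting until this state leaves "doing insert buffer merge" before issuing the DROP is exactly what made the drops finish in seconds.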

  • Orchestrator 1.2.9 GA released
    Orchestrator 1.2.9 GA has been released. Noteworthy:
    - Added a "ReadOnly" (true/false) configuration param. You can have orchestrator completely read-only.
    - Added "AuthenticationMethod": "multi": works like BasicAuth (your normal HTTP user+password), only it also accepts the special user called "readonly", which, surprise, can only view and not modify.
    - Centralized/serialized most backend database writes (with hundreds/thousands of monitored servers it was possible or probable that high concurrency led to too many connections opened on the backend database).
    - Fixed an evil, evil bug that would skip some checks if binary logs were not enabled.
    - Better hostname resolving (now also asking the MySQL server to resolve hostnames; resolving is cached).
    - Pseudo-GTID support (read here, here, here) is now considered stable (apart from being tested, it has already been put to practice multiple times in production at Outbrain, in different planned and unplanned crash scenarios).
    I continue developing orchestrator as free and open source at my new employer, Booking.com.

  • Making HAProxy 1.5 replication lag aware in MySQL
    HAProxy is frequently used as a software load balancer in the MySQL world. Peter Boros, in a past post, explained how to set it up with Percona XtraDB Cluster (PXC) so that it only sends queries to available nodes. The same approach can be used in a regular master-slaves setup to spread the read load across multiple slaves. However, with MySQL replication another factor comes into play: replication lag. In this case the approach mentioned for Percona XtraDB Cluster does not work that well, as the check we presented only returns 'up' or 'down'. We would like to be able to tune the weight of a replica inside HAProxy depending on its replication lag. This is what we will do in this post, using HAProxy 1.5.

    Agent checks in HAProxy

    HAProxy 1.5 allows us to run an agent check, which is a check that can be added to a regular health check. The benefit of agent checks is that the return value can be 'up' or 'down', but also a weight.

    What is an agent? It is simply a program that can be accessed from a TCP connection on a given port. So if we want to run an agent on a MySQL server that will:
    - Mark the server as down in HAProxy if replication is not working
    - Set the weight to 100% if the replication lag is < 10s
    - Set the weight to 50% if the replication lag is >= 10s and < 60s
    - Set the weight to 5% in all other situations

    We can use a script like this:

      $ less agent.php
      <?php
      $port = $argv[1];
      $mysql_port = $argv[2];
      $mysql = '/usr/bin/mysql';
      $user = 'haproxy';
      $password = 'haproxy_pwd';
      $query = 'SHOW SLAVE STATUS';

      function set_weight($lag) {
          if ($lag == 'NULL') {
              return "down";
          } else if ($lag < 10) {
              return "up 100%";
          } else if ($lag >= 10 && $lag < 60) {
              return "up 50%";
          } else
              return "up 5%";
      }

      set_time_limit(0);
      $socket = stream_socket_server("tcp://127.0.0.1:$port", $errno, $errstr);
      if (!$socket) {
          echo "$errstr ($errno)\n";
      } else {
          while ($conn = stream_socket_accept($socket, 9999999999999)) {
              $cmd = "$mysql -h127.0.0.1 -u$user -p$password -P$mysql_port -Ee \"$query\" | grep Seconds_Behind_Master | cut -d ':' -f2 | tr -d ' '";
              exec("$cmd", $lag);
              $weight = set_weight($lag[0]);
              unset($lag);
              fputs($conn, $weight);
              fclose($conn);
          }
          fclose($socket);
      }
      ?>

    If you want the script to be accessible from port 6789 and connect to a MySQL instance running on port 3306, run:

      $ php agent.php 6789 3306

    You will also need a dedicated MySQL user:

      mysql> GRANT REPLICATION CLIENT ON *.* TO 'haproxy'@'127.0.0.1' IDENTIFIED BY 'haproxy_pwd';

    When the agent is started, you can check that it is working properly:

      # telnet 127.0.0.1 6789
      Trying 127.0.0.1...
      Connected to 127.0.0.1.
      Escape character is '^]'.
      up 100%
      Connection closed by foreign host.

    Assuming it is run locally on the app server, that 2 replicas are available (192.168.10.2 and 192.168.10.3) and that the application will send all reads to port 3307, you will define a frontend and a backend in your HAProxy configuration like this:

      frontend read_only-front
          bind *:3307
          mode tcp
          option tcplog
          log global
          default_backend read_only-back

      backend read_only-back
          mode tcp
          balance leastconn
          server slave1 192.168.10.2 weight 100 check agent-check agent-port 6789 inter 1000 rise 1 fall 1 on-marked-down shutdown-sessions
          server slave2 192.168.10.3 weight 100 check agent-check agent-port 6789 inter 1000 rise 1 fall 1 on-marked-down shutdown-sessions

    Demo

    Now that everything is set up, let's see how HAProxy can dynamically change the weight of the servers depending on the replication lag.

    No lag:

      # Slave1
      $ mysql -Ee "show slave status" | grep Seconds_Behind_Master
              Seconds_Behind_Master: 0
      # Slave2
      $ mysql -Ee "show slave status" | grep Seconds_Behind_Master
              Seconds_Behind_Master: 0
      # HAProxy
      $ echo "show stat" | socat stdio /run/haproxy/admin.sock | cut -d ',' -f1,2,18,19
      # pxname,svname,status,weight
      read_only-front,FRONTEND,OPEN,
      read_only-back,slave1,UP,100
      read_only-back,slave2,UP,100
      read_only-back,BACKEND,UP,200

    Slave1 lagging:

      # Slave1
      $ mysql -Ee "show slave status" | grep Seconds_Behind_Master
              Seconds_Behind_Master: 25
      # Slave2
      $ mysql -Ee "show slave status" | grep Seconds_Behind_Master
              Seconds_Behind_Master: 0
      # HAProxy
      $ echo "show stat" | socat stdio /run/haproxy/admin.sock | cut -d ',' -f1,2,18,19
      # pxname,svname,status,weight
      read_only-front,FRONTEND,OPEN,
      read_only-back,slave1,UP,50
      read_only-back,slave2,UP,100
      read_only-back,BACKEND,UP,150

    Slave2 down:

      # Slave1
      $ mysql -Ee "show slave status" | grep Seconds_Behind_Master
              Seconds_Behind_Master: 0
      # Slave2
      $ mysql -Ee "show slave status" | grep Seconds_Behind_Master
              Seconds_Behind_Master: NULL
      # HAProxy
      $ echo "show stat" | socat stdio /run/haproxy/admin.sock | cut -d ',' -f1,2,18,19
      # pxname,svname,status,weight
      read_only-front,FRONTEND,OPEN,
      read_only-back,slave1,UP,100
      read_only-back,slave2,DOWN (agent),100
      read_only-back,BACKEND,UP,100

    Conclusion

    Agent checks are a nice addition in HAProxy 1.5. The setup presented above is a bit simplistic though: for instance, if HAProxy fails to connect to the agent, it will not mark the corresponding server as down. It is therefore recommended to keep a regular health check along with the agent check.

    Astute readers will also notice that in this configuration, if replication is broken on all nodes, HAProxy will stop sending reads. This may not be the best solution. Possible options are: stop the agent and mark the servers as UP using the stats socket, or add the master as a backup server.

    As a final note, you can edit the code of the agent so that replication lag is measured with Percona Toolkit's pt-heartbeat instead of Seconds_Behind_Master.

    The post Making HAProxy 1.5 replication lag aware in MySQL appeared first on MySQL Performance Blog.
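For readers who prefer not to run PHP, the agent's weight logic can be sketched in another language. This Python version mirrors the thresholds above; the server wiring and the lag-probe stub are illustrative assumptions of mine, not the original agent:

```python
# Python sketch of the HAProxy agent described above. The weight thresholds
# mirror the post; the server wiring and the lag probe are illustrative
# assumptions, not the original PHP agent.
import socketserver

def get_replication_lag():
    """Stub: a real agent would parse Seconds_Behind_Master from
    SHOW SLAVE STATUS; None stands for broken replication (NULL)."""
    return None

def set_weight(lag):
    """Map replication lag in seconds to an HAProxy agent reply."""
    if lag is None:
        return "down"
    if lag < 10:
        return "up 100%"
    if lag < 60:
        return "up 50%"
    return "up 5%"

class AgentHandler(socketserver.BaseRequestHandler):
    def handle(self):
        # Answer each TCP connection with the current weight, then close.
        self.request.sendall((set_weight(get_replication_lag()) + "\n").encode())

print(set_weight(0))     # up 100%
print(set_weight(25))    # up 50%
print(set_weight(None))  # down

# To listen like the PHP agent:
#   socketserver.TCPServer(("127.0.0.1", 6789), AgentHandler).serve_forever()
```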

  • Monitor MySQL Performance Interactively With VividCortex
    If you're monitoring MySQL performance on dozens or hundreds of servers, chances are you have a "rainbow chart": a time-series chart with hundreds of tiny area graphs stacked on top of each other, crawling imperceptibly one pixel at a time across the big-screen monitor in your office. The trouble with these charts is that they're hard to read. It takes many minutes for enough new pixels to appear after a change; in the meantime you can't see the change clearly.

    At VividCortex, we think we've found a better way to keep tabs on what's going on in your infrastructure: a bubble visualization. It is compact, and it immediately communicates current status and trend, with visualizations that your eye interprets at a glance. This is the Activity layout on the Hosts dashboard. It's designed to scale to hundreds, even thousands of hosts. Tabular layouts and strip-charts won't do the trick, but live, interactive bubbles will.

    The bubbles update live, as agents send metrics to the APIs. They're actually a D3 physics simulation of heat, gravity, friction, and several other forces. A moment later, this environment's load characteristics changed dramatically! The bubbles are sized according to the host's load, so busier hosts are larger. The color is the trend: redder bubbles are currently trending [1] upwards, and bluer ones are getting less loaded. Similar hosts are grouped together in a single circle. It takes a lot of words to explain, but when you see it changing live, you get it instantly.

    You don't see it in this screenshot, but as the visualization auto-selects the hottest host, the right-hand sidebar of the page updates with the host's details and history, making this a great dashboard to display on that big screen for everyone to see. If you're using it interactively, quick-targets activate so you can drill down into the auto-selected host.

    If you're not getting immediate feedback from your MySQL monitoring system, what are you waiting for? Start your 30-day free trial today. Installation is super-fast and unintrusive.

    [1] In case you're curious about the trend calculation, we use a MACD (Moving Average Convergence-Divergence). The exact metric varies by host type, but it is either a utilization metric such as CPU busy, or a throughput metric such as queries per second.
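The MACD trend signal from the footnote can be sketched as the difference between a fast and a slow exponential moving average of the metric. The 12/26 periods below are the classic MACD defaults, an assumption on my part; the post does not say which values VividCortex uses:

```python
# Sketch of the MACD trend signal mentioned in the footnote: the difference
# between a fast and a slow exponential moving average (EMA) of a metric.
# The 12/26 periods are the classic defaults, an assumption on my part.

def ema(values, period):
    """Exponential moving average with smoothing factor 2 / (period + 1)."""
    alpha = 2 / (period + 1)
    avg = values[0]
    out = [avg]
    for v in values[1:]:
        avg = alpha * v + (1 - alpha) * avg
        out.append(avg)
    return out

def macd(values, fast=12, slow=26):
    """MACD line: positive means trending up, negative means trending down."""
    return [f - s for f, s in zip(ema(values, fast), ema(values, slow))]

qps = list(range(100, 160, 2))  # queries-per-second, steadily rising
print(macd(qps)[-1] > 0)        # True: this host would render redder (trending up)
```

A falling metric gives a negative MACD line, which would map to the bluer, less-loaded end of the color scale.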