MySQL Server fails to upgrade in Debian/Ubuntu

Just did a relatively normal ‘apt-get upgrade’ on my simple WordPress HHVM test server and got a horrible bunch of errors, leaving it in a non-working state:

E: Sub-process /usr/bin/dpkg returned an error code (1)
A package failed to install. Trying to recover:
Setting up mysql-server-5.1 (5.1.54-1ubuntu4) ...
start: Job failed to start
invoke-rc.d: initscript mysql, action "start" failed.
dpkg: error processing mysql-server-5.1 (--configure):
subprocess installed post-installation script returned error exit status 1
dpkg: dependency problems prevent configuration of mysql-server:
mysql-server depends on mysql-server-5.1; however:
Package mysql-server-5.1 is not configured yet.
dpkg: error processing mysql-server (--configure):
dependency problems - leaving unconfigured
Errors were encountered while processing:
mysql-server-5.1
mysql-server

There are many reports of this on various forums and blogs with many different solutions. Here’s mine.

In my case, MySQL logged the following error:

May 4 07:04:22 trog mysqld_safe: Starting mysqld daemon with databases from /var/lib/mysql
May 4 07:04:22 trog mysqld: 140504 7:04:22 [Warning] Using unique option prefix key_buffer instead of key_buffer_size is deprecated and will be removed in a future release. Please use the full name instead.
May 4 07:04:22 trog mysqld: 140504 7:04:22 [Warning] Using unique option prefix myisam-recover instead of myisam-recover-options is deprecated and will be removed in a future release. Please use the full name instead.
May 4 07:04:22 trog mysqld: 140504 7:04:22 [Note] Plugin 'FEDERATED' is disabled.
May 4 07:04:22 trog mysqld: 140504 7:04:22 InnoDB: The InnoDB memory heap is disabled
May 4 07:04:22 trog mysqld: 140504 7:04:22 InnoDB: Mutexes and rw_locks use GCC atomic builtins
May 4 07:04:22 trog mysqld: 140504 7:04:22 InnoDB: Compressed tables use zlib 1.2.7
May 4 07:04:22 trog mysqld: 140504 7:04:22 InnoDB: Using Linux native AIO
May 4 07:04:22 trog mysqld: 140504 7:04:22 InnoDB: Initializing buffer pool, size = 128.0M
May 4 07:04:22 trog mysqld: InnoDB: mmap(137363456 bytes) failed; errno 12
May 4 07:04:22 trog mysqld: 140504 7:04:22 InnoDB: Completed initialization of buffer pool
May 4 07:04:22 trog mysqld: 140504 7:04:22 InnoDB: Fatal error: cannot allocate memory for the buffer pool
May 4 07:04:22 trog mysqld: 140504 7:04:22 [ERROR] Plugin 'InnoDB' init function returned error.
May 4 07:04:22 trog mysqld: 140504 7:04:22 [ERROR] Plugin 'InnoDB' registration as a STORAGE ENGINE failed.
May 4 07:04:22 trog mysqld: 140504 7:04:22 [ERROR] Unknown/unsupported storage engine: InnoDB
May 4 07:04:22 trog mysqld: 140504 7:04:22 [ERROR] Aborting
May 4 07:04:22 trog mysqld:
May 4 07:04:22 trog mysqld: 140504 7:04:22 [Note] /usr/sbin/mysqld: Shutdown complete
May 4 07:04:22 trog mysqld:


The important lines are the mmap failure (errno 12, which is ENOMEM, i.e. out of memory) and the fatal buffer pool error. In my case, MySQL was starting and trying to allocate memory for the InnoDB buffer pool, but the allocation was failing. So when the package manager upgraded MySQL and tried to restart it, mysqld would simply exit, and that failure was reported back as a failure of the upgrade itself.

I solved this hackishly (but easily) by stopping a memory-hungry process on the server (HHVM), then re-running the upgrade. Without HHVM running there was plenty of spare memory, so the upgrade was able to restart MySQL successfully, and then I just restarted HHVM.
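
Roughly, the recovery looked like the following. This is just a sketch: it assumes an upstart-era Ubuntu where services are managed with the service command, and that HHVM is the memory-hungry process; substitute whatever applies on your system.

# Stop the biggest memory consumer to free up RAM (HHVM in my case)
sudo service hhvm stop

# Check there's now enough free memory for the 128MB InnoDB buffer pool
free -m

# Let apt/dpkg finish configuring the half-installed MySQL packages
sudo apt-get -f install
# (equivalently: sudo dpkg --configure -a)

# Bring the memory-hungry service back up once MySQL is running again
sudo service hhvm start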

This is of course an indication that the system probably doesn’t have enough memory – but it’s a small-scale test environment. So if you’re looking for a cheap, easy fix when Debian/Ubuntu fails to upgrade MySQL, this might work.

Fixing Double Encoded Characters in MySQL

If you’re working on any old PHP/MySQL sites, chances are at some point you’re going to need to get into the murky, painful world of character encoding – presumably to convert everything to UTF-8 from whatever original setup you have. It is not fun, but fortunately many people have gone through it before and paved the way with a collection of useful information and scripts.

One problem which struck us recently when migrating our database server was certain characters being “double encoded”. This appears to be relatively common. For us, the cause was exporting our data – all UTF-8 data, but stored in tables declared as latin1 – via mysqldump and then importing it again as if it were UTF-8. The multibyte UTF-8 sequences get treated as individual latin1 characters and re-encoded into UTF-8 a second time, so a character like “é” (bytes 0xC3 0xA9) turns into “Ã©”, and you end up with that kind of squiggly gibberish appearing in all your web pages.

Nathan over at the Blue Box Group has written an extremely comprehensive guide to problems like this. It explains the root cause of these problems, the common symptoms, and – of course, most importantly – precise details on how to safely fix them. If you’re doing anything at all involved in changing character encoding then it is worth a read even before you have problems, just so you can get a better handle on how to fix things and what your end game should be.

There are a few other ways to fix it, of course. The Blue Box solution is comprehensive and reliable, but it requires quite a bit of work to get going, and you also need to know specifically which table fields you want to work on – so it can be time consuming unless you’re prepared to really sit down and work through it, either processing everything manually or writing a script to do it all for you.
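
To give a sense of what the per-column work looks like, here’s a minimal sketch of the usual convert-through-binary trick for one field, using hypothetical table/column names (posts.body). This isn’t Blue Box’s exact procedure, so run it against a backup or a copy of the table first:

# Undo the double encoding for one column: interpret the stored utf8 text as
# latin1 to recover the original bytes, then relabel those bytes as utf8
# (hypothetical table and column names)
mysql -h DB_HOST -u DB_USER -p DB_NAME -e "
  UPDATE posts
  SET body = CONVERT(CAST(CONVERT(body USING latin1) AS BINARY) USING utf8);"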

Fortunately there’s an easier way, as described here – basically, all you need to do is export your current dataset with mysqldump, forcing it to latin1, and then re-import it as UTF-8:

mysqldump -h DB_HOST -u DB_USER -p --opt --quote-names --skip-set-charset --default-character-set=latin1 DB_NAME > DB_NAME-dump.sql

mysql -h DB_HOST -u DB_USER -p --default-character-set=utf8 DB_NAME < DB_NAME-dump.sql

We did this for AusGamers.com and it worked perfectly – the only caveat you need to be aware of is that it will mess up UTF-8 characters that are already properly encoded. For us this wasn’t a big deal as we were able to clearly identify them and fix them manually.

StackOverflow has yet another approach which might be suitable if you’re dealing with only one or two tables and just want to fix it from the MySQL console or phpMyAdmin or whatever – changing the table character sets on the fly:

ALTER TABLE [tableName] MODIFY [columnName] [columnType] CHARACTER SET latin1;
ALTER TABLE [tableName] MODIFY [columnName] [columnType] CHARACTER SET binary;
ALTER TABLE [tableName] MODIFY [columnName] [columnType] CHARACTER SET utf8;
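
For example, against a hypothetical TEXT column body on a table named articles, the same three steps could be run from the shell like so (again, back up the table first):

# Hypothetical example: relabel a latin1 TEXT column that actually holds
# UTF-8 bytes, going via binary so MySQL never reinterprets the data
mysql -h DB_HOST -u DB_USER -p DB_NAME -e "
  ALTER TABLE articles MODIFY body TEXT CHARACTER SET latin1;
  ALTER TABLE articles MODIFY body TEXT CHARACTER SET binary;
  ALTER TABLE articles MODIFY body TEXT CHARACTER SET utf8;"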

This method worked fine for me in a test capacity on a single table but we didn’t end up using it everywhere.

Should I Gzip Content Before Putting it in MySQL?

The answer for us was “yes”, although there’s a lot more to it than that. I just wrote about doing this on AusGamers for a table that was causing us a lot of grief with really slow DELETEs due to the huge volume of data in there.

I found that gzip’ing the content before putting it into the database made a massive difference to performance – queries that would usually take minutes to run because they were removing up to gigabytes of data were suddenly dealing with roughly a tenth as many bytes, which had a huge impact on execution time.

The results were obvious – you can see in the graphs below the impact that was made.

This change might not be useful in all circumstances – obviously at some point the CPU overhead of gzip’ing could cost more than it’s worth. But if you’re dealing with multi-megabyte chunks of text that MySQL only needs to pull in and out (i.e. you don’t need to sort by the contents or do anything else with that data from within MySQL), it’s probably worth trying.
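
To make the idea concrete from the command line, here’s a rough sketch with a hypothetical articles table and a BLOB column body_gz. In practice we did the compression in application code rather than in the shell, and the hex-literal trick below won’t scale to really large blobs on a command line; it’s just to keep the example self-contained:

# Compress the content before it ever reaches MySQL
gzip -c article.html > article.html.gz

# Store the compressed bytes as a hex literal (hypothetical table/column)
printf "UPDATE articles SET body_gz = x'%s' WHERE id = 123;" \
  "$(xxd -p article.html.gz | tr -d '\n')" \
  | mysql -h DB_HOST -u DB_USER -p DB_NAME

# Pull it back out and decompress on the way through
mysql -N -h DB_HOST -u DB_USER -p DB_NAME \
  -e "SELECT HEX(body_gz) FROM articles WHERE id = 123" | xxd -r -p | gunzip -c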