exim4 and apache2-mpm-itk

Migrating from a very old Debian install to a newer VPS with a more recent version of Apache and the mpm-itk module, I had problems sending mail with the standard PHP mail() call – first seen when the WP contact form I was using started throwing a “Sorry, email message could not be delivered” error.

The exim4 log reported the following:

unable to set gid=33 or uid=0 (euid=0): forcing real = effective

This thread contained a post indicating the problem was the LimitGIDRange/LimitUIDRange options; it seems that when these are not specified, default limits apply (perhaps with very low values, or perhaps the UID/GID switch simply fails when they’re unset) and they need to be overridden.

Defining these values in the global Apache configuration fixes it.
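For reference, the addition looks something like this – the ranges below are assumptions for illustration; set them to cover the actual UIDs/GIDs your vhosts run as:

```apache
# Ranges are illustrative; widen or narrow to match the uid/gid values
# your vhosts actually use (www-data is gid 33 on Debian).
<IfModule mpm_itk_module>
    LimitUIDRange 0 65534
    LimitGIDRange 0 65534
</IfModule>
```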

PHPMailer and Gmail API Mysterious ‘Could not connect’ Error on Windows

I’ve had PHPMailer happily sending email through the Gmail API (as part of a G Suite subscription) for a while now and it mysteriously stopped working yesterday (29th Sep, 2017), throwing the following output with debug enabled:

2017-09-30 11:24:52 SERVER -> CLIENT: 220 smtp.gmail.com ESMTP v2sm1805443wmf.8 - gsmtp
2017-09-30 11:24:52 CLIENT -> SERVER: EHLO trog-pc
2017-09-30 11:24:52 SERVER -> CLIENT: 250-smtp.gmail.com at your service, []
250-SIZE 35882577
2017-09-30 11:24:52 CLIENT -> SERVER: STARTTLS
2017-09-30 11:24:52 SERVER -> CLIENT: 220 2.0.0 Ready to start TLS
2017-09-30 11:24:52 SMTP Error: Could not connect to SMTP host.
2017-09-30 11:24:52 CLIENT -> SERVER: QUIT
2017-09-30 11:24:52 SERVER -> CLIENT: M I A ��] P *g�� 87� �*��h�!T��
[multiple line binary gibberish removed]
2017-09-30 11:24:52 SMTP ERROR: QUIT command failed: M I A ��] P *g�� 87� �*��h�!T�� [multiple line binary gibberish removed]
2017-09-30 11:24:52 SMTP connect() failed. https://github.com/PHPMailer/PHPMailer/wiki/Troubleshooting
Mailer Error: SMTP connect() failed. https://github.com/PHPMailer/PHPMailer/wiki/Troubleshooting

It looked pretty clearly like a crypto error, and the OpenSSL check described in the Troubleshooting guide (helpfully linked in the error message!) confirmed there was a problem.

The OpenSSL test result looked like this:

C:\files\Apps\OpenSSL>openssl s_client -starttls smtp -crlf -connect smtp.gmail.com:587
depth=1 C = US, O = Google Trust Services, CN = Google Internet Authority G3
verify error:num=20:unable to get local issuer certificate
Certificate chain
0 s:/C=US/ST=California/L=Mountain View/O=Google Inc/CN=smtp.gmail.com
i:/C=US/O=Google Trust Services/CN=Google Internet Authority G3
1 s:/C=US/O=Google Trust Services/CN=Google Internet Authority G3
i:/OU=GlobalSign Root CA - R2/O=GlobalSign/CN=GlobalSign
Server certificate
subject=/C=US/ST=California/L=Mountain View/O=Google Inc/CN=smtp.gmail.com
issuer=/C=US/O=Google Trust Services/CN=Google Internet Authority G3
No client certificate CA names sent
Peer signing digest: SHA256
Server Temp Key: ECDH, P-256, 256 bits
SSL handshake has read 3246 bytes and written 468 bytes
New, TLSv1/SSLv3, Cipher is ECDHE-RSA-AES128-GCM-SHA256
Server public key is 2048 bit
Secure Renegotiation IS supported
Compression: NONE
Expansion: NONE
No ALPN negotiated
Protocol : TLSv1.2
Cipher : ECDHE-RSA-AES128-GCM-SHA256
Session-ID: 927522CACEB8BB3D0FB305E197235C64D147A4CC26643AB60EB5F110E787FA98
Master-Key: 8D698AF5A7790DC4836430F2FA6157B310CF0DDA684B5160BEC643B966E9CCC41598D34D03DA0579893A6CAFB62C2B33
Key-Arg : None
PSK identity: None
PSK identity hint: None
SRP username: None
TLS session ticket lifetime hint: 100800 (seconds)
TLS session ticket:
0000 - 00 33 97 a3 0b be 7f 8d-47 f3 97 6c 18 bb 43 83 .3......G..l..C.
0010 - 27 a4 f7 01 2c d1 a8 0e-55 a9 3c c3 b3 6f 30 58 '...,...U.<..o0X
0020 - 89 22 e3 29 50 42 18 8e-29 ca be 27 57 f9 bc 6e .".)PB..)..'W..n
0030 - 25 f9 ed 68 6a ba 30 97-60 0b 32 fc 19 ab 83 10 %..hj.0.`.2.....
0040 - 00 d1 91 e7 1d 72 d9 2f-3f 27 ac 06 83 23 78 94 .....r./?'...#x.
0050 - 4d 59 38 7f 5d 70 2e ec-d9 d4 b3 31 c9 34 04 25 MY8.]p.....1.4.%
0060 - 79 a8 2f 49 66 ce c7 e3-67 de 46 58 43 b9 42 36 y./If...g.FXC.B6
0070 - 54 49 33 94 99 1e 7d 0b-87 4c da c5 a4 72 b1 05 TI3...}..L...r..
0080 - 5d 47 3b cf 33 13 69 41-f8 1d e4 a0 81 26 1c e5 ]G;.3.iA.....&..
0090 - a7 6b 9b 09 c8 db 1d 8f-6b 5e 54 eb d7 ed 9e 6c .k......k^T....l
00a0 - fc 1f f9 f8 3a d4 3a df-05 c7 0b a3 0b 66 c1 4e ....:.:......f.N
00b0 - 66 27 3c 64 03 60 81 1d-44 bb f0 a4 08 d0 96 dd f'<d.`..D.......
00c0 - 14 31 95 fd 23 7f 13 82-ed 15 fa fb 6a f5 ec 69 .1..#.......j..i
00d0 - c9 b1 d3 e9 fc .....

Start Time: 1506770618
Timeout : 300 (sec)
Verify return code: 20 (unable to get local issuer certificate)

At first glance the Troubleshooting guide implies that the ‘unable to get local issuer certificate’ error is safe to ignore – but it is only referring to the first instance at the top of the output. If you’re also seeing it in the Verify return code at the bottom, you have the same problem I did.

The easy fix is to set verify_peer to false, as described in the Troubleshooting guide. But (as it also notes) this is dodgy and you should fix the local certificate store instead. So don’t do this.

After messing around a bit (including testing identical code on a Linux VM and seeing that it worked), I gritted my teeth and dove into the OpenSSL configuration (something which I’ve studiously avoided for years because everything has magically Just Worked for me).

It looks like OpenSSL didn’t have a local certificate store at all on Windows and one needs to be explicitly configured. I have no idea how it worked before – maybe it was using some sort of embedded certificate that just expired? Or maybe I had changed some other option somewhere without realising (unlikely, but I hate blaming gremlins).

Anyway, the fix is simple:

1) Download the latest cacert.pem file from the curl website

2) Plonk it somewhere on your local machine where PHP can get to it.

3) Update your php.ini’s openssl.cafile directive to point to this new file.
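For example (the path here is just an illustration – point it at wherever you saved the file):

```ini
; php.ini – example location; use the path where you put cacert.pem
openssl.cafile = "C:\files\certs\cacert.pem"
```

Restart Apache (or whatever is running PHP) afterwards so the new setting is picked up.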

PHP’s OpenSSL should now have the local certificates. The OpenSSL test in the PHPMailer Troubleshooting guide should now “pass” and that final Verify return code: 20 (unable to get local issuer certificate) message should be replaced with Verify return code: 0 (ok). PHPMailer should also happily work again.

Encouraging More Flexible Government Procurement

Last year, Jon Stewart interviewed Nancy Pelosi on The Daily Show, discussing how government procurement sort of sucks. Much of the discussion stemmed from issues with the Affordable Care Act’s healthcare.gov site, which struggled at launch with a series of technical problems that would make any web developer wince.

The full interview is, as always, an excellent watch.

While the scale of the deployment and the bizarre state of the American healthcare system is almost completely alien to most of us in Australia, there was one part of the interview that resonated very strongly with me as a small business owner who has tried to approach government to solve their technical problems.

Stewart brought up the point that the complexity of the procurement process limits the accessibility of it to large companies – small companies are excluded simply because they don’t have the resources to devote to the tendering process.

The relevant text for those nerds (like me) who dislike video:

Stewart: “Obama’s IT guy, small company, clearly a brilliant guy – he arranged all of Obama’s Internet campaign stuff … That guy couldn’t figure out the process. He couldn’t figure out how to bid for that contract.

He said it was a 300 page document and it seems like it’s obscured like that purposefully so that the larger companies have an advantage because they have teams of lawyers and things that can do it.

I’m presenting it as – “Do we have a foundational problem? Is there a corruption in the system that needs to be addressed to give us the confidence that moving forward, we can execute these programmes better?”

Pelosi: “I don’t think there’s a corruption. There may be a risk aversion with going with the known and then just not being entrepreneurial enough to say, question whether that is really going to do the job.”

If you’re a small ICT company in Queensland looking to expand your customer base, you’ve almost certainly looked at the QTenders site every now and then to see what stuff is on the table.

In general I’ve found the tender documents to be very well written. They’re well organised; it’s clear what they’re trying to do and how they want to do it.

Unfortunately many of the documents are very long – just reading them can be a full-day exercise.

For a small company, responding to a tender is the work-time equivalent of running an entire project, and often requires several people. Given the often short timeframes for a tender response – maybe a month – it can eat a huge amount of time that would otherwise go to servicing other customers.

Arguably, this is just part of the sales process. But large companies have entire teams of sales people that do this. A mate I played soccer with actually led such a team (in mining or something, not IT), the sole purpose of which was to reply to government tenders – and over the years they developed a keen insight into how many they’d win versus how many they’d lose.

If you’re a large company, you can afford to do this. You can build your sales process over time and simply absorb the losses incurred by blowing a few human-months of time on responding to a tender. But if you’re a startup or an SME, you’re potentially losing 10-20% of your entire sales effort for a YEAR, working on something that you’re almost certainly not going to win – because the big companies have the tender process so streamlined. They basically have human machines for churning out responses to these things; responses that they probably know statistically how likely they are to win.

The tender process is an important part of open government procurement. But watching the giant companies that win the tenders fail again and again, costing taxpayers billions of dollars, is starting to wear a bit thin. We need to look at better solutions – dividing large projects up into smaller, more manageable components and figuring out a way to let our SMEs and startups compete effectively for them, instead of being excluded because they simply can’t afford a seat at the table.

Irritation, Confusion and Deception in the Android Permission System

The Android permission system seemed like a great thing at first – crystal clear understanding about what each application can do on your device. However, with the latest round of updates, it has become even clearer that the permission system is confusing at best and deliberately harmful to users at worst.

If you’re a privacy- and security-conscious, tech-savvy user, you might end up poring over individual permissions for each application before deciding to install it. This can be time-consuming – even experienced users might have to hit up Google to see exactly what a permission means. Sometimes you might even veto an app or an upgrade because of onerous permissions. But if you watch normal civilians use their phones, they barely glance at the security options during install or upgrade. I’ll even confess to giving up on dealing with permissions and just installing an app because I felt I “needed” it.

Everything looked good for a while back in Android 4.3 with the discovery of the permission-tweaking system, which led to “App Ops”, allowing users to selectively enable/disable permissions on a per-application basis – total control over what each app had access to. This was perfect, but it was sadly pulled in a later release, with Google saying it wasn’t intended to be available. The clever hackers over at CyanogenMod restored it as part of their release, but on stock Android there’s basically no way to have fine-grained control over your apps – meaning you accept every permission an app wants, or you do without.

Here’s an example – I just received notice of an update for PasswdSafe:

2014-09-29 17.21.57

Now, the “What’s New” notes are provided by the application developer. Sometimes they are nice enough to include details about why there was a permission change, but I would say this is generally pretty rare.

Here’s what happens when you click ‘Update’:

2014-09-30 01.04.50

You only get this sort of popup when there’s a permission change. So this popup implies – to me, at least – that there’s some sort of permission change involving media, or files, or whatever.

But if you scroll to the bottom of the app page to click on the ‘Permission Details’ icon, you’ll see this:

2014-09-29 17.22.46

Here you can see the new permission actually has something to do with Near-Field Communication! (So in this case, the changelog provided by the developer actually does relate directly to the permission change, though it’s not really clear until you manually inspect the permissions like this.)

While you can find the information, this flow is totally broken. It’s infuriating that the recent change now seems to actively hide this detail and thus mislead the user.

This UI change is a massive step back for Android; it compromises the ability of the user to make informed decisions about the software on their device.

The failure of the Australian state to gracefully deal with copyright

Nic Suzor has written another insightful piece for The Conversation which looks at the Australian government’s leaked plan to combat piracy.

In conjunction with research assistant Alex Button-Sloan, the article uses several words that I think fairly accurately sum up this plan: “unrealistic”, “vague”, “unlikely to help”. Some of the predicted outcomes are equally depressing: “likely to raise the price of internet access”.

Everyone in Australia should read this article. This plan does nothing to address the root cause of Australian piracy – the simple fact that content is not made available on the same terms as it is in other markets.

The fact that this plan could “massively increase the potential risks for companies that provide legitimate services” also fills me with dread; Mammoth – in addition to being an Internet hosting provider – has been actively involved in legitimate content distribution since the late 90s, across video games, music, and movies. With our clients, we have struggled for years to try to figure out how to make sure Australians have access to the latest content.

These are not technical problems; they are licensing and rights problems caused by media companies carefully choosing how to maximise their profits. The fact that they are now cooperating with our government to literally change the laws to better suit their business model is kind of irritating.

MySQL Server fails to upgrade in Debian/Ubuntu

Just did a relatively normal ‘apt-get upgrade’ on my simple WordPress HHVM test server and got a horrible bunch of errors, leaving it in a non-working state:

E: Sub-process /usr/bin/dpkg returned an error code (1)
A package failed to install. Trying to recover:
Setting up mysql-server-5.1 (5.1.54-1ubuntu4) …
start: Job failed to start
invoke-rc.d: initscript mysql, action "start" failed.
dpkg: error processing mysql-server-5.1 (--configure):
subprocess installed post-installation script returned error exit status 1
dpkg: dependency problems prevent configuration of mysql-server:
mysql-server depends on mysql-server-5.1; however:
Package mysql-server-5.1 is not configured yet.
dpkg: error processing mysql-server (--configure):
dependency problems - leaving unconfigured
Errors were encountered while processing:

There are many reports of this on various forums and blogs with many different solutions. Here’s mine.

In my case, MySQL logged the following error:

May 4 07:04:22 trog mysqld_safe: Starting mysqld daemon with databases from /var/lib/mysql
May 4 07:04:22 trog mysqld: 140504 7:04:22 [Warning] Using unique option prefix key_buffer instead of key_buffer_size is deprecated and will be removed in a future release. Please use the full name instead.
May 4 07:04:22 trog mysqld: 140504 7:04:22 [Warning] Using unique option prefix myisam-recover instead of myisam-recover-options is deprecated and will be removed in a future release. Please use the full name instead.
May 4 07:04:22 trog mysqld: 140504 7:04:22 [Note] Plugin 'FEDERATED' is disabled.
May 4 07:04:22 trog mysqld: 140504 7:04:22 InnoDB: The InnoDB memory heap is disabled
May 4 07:04:22 trog mysqld: 140504 7:04:22 InnoDB: Mutexes and rw_locks use GCC atomic builtins
May 4 07:04:22 trog mysqld: 140504 7:04:22 InnoDB: Compressed tables use zlib 1.2.7
May 4 07:04:22 trog mysqld: 140504 7:04:22 InnoDB: Using Linux native AIO
May 4 07:04:22 trog mysqld: 140504 7:04:22 InnoDB: Initializing buffer pool, size = 128.0M
May 4 07:04:22 trog mysqld: InnoDB: mmap(137363456 bytes) failed; errno 12
May 4 07:04:22 trog mysqld: 140504 7:04:22 InnoDB: Completed initialization of buffer pool
May 4 07:04:22 trog mysqld: 140504 7:04:22 InnoDB: Fatal error: cannot allocate memory for the buffer pool
May 4 07:04:22 trog mysqld: 140504 7:04:22 [ERROR] Plugin 'InnoDB' init function returned error.
May 4 07:04:22 trog mysqld: 140504 7:04:22 [ERROR] Plugin 'InnoDB' registration as a STORAGE ENGINE failed.
May 4 07:04:22 trog mysqld: 140504 7:04:22 [ERROR] Unknown/unsupported storage engine: InnoDB
May 4 07:04:22 trog mysqld: 140504 7:04:22 [ERROR] Aborting
May 4 07:04:22 trog mysqld:
May 4 07:04:22 trog mysqld: 140504 7:04:22 [Note] /usr/sbin/mysqld: Shutdown complete
May 4 07:04:22 trog mysqld:


The important lines are the mmap() failure (errno 12 – out of memory) and the “Fatal error: cannot allocate memory for the buffer pool” – MySQL was starting and trying to allocate memory for InnoDB, but failing. So when the package manager updated MySQL and restarted it, it would simply exit, returning failure as part of the update process.

I solved this hackishly (but easily) by stopping a memory-hungry process on the server (HHVM), then re-running the upgrade. Without HHVM running there was plenty of spare memory, the update was able to successfully restart MySQL, then I just restarted HHVM.
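The recovery sequence looked something like this – a sketch, not a guaranteed fix; the service names are from my setup:

```shell
# Free memory by stopping the hungry process (HHVM in my case),
# then let dpkg finish configuring the half-installed packages.
service hhvm stop
apt-get -f install      # re-runs the failed mysql-server postinst script
service mysql status    # confirm MySQL came back up
service hhvm start
```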

This is of course indicative that this system probably doesn’t have enough memory – but it’s a small-scale test environment. So if you’re looking for a cheap easy fix when Debian/Ubuntu fail to upgrade MySQL, this might work.

I Want to Pay for Dropbox – But It Might Make Me Hate Myself

Dropbox is one of the very few applications I’ve installed that has completely changed the way I use computer systems.

Under most circumstances, I no longer have to think about having to deal with the irritating sending and receiving of files, or stuffing them onto some other system to be retrieved later. I can just save everything onto my local disk – exactly as I’d like to – and know that it will magically pop up at some later point on every other PC that I own. I take photos on my phone knowing they’ll be stuffed onto Dropbox for later retrieval on my PC – indeed, I no longer even think about “copying photos off my phone”, because it just happens.

There are, of course, a few limitations. For example, it’s hard to do this with large volumes of data, simply because the upstream on most broadband plans is woeful. In those cases typically reaching for the USB disk or stuffing bytes onto my phone is a better alternative.

Of course, the other limitation is the few paltry gigabytes of storage you get on the free plan. If you’re dedicated though, it’s pretty trivial to boost this by quite a bit – referring friends, linking devices, all that sort of stuff. At the time of writing I have 4.2GB available on my Dropbox, without spending a cent.

And now, perhaps inevitably, I find myself in the situation of wondering why the hell all my files aren’t on Dropbox. It’s almost like they had some sort of insidious plan to get me hooked on their awesome system by giving me a taste for free.

Unfortunately, I don’t really want to commit to Dropbox – not so much because I don’t want to pay for it, but because I have never really liked their security model. I want my files to be encrypted/decrypted client side.

I suspect the main reason they don’t want to offer this is because it would remove a lot of the basic functionality that the vast majority of users take advantage of regularly – the ability to access and share files quickly and easily via the web interface in particular. Not to mention the support nightmare that would certainly ensue when those users lose their encryption keys and wonder why all their files are now a bunch of unrecoverable gibberish.

In the post-Snowden world this is possibly an even bigger deal. I don’t really have concerns that faceless government agents are poring through my files – but it’s clearer than ever that you are ultimately responsible for the security of the data you put online.

I’ve tried a few of the European alternatives to Dropbox – Wuala and SpiderOak most recently. Their security policies look good, they (appear to) use client side encryption, and they’re located in Europe, so I can rest somewhat comfortably knowing they’re not subject to secret NSA orders or whatever.

With the possible exception of Google Drive (which of course is subject to the same woes as Dropbox), I found the other services I tried almost completely unusable compared to the elegance, simplicity, and sheer Just Workiness of Dropbox. I tried – I really did. I wanted to like them. I’m not sure if it’s all that security stuff getting in the way of a good experience, but they just feel clunky and awkward to use and painful to set up, and I was generally left thinking “why am I doing this?” the whole time.

I’m a big believer in voting with your wallet. It’s not like there aren’t other options. But Dropbox is just so damn convenient in so many different ways that I can feel myself slowly caving and abandoning any lofty principles just so I can go back to Getting Shit Done.

There are two things that Dropbox could do to get me off the fence immediately.

1) Introduce client side encryption/decryption into the Dropbox client. While it remains closed source I can imagine many would still (rightfully) be hesitant to trust it (how would you know they’re not capturing your encryption keys?), but a nod in that direction would be enough for me.

2) Introduce an option to limit storage of my files to Amazon regions of my choosing. I am not intricately familiar with how Amazon’s cloudy stuff works, but it seems this would not be a complicated feature: let me opt to have my files stored on S3 within particular geographic regions. I can imagine this would be a big deal for government services that might want to use Dropbox but are subject to limitations on where their data can be physically stored – and for the security nerds wanting to stay out of reach of the NSA (yes, yes, subject to their ability to compromise any site anyway), it would be a neat option.

What I suspect I’ll end up doing is signing up for a plan and then encrypting all my stuff locally with gnupg and treating it more like a backup archival system rather than a live working filesystem.
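As a sketch of that workflow (filenames and passphrase handling simplified for illustration – in practice you’d use a key or an agent rather than a passphrase on the command line):

```shell
# Create a file, encrypt it symmetrically, and sync only the .gpg copy.
echo 'important notes' > notes.txt

gpg --batch --yes --pinentry-mode loopback --passphrase 'example-passphrase' \
    --symmetric --cipher-algo AES256 --output notes.txt.gpg notes.txt

# Later, pull the archive back down and decrypt it.
gpg --batch --yes --quiet --pinentry-mode loopback --passphrase 'example-passphrase' \
    --output notes-restored.txt --decrypt notes.txt.gpg
```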

Image Data Only Hashing of JPEG Files

As part of a small project to verify backups, I came across a case where I had two photos that looked identical but with different EXIF data.

The backup verification system (correctly) flagged these as two different files – as the SHA1 file hashes were different. However, the actual photo was – as far as I could tell – absolutely identical, so I started looking to see if there was a way to verify JPEG files based on the image data alone (instead of the entire file, which would include meta stuff like the EXIF data).

A quick look around revealed that ImageMagick has a “signature hash” function as part of ‘identify’, which sounded perfect. You can test it like so:

identify.exe -verbose -format "%#" test.jpg

At first glance this solved the problem, but testing on a few systems showed I was getting different hashes for the same file – different versions of ImageMagick return different hashes. I asked about this on their forum and was told that the signature algorithm has changed a few times – which makes it sort of useless when compatibility across versions or platforms is required.

After looking around a bit more for alternatives I found the (possibly Australian-made?) PHP JPEG Metadata Toolkit, which (amongst many other things) includes a get_jpeg_image_data() function that (so far) seems to work reliably across systems. Pulling the data out and running it through SHA1 gives a simple, usable way to hash just the image data in a JPEG file.
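The underlying idea can also be sketched without the toolkit: walk the JPEG’s marker segments, skip the metadata-bearing ones (APPn, which is where EXIF lives, and COM comments), and hash everything else. This is my own illustrative Python sketch of that approach – it is not the toolkit’s algorithm, and it ignores some subtleties (e.g. thumbnails embedded inside APP segments are skipped along with the rest of the metadata):

```python
import hashlib

def image_data_sha1(jpeg_bytes: bytes) -> str:
    """SHA-1 of a JPEG's image data, ignoring APPn (EXIF etc.) and COM segments."""
    if jpeg_bytes[:2] != b"\xff\xd8":
        raise ValueError("not a JPEG (missing SOI marker)")
    h = hashlib.sha1()
    i = 2
    while i + 1 < len(jpeg_bytes):
        if jpeg_bytes[i] != 0xFF:
            raise ValueError("corrupt segment marker at offset %d" % i)
        marker = jpeg_bytes[i + 1]
        if marker == 0xDA:  # Start of Scan: the rest is entropy-coded image data
            h.update(jpeg_bytes[i:])
            break
        length = int.from_bytes(jpeg_bytes[i + 2:i + 4], "big")
        if not (0xE0 <= marker <= 0xEF or marker == 0xFE):  # skip APPn / COM
            h.update(jpeg_bytes[i:i + 2 + length])
        i += 2 + length
    return h.hexdigest()
```

Two files that differ only in their EXIF (APP1) segments hash identically here, while any change to the actual image data changes the hash.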

Terrible Thunderbird v15.x IMAP Performance with AVG

My PC has recently been chugging a lot more than usual – massive disk activity and high CPU utilisation. Looking into it I quickly realised that it was happening whenever Thunderbird received a large bolus of new email – more than 15-20 emails within a minute or two. When I clicked on the folder with the new email, I could see in the status bar at the bottom that Thunderbird was very slowly downloading these new emails, while my disk and CPU went crazy.

Looking further, I noticed in Filemon that AVG was doing a lot of the work. Disabling AVG’s “Resident Shield” during one of these operations almost immediately fixed the symptoms – the email came down much faster and disk activity and CPU returned to normal.

This seemed to start around the same time Thunderbird v15.x was released, but I don’t want to declare that the culprit – especially as it is probably the same thing I noticed with Microsoft Security Essentials around v11.x. I’m curious whether something fundamental changed back then – either internally in Thunderbird, or perhaps within AVG – but it’s certainly possible that I’m just getting a little more email now and it’s tipped my PC over the edge. I assume it has something to do with the way AVG hooks into disk read/write operations – possibly Thunderbird changed something low-level there and it is simply interacting badly with how AVG does its real-time checking.

In any case, if you are experiencing massive slowdowns and system chunkiness using Thunderbird in conjunction with AVG, you can temporarily disable the real-time checking when a large batch of email arrives. Obviously you probably don’t want to leave it off altogether.