Software

Encouraging More Flexible Government Procurement

Last year, Jon Stewart interviewed Nancy Pelosi on The Daily Show about how government procurement sort of sucks. Much of the discussion stemmed from the Affordable Care Act's healthcare.gov site, which struggled at its launch with a series of technical problems that would make any web developer wince.

The full interview is, as always, an excellent watch.

While the scale of the deployment and the bizarre state of the American healthcare system are almost completely alien to most of us in Australia, one part of the interview resonated very strongly with me as a small business owner who has tried to approach government to help solve its technical problems.

Stewart made the point that the complexity of the procurement process effectively restricts it to large companies – small companies are excluded simply because they don't have the resources to devote to the tendering process.

The relevant text for those nerds (like me) who dislike video:

Stewart: “Obama’s IT guy, small company, clearly a brilliant guy – he arranged all of Obama’s Internet campaign stuff … That guy couldn’t figure out the process. He couldn’t figure out how to bid for that contract.

He said it was a 300 page document and it seems like it’s obscured like that purposefully so that the larger companies have an advantage because they have teams of lawyers and things that can do it.

I’m presenting it as – “Do we have a foundational problem? Is there a corruption in the system that needs to be addressed to give us the confidence that moving forward, we can execute these programmes better?”

Pelosi: “I don’t think there’s a corruption. There may be a risk aversion with going with the known and then just not being entrepreneurial enough to say, question whether that is really going to do the job.”

If you’re a small ICT company in Queensland looking to expand your customer base, you’ve almost certainly looked at the QTenders site every now and then to see what stuff is on the table.

In general I’ve found the tender documents to be very well written. They’re well organised; it’s clear what they’re trying to do and how they want to do it.

Unfortunately many of the documents are very long – just reading them can be a full-day exercise.

For a small company, responding to a tender is the work-time equivalent of running an entire project. It often requires several people. Given the short timeframes for a tender response – maybe a month – it can mean diverting a huge amount of time away from servicing other customers.

Arguably, this is just part of the sales process. But large companies have entire teams of sales people who do this. A mate I played soccer with actually led a team (in mining or something, not IT) whose sole purpose was to reply to government tenders, having (over the years) developed a keen insight into how many they'd win versus how many they'd lose.

If you're a large company, you can afford to do this. You can build your sales process over time and simply absorb the losses incurred by blowing a few human-months on a tender response. But if you're a startup or an SME, you're potentially losing 10-20% of your entire sales effort for a YEAR, working on something that you're almost certainly not going to win – because the big companies have the tender process so streamlined. They basically have human machines for churning out responses to these things; responses whose chances of winning they probably know statistically.

The tender process is an important part of open government procurement. But watching the giant companies that win the tenders fail again and again, costing taxpayers billions of dollars, is starting to wear a bit thin. We need to look at better solutions – dividing large projects into smaller and more manageable components and figuring out a way to let our SMEs and startups compete effectively for them, instead of being excluded because they simply can't afford a seat at the table.

Irritation, Confusion and Deception in the Android Permission System

The Android permission system seemed like a great thing at first – a crystal clear understanding of what each application can do on your device. However, with the latest round of updates, it has become even clearer that the permission system is confusing at best and deliberately harmful to users at worst.

If you're a privacy- and security-conscious, tech-savvy user, you might end up poring over the individual permissions for each application before deciding to install it. This can be time consuming – even experienced users might have to hit up Google to see exactly what a permission means. Sometimes you might even veto an app or an upgrade because of onerous permissions. But if you watch normal civilians use their phones, they barely glance at the security options during install or upgrade. I'll even confess to giving up on the permissions and just installing an app because I felt I "needed" it.

Everything looked good for a while back in Android 4.3, with the discovery of the hidden permission-tweaking system that led to "App Ops", which allowed users to selectively enable or disable permissions on a per-application basis, granting total control over what each app had access to. This was perfect, but it was sadly pulled in a later release, with Google saying it wasn't intended to be available. The clever hackers over at CyanogenMod restored it as part of their release, but unless you're running a custom ROM there's basically no way to have fine-grained control over your apps – meaning you either accept every permission an app wants, or you do without it.

Here’s an example – I just received notice of an update for PasswdSafe:

[Screenshot: the PasswdSafe update notification, including the developer's "What's New" notes]

Now, the “What’s New” notes are provided by the application developer. Sometimes they are nice enough to include details about why there was a permission change, but I would say this is generally pretty rare.

Here’s what happens when you click ‘Update’:

[Screenshot: the permission-change prompt that appears after tapping "Update"]

You only get this sort of popup when there’s a permission change. So this popup implies – to me, at least – that there’s some sort of permission change involving media, or files, or whatever.

But if you scroll to the bottom of the app page to click on the ‘Permission Details’ icon, you’ll see this:

[Screenshot: the app's full "Permission details" list]

Here you can see the new permission actually has something to do with Near-Field Communication! (So in this case, the changelog provided by the developer actually does relate directly to the permission change, though it’s not really clear until you manually inspect the permissions like this.)

While you can eventually find the information, the current flow is totally broken. It's infuriating that the recent change now seems to actively hide the relevant detail and thus mislead the user.

This UI change is a massive step back for Android; it compromises the ability of the user to make informed decisions about the software on their device.

The failure of the Australian state to gracefully deal with copyright

Nic Suzor has written another insightful piece for The Conversation which looks at the Australian government’s leaked plan to combat piracy.

In conjunction with research assistant Alex Button-Sloan, the article uses several words that I think fairly accurately sum up this plan: "unrealistic", "vague", "unlikely to help". Some of the outcomes are equally depressing: "likely to raise the price of internet access".

Everyone in Australia should read this article. This plan does nothing to address the root cause of Australian piracy – the simple fact that content is not made available on the same terms as it is in other markets.

The fact that this plan could “massively increase the potential risks for companies that provide legitimate services” also fills me with dread; Mammoth – in addition to being an Internet hosting provider – has been actively involved in legitimate content distribution since the late 90s, across video games, music, and movies. With our clients, we have struggled for years to try to figure out how to make sure Australians have access to the latest content.

These are not technical problems; they are licensing and rights problems caused by media companies carefully choosing how to maximise their profits. The fact that they are now cooperating with our government to literally change the laws to better suit their business model is kind of irritating.

MySQL Server fails to upgrade in Debian/Ubuntu

Just did a relatively normal ‘apt-get upgrade’ on my simple WordPress HHVM test server and got a horrible bunch of errors, leaving it in a non-working state:

E: Sub-process /usr/bin/dpkg returned an error code (1)
A package failed to install. Trying to recover:
Setting up mysql-server-5.1 (5.1.54-1ubuntu4) …
start: Job failed to start
invoke-rc.d: initscript mysql, action "start" failed.
dpkg: error processing mysql-server-5.1 (--configure):
subprocess installed post-installation script returned error exit status 1
dpkg: dependency problems prevent configuration of mysql-server:
mysql-server depends on mysql-server-5.1; however:
Package mysql-server-5.1 is not configured yet.
dpkg: error processing mysql-server (--configure):
dependency problems - leaving unconfigured
Errors were encountered while processing:
mysql-server-5.1
mysql-server

There are many reports of this on various forums and blogs with many different solutions. Here’s mine.

In my case, MySQL logged the following error:

May 4 07:04:22 trog mysqld_safe: Starting mysqld daemon with databases from /var/lib/mysql
May 4 07:04:22 trog mysqld: 140504 7:04:22 [Warning] Using unique option prefix key_buffer instead of key_buffer_size is deprecated and will be removed in a future release. Please use the full name instead.
May 4 07:04:22 trog mysqld: 140504 7:04:22 [Warning] Using unique option prefix myisam-recover instead of myisam-recover-options is deprecated and will be removed in a future release. Please use the full name instead.
May 4 07:04:22 trog mysqld: 140504 7:04:22 [Note] Plugin 'FEDERATED' is disabled.
May 4 07:04:22 trog mysqld: 140504 7:04:22 InnoDB: The InnoDB memory heap is disabled
May 4 07:04:22 trog mysqld: 140504 7:04:22 InnoDB: Mutexes and rw_locks use GCC atomic builtins
May 4 07:04:22 trog mysqld: 140504 7:04:22 InnoDB: Compressed tables use zlib 1.2.7
May 4 07:04:22 trog mysqld: 140504 7:04:22 InnoDB: Using Linux native AIO
May 4 07:04:22 trog mysqld: 140504 7:04:22 InnoDB: Initializing buffer pool, size = 128.0M
May 4 07:04:22 trog mysqld: InnoDB: mmap(137363456 bytes) failed; errno 12
May 4 07:04:22 trog mysqld: 140504 7:04:22 InnoDB: Completed initialization of buffer pool
May 4 07:04:22 trog mysqld: 140504 7:04:22 InnoDB: Fatal error: cannot allocate memory for the buffer pool
May 4 07:04:22 trog mysqld: 140504 7:04:22 [ERROR] Plugin 'InnoDB' init function returned error.
May 4 07:04:22 trog mysqld: 140504 7:04:22 [ERROR] Plugin 'InnoDB' registration as a STORAGE ENGINE failed.
May 4 07:04:22 trog mysqld: 140504 7:04:22 [ERROR] Unknown/unsupported storage engine: InnoDB
May 4 07:04:22 trog mysqld: 140504 7:04:22 [ERROR] Aborting
May 4 07:04:22 trog mysqld:
May 4 07:04:22 trog mysqld: 140504 7:04:22 [Note] /usr/sbin/mysqld: Shutdown complete
May 4 07:04:22 trog mysqld:


The important lines are the mmap() failure ("errno 12") and the "Fatal error: cannot allocate memory for the buffer pool" – in my case, MySQL was starting up and trying to allocate memory for the InnoDB buffer pool, but failing. So when the package manager upgraded MySQL and tried to restart it, mysqld would simply exit, and that failure was reported back up through the update process.

I solved this hackishly (but easily) by stopping a memory-hungry process on the server (HHVM), then re-running the upgrade. Without HHVM running there was plenty of spare memory, so the upgrade was able to restart MySQL successfully; I then just started HHVM again.
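For reference, a rough sketch of that recovery sequence, run as root – the service names here are just what applied on my box, so substitute whatever memory-hungry process you need to stop:

service hhvm stop       # free up memory (HHVM was the hungry process in my case)
apt-get upgrade         # re-run the upgrade; dpkg --configure -a would also finish the half-configured packages
service mysql status    # check that mysqld actually came back up
service hhvm start      # bring HHVM back once MySQL is happy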

This is of course indicative that this system probably doesn’t have enough memory – but it’s a small-scale test environment. So if you’re looking for a cheap easy fix when Debian/Ubuntu fail to upgrade MySQL, this might work.

I Want to Pay for Dropbox – But It Might Make Me Hate Myself

Dropbox is one of the very few applications I’ve installed that has completely changed the way I use computer systems.

Under most circumstances, I no longer have to deal with the irritation of sending and receiving files, or stuffing them onto some other system to be retrieved later. I can just save everything onto my local disk – exactly as I'd like to – and know that it will magically pop up at some later point on every other PC that I own. I take photos on my phone knowing they'll be stuffed into Dropbox for later retrieval on my PC – indeed, I no longer even think about "copying photos off my phone", because it just happens.

There are, of course, a few limitations. For example, it's hard to do this with large volumes of data, simply because the upstream on most broadband plans is woeful. In those cases, reaching for a USB disk or stuffing bytes onto my phone is typically the better alternative.

Of course, the other limitation is the few paltry gigabytes of storage you get on the free plan. If you’re dedicated though, it’s pretty trivial to boost this by quite a bit – referring friends, linking devices, all that sort of stuff. At the time of writing I have 4.2GB available on my Dropbox, without spending a cent.

And now, perhaps inevitably, I find myself in the situation of wondering why the hell all my files aren’t on Dropbox. It’s almost like they had some sort of insidious plan to get me hooked on their awesome system by giving me a taste for free.

Unfortunately, I don't really want to use Dropbox – not because I don't want to pay for it, but because I have never liked their security model. I want my files to be encrypted and decrypted client-side.

I suspect the main reason they don't want to offer this is that it would remove a lot of the basic functionality the vast majority of users take advantage of regularly – in particular, the ability to access and share files quickly and easily via the web interface. Not to mention the support nightmare that would certainly ensue when those users lose their encryption keys and wonder why all their files are now a bunch of unrecoverable gibberish.

In the post-Snowden world this is possibly an even bigger deal. I don't really have concerns that faceless government agents are going to be poring through my files – but it's clearer than ever that you are ultimately responsible for the security of any data you put online.

I've tried a few of the alternatives to Dropbox – Wuala and SpiderOak most recently. Their security policies look good and they (appear to) use client-side encryption, and Wuala at least is based in Europe, so I can rest somewhat more comfortably that my data isn't subject to secret NSA orders or whatever.

With the possible exception of Google Drive (which of course is subject to the same woes as Dropbox), I found the other services I tried almost completely unusable compared to the elegance, simplicity, and sheer Just Workiness of Dropbox. I tried – I really did. I wanted to like them. I'm not sure if it's all that security stuff getting in the way of a good experience, but they just feel clunky and awkward to use and painful to set up, and I was generally just thinking "why am I doing this?" the whole time.

I’m a big believer in voting with your wallet. It’s not like there aren’t other options. But Dropbox is just so damn convenient in so many different ways that I can feel myself slowly caving and abandoning any lofty principles just so I can go back to Getting Shit Done.

There are two things that Dropbox could do to get me off the fence immediately.

1) Introduce client-side encryption/decryption into the Dropbox client. While it remains closed source I can imagine many would still (rightly) be hesitant to trust it (how would you know they're not capturing your encryption keys?), but a nod in that direction would be enough for me.

2) Introduce an option to limit which regions of Amazon's cloud my files are stored in. I am not intricately familiar with how Amazon's cloudy stuff works, but it seems like this would not be a complicated feature: let me opt to have my files stored on S3 within particular geographic regions. I can imagine this would be a big deal for government services that might want to use Dropbox but are subject to limitations on where their data can be physically stored; for the security nerds wanting to stay out of easy reach of the NSA (yes, yes, subject to their ability to compromise any site anyway), it would be a neat option too.

What I suspect I'll end up doing is signing up for a plan, encrypting all my stuff locally with gnupg, and treating it more like a backup archive than a live working filesystem.
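As a rough sketch of what that workflow might look like (the paths and archive names here are just examples, and it assumes the standard gnupg and tar tools plus a synced ~/Dropbox folder):

tar czf - ~/photos/2014 | gpg --symmetric --cipher-algo AES256 --output ~/Dropbox/photos-2014.tar.gz.gpg
# ...and later, to pull an archive back out:
gpg --decrypt ~/Dropbox/photos-2014.tar.gz.gpg | tar xzf -

The obvious downside is losing the selective sync and web sharing niceties for anything inside the encrypted archives, which is exactly why it works better as archival storage than as a live filesystem.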

Image Data Only Hashing of JPEG Files

As part of a small project to verify backups, I came across a case where I had two photos that looked identical but with different EXIF data.

The backup verification system (correctly) flagged these as two different files – as the SHA1 file hashes were different. However, the actual photo was – as far as I could tell – absolutely identical, so I started looking to see if there was a way to verify JPEG files based on the image data alone (instead of the entire file, which would include meta stuff like the EXIF data).

A quick look around revealed that ImageMagick has a "signature hash" function as part of 'identify', which sounded perfect. You can test it like so:

identify.exe -verbose -format "%#" test.jpg

At first glance this solved the problem, but testing on a few systems showed that I was getting different hashes for the same file – it looked like different versions of ImageMagick return different hashes. I asked about this on their forum and was told that the signature algorithm has changed a few times – which makes it sort of useless if compatibility across platforms is required.

After looking around a bit more for alternatives I found the (possibly Australian-made?) PHP JPEG Metadata Toolkit, which (amongst many other things) includes a get_jpeg_image_data() function that (so far) seems to work reliably across systems. Pulling the data out and running it through SHA1 gives a simple, usable way to hash only the image data in a JPEG file.
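If you'd rather not pull in a PHP library, a similar cross-check can probably be done with exiftool – this is an alternative approach rather than the toolkit's method, and it relies on the assumption that exiftool rewrites the file with metadata removed but never touches the compressed image data, so two copies differing only in EXIF should strip down to identical bytes:

exiftool -all= -o /tmp/stripped.jpg photo.jpg   # write a copy of photo.jpg with all metadata removed
sha1sum /tmp/stripped.jpg                       # hash the metadata-free copy

Running that over the two "identical" photos should yield matching hashes, whereas sha1sum on the original files does not.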

Terrible Thunderbird v15.x IMAP Performance with AVG

My PC has recently been chugging a lot more than usual – massive disk activity and high CPU utilisation. Looking into it I quickly realised that it was happening whenever Thunderbird received a large bolus of new email – more than 15-20 emails within a minute or two. When I clicked on the folder with the new email, I could see in the status bar at the bottom that Thunderbird was very slowly downloading these new emails, while my disk and CPU went crazy.

Looking further, I noticed in Filemon that AVG was doing a lot of the work. Disabling AVG's "Resident Shield" during one of these operations almost immediately fixed the symptoms – the email came down much faster and the disk activity and CPU use returned to normal.

This seemed to start around the time Thunderbird v15.x was released, but I don't want to declare that the culprit, especially as it is probably the same thing I noticed with Microsoft Security Essentials around v11.x. I'm curious whether something fundamental changed back then – either internally in Thunderbird, or perhaps within AVG – but it's certainly possible that I'm just getting a little more email now and it has tipped my PC over the edge. I assume it has something to do with the way AVG hooks into disk read/write operations – possibly Thunderbird changed something low-level there and it is simply reacting badly with how AVG does its real-time checking.

In any case, if you are experiencing massive slowdowns and system chunkiness using Thunderbird in conjunction with AVG, you can simply temporarily disable the real-time checking when getting a large number of emails. Obviously you probably don’t want to leave it off altogether.

MongoDB Fails Updating on Debian

Every so often there’s a MongoDB update on my Debian VPS that fails. The output of ‘aptitude full-upgrade’ is:

# aptitude full-upgrade
The following partially installed packages will be configured:
mongodb-10gen
No packages will be installed, upgraded, or removed.
0 packages upgraded, 0 newly installed, 0 to remove and 0 not upgraded.
Need to get 0 B of archives. After unpacking 0 B will be used.
Setting up mongodb-10gen (2.0.5) …
Starting database: mongodb failed!
invoke-rc.d: initscript mongodb, action "start" failed.
dpkg: error processing mongodb-10gen (--configure):
subprocess installed post-installation script returned error exit status 1
configured to not write apport reports
Errors were encountered while processing:
mongodb-10gen
E: Sub-process /usr/bin/dpkg returned an error code (1)
A package failed to install. Trying to recover:
Setting up mongodb-10gen (2.0.5) …
Starting database: mongodb failed!
invoke-rc.d: initscript mongodb, action "start" failed.
dpkg: error processing mongodb-10gen (--configure):
subprocess installed post-installation script returned error exit status 1
Errors were encountered while processing:
mongodb-10gen

The update works fine, but mongo just fails to start properly.

The problem in my case is simply that there’s a /var/lib/mongodb/mongod.lock file lying around from some previous process. Deleting that file and re-running the aptitude command will start it properly. (Reminder post because I keep forgetting what the problem is.)
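For my own future reference, the whole fix (as root) is just:

rm /var/lib/mongodb/mongod.lock   # remove the stale lock file left by the previous mongod
aptitude full-upgrade             # re-run the upgrade; the postinst script can now start mongodb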

AVG on Linux False Positives for NSIS

As of today, we're seeing what I'm very confident are false positives from AVG running on Linux on our file servers. This started happening after this morning's virus database update. The database release we're using is:

Virus database version: 271.1.1/4927
Virus database release date: Wed, 11 Apr 2012 05:55:00 +10:00

The output of avgscan is:

utils.exe |%name%=Win32/Validace_partial.nsis3|%idn%=0bcfdae664a2c000|=Win32/Validace_partial.nsis3

Files scanned : 1(1)
Infections found : 1(1)
PUPs found : 0
Files healed : 0
Warnings reported : 0
Errors reported : 0

The 'nsis' in that output is presumably referring to the excellent Nullsoft Scriptable Install System (NSIS). The files I'm testing are largely game installers; cross-checking against an installer I built myself with NSIS triggered the same detection, which is what makes me confident these are false positives.
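If you want to check whether your own NSIS-built files trip the same signature, the reproduction is just a plain scan (the path here is only an example):

avgscan /srv/files/my-nsis-installer.exe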

We are contacting AVG to report this as a probable false positive signature.

Update 3rd May 2012: AVG recommended we update to the 2012 version to fix this issue, which we did – and it fixed the problem.
