Sunday, 19 February 2017



WordPress site without WordPress on Amazon S3



Recently WordPress hit the headlines with an issue which turned into a nightmare for millions of websites around the world. The REST API exploit was a really bad one: on a scale of 1 to 5, this was a 5, because it potentially opened many sites to further exploits, and it's just a matter of time before some of the affected servers join an army of botnets doing bad things, or their owners discover that ransomware was installed and they can't access their data.

There are many things you can do to secure your server and site, but there is always a chance that something might go wrong. What is interesting about most WordPress sites is that all they do is serve static content: HTML generated by PHP, plus assets like images, CSS and JS files served by the webserver. The content is not user specific, there is no user area on the site, and WordPress is used just for editing and publishing content.

Amazon S3, or Simple Storage Service, is by definition a "simple web services interface that you can use to store and retrieve any amount of data, at any time, from anywhere on the web." Now you may ask what S3 has to do with WordPress, which you would run on EC2 instances, since you need at least a webserver and a database. You can't host WordPress on S3, but there is one interesting option: you can host a static website there.

You can imagine a static website as a webserver with PHP disabled. The advantage is that such a site is almost impossible to hack: no PHP, no database, no SSH or FTP access to the server. Another interesting aspect is the hosting cost. For a tiny site it might be fine to run a single EC2 instance; a cheap t2.micro is $0.012 per hour ($105 per year, plus $12 per year for 10 GB of storage, makes $117 per year), while any bigger site with an ELB, multiple web servers and a database server would be significantly more (over $1000 per year).
In comparison, S3 charges $0.090 per GB of data transfer and $0.023 per GB-month of storage. For a small site, let's say $1 per year. Moving a site from EC2 to S3 could make your hosting more than 100 times cheaper.

So, when you have two very good reasons, security and money, how do you do it?

To make it clear, you will still need a WordPress installation, but you will use it only for editing and publishing content; it's not going to be used for public access.
You have a few options there: run a tiny EC2 instance just for yourself, use a dedicated Vagrant box, or a Docker container.
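
For example, the Docker route can be a throwaway editing environment built from the official wordpress and mysql images on Docker Hub; a minimal sketch, where the password and the published port are placeholders:

docker run -d --name wp-db -e MYSQL_ROOT_PASSWORD=secret -e MYSQL_DATABASE=wordpress mysql:5.7
docker run -d --name wp --link wp-db:mysql -p 8080:80 -e WORDPRESS_DB_HOST=mysql -e WORDPRESS_DB_USER=root -e WORDPRESS_DB_PASSWORD=secret wordpress

Your private WordPress is then available on http://localhost:8080.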

When the content is ready, use a tool like httrack to create a static image of your site.
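
A minimal sketch, assuming the private WordPress from the previous step runs on localhost:8080 (adjust the URL and output directory to your setup):

httrack "http://localhost:8080/" -O ./site-snapshot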

When you have a snapshot of your site, create a bucket on S3 where the content will be uploaded.


What is important is to give the bucket the same name as your site URL and to enable static website hosting. The endpoint you are given needs to be used as a CNAME DNS record for your domain, to point it to S3.
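
With the AWS CLI, the same can be sketched like this (www.example.com stands in for your real domain):

aws s3 mb s3://www.example.com
aws s3 website s3://www.example.com --index-document index.html --error-document 404.html
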
This is all you need to do to have a fast, cheap and secure site. When you edit content, you will need to repeat the snapshot step and push the result to the S3 bucket, but this can be done easily with a simple script, as sketched below.
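
A simple deploy script, with the URL and bucket name again as placeholders:

#!/bin/bash
# re-crawl the private WordPress installation
httrack "http://localhost:8080/" -O ./site-snapshot
# upload the snapshot and remove files deleted from the site
# (adjust the source path if httrack nested the mirror in a subdirectory)
aws s3 sync ./site-snapshot s3://www.example.com --delete --acl public-read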

And a small piece of advice at the end: if you need something more sophisticated and a static site can't do the trick, move the functionality outside WordPress. WordPress is not a platform for complicated things anyway; use an API server, handle it in a JavaScript Angular / React application, and keep WordPress just as a CMS.



Friday, 30 September 2016

Android sucks!

A strange shout, especially from somebody like me who is a big fan of Android.

So, what's the story?

Android is a very successful operating system for mobile devices, with a market share similar to the one Windows has on desktops (Mobile OS / Desktop OS).

The problem for Android is fragmentation. It is not only that having many versions out there makes it difficult for developers to test and optimize for all of them; the more serious problem is security.

Security is a problem when you run an older version of Android which is not receiving any security updates. Your phone might be vulnerable to attacks, and you don't have to be a Democrat. These things even have names, QuadRooter and Stagefright, and they allow an attacker to take full control of your phone. Just imagine what is on your phone and how you use it: as a device for two-factor authentication, for accessing your bank account, for taking photos of your children. Really not fun. And it could be even worse: you could be part of some dirty game without even knowing about it.

Getting security updates and patching is a normal, very basic thing which you do on your desktop computer, and the same on servers, so why is it an issue for Android and its ecosystem?

I've seen it personally in action when I got a mobile phone on a two-year contract from T-Mobile UK (now EE): a Sony Xperia Neo V.
Android customized by Sony, with (totally useless) bloatware added on top by T-Mobile. During the time I was using it, there was only one major update and two(?) small updates. Then Sony gave up and didn't release any other update. I at least wanted to install the updates which Sony had already released, but no OTA (over-the-air) updates were available to me.

I asked Sony where my update was, and they told me to talk to T-Mobile. So I talked to T-Mobile about where the update was, and they told me to talk to Sony.
In the end I had to de-brand my phone and install generic firmware, and then I finally received an update. At least I had Gingerbread then.

But this is exactly what is happening with Android fragmentation, and why the situation with Android phones is so bad when it comes to security. The reason behind it is simple: greed!

When you have a phone which is getting older and is not receiving any new features, soon you are going to buy a new phone. And this is what phone makers and carriers want: to keep the wheels spinning and keep selling.

A couple of years ago I bought a OnePlus One with CyanogenMod, an alternative version of Android. Since then I've received four(!) major updates, and I am still getting minor updates almost every month. My phone is really powerful, everything is still lightning fast, and I really don't have a reason to buy a new phone. Slowly but surely, phones have reached performance levels where there is no need to buy a new one just because something more powerful is available.

In the US, the FCC and FTC have woken up and are investigating what the hell is going on with Android security updates. In the EU we are not so lucky; the European Commission is opening a battle against Google instead of focusing on real-world problems.

So what can you do to have a secure phone or tablet? Don't buy a device which doesn't have declared support, so that you know you will receive updates. Pretty good are Nexus devices, for which Google has declared a support lifecycle:

"Google is committing to keeping the now-monthly security updates coming for either three years from initial availability in the Google Store or 18 months after it is removed from the store (whichever is longer)."

But even Google is no angel: support for Motorola phones changed overnight when they sold the brand to Lenovo. Samsung promised to provide a support schedule for its phones and tablets, but as far as I know, it's still only a promise. The rest, who knows. My experience with Sony is terrible; Motorola, after being taken over by Lenovo, is terrible; Asus is not too bad; OPO is good; Nexus devices are clear winners.

But there is another option: break the chains and install an alternative, better supported version. The already mentioned CyanogenMod is a very good alternative with excellent support for many devices, or CopperheadOS, which claims to be a hardened Android with a focus on security. But there is a catch. You need a phone which allows you to unlock the bootloader to load a custom ROM (OEM unlock). As you can guess, many vendors make sure that you can't unlock the bootloader, so that you are going to buy a new phone very soon. There are ways around it to obtain root access, but you risk bricking your phone.
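
On a device which does allow it, the unlock itself is only a few commands; a rough sketch, where the exact steps vary per device, twrp.img stands in for a recovery image built for your model, and unlocking wipes your data:

adb reboot bootloader
fastboot oem unlock
fastboot flash recovery twrp.img

Newer devices use fastboot flashing unlock instead of fastboot oem unlock.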

The conclusion is quite simple. When you are going to buy a new Android phone or tablet, device support should be one of the key factors in choosing it, as important as price and hardware specs.
And when you buy a phone, make sure that you can do whatever you want with the device you paid for, including flashing an alternative ROM as a way to keep your device and data secure.

Monday, 11 July 2016

Amazon EFS - sharing files painlessly

One of the most common problems when it comes to scaling up is how to share files between multiple web servers (nodes, instances). There are several options available, but each of them has certain disadvantages.
After a long time, Amazon has promoted EFS (Elastic File System) from beta to an available, production-ready service. Needless to say, for now only in the US East, US West and EU West regions. But EFS finally looks like a solution for sharing files quickly, painlessly and securely.

Many DevOps engineers are scratching their heads over questions like this one: Shared File Systems between multiple AWS EC2 instances

There were a couple of options available to achieve this:
  • Amazon S3
  • NFS server
  • Rsync
Amazon S3 is object storage, not a file system, but there is a project called s3fs which allows you to mount an S3 bucket as a volume. There are some limitations, though, and the higher latency might also be a problem.
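
For illustration, mounting a bucket with s3fs looks roughly like this (the bucket name and mount point are placeholders; credentials are read from ~/.passwd-s3fs by default):

s3fs my-bucket /mnt/s3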

Using NFS is a proven and reliable option, but the initial setup takes time and requires quite a few steps - see How To Set Up an NFS Mount on Ubuntu.

Rsync, or to be precise Lsyncd, which can be described as scheduled Rsync, is an effective way to synchronise files across a fleet of servers and store them on EBS (Elastic Block Store). The problem is that one server needs to be the master, from which files are pulled and to which they are uploaded.
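
The operation Lsyncd keeps repeating is essentially a plain rsync push from the master (the hostname and paths below are examples):

rsync -az --delete /var/www/uploads/ web2.example.com:/var/www/uploads/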

Amazon's response to this is EFS. EFS is easy to set up, you pay only per GB of used space, and it works as an NFSv4 volume with low latency.

Usage is really easy: in the AWS console, select the "Elastic File System" service.



Follow the wizard, it's straightforward. The only thing you should pay attention to is the security group; this is how you restrict access to EFS. It is best to create a dedicated security group for the instances accessing EFS. EFS uses NFSv4, which works over TCP and requires only port 2049 to be open.
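
If you script your infrastructure, the same ingress rule can be added with the AWS CLI (both sg-... IDs are placeholders; the source group is the one attached to your web instances):

aws ec2 authorize-security-group-ingress --group-id sg-11111111 --protocol tcp --port 2049 --source-group sg-22222222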

On your EC2 instances you need to mount EFS. As the AWS documentation suggests, you need to install the NFS client libraries:
  • On an Amazon Linux, Red Hat Enterprise Linux, or SuSE Linux instance:
    sudo yum install -y nfs-utils
  • On an Ubuntu instance:
    sudo apt-get install nfs-common
Mounting is straightforward:
sudo mount -t nfs4 -o nfsvers=4.1 XXXXDNS:/ /where/to/mount

where XXXXDNS is the DNS name matching your availability zone, which you can find under DNS in the EFS console; it will look similar to something.efs.eu-west-1.amazonaws.com.

To have EFS mounted after a reboot, don't forget to add a record to /etc/fstab with the following syntax:
XXXXDNS:/ /where/to/mount nfs defaults,vers=4.1 0 0

Then you can verify the availability of the mounted volume by running the df command.
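
For example, limiting the output to NFSv4 mounts:

df -h -t nfs4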


Monday, 13 June 2016


PHP with MySQL and SSL


When you have PHP running on the same server as a webserver like Apache or Nginx, there is no need to worry about somebody monitoring the data sent between PHP and MySQL. All the data are sent through a local socket and never leave your server.

But with cloud hosting and Docker containers, there is a good chance that the data will flow through a network which you can't control, and you can't be sure that nobody is listening. Even if you use solutions like AWS EC2 with a VPC (virtual private cloud), there is still no harm in encrypting the traffic between the webserver and the database.

The following post describes a quick and painless way to do it, and things to watch out for.

MySQL

Database server configuration is very easy, and MySQL Workbench can help a lot with it. This tool helps you generate the self-signed certificates which you install on the server and use on the client, in our case PHP.
When you look at the Workbench SSL tab of an existing connection:

You will see the SSL Wizard button. When you click it and go through the process, it generates self-signed client and server certificates, along with the configuration file changes which need to be applied on the database server.
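
The server-side changes boil down to pointing MySQL at the generated files in my.cnf; roughly like this, with illustrative paths (the MySQL server needs a restart afterwards):

[mysqld]
ssl-ca=/etc/mysql/ssl/ca-cert.pem
ssl-cert=/etc/mysql/ssl/server-cert.pem
ssl-key=/etc/mysql/ssl/server-key.pem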

Then I would suggest creating an extra user for whom an SSL connection is required.
See the MySQL documentation or use something like:

GRANT ALL PRIVILEGES ON *.* TO 'ssluser'@'localhost' IDENTIFIED BY 'password' REQUIRE SSL;
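
Before touching PHP, you can verify from the command line that the server really enforces SSL; the hostname and certificate file names below follow the Workbench output and are placeholders:

mysql -h db.example.com -u ssluser -p --ssl-ca=ca-cert.pem --ssl-cert=client-cert.pem --ssl-key=client-key.pem -e "SHOW STATUS LIKE 'Ssl_cipher'"

A non-empty Ssl_cipher value means the connection is encrypted.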

PHP

In PHP you have three supported ways to connect to the database:
  • MySQL
  • MySQLi
  • PDO MySQL
In theory, all of them should support SSL connections through the underlying OpenSSL library.

MySQL extension

MySQL is a deprecated extension and was removed from PHP 7. The PHP documentation says that you can pass the client flag MYSQL_CLIENT_SSL and use SSL encryption. This didn't work for me. If you use this deprecated extension, you really should upgrade to MySQLi; it's not difficult, as the API is almost identical, only with an "i" at the end (mysql_query => mysqli_query). For some methods you might need to shift parameters, but it's nothing difficult.

MySQLi extension

MySQLi fully supports SSL; before connecting to the database you need to call the mysqli::ssl_set function.

Sample code could look like this:
<?php
$connection = mysqli_init();

// paths to the client key, client certificate and CA certificate
// generated earlier by the SSL Wizard
mysqli_ssl_set(
    $connection,
    $db_client_key,
    $db_client_cert,
    $db_ca_cert,
    null, // directory with trusted CA certificates (not used)
    null  // list of allowed ciphers (not used)
);

mysqli_real_connect(
    $connection,
    $host,
    $user,
    $pass,
    $name,
    3306,
    null,
    MYSQLI_CLIENT_SSL // request an encrypted connection
);


PDO MySQL extension

Usage is very similar to the MySQLi extension:

$connection = new PDO('mysql:host=ip;dbname=db', 'user', 'pass', array(
    PDO::MYSQL_ATTR_SSL_KEY  => $db_client_key,
    PDO::MYSQL_ATTR_SSL_CERT => $db_client_cert,
    PDO::MYSQL_ATTR_SSL_CA   => $db_ca_cert,
));

Things to watch for 

If you see an error similar to:

mysqli::real_connect(): Peer certificate CN=XX did not match expected CN=YY 

It's related to peer verification, which has been enabled by default since PHP 5.6. Because peer verification doesn't make much sense in this context (the certificates are self-signed), you can disable it with:

MYSQLI_CLIENT_SSL_DONT_VERIFY_SERVER_CERT

In code:

mysqli_real_connect(
    $connection,
    $host,
    $user,
    $pass,
    $name,
    3306,
    null,
    MYSQLI_CLIENT_SSL_DONT_VERIFY_SERVER_CERT // like MYSQLI_CLIENT_SSL, but skips server certificate verification
);

Another annoying problem could be:
mysqli_real_connect(): this stream does not support SSL/crypto
PDO::__construct(): this stream does not support SSL/crypto

It's related to this problem. It happens when a Unix socket is used for the connection. The MySQL documentation says:

On Unix, MySQL programs treat the host name localhost specially, in a way that is likely different from what you expect compared to other network-based programs. For connections to localhost, MySQL programs attempt to connect to the local server by using a Unix socket file

The solution is simple: don't use a socket with SSL. If you use something like Vagrant, change the server address from localhost to an IP address.

Otherwise, SSL with MySQL and PHP together is not difficult to use, so give it a try.