Laravel 5.5 API Tutorial: Part 3 – Login & JWT

In Parts 1 and 2 of this series on getting started with a Laravel 5.5 API project, we covered creating a fresh Laravel project and then starting development with a User Registration API.  Up to this point things have been low friction, thanks to the amount of work Laravel does for us when new projects are set up.

Moving forward, in this part we’ll cover creating an API Authentication endpoint, including the beginnings of implementing a JWT authentication model.

Implementing Authentication & JWT

Our API’s login method can be set up with a process similar to the one used for the Registration API (route setup, new controller, etc.), with one exception: we’ll need to install a JWT package, which provides the support needed to generate and verify JWTs during authentication.  In this part, we’ll cover installing the JWT package and using it to return a JWT with the user’s login response.  The JWT will also need to be sent back to the API on subsequent requests as a means to authenticate those requests, but we’ll cover that in another post.

For now, let’s focus on installing JWT support and implementing a login method in our Laravel 5.5 project.

First, to install the jwt-auth package, run the following command.  Note that at the time of this post, Laravel 5.5 requires the package’s dev branch:

composer require tymon/jwt-auth:dev-develop --prefer-source;

Second, add these entries to the existing providers and aliases arrays in config/app.php:

'providers' => [
    // ...
    Tymon\JWTAuth\Providers\LaravelServiceProvider::class,
],

'aliases' => [
    // ...
    'JWTAuth' => Tymon\JWTAuth\Facades\JWTAuth::class,
],

Third, publish the JWT package assets to our project, which will include creating a config/jwt.php file:

php artisan vendor:publish --provider="Tymon\JWTAuth\Providers\LaravelServiceProvider"

Fourth, and at the heart of how JWT operates securely, we need to generate a JWT secret key specific to our project and environment.  Run the following command, and be sure to type “yes” when prompted:

php artisan jwt:secret;

It’s important to note that the secret key just generated is what’s used to securely sign each token upon generation, and to verify tokens received on subsequent requests.  It should not be shared publicly.  That also means that while the above step generates a secret key for our particular environment, a different key should be generated and used for each other environment (test, production, etc.).  Also, be sure these keys don’t land in version control repositories.
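
For a sense of what that key does, here’s a minimal sketch of how an HS256 signature is computed and checked.  This is purely illustrative (tymon/jwt-auth handles all of this internally), and $secret stands in for the generated key:

<?php
// Illustration only: how an HMAC-SHA256 JWT signature is formed.
// $secret stands in for the key generated by `php artisan jwt:secret`.
$secret = 'your-generated-secret';

$b64 = function ($data) {
    // JWTs use base64url encoding, with padding stripped
    return rtrim(strtr(base64_encode($data), '+/', '-_'), '=');
};

$header    = $b64(json_encode(['alg' => 'HS256', 'typ' => 'JWT']));
$payload   = $b64(json_encode(['sub' => 1, 'exp' => time() + 3600]));
$signature = $b64(hash_hmac('sha256', "$header.$payload", $secret, true));

$jwt = "$header.$payload.$signature";

// Verifying a received token recomputes the signature with the same secret;
// a mismatch means the token was altered or signed with a different key.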

Fifth, create app/Http/Middleware/AuthJWT.php with the following contents:


<?php

namespace App\Http\Middleware;

use Closure;
use JWTAuth;
use Exception;

class AuthJWT
{
    /**
     * Handle an incoming request.
     *
     * @param \Illuminate\Http\Request $request
     * @param \Closure $next
     * @return mixed
     */
    public function handle($request, Closure $next)
    {
        try {
            // resolve the token sent with the request to a user
            $user = JWTAuth::toUser($request->input('token'));
        } catch (Exception $e) {
            if ($e instanceof \Tymon\JWTAuth\Exceptions\TokenInvalidException) {
                return response()->json(['error' => 'Invalid token.']);
            } else if ($e instanceof \Tymon\JWTAuth\Exceptions\TokenExpiredException) {
                return response()->json(['error' => 'Expired token.']);
            } else {
                return response()->json(['error' => 'Authentication error.']);
            }
        }

        return $next($request);
    }
}

This class will handle validating received JWTs and resolving them to a user.

Sixth, register this new class in the app/Http/Kernel.php file by adding the following entry to the protected $routeMiddleware array:

'jwt-auth' => \App\Http\Middleware\AuthJWT::class,
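
We won’t wire any routes to this middleware until a later post, but as a preview, a route protected by the new alias might look like the following sketch (the auth/user route here is hypothetical; getAuthUser is defined in the controller created in the next step):

// routes/api.php -- a hypothetical route protected by the jwt-auth middleware
Route::group(['middleware' => ['api', 'jwt-auth']], function () {
    Route::get('auth/user', 'Auth\ApiAuthController@getAuthUser');
});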

Seventh, we need a controller to field API authentication requests.  For that, we’ll create a controller class file at app/Http/Controllers/Auth/ApiAuthController.php with the following contents:


<?php

namespace App\Http\Controllers\Auth;

use Illuminate\Http\Request;
use App\Http\Controllers\Controller;
use JWTAuth;
use Tymon\JWTAuth\Exceptions\JWTException as JWTAuthException;
use App\User;

class ApiAuthController extends Controller
{
    protected $user;

    public function __construct()
    {
        $this->user = new User;
    }

    public function login(Request $request)
    {
        $credentials = $request->only('email', 'password');

        $jwt = '';

        try {
            if (!$jwt = JWTAuth::attempt($credentials)) {
                return response()->json([
                    'response' => 'error',
                    'message' => 'invalid_credentials',
                ], 401);
            }
        } catch (JWTAuthException $e) {
            return response()->json([
                'response' => 'error',
                'message' => 'failed_to_create_token',
            ], 500);
        }

        return response()->json([
            'response' => 'success',
            'result' => ['token' => $jwt],
        ]);
    }

    public function getAuthUser(Request $request)
    {
        $user = JWTAuth::toUser($request->token);
        return response()->json(['result' => $user]);
    }
}

Before proceeding, take note of the 401 and 500 HTTP response codes specified above.  Other examples of similar login or authentication classes omit those, which I feel is a mistake.  Indicating an error condition such as a failed login merely with some error message (e.g. invalid_credentials, as used above), while returning an HTTP status code of 200 (meaning OK, the default when one isn’t specified), isn’t in the spirit of REST.  Such a response isn’t as accurate or clear as it could be, in part because HTTP 401 Unauthorized is arguably the better suited response code for failed login attempts.

One reason adhering to suitable response codes is important is that REST API clients can be very status-code-aware; they’ll look primarily to the HTTP status code for indication of success or error, not to some proprietary error text in your API’s response body.  And rightfully so: HTTP status codes are standardized, first-class citizens in REST APIs, whereas your proprietary error text can change at any time.  So always prefer HTTP status codes as the primary means to communicate success or error for an API request, and use error text or messages only to supplement the reason behind the status code.
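
To make that concrete, here’s a hedged sketch of a status-code-aware PHP client for the login endpoint above (the credentials are placeholders):

<?php
// Hypothetical API client: branch on the status code first; the
// message string is only supplementary detail.
$ch = curl_init('http://localhost:8000/api/auth/login');
curl_setopt_array($ch, [
    CURLOPT_POST           => true,
    CURLOPT_POSTFIELDS     => http_build_query(['email' => 'user@example.com', 'password' => 'secret']),
    CURLOPT_RETURNTRANSFER => true,
]);
$body   = curl_exec($ch);
$status = curl_getinfo($ch, CURLINFO_HTTP_CODE);
curl_close($ch);

if ($status === 200) {
    $token = json_decode($body, true)['result']['token'];
} elseif ($status === 401) {
    // failed login; prompt for credentials again
} else {
    // 5xx etc.; treat as a server-side failure
}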

On to the next step …

Eighth, add the API’s login route.  In routes/api.php, add an auth/login route as shown below, inside the same middleware group we added the auth/register route to:

Route::group(['middleware' => ['api','cors']], function () {
    Route::post('auth/register', 'Auth\ApiRegisterController@register');
    Route::post('auth/login', 'Auth\ApiAuthController@login');
});

Ninth, and as noted in the Sep 20 post in this GitHub discussion (I encountered the same problem, so credit to that poster), the User model must now implement the JWTSubject interface.  Here’s what the model should look like:

<?php

namespace Illuminate\Foundation\Auth;

use Illuminate\Auth\Authenticatable;
use Illuminate\Database\Eloquent\Model;
use Illuminate\Auth\Passwords\CanResetPassword;
use Illuminate\Foundation\Auth\Access\Authorizable;
use Illuminate\Contracts\Auth\Authenticatable as AuthenticatableContract;
use Illuminate\Contracts\Auth\Access\Authorizable as AuthorizableContract;
use Illuminate\Contracts\Auth\CanResetPassword as CanResetPasswordContract;

use Tymon\JWTAuth\Contracts\JWTSubject;

class User extends Model implements
    AuthenticatableContract,
    AuthorizableContract,
    CanResetPasswordContract,
    JWTSubject
{
    use Authenticatable, Authorizable, CanResetPassword;

    /**
     * Get the identifier that will be stored in the subject claim of the JWT.
     *
     * @return mixed
     */
    public function getJWTIdentifier()
    {
        return $this->getKey();
    }

    /**
     * Return a key value array, containing any custom claims to be added to the JWT.
     *
     * @return array
     */
    public function getJWTCustomClaims()
    {
        return ['user' => ['id' => $this->id]];
    }
}

Note JWTSubject in the class signature, and the addition of its methods to satisfy its implementation.

Finally, the login endpoint is ready to test.  Again in Postman, send a POST request to http://localhost:8000/api/auth/login.  Send email and password key/value pairs, with values corresponding to the test user created in Part 2.  Here’s what the request looks like in Postman:

The login request in Postman.  Note the token in the response body.

Verify that the request completes successfully, and that a JWT is returned in the response as shown above.  In subsequent posts, we’ll cover the role of the received JWT in the overall authentication scheme.


This covered an API login example, including the beginnings of implementing JWT authentication in our Laravel 5.5 API.  Thus far we’ve covered how to create a fresh Laravel project, then added a User Registration API.  With test users introduced into the API’s database, we’ve now also implemented a login/authentication API, which lays the groundwork for JWT throughout the API.

In the next post we’ll cover the JWT in more detail, including how to validate a user’s JWT on subsequent API requests.


Laravel 5.5 API Tutorial: Part 2 – User Registration

This is Part 2 of a multi-part walk-through of a Laravel 5.5 REST API example.  In Part 1 we covered how to install a fresh Laravel 5.5 project and prep it for API development; in this part we’ll move forward with creating a User Registration endpoint.  It’ll serve as a means to get user records into our API’s database, a prerequisite for later steps such as implementing JWT authentication.

Let’s get started.

Starting With A Default Migration

Before adding a user registration endpoint, it’s worth running a default migration.  Aside from introducing us to Laravel’s migration mechanism, it also adds the user-related tables that’ll serve as a base for our user schema.

To run our first migration, run the following command from the project’s root:

php artisan migrate

After it completes, confirm that the database now has a users table.  It’ll be empty, but that’s OK; our user registration endpoint will get user records in there soon enough.

Note: A password_resets table was also created, which can be ignored for now.

Creating the User Registration API

Now that we have a users table, we’re ready to create an API endpoint that writes to it.  To start, create the API route we’ll need for user registration.  As we did with the Hello World route, append the following to the routes/api.php file:

Route::group(['middleware' => ['api','cors']], function () {
    Route::post('auth/register', 'Auth\ApiRegisterController@register');
});

This will route to an ApiRegisterController we’ll create in a subsequent step.

Second, regarding the inclusion of cors in the route: since clients will be making cross-origin requests to the API, Cross-Origin Resource Sharing (CORS) support is required.  To add the CORS package, run the following commands in the project’s root directory:

composer require barryvdh/laravel-cors
php artisan vendor:publish --provider="Barryvdh\Cors\ServiceProvider"

Then, add the following array element to the protected $routeMiddleware group in app/Http/Kernel.php:

'cors' => \Barryvdh\Cors\HandleCors::class

Third, create app/Http/Controllers/Auth/ApiRegisterController.php with the following contents:


<?php

namespace App\Http\Controllers\Auth;

use Illuminate\Http\Request;
use Illuminate\Auth\Events\Registered;

class ApiRegisterController extends RegisterController
{
    /**
     * Handle a registration request for the application.
     *
     * @override
     * @param  \Illuminate\Http\Request  $request
     * @return \Illuminate\Http\Response
     */
    public function register(Request $request)
    {
        $errors = $this->validator($request->all())->errors();

        if (count($errors)) {
            return response(['errors' => $errors], 401);
        }

        event(new Registered($user = $this->create($request->all())));

        return response(['user' => $user]);
    }
}

This is the ApiRegisterController that’ll handle our registration requests.  It’s worth noting that it’s more or less a copy of the default app/Http/Controllers/Auth/RegisterController behavior, with two changes.  The first is the import of the Illuminate\Http\Request namespace in the file header, which supports the second: an overridden register() method that returns JSON responses (the validation errors, or the newly created user) instead of redirecting.

Fourth, the registration endpoint is now ready to test.  With Postman, and while the test server is running, send a request with the following settings:

  1. request method set to POST
  2. url set to http://localhost:8000/api/auth/register
  3. content type set to JSON
  4. for testing purposes, set the request body to the following key/value pairs in JSON:


  "name": "Walter White", 
  "email": "",
  "password": "testpassword",
  "password_confirmation": "testpassword"

Here’s what the request looks like in Postman:

Once the request is sent, verify the response contains the test user represented in JSON.  Also verify that a new record made it into the database.  If both of those check out, the Registration API is complete!  If you have any questions or hit any issues, feel free to leave a comment below and we can try to sort things out for you.
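
If you’d rather double check from the MySQL console, a query like this (against the users table the default migration created) will show the newest record:

SELECT id, name, email, created_at FROM users ORDER BY id DESC LIMIT 1;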

In the next post we’ll add a login method that also implements the beginnings of a JWT authentication model.



Laravel 5.5 API Tutorial: Part 1 – Intro & Project Setup

After searching for examples of how to set up a Laravel 5.5 API project, particularly one that implements JWT for authentication, I was unfortunately left piecing together various bits and commands from different blog posts and GitHub discussions.  Some of that was due to existing examples using Laravel 5.4; the subtle differences between Laravel 5.4 and 5.5 were enough to require additional research and debugging instead of a seamless API setup + JWT how-to.  In light of that, I wanted to put together a few blog posts on what I found, with the goal of providing readers what I was seeking: a seamless start-to-finish example of implementing an API with JWT authentication in Laravel 5.5.

In this post we’ll only cover setting up a fresh Laravel 5.5 project to serve as ground zero for our API example, and verify the setup with a simple hello world API route.  This much is likely most suitable for Laravel newcomers.  Subsequent posts will progress through various API steps, including user registration, JWT authentication, and modifying data.

Let’s get started.

Creating A New Laravel Project

Creating a new Laravel project is super easy; there’s not much to it thanks to Laravel’s CLI support.  Let’s assume a project name of laravel_api.

Update: before creating the project, be sure the following PHP packages are installed (the command below is for Ubuntu); otherwise you’ll receive errors about them missing when the project is created:

sudo apt-get install -y php-zip php-mbstring php-xml


To start, first run the following command from a suitable directory to create the project:

laravel new laravel_api

Second, some boilerplate is required to further initialize the project. In particular, we need to create a suitable .env file, generate an application key, setup database config, etc. The first two can be handled by running the following commands:

cd laravel_api
cp .env.example .env
php artisan key:generate

To set up database config, create a MySQL database and revise the .env file to point to it.  The 4 config entries that’ll need to change are:

DB_HOST=[the database server IP or hostname]
DB_DATABASE=[the name of the database you created]
DB_USERNAME=[a suitable database user name]
DB_PASSWORD=[a suitable database user password]
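
For example, a local setup might end up looking like this (the values below are illustrative placeholders; use your own):

DB_HOST=127.0.0.1
DB_DATABASE=laravel_api
DB_USERNAME=laravel_api_user
DB_PASSWORD=some-strong-password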

Third, let’s test the project setup to ensure everything is wired correctly before proceeding further. Start PHP’s test server for our Laravel project:

php artisan serve

… then visit http://localhost:8000 in a web browser.  You should see the default Laravel splash page (shown below). If not, review or repeat the above steps until any problems are resolved.


Hello World API Endpoint

As a quick primer for future posts, let’s set up a Hello World API endpoint.  This will provide a shallow dive into route setup, let us test the API with Postman, and confirm our project is ready to move forward into further API development.

First, let’s set up our test route.  Since we’re building an API, the routes we add will go in the routes/api.php file.  Add the following GET route to the end of that file:

Route::middleware('api')->get('/hello_world', function () {
    return json_encode(['message' => 'hello world']);
});

Next, let’s test this endpoint.  You can use your browser, but I’d recommend using and becoming familiar with Postman.  It provides a more robust API testing platform than a browser or curl commands from a terminal.

Once Postman is downloaded, open it and set up a GET request to http://localhost:8000/api/hello_world (routes defined in routes/api.php are automatically prefixed with /api), similar to the following:

Upon clicking Send, you should see a simple JSON response in the bottom pane with the message hello world, confirming the route we set up is working.  If any errors were received, review the previous steps until those issues are resolved; also be sure the web server (started above via php artisan serve) is still running.


That’s it for Part 1.  This wasn’t anything new for Laravel veterans, but for newcomers to the framework, it should help demonstrate how easy it is to set up a new Laravel project and confirm the setup is ready for API development.

Next, we’ll dive into database setup, creating our first real API endpoint, getting users into the database, and more.


My iPhone 6 Review

I received my iPhone 6 last Friday.  After going through the setup and using it for a few days, I wanted to give my initial impressions, some of which I almost regret to offer.

I purchased the standard iPhone 6, not the 6 Plus.  My previous phone was an iPhone 4.

The Good

It’s very, very beautiful.  This phone gives an amazing first impression.  The sleeker design is a big improvement.  I immediately noticed how much thinner the 6 is than my iPhone 4, and that it has fewer lines.  Overall, the effort put into making the aesthetics as seamless as possible paid off.  Much credit to Apple for raising the bar in this area yet again, even on the 6th iteration of the product.

It’s lightning fast!  The mobile network screams.  My mail inbox updates instantly now, and Safari rips through web pages as fast as my desktop.  Touch events also respond quicker and smoother than my iPhone 4.

It’s lighter.  When I picked it up for the first time it immediately felt lighter than my iPhone 4.  I didn’t expect that given how much bigger it appears.  But when I compared specs on the two, it’s only lighter by a half ounce.  I’m surprised I was able to feel that.

The display is amazing.  iPhone 6 easily has the clearest display I’ve ever used, on any device.  The bigger screen also improves reading and viewing experiences.

The Bad

Syncing data was a pain.  It took 2 attempts.  I don’t have a Mac or iTunes, so it took 25 minutes or so to boot my Windows laptop, install iTunes, reboot for some (Windows) reason, sync my iPhone 4 data to iTunes, then sync that to my iPhone 6.  My first sync attempt crashed due to an obscure .dll error.  After another reboot my second sync attempt succeeded.

There really needs to be a way to sync data from iPhone to iPhone without having to go through iTunes, especially for those without a Mac.  Even in 2014, using Windows is still something I dread, and my sync experience shows why.

The bigger screen is harder to use.  While the bigger screen is nice when reading and viewing content, it also makes one-hand use hard a lot of the time, if not prohibitive.  This is disappointing, and hopefully temporary.

For example, when holding the phone in my right hand, I can no longer reach the Messages icon with my thumb to send a text message.  That requires 2 hands now, or repositioning the phone in my hand, both of which make the phone feel clumsy.  I get the same experience when managing contacts.

And this is with the standard 6, not the 6 Plus.  I debated the decision, but now know that had I ordered the 6 Plus I wouldn’t have liked it; its size would have been overboard.  In fact, after 30 minutes of using my 6, I questioned how usable the 6 Plus can be, at least during one-hand use.  And sure enough, reviews are starting to pop up saying as much, like this one and this one.


Overall I love my iPhone 6.  It’s an amazing phone, in just about every way.  Aesthetics, feel, performance – it’s an incredible device.  Apple proves, yet again, why they’re known as one of the best product companies in the world, if not the best.

But it’s not without flaws.  The bigger screen is lovely to look at, but not as easy to use.  I find myself fumbling with the phone in cases where I didn’t have to with my iPhone 4.  Combined with a slightly more slippery backing than previous models, the phone just doesn’t feel as sure in the hand as my iPhone 4.  Even after nearly a week of use, it still feels clumsy at times.

So if you’re thinking about upgrading to an iPhone 6, do it and don’t look back.  However, due to the size vs. usability trade off, I only recommend the standard iPhone 6.  Avoid the 6 Plus.


Heartbleed and Ubuntu 13.04: Upgrade Required

The recent Heartbleed vulnerability sent a scare throughout the tech community.  Fortunately Linux distributions were quick to deploy a patch, allowing companies to quickly follow suit.

However, we found ourselves in a bind after realizing 2 of our non-public facing servers were still running Ubuntu Server 13.04. Canonical hadn’t released a Heartbleed patch for 13.04 due to it reaching end of life back in January. Yikes!

The more we researched, the more we found others in the same situation.  Unfortunately, the only correct path is to upgrade to 13.10.  With 14.04 so close to its release date, we’d rather have waited and upgraded to it, but security issues are critical; prompt action is always better than no action.

So the choice is clear: for those running Ubuntu Server 13.04, an upgrade to 13.10 is required if you want a supported Heartbleed fix.  You’ll also want to consider upgrading to 14.04 when it’s released, since it’s an LTS version.

Beware Of Breaking Changes

Fortunately, research turned up the breaking changes to Apache configuration files that took place in 13.10.  We also encountered breaking changes with PHP, and one provider-specific change that prevented our server from booting!  So in addition to the upgrade process, I’ll outline those below, along with how we worked around them.

Before proceeding I want to make a recommendation: perform the upgrade on a test server first, perhaps by cloning your target server environment to a new VM or cloud server.  Once everything checks out, proceed with updating production servers.

On with the upgrade then.

Upgrading From Ubuntu Server 13.04 to 13.10

Upgrading to 13.10 will affect PHP, Apache, and maybe a cloud or VM server’s ability to boot (heads up to Dediserve customers!) if your provider uses a custom menu.lst file. Ours did, which we’ll mention below.

First, it’s a good idea to get all 13.04 updates installed, so run:

sudo apt-get update
sudo apt-get upgrade

Second, proceed with the update to 13.10 by issuing the following commands:

sudo apt-get install update-manager-core
sudo do-release-upgrade

That will kick off the upgrade process.

It’s important to note that during that process you’ll be asked if you want to keep any changed system files, or have them overwritten by the new release’s version. Since no one can make those decisions for you, it’s best to diff each file (which you can do during the upgrade process) and make your own decisions.

Here are the changes that were important to us, including what changed and how we worked around any breaking changes.


menu.lst

Beware of changes to this file, as it usually specifies disk or partition paths, and changes to it can affect a server’s ability to boot.  We host our servers with Dediserve, and prior experience taught us to keep their custom menu.lst file in place, else our server failed to boot.

So when asked by the upgrade process if we wanted to keep our own version or install the new version, we decided to keep our own.


php.ini

13.10 broke our PHP installation, which included changes to our php.ini file.  After diff’ing the current vs. new version, the new php.ini’s changes were relatively simple.  The new version’s php.ini:

  • turned short tags off
  • set error_reporting back to a default value
  • reverted our session.cookie_lifetime and session.gc_maxlifetime settings
  • set default_charset back to an empty default

We accepted the new version, just in case it contained other important updates, and then reinstated the settings above in the new php.ini file:

  • short tags were turned back on
  • error_reporting was set back to our preferred value
  • session.cookie_lifetime and session.gc_maxlifetime were set back to preferred values
  • default_charset was set back to UTF-8

There were 2 additional errors we experienced.

The first was an error stating that the json_decode()/json_encode() functions were undefined.  I’m not sure why 13.10 changed that, but to resolve it we simply re-installed the JSON package:

sudo apt-get install php5-json

The second was due to no timezone setting.  To resolve that we specified a date.timezone setting in php.ini:

date.timezone = "America/Chicago"

After that we tested image generation, pdf generation, mail delivery, ftp, etc.  Fortunately all that still worked in our PHP apps.


Apache

13.10 introduced some important changes to Apache, mostly with configuration files.  They will break your 13.04 configuration, so please do your own research in addition to noting the changes below.

There were 2 major changes we were affected by.

The first is that all config files in /etc/apache2/conf.d should be moved to /etc/apache2/conf-available.

This is because 13.10 now treats those config files the same as sites-enabled/available and mods-enabled/available.  We use a custom.conf file in /etc/apache2/conf.d that includes the ServerName and AddDefaultCharset directives; we needed to move that to /etc/apache2/conf-available, then enable with:

sudo a2enconf custom

Second, vhost files in /etc/apache2/sites-available previously had no file extension.  That changed in 13.10; they now must have a .conf extension.  Otherwise, Apache will report an error like this upon start:

ERROR: Site site-name does not exist!

Fortunately this is pretty easy to fix. Just append .conf to each of your vhost files in /etc/apache2/sites-available.
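
If you have a lot of vhost files, a quick loop can handle the renaming.  Here’s a sketch that assumes none of the files already end in .conf:

cd /etc/apache2/sites-available
for f in *; do sudo mv "$f" "$f.conf"; done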

Once that’s done, you’ll need new symlinks between sites-enabled and sites-available.  You can re-establish those by first removing your existing symlinks:

sudo rm /etc/apache2/sites-enabled/*

Then re-enable your sites with a2ensite:

sudo a2ensite site-name

And that should take care of things. I had additional PHP packages installed (curl, gd, etc.), along with sites behind SSL.  Fortunately all that continued to work after the upgrade.

Verify Heartbleed fix

Finally, with the upgrade complete, you’ll also want to verify that OpenSSL is the version with the Heartbleed patch.  You can do so by running:

dpkg -l | grep "openssl"

… and verifying that your openssl version is 1.0.1e-3ubuntu1.2.

Other Precautions

In addition to updating to 13.10 and verifying the Heartbleed patch, you’ll also want to change any passwords used to access the server, or for apps hosted on it, since those would have been vulnerable.  You’d also need to reissue any SSL certificates used to secure sites hosted on affected servers.  And it’s important to note that you’d want to do those after the Heartbleed patch is installed.


Implementing Session Timeout With PHP

PHP aims to make things simple, so you’d think something like specifying a session timeout would also be simple.  Unfortunately, it can be a little tricky depending on your circumstances.  For example, if you Google php session timeout, you’ll find no shortage of people who have had trouble implementing session timeouts with PHP.

We found ourselves in the same situation when we recently ported TimePanel to another framework.  Soon after, some users began complaining that they were being logged out too soon.  A quick check confirmed that our PHP session and cookie settings were unchanged, but we did find that the previous framework handled session timeouts itself, whereas the new one didn’t.  After some minor code diffs and a little research, we decided we needed to implement our own session timeout logic.

Understanding How PHP Handles Sessions

Before I explain what we did, it’s important to understand how PHP handles session data; in particular, when sessions expire and are subsequently cleared from the server.  Since PHP supports multiple forms of session storage (file, database, etc.), this post will assume the default storage mechanism: the file system.

How Sessions Are Created

In their simplest form, a session consists of 2 components:

  1. A session file created and stored on the server, which is named after the session’s id.
  2. Some means for the client to identify its session id to the server.  This is usually in the form of a phpsessid cookie or url parameter (note: of the two, a cookie is the default method and considered more secure).  We’ll assume a cookie is used from this point forward.

The typical way a session is started is by calling PHP’s session_start().  At that point, PHP looks for a phpsessid cookie.  If one is found, it uses that id to look up an existing session file on the server.  If it finds one, an existing session has been successfully linked, and the session file just found is used.

If either cookie or session file aren’t found, PHP has no way to link to a previous session, so a new one is created.  That means a new session file is created, with a new session id, and a new phpsessid cookie is set to link the browser’s session to the new file on the server.

Any subsequent web requests will follow the same routine, either successfully linking to a previous session or creating a new one.
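
You can see both components for yourself with a few lines of PHP (assuming the default file-based storage):

<?php
session_start();                    // links to an existing session or creates one
echo session_id() . "\n";           // the id carried by the phpsessid cookie
echo session_save_path() . "\n";    // the directory holding the sess_<id> file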

Understanding Session Duration

Now that we understand how sessions are created and the 2 primary components in play, we can start to understand how session duration is specified and managed.

Most conversations about this subject usually begin with 2 php.ini settings:

  1. session.cookie_lifetime
  2. session.gc_maxlifetime

Each one is related to one of the session components mentioned above, so it’s important to understand both of them, and that collectively they aren’t sufficient to enforce a session duration.

The first setting, session.cookie_lifetime, is simply a duration, in seconds, that PHP sets for the phpsessid cookie’s expiry.

The second setting, session.gc_maxlifetime, is more complex.  On the surface, it specifies how long a session file can live on the server before PHP’s garbage collector sees it as a garbage candidate.  I say candidate because a session file can, indeed, live beyond this point; it’s all a matter of probability.

You see, PHP’s session garbage collector (what’s responsible for deleting session files) doesn’t run on every request; doing so would be too resource intensive.  Instead, it’s designed to run on a per-request-probability basis, or as part of a user-defined process.

  • When on a per-request basis, the session.gc_probability and session.gc_divisor ini settings come into play.  Their role is to compute the probability of the garbage collector running on a given request (see the sketch after this list).  In general, a higher probability means a given request is more likely to initiate the garbage collector, so it winds up running more often.
  • When leveraging a user-defined process, such as a cron job, that probability becomes irrelevant, giving you full control over when session files are deleted: the job runs on a fixed schedule and deletes them authoritatively.
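
To make the per-request math concrete, here’s a small sketch using example values:

// With these (example) settings, each request has a
// gc_probability / gc_divisor = 1/100 = 1% chance of
// triggering session garbage collection.
ini_set('session.gc_probability', 1);
ini_set('session.gc_divisor', 100);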

Going back to session.gc_maxlifetime and session.cookie_lifetime … the purpose of both is to allow you to specify a “soft” duration on both session components (the phpsessid cookie, and the session file on the server), and to give you some level of control over when the session garbage collector runs.

So why aren’t these 2 sufficient to enforce session timeout?  Because neither are 100% reliable in deleting their respective session components after a given time frame.

Since the phpsessid cookie exists on the client, it can be manipulated or deleted at any time.  Plus, if there’s no session file on the server that corresponds with the cookie’s session id (e.g. if the session file on the server is deleted for whatever reason), the cookie is ultimately useless. So alone, session.cookie_lifetime isn’t sufficient.

And as mentioned above, session.gc_maxlifetime doesn’t enforce session deletion very strictly at all – unless overridden by a user defined process, it bases session deletion on probability!

The Solution: Implement Your Own Session Timeout

So despite the session ini settings available, if you want a reliable session timeout, you’re forced to implement your own.  Fortunately doing so is pretty easy.

First, set session.gc_maxlifetime to the desired session timeout, in seconds.  E.g. if you want your sessions to timeout after 30 minutes, set session.gc_maxlifetime to 1800 (60 seconds in a minute * 30 minutes = 1,800 seconds).  What this does is ensure a given session file on the server can live for at least that long.

Second, and what a lot of other posts out there don’t mention: you also need to set session.cookie_lifetime to at least the same value (1,800 seconds in this case).  Otherwise, the phpsessid cookie may expire before the 30 minutes are up.  If that happens, the cookie is removed and the client has no way of identifying its session id to the server anymore, which effectively terminates the session before our 30 minute window is reached.
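
In code, the pairing from the first two steps looks like the following sketch; note that both values must be set before session_start() is called:

// 30 minute timeout: keep the cookie alive at least as long
// as the session file is allowed to live on the server.
ini_set('session.gc_maxlifetime', 1800);
ini_set('session.cookie_lifetime', 1800);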

Third, add the following code to your app’s entry point, or any point in your app that’s executed on every request (usually an index.php file, front controller, bootstrap file, etc.):


<?php
// for a 30 minute timeout, specified in seconds
$timeout_duration = 1800;

session_start();
$time = time();

// Here we look for the user's LAST_ACTIVITY timestamp. If
// it's set and indicates our $timeout_duration has passed,
// blow away any previous $_SESSION data and start a new one.
if (isset($_SESSION['LAST_ACTIVITY']) &&
   ($time - $_SESSION['LAST_ACTIVITY']) > $timeout_duration) {
    session_unset();
    session_destroy();
    session_start();
}

// Finally, update LAST_ACTIVITY so that our timeout
// is based on it and not the user's login time.
$_SESSION['LAST_ACTIVITY'] = $time;

What that does is keep track of the time of the user’s last activity.  On every request, we test whether more than 30 minutes have passed since then; if so, the old session data is discarded and a new session is started.  This might also be where you’d handle re-authenticating the user somehow, if needed, usually by presenting a login expired message or login UI of some sort.

And that’s it.  For most, understanding the ini settings, and why they’re not effective, is usually more taxing than the code involved to get a timeout working.


If you want reliable session timeouts, ultimately you’ll need to implement your own timeout logic.  Most frameworks make session timeouts very easy to handle, but even if your code doesn’t have that luxury, you can implement one with a handful of code.

Hopefully this post has helped shed some light on how PHP manages sessions, and allows you to implement a session timeout without too much fuss.


PHP Session Files Unexpectedly Deleted

A recent debugging session regarding session timeouts went on far longer than it needed to.  I’m going to share one aspect of it here in hopes that it saves someone (possibly hours of) debugging time.  If you’re running a Debian-based environment (e.g. Ubuntu Server) and you find odd session behavior, like session data being cleared unexpectedly, there are 2 things this post will surface that should help.

Discovery #1: Debian-Based Distros Delete Session Files Via Cron Job

The first thing my debugging uncovered is that Debian-based distros (Linux Mint, Ubuntu Server, etc.) use a cron job to garbage collect PHP session files.  That’s to say, if you take a peek at the session.gc_probability ini setting, it’ll be set to 0, indicating that PHP’s stock session garbage collection should never run.  You’ll also find a nifty cron job at /etc/cron.d/php5 that handles deleting session files.  Both indicate a complete workaround of PHP’s default session garbage collection.  So if you didn’t know this much, you likely don’t have the control you think you do regarding session management.

The good news though, is that the cron job is set to detect any changes made to the session.gc_maxlifetime ini setting, and if the changes result in a setting higher than 24 minutes, it’ll use that value.  Otherwise it falls back to using a 24 minute default.

So in most cases everything still works pretty reliably, but the deviation from PHP’s stock gc handling is still a surprise and another layer of discovery to work through.
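
You can confirm both points on your own server with a couple of commands (paths as discussed in this post, which covers the PHP 5 packages):

php -i | grep gc_probability    # reports 0 on Debian-based distros
cat /etc/cron.d/php5            # the cron job that deletes session files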

Discovery #2:  With XDebug, Problems Arise

The second thing I learned, which was the real problem, is that the cron job becomes unreliable when XDebug enters the mix.  If you’re unsure whether XDebug is enabled in your environment, you can check phpinfo() or your php.ini file.  Depictions are below:

XDebug shown in phpinfo().

XDebug in php.ini.

A problem occurs because the session cron job relies on a shell script at /usr/lib/php5/maxlifetime to determine what session.gc_maxlifetime value it should assume.  That’s a good thing, because we want this cron job to respect when we want sessions gc’d (which is what session.gc_maxlifetime is for).  But when that script runs with XDebug enabled, it produces erroneous output, as shown below:


XDebug error output

And because the shell script returns that erroneous text rather than a valid maxlifetime value, the cron job proceeds to delete session files either unpredictably or on the default 24 minute interval.  In either case it’s best to solve this problem.

Luckily that’s easy: just disable XDebug.  Once I did, maxlifetime completed without error and returned a proper value.  Now my session files are being garbage collected on a predictable schedule again.


I can only guess that Debian-based distros went with their own session garbage collection to better manage that process; the cron job, while a surprise, does add predictability and perhaps better resource management to the session gc process.  However, if you’re not aware of it, it can lead to some lengthy debugging efforts if you find your session data acting funny.


View All MySQL Processes

I was recently debugging a long running database query. In most cases MySQL’s Slow Query Log is a great debugging tool for that. However, in this case I found MySQL’s show processlist to be a better fit.

What show processlist does is provide a list of all MySQL processes running on the server.  It’s easy to run:

show full processlist;

… and its results are easy to discern.  Once run, you’ll see that it returns details about each running process.  Full details are outlined on MySQL’s documentation page, but in most cases you’ll want to pay attention to the process id, user, database, command, time, and info fields.
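
If the full list gets noisy, the same data is exposed through information_schema (MySQL 5.1 and later), which makes it easy to filter.  For example, to show only non-idle processes running longer than 60 seconds:

SELECT id, user, db, command, time, info
FROM information_schema.processlist
WHERE command <> 'Sleep' AND time > 60
ORDER BY time DESC;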

The info field, in particular, is what makes show processlist effective for debugging slow queries, because it shows the SQL statement the process is hanging on, and it does so immediately.  That’s a huge benefit compared to other tools such as MySQL’s slow query log.  For example, the slow query log waits until the slow query completes before it’s logged (because it needs to cite execution time), meaning you’ll have to wait until the query completes before the slow query log tells you what it is.  That was a huge drawback in my case, as the query in question had multiple variants that took close to 10 minutes to complete.  That results in an awfully slow (and expensive!) debug cycle.

With show processlist my debug cycle was reduced to mere seconds, resulting in happier, and faster, debugging.


Honeypot Technique: Fast, Easy Spam Prevention

Spam is one of those things we wish didn’t exist.  It’s annoying and serves no useful purpose.  Mail inboxes filled with junk mail, websites with bogus contact form submissions, and products hit hard by fake sign-ups are only a few common victims of spam.  And unfortunately, it’s here to stay.

You may have found yourself on the receiving end of such problems.  In fact, you may have reached this blog post while researching how to eliminate or lessen your spam problem.  Fortunately, you’ve arrived at an answer: the Honeypot technique is a fast, easy, and effective means to prevent spam.

Before I go into detail on how to implement the Honeypot technique, I want to cover two other options that are still in use to prevent spam, and why you shouldn’t use them.

Two Spam Prevention Options I Avoid

The first is the captcha.  A captcha is an image that renders text in a not-so-easy-to-read way, also known as challenge text.  By requiring users to type the challenge text into a text field, it verifies some form of human interaction and intelligence.  If what the user enters matches the challenge text, the user has successfully completed the challenge and their form submission is allowed to proceed.

A captcha displayed as part of a login form.

Spam bots, on the other hand, often lack the intelligence to defeat the challenge: first, because the challenge text appears in an image, not HTML markup, reducing their chances of reading it; and second, because they’re often unaware that the form field attached to the captcha expects a specific entry.  Most spam bots fail captchas for one of these reasons.

A second option is implementing a question and answer field.  For example, a sign up form may include the following question:  What color is an orange?  Humans can easily answer that question, whereas spam bots won’t be smart enough.  Once submitted, the answer to the question can be tested. If it’s correct the form was likely submitted by a human and can be handled accordingly.

Both Degrade User Experience

While both options are easy and help prevent spam, I don’t recommend them because they interfere with the user experience.  They’re often frustrating to deal with and motivate users to leave.  A good example is a captcha whose text is too hard for even humans to read.

For that reason I always recommend implementing the least invasive option available.

Enter The Honeypot Technique

The reason the Honeypot technique is so popular is because in addition to how easy and effective it is, it doesn’t interfere with the user experience.  It demands nothing extra of them.  In fact, your users won’t even know you’re using it!

To implement the Honeypot technique, all that’s required is adding a hidden form field to the form in question.  The field can have any name or id, but make sure to add a display: none CSS rule to it (or use some other means to hide it from users).  Here’s a brief example:

<input id="real_email" name="real_email" size="25" type="text" value="" />
<input id="test_email" name="email" size="25" type="text" value="" />
#test_email {
    display: none;

Note that I have 2 email fields, real_email and test_email.  test_email is hidden via display: none, so it’s not visible and likely can’t/won’t be submitted by real users.

And that’s what gives away whether the form submission is spam or not.  Real users don’t see the hidden field so they won’t submit it with any value. Spam bots, however, will still see the field in the form’s markup, auto-populate it with something, and submit it with the rest of the form.

So from there all that’s needed is to test whether the hidden field was submitted with a value or not.  If it was, the submission can be treated as spam.
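
Using the markup above, that server-side test can be a few lines of PHP.  Here’s a sketch, assuming the form posts to a PHP handler:

<?php
// "email" is the hidden trap field from the markup above;
// "real_email" is the field humans actually fill in.
if (!empty($_POST['email'])) {
    // the trap came back populated; treat the submission as spam
    exit('Invalid submission.');
}

$email = isset($_POST['real_email']) ? $_POST['real_email'] : '';
// ... continue normal form processing with $email ...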

And remember, because the field is hidden and out of view, users don’t even know it’s there, which is why this approach to spam prevention is far more user-friendly vs. requiring they complete a captcha challenge or answer silly questions.


Spam is here to stay, but fortunately the Honeypot technique offers a fast and effective way to prevent it.  Even though there are other options to consider, keep your users in mind and always prefer the least invasive approach to mitigating spam.

All the Honeypot technique requires is adding a hidden field to the form in question.  With that, just about any form can become spam-free.


Ubuntu Server: changing default shell

I love just about everything about Ubuntu Server, except that it doesn’t issue bash as the default shell for new users. It does for the root user, but not for every other user, which is a bit odd.

Not a problem though, because changing the default shell in Ubuntu is pretty easy. So if you’re ever in a position where you want to change your shell environment, you have 2 easy options.

The first is to use the chsh command with its -s option, which sets the login shell:

chsh -s /bin/bash

Note that the shell must be passed via -s; without it, chsh treats the argument as a username.  That’s why running chsh /bin/bash on Ubuntu Server 13.04 produces the error message: chsh: unknown user /bin/bash.

Fortunately the second option is just as easy: you can also specify your preferred shell in the /etc/passwd file.  All that’s needed is to find the desired user’s entry and change /bin/sh to whatever shell you wish to use.  In my case I prefer bash, so all I had to do was change a line like the following (the username and IDs here are examples) from …

someuser:x:1000:1000::/home/someuser:/bin/sh

… to …

someuser:x:1000:1000::/home/someuser:/bin/bash
And that’s it.  Now when you log in, you should notice that you get your preferred shell by default.
