My iPhone 6 Review

I received my iPhone 6 last Friday.  After going through setup and using it for a few days, I wanted to share my initial impressions, some of which I'm almost reluctant to offer.

I purchased the standard iPhone 6, not the 6 Plus.  My previous phone was an iPhone 4.

The Good

It’s very, very beautiful.  This phone makes an amazing first impression.  The sleeker design is a big improvement.  I immediately noticed how much thinner the 6 is than my iPhone 4, and how many fewer seams it has.  Overall, the effort put into making the aesthetics as seamless as possible paid off.  Much credit to Apple for raising the bar in this area yet again, this many iterations into the product.

It’s lightning fast!  The mobile network screams.  My mail inbox updates instantly now, and Safari rips through web pages as fast as my desktop.  Touch events also respond more quickly and smoothly than on my iPhone 4.

It’s lighter.  When I picked it up for the first time it immediately felt lighter than my iPhone 4.  I didn’t expect that given how much bigger it appears.  But when I compared the specs of the two, the 6 is only lighter by about half an ounce.  I’m surprised I was able to feel that.

The display is amazing.  The iPhone 6 easily has the clearest display I’ve ever used, on any device.  The bigger screen also improves the reading and viewing experience.

The Bad

Syncing data was a pain; it took 2 attempts.  I don’t have a Mac or iTunes, so it took 25 minutes or so to boot my Windows laptop, install iTunes, reboot for some (Windows) reason, sync my iPhone 4 data to iTunes, then sync that to my iPhone 6.  My first sync attempt crashed with an obscure .dll error.  After another reboot, my second attempt succeeded.

There really needs to be a way to sync data from iPhone to iPhone without going through iTunes, especially for those without a Mac.  Even in 2014, using Windows is still something I dread; my sync experience reinforces why.

The bigger screen is harder to use.  While the bigger screen is nice for reading and viewing content, it also makes one-handed use difficult much of the time, if not prohibitive.  This is disappointing, and hopefully temporary.

For example, when holding the phone in my right hand, I can no longer reach the Messages icon with my thumb to send a text message.  That now requires 2 hands, or repositioning the phone in my hand, both of which make the phone feel clumsy.  Managing contacts gives me the same experience.

And this is with the standard 6, not the 6 Plus.  I debated the decision, but now know that had I ordered the 6 Plus, I wouldn’t have liked it; its size would have been overboard.  In fact, after 30 minutes of using my 6, I questioned how usable the 6 Plus would be, at least one-handed.  And sure enough, reviews are starting to pop up stating as much.

Conclusion

Overall I love my iPhone 6.  It’s an amazing phone, in just about every way.  Aesthetics, feel, performance – it’s an incredible device.  Apple proves, yet again, why they’re known as one of the best product companies in the world, if not the best.

But it’s not without flaws.  The bigger screen is lovely to look at, but not as easy to use.  I find myself fumbling with the phone in situations where I didn’t with my iPhone 4.  Combined with a slightly slipperier back than previous models, the phone just doesn’t feel as sure in the hand as my iPhone 4 did.  Even after nearly a week of use it still feels clumsy at times.

So if you’re thinking about upgrading to an iPhone 6, do it and don’t look back.  However, due to the size vs. usability trade-off, I only recommend the standard iPhone 6.  Avoid the 6 Plus.

Posted in General, Products | Leave a comment

Heartbleed and Ubuntu 13.04: Upgrade Required

The recent Heartbleed vulnerability sent a pronounced scare throughout the tech community.  Fortunately Linux distributions were quick to deploy a patch, with companies and system administrators following suit.

However, we found ourselves in a bind when we realized 2 of our non-public-facing servers were still running Ubuntu Server 13.04. Canonical hadn’t released a Heartbleed patch for 13.04 because it had reached end of life back in January. Yikes!

The more we researched, the more we found others in the same situation.  Unfortunately, or perhaps fortunately, the only correct path is to upgrade to 13.10. With 14.04 so close to its release date, we’d rather have waited and upgraded straight to it, but security issues are critical.  Prompt action is always better than no action.

So the choice is clear:  for those running Ubuntu Server 13.04, an upgrade to 13.10 is required if you want a supported Heartbleed fix. Though perhaps more importantly, it’s bad practice to run an unsupported OS version, so upgrade to 13.10 today regardless. Even better, upgrade to 14.04 LTS when it’s released.

Beware Of Breaking Changes

Fortunately, research turned up the breaking changes to Apache configuration files that took place in 13.10 before we hit them.  We also encountered breaking changes with PHP, and one provider-specific change that prevented our server from booting! So in addition to the upgrade process, I’ll outline those below, along with how we worked around them.

Before proceeding I want to make a recommendation: perform the upgrade on a test server first, perhaps by cloning your target server environment to a new VM or cloud server.  Once everything checks out, proceed with updating production servers.

On with the upgrade then.

Upgrading From Ubuntu Server 13.04 to 13.10

Upgrading to 13.10 will affect PHP, Apache, and maybe a cloud or VM server’s ability to boot (heads up to Dediserve customers!) if your provider uses a custom menu.lst file. Ours did, which we’ll mention below.

First, it’s a good idea to get all 13.04 updates installed, so run:

sudo apt-get update
sudo apt-get upgrade

Second, proceed with the upgrade to 13.10 by issuing the following commands:

sudo apt-get install update-manager-core
sudo do-release-upgrade

That will kick off the upgrade process.

It’s important to note that during the upgrade you’ll be asked, for each changed system file, whether you want to keep your version or have it overwritten by the new release’s version. Since no one can make those decisions for you, it’s best to diff each file (which you can do during the upgrade process) and decide case by case.

Here are the changes that were important to us, including what changed and how we worked around any breaking changes.

/boot/grub/menu.lst

Beware of changes to this file, as it usually specifies disk or partition paths, so changes here can affect a server’s ability to boot. We host our servers with Dediserve, and prior experience taught us to keep their custom menu.lst file in place; without it, our server failed to boot.

So when asked by the upgrade process if we wanted to keep our own version or install the new version, we decided to keep our own.

PHP

13.10 broke our PHP installation, in part through changes to our php.ini file.  After diffing the current version against the new one, the changes turned out to be relatively simple.  The new version’s php.ini:

  • turned short tags off
  • set error_reporting back to a default value
  • reverted our session.cookie_lifetime and session.gc_maxlifetime settings
  • set default_charset back to an empty default

We accepted the new version, just in case it contained other important updates, and then reinstated the settings above in the new php.ini file:

  • short tags were turned back on
  • error_reporting was set back to our preferred value
  • session.cookie_lifetime and session.gc_maxlifetime were set back to preferred values
  • default_charset was set back to UTF-8

We ran into 2 additional errors as well.

The first was an error stating that the json_decode()/json_encode() functions were undefined.  13.10 ships a newer PHP in which the JSON extension was split out into its own package (reportedly for licensing reasons), so to resolve it we simply re-installed that package:

sudo apt-get install php5-json

The second was due to a missing timezone setting.  To resolve that we specified a date.timezone setting in php.ini:

date.timezone = "America/Chicago"

After that we tested image generation, PDF generation, mail delivery, FTP, etc.  Fortunately all of that still worked in our PHP apps.

Apache

13.10 introduced some important changes to Apache, mostly to configuration files.  They will break your 13.04 configuration, so please do your own research in addition to noting the changes below.

There were 2 major changes that affected us.

The first is that all config files in /etc/apache2/conf.d should be moved to /etc/apache2/conf-available.

This is because 13.10 now treats those config files the same as sites-enabled/available and mods-enabled/available.  We use a custom.conf file in /etc/apache2/conf.d that includes the ServerName and AddDefaultCharset directives; we needed to move that to /etc/apache2/conf-available, then enable with:

sudo a2enconf custom

Second, vhost files in /etc/apache2/sites-available previously had no file extension. That changed in 13.10; they now must have a .conf extension. Otherwise, Apache will report an error like this on start:

ERROR: Site site-name does not exist!

Fortunately this is pretty easy to fix. Just append .conf to each of your vhost files in /etc/apache2/sites-available.
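
If you have more than a couple of vhost files, a small shell loop saves some typing. Here’s a sketch that assumes everything in the directory is a vhost file (skip it if yours are a mixed bag):

cd /etc/apache2/sites-available
for f in *; do
  case "$f" in
    *.conf) ;;                        # already has the extension; skip it
    *) sudo mv "$f" "$f.conf" ;;
  esac
done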

Once that’s done, you’ll need new symlinks between sites-enabled and sites-available. You can re-establish those by first removing your existing symlinks:

sudo rm /etc/apache2/sites-enabled/*

Then re-enable your sites with a2ensite:

sudo a2ensite site-name
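
After re-enabling your sites, reload Apache so the new configuration takes effect:

sudo service apache2 reload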

And that should take care of things. I had additional PHP packages installed (curl, gd, etc.), along with sites behind SSL.  Fortunately all that continued to work after the upgrade.

Verify Heartbleed Fix

Finally, with the upgrade complete, you’ll also want to verify that OpenSSL is the version with the Heartbleed patch.  You can do so by running:

dpkg -l | grep "openssl"

… and verifying that your openssl version is 1.0.1e-3ubuntu1.2 (or later).
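
As a secondary check, the openssl CLI reports its build date, and builds dated on or after April 7 2014 should include the Heartbleed fix:

openssl version -a | grep "built on"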

Other Precautions

In addition to upgrading to 13.10 and verifying the Heartbleed patch, you’ll also want to change any passwords used to access the server, or the apps hosted on it, since those could have been compromised.  You’ll also need to reissue any SSL certificates used to secure sites hosted on affected servers.  Note that both of those steps should be done only after the Heartbleed patch is installed.

Posted in General | Leave a comment

Implementing Session Timeout With PHP

PHP aims to make things simple.  So you’d think something like specifying a session timeout would also be simple.  Unfortunately it can be a little tricky depending on your circumstances.  For example, if you Google php session timeout, you’ll find no shortage of people who have had trouble implementing session timeouts with PHP.

We found ourselves in the same situation when we recently ported TimePanel to another framework.  Soon after, some users began complaining that they were being logged out too soon.  A quick check confirmed that our PHP session and cookie settings were still the same, but we did find that the previous framework handled session timeouts, whereas the new one didn’t.  After some minor code diffs and a little research, we decided we needed to implement our own session timeout logic.

Understanding How PHP Handles Sessions

Before I explain what we did, it’s important to understand how PHP handles session data; in particular, when sessions expire and are subsequently cleared from the server.  Since PHP supports multiple forms of session storage (file, database, etc.), this post will assume the default storage mechanism: the file system.

How Sessions Are Created

In its simplest form, a session consists of 2 components:

  1. A session file created and stored on the server, which is named after the session’s id.
  2. Some means for the client to identify its session id to the server.  This is usually in the form of a PHPSESSID cookie or URL parameter (note: of the two, a cookie is the default method and considered more secure).  We’ll assume a cookie is used from this point forward.

The typical way a session is started is by calling PHP’s session_start().  At that point PHP looks for a PHPSESSID cookie.  If one is found, PHP uses that id to look up an existing session file on the server.  If it finds one, the existing session is successfully linked, and that session file is used.

If either the cookie or the session file isn’t found, PHP has no way to link to a previous session, so a new one is created.  That means a new session file is created, with a new session id, and a new PHPSESSID cookie is set to link the browser’s session to the new file on the server.

Any subsequent web requests will follow the same routine, either successfully linking to a previous session or creating a new one.

Understanding Session Duration

Now that we understand how sessions are created and the 2 primary components in play, we can start to understand how session duration is specified and managed.

Most conversations about this subject usually begin with 2 php.ini settings:

  1. session.cookie_lifetime
  2. session.gc_maxlifetime

Each one is related to one of the session components mentioned above, so it’s important to understand both of them, and that collectively they aren’t sufficient to enforce a session duration.

The first setting, session.cookie_lifetime, is simply a duration, in seconds, that PHP sets for the PHPSESSID cookie’s expiry.

The second setting, session.gc_maxlifetime, is more complex.  On the surface, it specifies how long session files can live on the server before PHP’s garbage collector sees them as garbage candidates.  I say candidates, because a session file can, indeed, live beyond this point; it’s all a matter of probability.

You see, PHP’s session garbage collector (which is responsible for deleting session files) doesn’t run 100% of the time; doing so would be too resource intensive.  Instead, it’s designed to run on a per-request-probability basis, or as part of a user-defined process.

  • When running on a per-request basis, the session.gc_probability and session.gc_divisor ini settings come into play.  Together they compute the probability that the garbage collector runs on a given request.  In general, a higher probability means a given request is more likely to initiate the garbage collector, so the garbage collector winds up running more often (see the php.ini sketch after this list).
  • When leveraging a user-defined process, such as a cron job, that probability becomes irrelevant, giving you full control over when session files are deleted.  You can set a cron job to authoritatively delete session files on a fixed schedule.
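
To make the probability concrete, here’s a php.ini sketch (the values are illustrative, not recommendations):

; Each request has a gc_probability / gc_divisor = 1/100 (1%) chance
; of triggering session garbage collection.
session.gc_probability = 1
session.gc_divisor = 100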

Going back to session.gc_maxlifetime and session.cookie_lifetime … the purpose of both is to allow you to specify a “soft” duration for each session component (the PHPSESSID cookie, and the session file on the server), and to give you some level of control over when the session garbage collector runs.

So why aren’t these 2 sufficient to enforce a session timeout?  Because neither is 100% reliable in deleting its respective session component after a given time frame.

Since the PHPSESSID cookie exists on the client, it can be manipulated or deleted at any time.  Plus, if there’s no session file on the server that corresponds with the cookie’s session id (e.g. if the session file was deleted for whatever reason), the cookie is ultimately useless. So alone, session.cookie_lifetime isn’t sufficient.

And as mentioned above, session.gc_maxlifetime doesn’t enforce session deletion very strictly at all – unless overridden by a user-defined process, it bases session deletion on probability!

The Solution: Implement Your Own Session Timeout

So despite the session ini settings available, if you want a reliable session timeout, you’re forced to implement your own.  Fortunately doing so is pretty easy.

First, set session.gc_maxlifetime to the desired session timeout, in seconds.  E.g. if you want your sessions to time out after 30 minutes, set session.gc_maxlifetime to 1800 (60 seconds in a minute * 30 minutes = 1,800 seconds).  This ensures a given session file on the server can live for at least that long.

Second, and what a lot of other posts out there don’t mention, is that you also need to set session.cookie_lifetime to at least the same value (1,800 seconds, in this case).  Otherwise, the PHPSESSID cookie may expire before the 30 minutes are up.  If that happens, the cookie is removed and the client has no way of identifying its session id to the server anymore, which effectively terminates the session before our 30 minute window.
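
In php.ini terms, those two settings for a 30 minute window look like this:

session.gc_maxlifetime = 1800
session.cookie_lifetime = 1800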

Third, add the following code to your app’s entry point, or any point in your app that’s executed on every request (usually an index.php file, front controller, bootstrap file, etc.).

$time = $_SERVER['REQUEST_TIME'];

/**
 * For a 30 minute timeout, specified in seconds.
 */

$timeout_duration = 1800;

/**
 * Here we look for the user's LAST_ACTIVITY timestamp. If
 * it's set and indicates our $timeout_duration has passed,
 * blow away any previous $_SESSION data and start a new one.
 * (This assumes session_start() has already been called.)
 */

if (isset($_SESSION['LAST_ACTIVITY']) && ($time - $_SESSION['LAST_ACTIVITY']) > $timeout_duration) {
    session_unset();
    session_destroy();
    session_start();
}

/**
 * Finally, update LAST_ACTIVITY so that our timeout
 * is based on it and not the user's login time.
 */

$_SESSION['LAST_ACTIVITY'] = $time;

What that does is keep track of the time of the user’s last request.  On every request, that timestamp is tested to see whether the 30 minute window has passed; if so, the old session data is destroyed and a new session is created. This is also where you’d handle re-authenticating the user if needed, usually via a login-expired message or login UI of some sort.

And that’s it.  For most, understanding the ini settings, and why they’re not sufficient, is usually more taxing than the code involved to get a timeout working.

Conclusion

If you want reliable session timeouts, ultimately you’ll need to implement your own timeout logic.  Most frameworks make session timeouts very easy to handle, but even if yours doesn’t, you can implement one with a handful of code.

Hopefully this post has shed some light on how PHP manages sessions, and helps you implement a session timeout without too much fuss.

Posted in General | 14 Comments

PHP Session Files Unexpectedly Deleted

A recent debugging session regarding session timeouts went on far longer than it needed to.  I’m going to share one aspect of it here in hopes that it saves someone (possibly hours of) debugging time.  If you’re running a Debian-based environment (e.g. Ubuntu Server) and you find odd session behavior, like session data being cleared unexpectedly, this post will surface 2 things that should help.

Discovery #1: Debian-Based Distros Delete Session Files Via Cron Job

The first thing my debugging uncovered is that Debian-based distros (Linux Mint, Ubuntu Server, etc.) use a cron job to garbage collect PHP session files.  That’s to say that if you take a peek at the PHP session.gc_probability ini setting, it’ll be set to 0, indicating that PHP’s stock session garbage collection should never run.  You’ll also find a nifty cron job at /etc/cron.d/php5 that handles deleting session files.  Both indicate a complete workaround of PHP’s default session garbage collection.  So if you don’t know this much, you likely don’t have the control you think you do over session management.

The good news, though, is that the cron job is set to detect any changes made to the session.gc_maxlifetime ini setting; if the result is a setting higher than 24 minutes, it’ll use that value.  Otherwise it falls back to a 24 minute default.
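
For reference, the job takes roughly this shape (a sketch of the general idea, not a verbatim copy; check your own /etc/cron.d/php5 for the exact command):

# /etc/cron.d/php5 (approximate): every half hour, delete session files
# older than the maxlifetime script reports (in minutes).
09,39 * * * *  root  [ -x /usr/lib/php5/maxlifetime ] && [ -d /var/lib/php5 ] && find /var/lib/php5/ -depth -mindepth 1 -maxdepth 1 -type f -cmin +$(/usr/lib/php5/maxlifetime) -delete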

So in most cases everything still works pretty reliably, but the deviation from PHP’s stock gc handling is still a surprise and another layer of discovery to work through.

Discovery #2:  With XDebug, Problems Arise

The second thing I learned, which was the real problem, is that the cron job becomes unreliable when XDebug enters the mix. If you’re unsure whether XDebug is enabled in your environment, you can check phpinfo() or your php.ini file.  Depictions are below:

[Screenshot: XDebug shown in phpinfo().]

[Screenshot: XDebug in php.ini.]
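
You can also ask PHP to list its loaded modules from the command line:

php -m | grep -i xdebug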

A problem occurs because the session cron job relies on a shell script at /usr/lib/php5/maxlifetime to determine what session.gc_maxlifetime ini value it should assume.  And that’s a good thing, because we want this cron job to respect when we want sessions gc’d (which is what session.gc_maxlifetime is for).  But when that script runs with XDebug enabled, it produces erroneous output, as shown below:

[Screenshot: XDebug error output from the maxlifetime script.]

And because that shell script returns erroneous text instead of a valid maxlifetime value, the cron job proceeds to delete session files either unpredictably or on the 24 minute default. In either case it’s best to solve this problem.

Luckily that’s easy: just disable XDebug.  Once I did, maxlifetime completed without error and returned a proper value, and my session files are now garbage collected on a predictable schedule again.  One way to disable it is sketched below.
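
For us that meant commenting out XDebug’s entry in the PHP configuration and restarting the web server. A sketch, assuming XDebug is loaded via php.ini (the extension path shown is hypothetical; use whatever path your php.ini actually references):

; php.ini: disable XDebug by commenting out its extension line
;zend_extension = /usr/lib/php5/20121212/xdebug.so

Afterward, running /usr/lib/php5/maxlifetime by hand should print a plain number of minutes.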

Conclusion

I can only guess that Debian-based distros went with their own session garbage collection to better manage that process; the cron job, while a surprise, does add predictability and perhaps better resource management to the session gc process.  However, if you’re not aware of it, it can lead to some lengthy debugging efforts when your session data starts acting funny.

Posted in General | 2 Comments

View All MySQL Processes

I was recently debugging a long-running database query. In most cases MySQL’s Slow Query Log is a great debugging tool for that. However, in this case I found MySQL’s show processlist to be a better fit.

What show processlist does is provide a list of all MySQL processes running on the server. It’s easy to run:

SHOW FULL processlist;

… and its results are easy to discern. Once run, you’ll see that it returns details about each running process. Full details are outlined on MySQL’s documentation page, but in most cases you’ll likely want to pay attention to the process id, user, database, command, and time.

The command field, in particular, is what makes show processlist effective for debugging slow queries, because it cites the SQL statement the process is hanging on, and it does so immediately. That’s a huge benefit compared to other tools such as MySQL’s slow query log. For example, the slow query log waits until a slow query completes before logging it (because it needs to cite execution time), meaning you’ll have to wait until the query completes before the log tells you what it is. This was a huge drawback in my case, as the query in question had multiple variants that took close to 10 minutes to complete. That makes for an awfully slow (and expensive!) debug cycle.
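
As an aside, once show processlist surfaces the offending process, you can optionally terminate it by id (the id below is hypothetical):

KILL QUERY 12345;

KILL QUERY aborts just the running statement, while a plain KILL terminates the whole connection.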

With show processlist my debug cycle was reduced to mere seconds, resulting in happier, and faster, debugging.

Posted in Database | Leave a comment

Honeypot Technique: Fast, Easy Spam Prevention

Spam is one of those things we wish didn’t exist.  It’s annoying and serves no useful purpose.  Mail inboxes filled with junk mail, websites with bogus contact form submissions, and products hit hard by fake sign-ups are only a few common victims of spam.  And unfortunately spam is here to stay.

You may have found yourself on the receiving end of those problems.  In fact, you may have reached this blog post in your research to rid or lessen your spam problem.  Fortunately you’ve arrived at an answer.  The Honeypot technique is a fast, easy, and effective means to prevent spam.

Before I go into detail on how to implement the Honeypot technique, I want to cover two other options that are still in use to prevent spam, and why you shouldn’t use them.

Two Spam Prevention Options I Avoid

The first is the captcha.  A captcha is an image that renders text in a not-so-easy-to-read way, also known as challenge text.  By requiring users to type the challenge text into a text field, it verifies some form of human interaction and intelligence. So if what the user enters matches the challenge text, the user is said to have successfully completed the challenge and their form submission is allowed to proceed.

[Screenshot: A captcha displayed as part of a login form.]

Spam bots, on the other hand, often lack the intelligence to defeat the challenge.  First, because the challenge text appears in an image, not HTML markup, reducing their chances of reading it.  And second, because they’re often unaware that the form field attached to the captcha expects a specific entry.  Most spam bots fail captchas for one of these reasons.

A second option is implementing a question-and-answer field.  For example, a sign-up form may include the following question:  What color is an orange?  Humans can easily answer that question, whereas spam bots usually aren’t smart enough.  Once submitted, the answer can be tested. If it’s correct, the form was likely submitted by a human and can be handled accordingly.

Both Degrade User Experience

While both options are easy and help prevent spam, I don’t recommend them because they interfere with the user experience.  Oftentimes they’re even frustrating to deal with and motivate users to leave. A good example of that is captchas whose text is too hard for even humans to read.

For that reason I always recommend implementing the least invasive option available.

Enter The Honeypot Technique

The reason the Honeypot technique is so popular is that, in addition to being easy and effective, it doesn’t interfere with the user experience.  It demands nothing extra of users.  In fact, your users won’t even know you’re using it!

To implement the Honeypot technique, all that’s required is adding a hidden form field to the form in question.  The form field can have any name or id associated with it, but make sure to add a display: none CSS rule to it (or hide it from users by some other means).  Here’s a brief example:

<input id="real_email" type="text" name="real_email" size="25" value="" />
<input id="test_email" type="text" name="email" size="25" value="" />

… and the CSS that hides the honeypot field:

#test_email {
  display: none;
}

Note that I have 2 email fields, real_email and test_email.  test_email is hidden via display: none, so real users can’t see it and will never fill it in (the field still submits with the form, but with an empty value).

And that’s what gives away whether a form submission is spam.  Real users won’t ever see the field, so they’ll never give it a value. Spam bots, however, will still see the field in the form’s markup, auto-populate it with something, and submit it with the rest of the form.

So from there all that’s needed is to test whether the hidden field was submitted with a value.  If it was, the submission can be treated as spam; a minimal server-side check is sketched below.
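
Here’s that test as a brief PHP sketch, assuming the markup above (where the hidden field’s name attribute is email):

<?php
// Bots tend to fill in every field they find; real users never see this one.
if (!empty($_POST['email'])) {
    // The honeypot field came back with a value: treat it as spam.
    // Exiting quietly avoids tipping off the bot that it was caught.
    exit;
}

// Otherwise proceed with normal form handling using the real field.
$email = $_POST['real_email'];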

And remember, because the field is hidden and out of view, users don’t even know it’s there.  That’s a more user-friendly approach to spam prevention than having users complete a captcha challenge or answer silly questions.

Conclusion

Spam is here to stay, but fortunately the Honeypot technique offers a fast and effective way to prevent it.  Even though there are other options to consider, keep your users in mind and always prefer the least invasive approach to mitigating spam.

All the Honeypot technique requires is adding a hidden field to the form in question.  With that, just about any form can become spam-free.

Posted in General, PHP | 37 Comments

Ubuntu Server: changing default shell

I love just about everything about Ubuntu Server, except that it doesn’t issue bash as the default shell for new users. It does for the root user, but not for every other user, which is a bit odd.

Not a problem though, because changing the default shell in Ubuntu is pretty easy. So if you’re ever in a position where you want to change your shell environment, you have 2 easy options.

The first is to use the chsh command. Note that the new shell must be passed via the -s flag:

chsh -s /bin/bash

Running it without the flag (e.g. chsh /bin/bash) fails, because chsh treats a bare argument as a username; on Ubuntu Server 13.04 that produces the error message: chsh: unknown user /bin/bash.

Fortunately the second option is just as easy: you can also specify your preferred shell in the /etc/passwd file. All that’s needed is to find the desired user and change /bin/sh to whatever shell you wish to use. In my case I prefer bash, so all I had to do was change the following line from …

[username]:x:1000:1000::/home/[username]:/bin/sh

… to …

[username]:x:1000:1000::/home/[username]:/bin/bash
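
Once you log out and back in, a quick way to confirm the change is to print your current shell:

echo $SHELL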

And that’s it. Now when you log in you should notice that you get your preferred shell by default.

Posted in General | Leave a comment

Changing hostname in postfix config

Setting a proper hostname for mail servers is a small but often required step, especially if you’re looking to do everything possible to keep your email from being flagged as spam.

Having recently completed a fresh install of postfix on a new mail server, I proceeded with running a test email through Spam Assassin to see if any adjustments were needed.

Before I go on … if your applications send email and you’re not using a tool like Spam Assassin to gauge the spam potential of your emails, I highly recommend that you start.  Our apps send everything from account-creation emails to invoices, so it’s essential that our product emails are trusted, sent, and received without issue.  Spam Assassin has always helped us achieve that.

In this case, Spam Assassin reported a HELO_LOCALHOST flag.  A test against our mail server’s hostname revealed that it was still identifying itself as localhost, which is a potential spam indicator.

Fortunately resolving this is easy: we just needed to change our mail server’s hostname, which postfix makes a simple 2-step process.

First, change the myhostname directive in /etc/postfix/main.cf to the desired hostname.  In our case we changed it to timepanel.net (the server name for our time tracking application, TimePanel):

# myhostname = localhost
myhostname = timepanel.net

Second, restart postfix:

sudo service postfix restart
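
You can confirm the active value by querying postfix directly:

postconf myhostname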

And that’s it!  After this change Spam Assassin no longer reports a failed test on our mail server’s hostname.

Posted in General, Linux | Leave a comment

Google Chrome Stuck In Full Screen Fix

If you’re a Google Chrome user you may have encountered a bug where the browser window gets stuck in full screen mode.  In such cases, pressing F11 or clicking the Exit Full Screen link does nothing.

After trying several fixes, many of which were ineffective, the only one I found to work was resetting my user profile directory.  If you’re a Linux user, here’s how:

First, go to the google-chrome directory in your home path:

cd ~/.config/google-chrome

Next, verify you have a Default directory, then rename it as a backup in case you need it later:

mv Default/ Default_backup/

Then restart Chrome.  You should see that a new Default directory is created.  You should also notice that Chrome starts normally and is no longer stuck in full screen.  The only drawback is that your preferences are reset, but most will find that more than acceptable since Chrome is usable again.

Posted in General | 4 Comments

TimePanel 1.7.0 Released

We’re happy to announce the release of TimePanel 1.7.0.  This release includes new invoice features, such as discounts and a default payment term, as well as some minor stability improvements.

Read more on the TimePanel blog:  Version 1.7.0 Release Announcement.

Posted in General | Leave a comment