Adobe Creative Cloud

Maybe I should clarify my earlier post where I complain about Adobe Creative Cloud. At that point, the file-syncing part of the cloud just didn’t work, with no explanation of why (or even *if* – so we couldn’t tell whether it was my dodgy internet connection or something else).

Right now, the new Creative Cloud app is much better, and lists the file syncing as “Coming Soon”. Maybe they should have done that with the original implementation.

App install/update seems to be considerably better than before. I’ve yet to really look at Behance.

Looks like we’re going to have to find a Flickr alternative…

Flickr being not very helpful. Apparently I uploaded a “restricted” video… can I get them to answer me on *which* video? No. So, my photos become invisible unless you log in to Flickr. (i.e. *useless*)

Time to move on after 10 years of Flickring…? Maybe.

**Update:** maybe some G+ for the personal/sharing stuff, and Behance for the “pro” stuff?

Bass Space

We all dig some serious bass these days – and given that my favourite artists are funkers, house acts and synth wielders in general, I certainly do. But it seems like I’ve been listening to a lot of music recently that consciously avoids clogging up the bass. Even more oddly, the artists involved are conspicuous for their bass deployment: Bootsy, Prince, New Order…

In the case of Bootsy, if you pick almost any Rubber Band track, the low-end is occupied more by the drums – toms and even chunky snares as much as kick – than Bootsy himself. It also leaves a niche for Bernie Worrell to insert his unique lines. Everybody’s favourite funkateer lands above it with the extra high frequencies of the distortion and Mu-Tron sweeps, quite apart from the fact that his lines are often clanking around the higher frets anyway. That last point is generally agreed to be the defining sound of Joy Division and New Order: Peter Hook’s home territory of the top half of the neck. It still clunks nicely, but it leaves plenty of space for a DMX or acoustic kick – or a deep, pained voice in the former case. And when Prince’s *When Doves Cry* came on the radio in 1984 it stood out a mile – because it’s amazingly crafted pop, but also because it doesn’t sport a bassline at all. The Linn had all that space to work.

Reverse SEO

Although it’s a truism and/or cliché to say that the first rule of SEO is to write good content, it’s probably also been said somewhere, at least once – I haven’t *searched* for it so I don’t know – that thinking about SEO while writing your good content makes it even better. By which I mean that if you’re thinking consciously about the core meaning of the whole piece while writing every sentence, you end up staying more on topic and writing more useful content. Maybe people who actually know about writing already do this? I wouldn’t know!

Permissions setup for a Debian web host

> This is another of those “note-to-self” posts, where I detail how I’m setting something up so that I can refer back to it, or so I can point someone else to it. As is often the case, some of this may be Debian-specific…

There are different ways of approaching the task of setting the permissions for web directories, depending on how many users have access to the server, how many sites are sharing the server, and lots of other concerns. I tend to be in a situation where anyone who has shell access to the server is trusted with web content, so that simplifies the process somewhat. I’ll look at different ways of dealing with this question, in increasing levels of security.

## Simplest approach: use the `www-data` group

This is the default group that Debian has for web daemons. If you add everyone who can log in to this group, you can then use this group for all web directories that the notional “web team” needs to access, and make them “group writable”. **Be aware that this configuration also allows the Apache daemon itself to write to the web directories, which is an obvious potential security issue, so you need to be sure that the web applications in there don’t/won’t allow that.**

You can either specify the group when creating the user:

adduser --ingroup www-data USER

or add an existing user to the group:

adduser USER www-data

## Almost as simple as the simplest approach: create a `webdev` group

This is also a very simple approach, which won’t allow Apache to write to the web directories unless you specifically allow it. (You would usually allow it only for cache directories, image uploads and so on.)

“webdev” is just an arbitrary name; it can be anything you like, as long as it doesn’t already exist. First create your new group, then add the user(s) to it:

addgroup webdev
adduser USER webdev

It goes without saying (or should do) that for the above to work, you also need to give the right group access to the web directories in question. A simple example of this, making a few assumptions about your directory layout (I’ll use `/www/www.example.com/htdocs` as a stand-in path):

chgrp -R webdev /www/www.example.com/htdocs
chmod -R g+w /www/www.example.com/htdocs

So what did we just do?

First, we recursively (`-R`) changed the group to be `webdev` for the `htdocs` directory. Then, we (also recursively) allowed the group write-access (`g+w`) on `htdocs`. Which means: from now on, anyone in the `webdev` group can create and edit files in `htdocs` and any of its subdirectories. Note that these lines will stop any previously configured group-access from working (if it was a different group from `webdev`). However, if we have a directory with write-access for everyone (AKA “`chmod 777`”), as is sometimes the case with cache directories, for example, it won’t be affected.
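One detail worth adding here (standard Unix behaviour, not Debian-specific): new files normally inherit the *creator’s* primary group, not the directory’s. If you want everything under the web tree to stay in the `webdev` group automatically, set the setgid bit on the directories too. A quick sketch, assuming the same web-root layout as above:

```shell
# Set the setgid bit on every directory under the web tree, so new
# files and subdirectories inherit the directory's group (webdev)
# rather than the creating user's primary group:
find /www/ -type d -exec chmod g+s {} +

# Verify: an "s" in the group triad (e.g. drwxrwsr-x) means it's set.
ls -ld /www/
```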

So, how can we make this more granular?

## Multi-layered approach: create per-site groups

If we wanted to have some directories writable by all our web team, and others by certain people in certain sub-teams, we can create multiple groups.

Take, for example, two subdomain sites – I’ll use `foo.example.com` and `bar.example.com` as stand-ins. Of course, these could be entirely different domains; I’m just sticking with subdomains of `example.com` for the, er, examples. We want to deny the teams working on these two sites editing access to each other’s site. A solution is to create two groups: `webdev-foo` and `webdev-bar`, maybe.

addgroup webdev-foo
addgroup webdev-bar
adduser fooguy webdev-foo
adduser foogal webdev-foo
adduser barboy webdev-bar
chgrp -R webdev-foo /www/foo.example.com
chgrp -R webdev-bar /www/bar.example.com
chmod -R g+w /www/foo.example.com /www/bar.example.com

This takes care of giving write-access for their sites to `fooguy`, `foogal` and `barboy`. Neither `fooguy` nor `foogal` will be able to write to the bar site’s directory, and `barboy` won’t be able to edit the foo site. If we want to allow all three of them to edit or create inside the main site, we just add them to the `webdev` group, assuming we’ve already set the permissions for its root directory and children to be `g+w`.

adduser fooguy webdev
adduser foogal webdev
adduser barboy webdev

## Checking permissions

If we pop over and have a look at these directories, what should we see?

cd /www
ls -l *

The output should be something like:

bar.example.com:
total 4
drwxrwxr-x 2 root webdev-bar 4096 2012-01-09 18:49 htdocs

foo.example.com:
total 4
drwxrwxr-x 2 root webdev-foo 4096 2012-01-09 18:49 htdocs

www.example.com:
total 4
drwxrwxr-x 2 root webdev 4096 2012-01-09 18:50 htdocs

What does that mean? What we’re seeing here is that in all cases, the permissions are set as `drwxrwxr-x`, which means:

1. It’s a directory
2. User permissions are `rwx` – Read/Write/eXecute
3. Group permissions are also `rwx`
4. Other (“world”) permissions are `r-x` – Read/eXecute

We can also see that each of the `htdocs` entries has `root` as its owner, and the respective group we set before as its group. If we’ve already got a super simple site in these – just an index and an image directory – and list inside of htdocs, we should see:
bar.example.com/htdocs:
total 4
drwxrwxr-x 2 root webdev-bar 4096 2012-01-09 19:01 img
-rw-rw-r-- 1 root webdev-bar 0 2012-01-09 18:59 index.html

foo.example.com/htdocs:
total 4
drwxrwxr-x 2 root webdev-foo 4096 2012-01-09 19:01 img
-rw-rw-r-- 1 root webdev-foo 0 2012-01-09 18:59 index.html

www.example.com/htdocs:
total 4
drwxrwxr-x 2 root webdev 4096 2012-01-09 19:01 img
-rw-rw-r-- 1 root webdev 0 2012-01-09 18:59 index.html

This tells us that the index and the `img` directory are both editable by the right groups as well. (Files are `-rw-rw-r--`, meaning user and group read/write, and world read-only.)

*To clarify: “execute”, when applied to a directory, means the ability to change into it or open it. Applied to a file, the execute-bit is a potential hazard if the file has any code in it, but that’s another story for another day.*
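A quick way to see the directory case in action, using a throwaway directory (run this as a regular user – root bypasses permission checks):

```shell
mkdir -p /tmp/xdemo/inner
echo hello > /tmp/xdemo/inner/file.txt

chmod a-x /tmp/xdemo/inner                        # drop execute on the directory
cat /tmp/xdemo/inner/file.txt || echo "denied"    # path lookup now fails

chmod a+x /tmp/xdemo/inner                        # restore it
cat /tmp/xdemo/inner/file.txt                     # readable again
```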

## More granularity: ACL

The approach detailed above is usually enough for most web situations, but if more control is required, we move into ACL territory (Access Control Lists). This is something that has to be made available at the filesystem level, and isn’t usually available on normal web hosts. As such, it’s a bit out of the scope of this post.

MacBook DVD swap-out

When I got my SSD installed in my MacBook, I swapped out the (defective) DVD for a caddy from Mac:Upgrades to house the original boot drive. This was completely unrecognised, but I didn’t have time to worry about it, so put the old boot disk in an external FW400 case and used the data from there. I assumed there must be something wrong with the ATA interface on my MacBook’s motherboard, which could explain the DVD not working.

Long story short, I popped open the MacBook and another almost identical one with working DVD and tried all the combinations of disks I could. The odd result of this is that the original boot disk was the only one that didn’t work in the caddy – every other drive I tried worked. Very odd. Anyway, I just put a different drive in the caddy and used its external case for my rebellious original boot disk. I can’t think of an explanation for that set of circumstances…

Unlocking T-Mobile Wireless Pointer (UK)


While I was in the UK, I bought a T-Mobile Wireless Pointer (which is quite a daft name for their Huawei E583C). The nicest thing about this, I think, is the display that actually lets you know what’s going on without deciphering blinking LEDs of various colours. You can see signal strength, 3G confirmation, how many devices are connected by Wi-Fi, how many SMSs are unread, connection status, battery level and which network you’re connected to, just as easily as on your phone (on a cute little OLED display, a bit like the outer screen on a clamshell).

That was another good purchase – I’m on a roll! Anyway, getting back to Barcelona, obviously I need to unlock it and make it work with my Vodafone unlimited data plan.

I ended up using DC Unlocker which seems to Just Work. (Although: Windows required… <sigh>)

Vodafone Spain’s access info is, as googled in various places, as follows:

- APN:
- User: vodafone
- Pass: vodafone

Solid State Stress Reliever

I’d read a few blog posts here and there about how swapping your old MacBook’s hard drive for an SSD made amazing improvements, so I decided to give that a go. My 2006 MacBook is definitely creaking, but I don’t work mobile that much, and I can usually use the iPad for whatever I need to really get done before I get back to HQ. (Which is to say, I haven’t quite found sufficient excuse to buy a new MacBook, Air, Pro, or otherwise…)

So, I got a Crucial M4 from Amazon. On swapping that in for the (already upgraded) hard drive the improvement was way past my expectations. Instant app startup is something you don’t want to lose once you have it! My iMac’s main (or usually, *only*) bottleneck is the hard drive. Now my BlackBook is starting apps seemingly faster than that, even though there’s a difference of four times the memory, and four times the cores (if you allow hyper-threading into the equation).

I need to have a word with my local Apple specialists about changing my iMac HD for an SSD…

Nice one, Crucial, and SSDs in general!

A web development workflow

[Update: added illustration of overview.]

I originally called this post “My web development workflow” but although it is *my* workflow, the idea of the article was as a suggestion for one possible methodology for the kind of development with which I’m usually involved. Others I’ve worked with, both in the past and on an ongoing basis, have found this method to be both flexible and fast. Once the concepts are taken on board, it’s also very easy to understand. Another key advantage is that it’s designed to allow work to be carried out from multiple workstations – I’m using the word “workstation” very loosely here, including mobile devices – because the working copy of the files is at a remote location. The number of times I’ve saved somebody else’s skin thanks to that…

The whole shebang depends on certain tools of course. Some of the core elements of the setup are ubiquitous and *de facto* standards, such as Git, `rsync`, `ssh`, public/private keys, and any flavour of Unix-like OS. That brings me to the less ubiquitous elements, although within the scope of web development, they are not that far from standard. These components include TextMate, from MacroMates, which also means that OS X is a required element. If you’re not based on OS X, you’ll need to replace TextMate with a suitable editing environment. If you’re using Windows, don’t. Life’s too short, really. I’ll come on to how TextMate integrates into the workflow later on.

Here’s a step-by-step rundown of the process to set this workflow up:

1. [create a central *bare* git repository](#centralrepo)
1. [create placeholder project file(s) in a work directory on the dev server](#placeholder)
1. [initialise the working dir as a git project, configure it, and push the placeholder files to the central repo](#gitinit)
1. [synchronise the remote work directory to a local directory](#rsync)
1. [open the local work directory in TextMate](#mate)
1. [configure Remote Project for TextMate with the remote working directory](#rp1)
1. [symlink the web root of the remote working dir to a web-visible location](#symlink)
1. [use Remote Project in TextMate to keep your local version in sync with your remote version-controlled copy](#rp2)
1. start hacking…

Seems like a lot of hassle? Well, it’s all done in seconds after the first time, and the benefits far outweigh the setup steps. Let’s go through each step in more detail.

Here’s an illustration of the overview:

A web development workflow 01

The “mobile device(s)” block can be any platform that can run `ssh` – your laptop, your tablet, your phone, somebody else’s gear, whatever. As long as you can run `ssh` and you know how to use a good text editor (make that `vim`!) you can edit your working copy from anywhere and commit/push your changes.

## Create a central *bare* git repository {#centralrepo}

This is an easy one to get us started. Let’s call our project “devflow” so we have a handle for it. Create a central directory – I use `/opt/git/devflow.git/` – then go in there and initialise it:

cd /opt/git/devflow.git/
git init --bare

Additionally, and predictably if you’ve ever done anything like this before, the permissions need to allow your development team read/write access to the repo. One option is to create a group – I always thought “gits” had a ring to it – and set up the perms:

chmod -R g+w /opt/git/devflow.git
chgrp -R gits /opt/git/devflow.git

Now any user in the gits group can write to that repo. If you have to add a user to a group it’s as simple as

adduser {username} {groupname}
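Incidentally, git can take care of some of this itself: the `--shared=group` flag at init time marks the repository group-writable (and sets the directories setgid) as objects are created, leaving only the `chgrp` to do by hand. A possible shortcut, if you’re creating the repo from scratch:

```shell
# Create the bare repo pre-configured for group sharing:
git init --bare --shared=group /opt/git/devflow.git
chgrp -R gits /opt/git/devflow.git
```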

## Create placeholder project files {#placeholder}

Also simple, I usually use `~/jobs/devflow` as the working directory on the dev server. Sometimes to get the ball rolling I just create a single `readme.markdown` file.

## Initialise your working copy as a git repo {#gitinit}

Inside your working copy on the dev server, initialise a new git repository:

git init

If you haven’t configured git yet, start by telling it a little about yourself – a name and an email address:

git config --global user.name "A. N. Other"
git config --global user.email another@example.com

Once you’ve done that, you need to tell the local git repo where the central repo is – the “origin”:

git remote add origin another@devserver:/opt/git/devflow.git

Apart from the URL of the remote origin, we need to tell git what branch to use and where to find it:

git config --add branch.master.remote origin
git config --add branch.master.merge refs/heads/master
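The rundown above also mentions pushing the placeholder files to the central repo, which the commands so far haven’t actually done. That first push looks like this (assuming the `readme.markdown` placeholder from earlier):

```shell
# Stage and commit the placeholder, then push it to the origin:
git add readme.markdown
git commit -m "Initial placeholder"
git push origin master
```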

If, at this point, you have a look at the config with `cat`, you’ll see the results of the configuration we’ve just done:

cat .git/config

[remote "origin"]
	url = another@devserver:/opt/git/devflow.git
	fetch = +refs/heads/*:refs/remotes/origin/*
[branch "master"]
	remote = origin
	merge = refs/heads/master

We can see that the `git config` commands have written the items to the configuration file. It’s possible to edit this file directly as well – whatever you prefer. Another way to view the state of the configuration is to use git’s own config command:

git config -l

This lists the configuration in a compact version of the config file:

remote.origin.url=another@devserver:/opt/git/devflow.git
remote.origin.fetch=+refs/heads/*:refs/remotes/origin/*
branch.master.remote=origin
branch.master.merge=refs/heads/master
## Synchronise local with remote {#rsync}

In your terminal – I use iTerm, but OS X Terminal is fine – change to a working directory. Once again, for this I use `~/Jobs/DevFlow` or similar. Once you `cd` into there, just use `rsync`:

rsync -Chavuz another@devserver:~/jobs/devflow .

which will bring your project, such as it is, to your local machine. The `-C` switch tells rsync to ignore version-control directories – we don’t want any git files ending up locally, as that would likely create issues for us. The other switches are to get human-readable feedback, use archive mode, be verbose, look for updates, and compress the files *en route*.

Note: this assumes you’ve set up key-based authentication on the dev server. If you haven’t, step into my office…

### Setting up PPK

On your local machine – let’s just drop the pretence and call it “Mac” from now on! – run this command:

ssh-keygen
Then answer with Enter (i.e. blank) to all the questions. This will create a `.ssh` directory in your home. The key files (see what I did there?) in there are:

id_rsa
id_rsa.pub

Which, as you can guess, are the private key and the public key respectively. The public key you can share around however you like – the worst thing that can happen is that someone will give you a login on their machine. In our case, we’re interested in the contents of `id_rsa.pub`. The contents of that file will be added to another file, on the remote dev server. On the dev server, the simplest thing is to do the same thing: `ssh-keygen`. (You don’t need to create a key on the server, but a side-effect of doing this is that the `~/.ssh` directory is created for you in the right place with the right permissions.) Once you have the `~/.ssh/` dir, you can add a file:

touch ~/.ssh/authorized_keys
chmod 600 ~/.ssh/authorized_keys
cat {copy of your local id_rsa.pub} >> ~/.ssh/authorized_keys
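Those three steps can also be collapsed into a one-liner run from the Mac, piping the public key over `ssh` (it prompts for your password one last time; after that, the key takes over). Many systems also ship an `ssh-copy-id` script that does the same job:

```shell
# Append the local public key to the server's authorized_keys in one go:
cat ~/.ssh/id_rsa.pub | ssh another@devserver \
  'mkdir -p ~/.ssh && chmod 700 ~/.ssh && cat >> ~/.ssh/authorized_keys'
```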

What does a public key look like? Something like this (the long base64 blob is shortened here for display):

doqeJlBoUrLGNiT+A/gC4kN5haj65pJDvhOU4J0ctD6b dom@macbookhair

So once the version of you that lives on the dev server has your public key, it can check it against its list of `authorized_keys` to see whether to let you in.

You should never send the private key to anyone, over any transport. The pay-off of all this is that you don’t have to type a password – which is most important for scripted access, like the type that Remote Project is doing for you behind the scenes in TextMate.

## Open your project in TextMate {#mate}

> I would recommend installing the ProjectPlus plug-in for TextMate, which brings us some nice additional project-handling features.

Once you install the `mate` shell command, this is just a question of

cd ~/Jobs/DevFlow/
mate devflow

This will open the whole directory as a project in TextMate.

## Set up Remote Project {#rp1}

The TextMate bundle Remote Project is really a wrapper for `rsync`, allowing you to get from remote, put to remote and compare local and remote copies of your project.

[This section may have to change because of Remote Project’s seeming lack of availability.]

The configuration of Remote Project is handled by an environment variable: `TM_REMOTE_PROJECT`. This is set at the project level. Configure it to the location of the remote working copy: `another@devserver:~/jobs/devflow`.


This is a good point to save the TM project (⌃⌘S). I usually save it in the same directory as the root dir of the working copy. In this case, that would be `~/Jobs/DevFlow/`.

Remote Project, as I have it configured, ignores the `.git` directory on your remote working copy, so you never get into a muddle with two competing copies of that. It also means that wherever you are you can work with the repo as long as you have access to ssh.

## Symlink to a web-visible location {#symlink}

Back on the dev server now, we create a virtual host for our project. This is normally a subdomain, possibly a sub-subdomain – something like `devflow.dev.example.com`, to use a stand-in name. This means you can configure the DNS with a wildcard record, so when a client gives you something to work on urgently out of the blue, it’s ready.

If you have your vhosts set up to live somewhere like `/var/www/` you can quickly symlink from there to `~/jobs/devflow` and you can see your changes as soon as Remote Project uploads them.

cd /var/www/
ln -s /home/another/jobs/devflow

An example snippet from an Apache Virtual Hosts configuration might be:

DocumentRoot /var/www/devflow
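Fleshing that out a little, a minimal virtual host definition might look something like this – the server name is a stand-in for whatever your wildcard DNS resolves, and `FollowSymLinks` matters because the document root is our symlink:

```apache
<VirtualHost *:80>
    ServerName devflow.dev.example.com
    DocumentRoot /var/www/devflow

    <Directory /var/www/devflow>
        Options FollowSymLinks
        AllowOverride All
    </Directory>
</VirtualHost>
```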

## Start syncing local and remote copies of the project {#rp2}

The Remote Project bundle has three main functions:

- Upload Project Changes
- Get Remote Project
- Compare to Remote Project

They do what they sound like they will do. Inside the bundle, the commands are fairly normal `rsync` statements. The first thing to try is the compare function – right now, there should be no difference between the local and remote. Edit the `readme.markdown` we created, then run the comparison again. Now we should see that the local copy is newer.


The fast way to use Remote Project is with its default shortcut: ⌃⌘P then 1 for upload, 2 for download and 3 for compare.


The confirmation of the sync is just a small tooltip next to the text cursor in the editor.


### My customised Remote Project commands:

#### Upload (excerpt)

rsync -auCz --exclude 'phpdoc' --exclude '.git' --exclude "cache"
--exclude "logs" --exclude ".DS_Store" --exclude "Thumbs.db"
--exclude ".*.swp" --exclude "stats" --include ".gitignore"
--include ".htaccess" "$TM_PROJECT_DIRECTORY/" "$ESCAPED_REMOTE"

#### Download (excerpt)

rsync --delete -auCz --exclude '.git' --exclude "cache"
--exclude "logs" --exclude ".DS_Store" --exclude "Thumbs.db"
--exclude ".*.swp" --exclude "stats" --include ".gitignore"
--include ".htaccess" "$ESCAPED_REMOTE/" "$TM_PROJECT_DIRECTORY"

Obviously, in the bundle these commands are one-liners.

You’ll notice something important in the Download excerpt: that `rsync` has its `–delete` option set. This will remove any files from your local directory that aren’t present in the remote. That keeps things tidy, but it can also remove new files you’ve just created locally that you haven’t yet uploaded, so beware of that.
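One habit that takes the sting out of `--delete` (my own suggestion, not something the bundle does): run the same `rsync` with `-n`/`--dry-run` first, which reports what would be transferred or deleted without touching anything. A sketch, using the same paths as the earlier sync:

```shell
# Preview a download sync: -n makes rsync report instead of act, and
# -v lists each file, including lines like "deleting somefile".
rsync -nv --delete -auCz another@devserver:~/jobs/devflow/ ~/Jobs/DevFlow/devflow/
```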

Other elements to those command lines are:

- we don’t upload “phpdoc”, because it’s generated automatically on the remote
- we exclude some standard things that aren’t part of the project: cache, logs, OS thumbnail/preview files (`.DS_Store`, `Thumbs.db`), stats, `vi` swap files
- we make sure `.gitignore` and `.htaccess` are included – they’re part of the project, but sometimes default `rsync` configuration has them excluded
- the `-C` switch to `rsync` is to ignore source-control directories (“C” for “CVS”) and in current versions that includes git, but not all versions of `rsync` know about git, so we explicitly `--exclude` it as well.

## Conclusion

Now that you’ve got to this point, you can work locally, with the advantages that brings – being able to view files in Finder, save directly and so on – and handle the versioning remotely in a shell, as it should be! And while you’re working, everything is visible at a development URL that behaves as closely as possible to the production site, because it sits at the root of an Apache Virtual Host, with as much of the configuration as possible identical to the project’s final hosting destination.