How to ignore files from git commit and push

git

Sometimes, you will want to ignore files from your git commits and pushes. For example, I had a file that stored some database information on my local machine/staging server, but the same database information doesn’t apply to my production environment.

git supports a file called .gitignore that lets you exclude files from commits and pushes. Place a .gitignore in the top level of your working directory (it can also live in subdirectories), and list, one per line, the filenames or patterns that you don’t want tracked.
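As a quick sketch (using a throwaway repository and a hypothetical database.yml standing in for my database config file), the effect looks like this in a terminal:

```shell
# Throwaway repository; the path and filename are just examples
mkdir -p /tmp/gitignore-demo && cd /tmp/gitignore-demo
git init -q

# One pattern per line in .gitignore
echo "database.yml" > .gitignore
touch database.yml

# database.yml no longer appears as an untracked file
git status --porcelain
```

Note that .gitignore only affects untracked files: anything already committed stays tracked until you remove it with git rm --cached.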

There is also another file, located at .git/info/exclude, where you can list files to ignore without sharing that list with anyone else who clones the repository. Mac OS X creates a hidden file called .DS_Store in every directory it touches, so the exclude file below is an example that you can use:

# git ls-files --others --exclude-from=.git/info/exclude
# Lines that start with '#' are comments.
# For a project mostly in C, the following would be a good set of
# exclude patterns (uncomment them if you want to use them):
# *.[oa]
# *~
.DS_Store

Grabbing the ID of a Twitter user

json, twitter

Back in the old days, it was pretty easy to grab the unique identifier of a Twitter account. These days, Twitter have become a bit more tricky and protective of such information.

There are, however, loopholes that will help you grab the unique user ID of a Twitter user and all you need is a browser 😉

For the user whose Twitter ID you want to retrieve, visit the page below, substituting their username:

http://api.twitter.com/users/show/some_twitter_username_here.json

The JSON will load in your browser. From there, you should very easily be able to find the id field that contains the Twitter user ID. Enjoy!
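If you would rather stay on the command line, the same lookup can be scripted with curl and grep. The extract_id function below is my own illustration, and the endpoint is the old v1 API shown above, so it may no longer respond; the sample JSON simply demonstrates the shape of the response:

```shell
# Pull the numeric "id" field out of a users/show JSON response
extract_id() { grep -o '"id":[0-9]*' | head -n 1; }

# Demonstrated on a sample response of the same shape:
echo '{"name":"Demo","id":12345,"screen_name":"demo"}' | extract_id
# Prints: "id":12345

# Against the live endpoint you would pipe curl into it instead:
# curl -s "http://api.twitter.com/users/show/some_twitter_username_here.json" | extract_id
```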

Discovering Blog Ping Services

curl, php, SEO

When driving traffic to a website, the continuous battle is the constant chopping and changing of your pages and content in pursuit of the perfect SEO formula. I have always been interested in how WordPress and Blogspot pages were able to receive so much traffic, so quickly, without any kind of special optimization of the page. Sure enough, Google has built a smart algorithm that indexes and extracts content from these blogs, but I couldn’t work out how search engines “knew” that new content had been published on a blog.

I found out about “blog ping services” today, and spent some time reading up on them. Basically, a blog ping service will notify search engines and aggregators when new content has been published on your page. Very smart! On top of that, the content is indexed very quickly and in many cases will filter through into blog aggregators such as Google Blogs and Yahoo! blogs.

So, in PHP, how did I achieve this task? I started out with Pingomatic. If you visit Pingomatic’s homepage, you will see the different search engines that this blog ping service delivers notifications to.

The next step was to integrate this into my code. Firstly, build your pingomatic URL, and keep it safe for later. Generally, the Pingomatic page will tell you to bookmark the page so that whenever you update your blog, you can visit that URL and Pingomatic’s harvester will deliver a message to all of the search engines.

Of course, the most important job of a web developer is to strive for optimization and automation. We’re letting Pingomatic handle the optimization for us, so now let’s focus on the automation. The example below is a simple way to do it in PHP:

$webservice = "http://pingomatic.com/ping/?title=yourblog&blogurl=http%3A%2F%2Fwww.yourblog.com&rssurl=http%3A%2F%2Fwww.yourblog.com%2Ffootball_news.xml&chk_weblogscom=on&chk_blogs=on&chk_feedburner=on&chk_newsgator=on&chk_myyahoo=on&chk_pubsubcom=on&chk_blogdigger=on&chk_weblogalot=on&chk_newsisfree=on&chk_topicexchange=on&chk_google=on&chk_tailrank=on&chk_postrank=on&chk_skygrid=on&chk_collecta=on&chk_superfeedr=on&chk_audioweblogs=on&chk_rubhub=on&chk_a2b=on&chk_blogshares=on";

// All of the ping parameters are carried in the query string, so a simple GET is enough
$ch = curl_init();
curl_setopt($ch, CURLOPT_URL, $webservice);
curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
curl_setopt($ch, CURLOPT_USERAGENT, 'Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.9.1) Gecko/20090624 Firefox/3.5');
$result = curl_exec($ch);
curl_close($ch);

Quite simply, if you embed this code into your PHP code, a curl request will be sent to Pingomatic notifying their harvester that there is new content on your page. Done – enjoy the new traffic!

Runkeeper: software that really changes lives

exercise, runkeeper

Just under a year ago, I kept hearing the word “Runkeeper” bandied around the office. I was a little intrigued, but admittedly I had absolutely no idea what on earth this tool did.

I’d completely fallen out of the loop; I was an iPhone-equipped techie (the only one in the office), yet I had no idea what on earth this application was all about. The shame of it.

A colleague later explained that the application tracks and catalogues your daily exercise, presenting your activities within a Facebook-like interface where you can “Like” and comment on your buddies’ jogging and other activities.

Since university, I’ll admit, exercise really hadn’t been that important to me. I’d played a fair bit of football throughout my early years – not at a particularly high level, but enough to keep fit – but since enrolling at college my most strenuous activity had been lifting a pint glass, or moving from my bed to the computer desk to settle down into another session of Championship Manager.

Last year, though, that changed and I decided to give Runkeeper a go. Very quickly I became addicted to the application, celebrating my “Street Teammates’” activities with a click of the Like button or a cheeky comment in praise of their efforts.

Then, within a few months, I noticed my runs were getting longer and quicker, all thanks to the statistics charts and timekeeping tool within the app. I was hooked. This tool had changed my daily life: no longer would I go home and watch TV; this was how I would choose to spend my Monday and Tuesday evenings.

I was then able to cheer on as my work colleague completed the Tokyo Marathon, and even watch his progress live 300 miles away via Runkeeper. Unbelievable.

In my workplace of around only ten people, five are regular Runkeeper users. I suspect that the developers of this tool will enjoy huge success over the coming years as people take on a new healthier way of life.

This is a great example of how software changes lives for the better. Thank you very much, Runkeeper!

log.io

monitoring

I’ve been playing around with log.io this week. I really like this tool and I think that log.io demonstrates how exciting and useful node.js is. I’ve been looking at nowjs as well, but it’s a little too early for me to comment on that for the time being.

One thing I liked about log.io: it’s really easy to set up. Like, really easy. You’ll need node.js and npm (the Node package manager) installed first, but once they’re in place, it’s a matter of unzipping and a minuscule amount of configuration, et voilà – you have a very good-looking logging tool.

Basically, you set up node.js on all of your nodes (the harvesters) as well, with log.io on top. These then broadcast messages to the server variant of log.io, which renders them in a very attractive front end.
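For reference, the setup boiled down to a couple of commands per machine. Treat these as a sketch: the package and executable names are the ones from the version I installed and may differ in newer releases, so check the project README.

```shell
# On the server machine: install log.io via npm and start the server
npm install -g log.io
log.io-server

# On each harvester node: same install, then start the harvester,
# which reads a small config file listing the logfiles to watch
# and the address of the server
npm install -g log.io
log.io-harvester
```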

The front end itself is cool too, handling regexes and up to four windows simultaneously within the web browser.

I’ve installed this for our company’s development team and I’m pretty sure that many others will find the tool equally useful.

The only downside that I’ve noticed is that log.io cannot monitor a log referenced by a symlink. That basically means you’ll need a separate script that continuously watches for logfile filename changes, and you’ll have to automate bouncing log.io so that it comes back up afterwards. Overall, though, I’m happy!

Last night I discovered Hostgator shared servers enable…

git

Last night, I discovered Hostgator shared servers enable git services.

Then I stumbled across this page: http://toroid.org/ams/git-website-howto

This page explains exactly how to keep a git repo on your local machine and, with one simple git push, put the full set of changes live on your webserver.

From there, the page shows you how to create a fresh repository and how to mirror it to the server.

Using “cat > hooks/post-receive”, you can very quickly specify where your document root is on the webserver, thanks to “GIT_WORK_TREE”.
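Putting those pieces together, here is a minimal end-to-end sketch of the same workflow, run entirely locally with throwaway /tmp paths standing in for the real webserver:

```shell
# "Server" side: a bare repository plus a post-receive hook
mkdir -p /tmp/website.git /tmp/www
cd /tmp/website.git
git init -q --bare
git symbolic-ref HEAD refs/heads/master   # ensure HEAD matches the branch we push below

cat > hooks/post-receive <<'EOF'
#!/bin/sh
GIT_WORK_TREE=/tmp/www git checkout -f
EOF
chmod +x hooks/post-receive

# "Local" side: commit a page and push it to the bare repository
mkdir -p /tmp/site && cd /tmp/site
git init -q
echo "hello" > index.html
git add index.html
git -c user.name=demo -c user.email=demo@example.com commit -q -m "first commit"
git remote add web /tmp/website.git
git push -q web HEAD:master

# The hook has checked the pushed commit out into /tmp/www
cat /tmp/www/index.html
```

On a real host, the remote would be an ssh URL (something like ssh://user@yourhost/home/user/website.git – the exact path is up to you), and GIT_WORK_TREE would point at your actual document root.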

With all of this complete, I can simply run “git push web” and my local version of my webpage is updated on the server. Very nice!