Deploy a git repository on remote CentOS Linux web host

I recently came across the need to push a git repository directly to a web server and have the repository’s changes automatically applied to the web site’s code in the server’s web root.

There are several ways of doing this. This method is not necessarily the most seamless, but it requires less pre-configuration to get up and running than the others.

I found a few tutorials and put together my own here. For this purpose I’m using a CentOS 6 Linux remote host, accessed over SSH using PuTTY on a Windows desktop. The instructions assume that your remote server’s web root is empty or that you have backed it up appropriately before proceeding. I’m also assuming you have a repo somewhere else that you can push to the web server remotely.

  1. Connect to your web server using your ssh client of choice. Once connected you should be prompted to log into your server with credentials.
  2. Verify that git is installed on the remote host. If it isn’t, or you’re not sure, you can install it using:
    yum install git
  3. Next you will need to create a bare local git repository. I’m making mine under the current user’s home directory; you can put it wherever you like, but this seems as good a place as any to me:
    mkdir repo.git && cd repo.git
    git init --bare
  4. Now that we have a bare git repo, navigate into it and set up the hook that will link it to our web root directory (or other web-facing subdirectory):
    cd hooks
    cat > post-receive
  5. This should give you a blank line where you can type in the hook’s contents. Enter the following, substituting your own paths:
    git --work-tree=/path/to/web/directory --git-dir=/path/to/repo.git checkout -f
  6. Hit CTRL+D to save what you just entered. Now git will check out your repo to the web directory you specified whenever you push changes to it.
  7. Now give the hook you just created the ability to execute:
    chmod +x post-receive
  8. Now you have a working git repo on your host that you can push to. I have added this repo as a remote in my local Windows msysgit GUI; you can do the same thing using whichever method you use to interact with git.
  9. Now I can simply make changes locally, commit them in my local environment, then push to the remote web server, which will check out the repo to the web directory specified in step 5 above.
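The steps above can be sketched end to end as a self-contained demo. This uses throwaway directories created with mktemp so it can run anywhere git is installed; in practice you would substitute your real web root and repo path:

```shell
#!/bin/sh
# End-to-end sketch of the deploy setup above, using throwaway directories.
set -e
DEMO=$(mktemp -d)
WEBROOT="$DEMO/webroot"
mkdir -p "$WEBROOT"

# Steps 3-4: a bare repository with a post-receive hook
git init --bare "$DEMO/repo.git"
cat > "$DEMO/repo.git/hooks/post-receive" <<EOF
#!/bin/sh
git --work-tree=$WEBROOT --git-dir=$DEMO/repo.git checkout -f
EOF
# Step 7: make the hook executable
chmod +x "$DEMO/repo.git/hooks/post-receive"

# Steps 8-9: a local working copy that pushes to the bare repo
git init "$DEMO/work"
cd "$DEMO/work"
echo '<h1>Hello</h1>' > index.html
git add index.html
git -c user.email=demo@example.com -c user.name=Demo commit -m "first deploy"
git push "$DEMO/repo.git" HEAD

# The hook has now checked index.html out into the web root
ls "$WEBROOT"
```

On a real server the bare repo would live somewhere like ~/repo.git and the work tree would be your web root; the local clone would add the server as a remote over SSH instead of pushing to a filesystem path.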



Add listing approval messages to Directory Press

I recently did a project that required Directory Press for a vendor directory. The Directory Press system sends out emails for various site events but was lacking in one area. The site required that all submitted listings be approved and published by an administrator. This also required an email to be sent to the post author, which is something that Directory Press did not facilitate.

In order to send the post author a message when their directory listing was published, I added this block to functions.php in the theme:

function post_published_notification( $new_status, $old_status, $post ) {
    // Only act when the post is transitioning INTO the published state
    if ( $new_status == 'publish' && $old_status != 'publish' ) {
        $author = $post->post_author; /* Post author ID. */

        $name      = get_the_author_meta( 'display_name', $author );
        $email     = get_the_author_meta( 'user_email', $author );
        $title     = $post->post_title;
        $permalink = get_permalink( $post );

        $to[]    = sprintf( '%s <%s>', $name, $email );
        $subject = sprintf( 'Your Post %s has been published', $title );
        $message = sprintf( 'Congratulations, %s! Your post “%s” has been published. Please return to verify that your listing appears as you\'d like and make adjustments if necessary.' . "\n\n", $name, $title );
        $message .= sprintf( 'View and edit if necessary: %s', $permalink );

        $headers = array();
        // Add information here if you want to change the send-from address:
        // $headers[] = 'From: Site Admin <>';

        wp_mail( $to, $subject, $message, $headers );
    }
}
add_action( 'transition_post_status', 'post_published_notification', 10, 3 );

Import CSV with PHP to update MySQL data

I have written a very simple script to perform a useful function for me. I figured I’d share it so that others can make use of it.

If you have a CSV file whose first row contains headers and whose subsequent rows contain the data you wish to update, along with an ID column, then this will work for you without any issues. If your primary key is different, or you wish to match on other criteria, you may need to adjust the script a bit. This does not handle uploads; it only uses a CSV file residing in the same directory as the script.
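A minimal sketch of such a script might look like this. The data.csv filename, the my_table table name, the id column, and the connection credentials are all placeholder assumptions to adjust for your own setup:

```php
<?php
// Build an UPDATE statement for one CSV row. $headers is the CSV's first
// row; the column named $idCol goes in the WHERE clause. Values are escaped
// via the supplied callback so the function stays connection-agnostic.
function build_update_sql( $table, $headers, $row, $idCol, $escape ) {
    $sets  = array();
    $where = '';
    foreach ( $headers as $i => $col ) {
        $value = "'" . $escape( $row[ $i ] ) . "'";
        if ( $col == $idCol ) {
            $where = "`$col` = $value";
        } else {
            $sets[] = "`$col` = $value";
        }
    }
    return "UPDATE `$table` SET " . implode( ', ', $sets ) . " WHERE $where";
}

// Read data.csv (if present) and run one UPDATE per row.
if ( file_exists( 'data.csv' ) ) {
    $db     = new mysqli( 'localhost', 'user', 'password', 'database' );
    $escape = function ( $v ) use ( $db ) { return $db->real_escape_string( $v ); };

    $fh      = fopen( 'data.csv', 'r' );
    $headers = fgetcsv( $fh );
    while ( ( $row = fgetcsv( $fh ) ) !== false ) {
        $db->query( build_update_sql( 'my_table', $headers, $row, 'id', $escape ) );
    }
    fclose( $fh );
}
```

For production use you would want prepared statements rather than string-built SQL, but for a one-off data fix against a trusted CSV this keeps things simple.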

DirectoryPress Broken CSV export

If you use DirectoryPress 7.0.9 you may have noticed that the CSV export function isn’t working properly. This is going to be a quick “edit this here” fix.

Edit the admin-save.php file in your /wp-content/themes/directorypress/admin directory.

Go to line 1105 or thereabouts. It should contain:

$dat = array_merge($dat, $FF);

Change that to:

$dat = $dat + $FF;

Save it and upload it to your server. It should now be able to export properly.
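Why this one-line change matters: array_merge() lets values from the second array overwrite matching string keys, while the + operator keeps the first array’s values and only fills in keys it doesn’t already have. A quick illustration (the data here is made up, not DirectoryPress’s actual arrays):

```php
<?php
// Two arrays sharing the 'title' key (illustrative data only).
$dat = array( 'id' => 1, 'title' => 'Old' );
$FF  = array( 'title' => 'New', 'price' => 5 );

// array_merge(): the second array's value wins for shared keys.
var_export( array_merge( $dat, $FF ) );
echo "\n";

// + operator: the first array's value wins; $FF only fills missing keys.
var_export( $dat + $FF );
echo "\n";
```

So switching to + preserves the values already built up in $dat instead of letting $FF clobber them, which is presumably what the export code needed.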

MySQL zerofill and lpad: whipping your digits into shape

I’m working with a large dataset. One of the columns is defined as INT(8), but not all of its values are 8 digits long. I need to run a query that sums values grouped by the first two digits of that INT(8) column. That presents a problem: grouping on the leading digits only works if every value is treated as a full 8-digit number. How do you turn the smaller values into 8-digit numbers?

Simple. There are two ways.

  1. Alter the table itself using the ZEROFILL in MySQL. This will add 0s to the left of your values up to the max number of digits defined for that column:
    • ALTER TABLE [table name] MODIFY COLUMN [column name] INT(x) ZEROFILL UNSIGNED;
    • Where table name = your table name, column name = the desired column to pad, and x is the number of digits allowed for that column
  2. Pad the values using lpad() in your query itself, without altering your table:
    • SELECT LPAD([column name], x, '0') FROM [table name];
    • Where table name = your table name, column name = the desired column to pad, and x = the number of digits to allow.
    • For example, with x = 4 a stored value of 332 comes back as 0332 and a value of 9 comes back as 0009; with x = 8 they would be 00000332 and 00000009.

Hopefully this will help someone facing the same problem I was. It’s also good to know that there are multiple ways of doing things.
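Going back to the original problem, the lpad() form can be used inline to group sums by the first two digits of the padded value. The amounts, code, and value names here are hypothetical stand-ins for your own schema:

```sql
-- Sum `value` grouped by the first two digits of the zero-padded code.
SELECT LEFT(LPAD(code, 8, '0'), 2) AS code_prefix,
       SUM(value) AS total
FROM amounts
GROUP BY code_prefix;
```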

Installing memcache on PHP with CentOS 6

I was following the instructions here to install memcache on a CentOS 6 server that I’m currently configuring. I was able to install the base memcache rpm, but had trouble when installing the PECL extension for PHP.

The first error I got was about not being able to run phpize, which meant the PHP development tools weren’t installed.

So, I ran the following command to install the php development package:

yum install php-devel

Then I re-ran the PECL memcached install and came across this message:

checking for the location of zlib... configure: error: memcache support requires ZLIB. Use --with-zlib-dir=&lt;DIR&gt;

And finally found a solution for this by running this command to install the zlib-devel package:

yum install zlib-devel

Now you should be able to successfully install and add the memcache extension to your php.ini. Follow the instructions linked above for more information.
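For reference, enabling the extension typically comes down to one line in php.ini (or a drop-in file such as /etc/php.d/memcache.ini on CentOS; the exact path varies by setup):

```ini
; Load the memcache PECL extension
extension=memcache.so
```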

Parse error: syntax error, unexpected $end in /app/views/layouts/default.ctp – CakePHP

I just encountered an unexpected error while creating a portable development server on a USB stick using the latest version of XAMPP portable. This particular application runs on the CakePHP framework and was copied directly from another functioning Windows based machine.

After scratching my head a few moments I realized that the default configuration for XAMPP’s php.ini may be different. Sure enough: short open tags were disabled, so any template that opens PHP with the short <? tag is treated as plain text and the parser runs off the end of the file, producing the “unexpected $end” error.

The fix is easy. Go into the php.ini for your PHP installation and change the following from ‘Off’ to ‘On’:

short_open_tag = On

Then simply restart Apache to apply the changes.

Tuning MySQL for performance with Ubuntu 11.04 and MySQLTuner

One of my current projects requires extremely heavy use of very large data sets housed in several MySQL databases and spread across several servers. This data feeds various web apps and business processes and fulfills many requests. It needs to be highly available at all times and return queries in the shortest amount of time possible. Several database tables have upwards of 15 million records, and in some instances these need to be joined to other tables, used in calculations, etc.

I need results now, not 48 hours from now. Performance is essential but hard to come by with legacy servers made out of repurposed machines. In order to see where the bottlenecks are and try to make the best of the situation I decided to do some searching. We’re running Ubuntu 11.04 servers for this purpose and I needed something that would give me a run down of what’s causing performance issues on each server.

That’s when I found mysqltuner. This handy little guy is a Perl script that looks at your my.cnf and other MySQL installation data and makes recommendations about how to improve performance based on your past usage.

It’s easy to install in Ubuntu server 11.04:

sudo apt-get install mysqltuner

To run it:

sudo mysqltuner

Then simply enter your administrative user and password and you’ll get a nice printout of statistics and tuning recommendations.


This will give you a good set of recommendations about which settings to tweak. From here you can make tweaks, restart MySQL and run the script again to see where you stand.

Hope this will help someone in the same situation!

PHP Fatal error: Call to undefined function curl_init()

I recently set up a development server in a VirtualBox VM running Ubuntu server 11.04. My plan was to move a development database and website to this VM and migrate away from a local XAMPP installation on a Windows box. The only problem is that Apache and PHP were not configured identically on the two systems, which resulted in some of my scripts not working correctly. More specifically, I was getting the “PHP Fatal error: Call to undefined function curl_init()” error on my Ubuntu VM.

This likely meant that curl was not installed or enabled on Ubuntu, so here’s what I did:

  1. Run this command from a terminal:
    • sudo apt-get install curl libcurl3 libcurl3-dev php5-curl
  2. Make sure curl is enabled in your php.ini. This file is usually at “/etc/php5/apache2/php.ini”
    • In the section for dynamic extensions, add (or uncomment): extension=curl.so
  3. Restart Apache:
    • sudo /etc/init.d/apache2 restart

Missing orders in Virtue Mart admin and how to fix it

I just realized that none of the recent orders in one of the Virtue Mart installations I manage were showing up; the most recent one listed was from three months ago. But when I searched for a more recent order I got the correct results. So the orders are in the database, but they weren’t being listed on the main order list. This will go over what I found in this install of Virtue Mart 1.1.2. Other versions may not suffer from this phenomenon.


After a little digging I found that the order.order_list.php file in the /administrator/components/com_virtuemart/html folder had what I needed to start my investigation. Starting at line 26 there is a bundle of SQL that joins the jos_vm_orders table to the jos_vm_order_user_info table. Since this is an administrator-only issue, I just echoed the $list variable at line 46 and reloaded the order list in the admin.

I then copied the query that was printed at the top of the page, replaced the #__{vm} with my installation’s table prefix of jos_vm and pasted the whole query in my MySQL tool of choice to query my database. Well, turns out the query isn’t returning an up to date result set. I know that the data for orders is being saved to jos_vm_orders because I can look at that table and everything is up to date. So next I move on to the table the query joins it with: jos_vm_order_user_info.

Aha! It looks like jos_vm_order_user_info wasn’t up to date at all. Weird.

According to this forum post it’s because custom fields had been added to the jos_vm_user_info database table, so its columns no longer matched those of the jos_vm_order_user_info table. The solution was to see which columns didn’t match up and alter the jos_vm_order_user_info table to add the missing columns. It turns out I had added a single custom field and it never got added to the order user info table; after adding that column there, everything worked perfectly.

So, if you added custom fields to your user registration you should make sure your jos_vm_user_info table matches up those custom fields with the columns in jos_vm_order_user_info.
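To find the mismatched columns without eyeballing the two tables side by side, a query against information_schema can do the comparison. Replace your_db with your actual database name:

```sql
-- Columns present in jos_vm_user_info but missing from jos_vm_order_user_info.
SELECT column_name
FROM information_schema.columns
WHERE table_schema = 'your_db'
  AND table_name = 'jos_vm_user_info'
  AND column_name NOT IN (
      SELECT column_name
      FROM information_schema.columns
      WHERE table_schema = 'your_db'
        AND table_name = 'jos_vm_order_user_info'
  );
```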