
Monitoring & Understanding Server Load

I have a site whose performance could stand some improvement. I want to be sure that any changes I make actually have an impact, so I'm starting by setting up some monitoring tools. I found a very nice, free monitoring application called Load Average. After getting it all set up, I did a bit of research so I can fully understand and make the most of the numbers I see in there.

CPU LOAD

First off, I needed to get a handle on how CPU Load is measured. If you run “cat /proc/loadavg” on a Linux server you will get a string similar to this:

0.02 0.03 0.00 1/437 21084

The first three numbers are the average CPU load over the past 1, 5, and 15 minutes.

The lower the numbers the better; as a rule of thumb, the load shouldn't exceed the number of cores you have. With one core it would be bad if it went over 1, but with 8 cores a value of 1 wouldn't be a problem, while 8.5 would. To find out how many cores you have, run "nproc."
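Putting those together, here's a quick shell sketch (nothing site-specific here) that prints the 1-minute load next to your core count:

# compare the 1-minute load average to the number of cores
cores=$(nproc)
load=$(cut -d' ' -f1 /proc/loadavg)
echo "1-minute load: $load across $cores core(s)"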

MEMORY USAGE

Check how much memory you’re using by running “free -m.” You’ll get a readout like this:

             total       used       free     shared    buffers     cached
Mem:         11909      10785       1124          1        234       9372
-/+ buffers/cache:       1178      10731
Swap:            0          0          0

The number to watch out for here is the used figure on the "-/+ buffers/cache" line, here "1178." That's the memory actually being used by the applications currently running on your server. It should stay below the total memory plus swap, which here is "11909."
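If you want that figure in a script, you can pull it straight out of free's output; this assumes the older free shown above, which still prints the "-/+ buffers/cache" row:

# grab the applications' real memory usage from the -/+ buffers/cache row
free -m | awk '/buffers\/cache/ {print $3 " MB used by applications"}'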

 

References:

http://stackoverflow.com/questions/11987495/linux-proc-loadavg
http://blog.scoutapp.com/articles/2009/07/31/understanding-load-averages
http://serverfault.com/questions/67759/how-to-understand-the-memory-usage-and-load-average-in-linux-server
http://www.cyberciti.biz/faq/linux-get-number-of-cpus-core-command/

HTTPS subdomains

This is pretty much just ripped from this article.  But it took me forever to find a solution, so I’m writing this down!

It is possible to secure a sub-domain without a separate IP or SSL certificate, but only if your certificate is a wildcard certificate! Just Host offers these, but what it doesn't offer is decent support on the matter 😛

When you make a subdomain, say "sub.example.com", going to http://sub.example.com will work just fine. But going to https://sub.example.com will load the contents of your default directory (a.k.a. public_html). So, to get HTTPS serving the right directory you have to force it with .htaccess, like so:

RewriteEngine On
# only touch requests that arrive over HTTPS (port 443)
RewriteCond %{SERVER_PORT} ^443$
# ...and only for the subdomain
RewriteCond %{HTTP_HOST} ^sub\.example\.com$ [NC]
# don't rewrite paths already inside the sub-folder (avoids a rewrite loop)
RewriteCond %{REQUEST_URI} !^/sub-folder/
RewriteRule ^(.*)$ /sub-folder/$1 [L]
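Once those rules are in place, a quick sanity check from the command line (with your real subdomain in place of the example one):

# the HTTPS response should now come from /sub-folder/, not public_html
curl -s https://sub.example.com/ | head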

iDrive and hidden files

I've been trying to back up all my files to iDrive since August >.o It's about 500GB, so I suppose I shouldn't be too surprised it's taking forever (and apparently iDrive is slow anyway), but just yesterday I was cleaning up some files and noticed some problematic ones that haven't been helping the backup process (over 210K of them, in fact!). So I took some time and did 4 things to help get my backups rolling.

#1 Delete all .AppleDouble files

I was working on a Mac for a few years and have some back-ups from it, and going through the logs I noticed a ton of ".AppleDouble" files. Apparently Macs make an AppleDouble companion for every.single.file they store on a non-Mac filesystem. So there were literally twice as many files to back up. I can just tell iDrive to ignore those files, and I did, but I also wanted to get rid of them all, since I'm no longer on a Mac. So I loaded up the shell on my QNAP and ran this command:

find . -type d -name .AppleDouble -exec rm -rf {} +

Which deletes every directory named ".AppleDouble" (and its contents) under the current directory; the -exec form hands the paths straight to rm, so it works even when file names contain spaces.
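If you're nervous about running rm -rf straight away (I was), do a dry run first and just list what would be deleted:

# dry run: list the .AppleDouble directories without touching them
find . -type d -name .AppleDouble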

#2 Delete all Icon\r files

iDrive also came up with a few of these odd errors:

IOERROR [/share/MD0_DATA/…/Icon], : No such file or directory

I found about 20 of these odd Icon files; apparently they're placeholders so Macs can change a folder's icon. But their names are awkward, so iDrive can't handle them and I can't delete them normally. They're actually named "Icon\r" (with a literal carriage return), so I had to log into the shell again and run this command to delete them all:

find . -name $'Icon\r' -delete
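If you want to see how many are lurking before deleting anything, a quick count:

# count the stray Icon\r files first
find . -name $'Icon\r' | wc -l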

#3 Stop generating thumbnails

And last, but not least, one that's my QNAP's fault: by default its file manager generates .@__thumb folders with icons for every single image, of which I have quite a few. Log in as admin, and you can disable the thumbnail generation this way:

[Screenshot: the QNAP admin setting that disables thumbnail generation]

That will stop new ones from being created, but it won't delete the old ones; fire up the shell again and run this command for that:

find . -type d -name .@__thumb -exec rm -rf {} +
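Before deleting them you can also check roughly how much space those thumbnail folders were eating (on a huge tree, find may invoke du more than once, so you might see several subtotals):

# rough total of the space used by the thumbnail folders
find . -type d -name .@__thumb -exec du -ch {} + | grep 'total$'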

#4 Upgrade to the latest iDrive

I also noticed that the version of the iDrive app in QNAP's App Center is out of date, so I grabbed the latest version here: https://www.idrive.com/qnap-backup

The only problem is you have to download the right one for your NAS, and no matter where I looked I couldn't find out which one mine is. I ended up finding the answer by opening iDrive in the App Center and hovering over the download link, and voila! The version is right in the file name. A little backwards, but I couldn't find it anywhere on the QNAP site.

[Screenshot: hovering over the App Center download link reveals the version in the file name]

References:

http://apple.stackexchange.com/questions/31867/what-is-icon-r-file-and-how-do-i-delete-them
http://stackoverflow.com/questions/13032701/how-to-remove-folders-with-a-certain-name
http://forum.qnap.com/viewtopic.php?f=24&t=80532&start=75

Compress images with simple smusher

I wrote a similar post using TinyPNG about a week ago, but its 500-image limit was a bit small, so I built my own very simple image compressor. I've updated the code to work with my simple smusher, added a few improvements, and put together a PHP version as well.

Ruby Version

require 'open-uri'
require "json"
require "net/http"
require "uri"
require 'rubygems'
 
start = (ARGV[0] ? ARGV[0] : 0).to_i
last = start+100
i = 0
total_savings = 0
Dir.glob("public/system/rich/rich_files/rich_files/**/*.{jpg,png}") do |item|
  #only do 100 images at a time
  break if i == last
 
  # ignore directories and original files
  next if item == '.' or item == '..' or item.include? "original"
  i = i + 1
  next if i <= start
 
  url = item.sub 'public', 'http://www.jimstoppani.com'
  uri = URI.parse("http://yourdomain.com/?i=#{url}")
  http = Net::HTTP.new(uri.host, uri.port)
  http.read_timeout = 500
  request = Net::HTTP::Get.new(uri.request_uri)
  response = http.request(request)
 
  if response.code == "200"
    result = JSON.parse(response.body)
  else
    break
  end
 
  puts "Converted: #{item}"
 
  if result[0]['dest_size'] < result[0]['src_size']
    open(item, 'wb') do |file|
      file << open(result[0]["dest"]).read
    end
    puts "Original: #{result[0]['src_size']} Optimized #{result[0]['dest_size']}"
    total_savings = total_savings + (result[0]['src_size']-result[0]['dest_size'])
  else
    puts "no size difference"
  end
end
 
puts "All done! Total Savings: #{total_savings}"

This one has a few advantages over the last version: it takes a starting offset as an argument, so it doesn't run into the problem of re-compressing images that have already been compressed. It does 100 images at a time, so you can run it in a loop like "ruby tinify.rb 0", "ruby tinify.rb 100", "ruby tinify.rb 200", etc.
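If you don't feel like typing those by hand, the same thing as a little shell loop (tinify.rb being whatever you've named this script):

# run the smusher in batches of 100, covering the first 1,000 images
for offset in $(seq 0 100 900); do
  ruby tinify.rb $offset
done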

It also checks the response from the compressor, and if there's no file-size savings it doesn't copy over the new version. And once it's done, it tells you the total savings.

PHP Version

HTML:

<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd">
<html xmlns="http://www.w3.org/1999/xhtml">
<head>
    <meta http-equiv="Content-Type" content="text/html; charset=utf-8" />
    <title>Photo Smusher</title>
    <script src="https://ajax.googleapis.com/ajax/libs/jquery/1.7.2/jquery.min.js"></script>
 
    <script>
        $(function(){
        var total = 0;
        var start = 0;
 
 
        function smush_it() {
            var jqxhr = $.getJSON("smush.php?img=FOLDER&p="+start, function(data) {
              total = total + parseFloat(data[0].total);
              start = start+5;
              $("#total").text(total);
              $("#start").text(start);
              if(start < 5010) {
                  smush = setTimeout(function(){smush_it()},60000);
              }
            }).fail(function( jqxhr, textStatus, error ) {
                var err = start + ' ' + textStatus + ", " + error;
                smush = setTimeout(function () {
                    smush_it()
                }, 60000);
                $("#error").text(err);
            });
        }
 
        $("#total").text(total);
        $("#start").text(start);
        smush_it();
 
        });
    </script>
</head>
 
<body>
    <p>Progress: <span id="start"></span></p>
    <p>KB saved: <span id="total"></span></p>
    <p id="error"></p>
</body>
</html>

PHP:

<?php
function loadFile($url) {
    $ch = curl_init();
 
    curl_setopt($ch, CURLOPT_HEADER, 0);
    curl_setopt($ch, CURLOPT_RETURNTRANSFER, 1);
    curl_setopt($ch, CURLOPT_URL, $url);
 
    $data = curl_exec($ch);
    curl_close($ch);
 
    return $data;
}
 
 
function downloadfile($file, $path, $fn) {
    if(isset($file) && isset($path) && isset($fn)) {
        $fc = implode('', file($file));
 
        $mode = (file_exists($path . $fn) ? 'w': 'x+');
        $Files = fopen($path . $fn, $mode);
 
        if ((fwrite($Files, $fc)) != 0){
            fclose($Files);
        } else{
            echo 'Error.';
        }
    }
}
 
if(isset($_REQUEST['img']) && isset($_REQUEST['p']) && is_numeric($_REQUEST['p'])) {
   $total = 0;
   $limit = 5;
   $start = $_REQUEST['p'];
   $end = $start + $limit;
   $item = 0;
 
   $handle = opendir('../images/'.$_REQUEST['img']);
   while (($file = readdir($handle)) !== false){
      // only process items in the window [$start, $start + $limit), i.e. 5 per request
      if($item >= $start && $item < $end) {
         $extension = strtolower(substr(strrchr($file, '.'), 1));
         if($extension == 'jpg'){
            $image = json_decode(loadFile('http://yourdomain.com/?i=http://yourdomain.com/images/'.$_REQUEST['img'].'/'.$file));
 
            $change = $image[0]->src_size-$image[0]->dest_size;
 
            if($change > 1) {
               $total += $change;
               downloadfile($image[0]->dest,'../images/'.$_REQUEST['img'].'/',$file);
            }
         } 
      } elseif($item >= $end) {
         break;
      }
      $item++;
   } 
 
   echo '[{"total":"'.$total.'"}]';
}
 
?>

The HTML page goes through the images in the FOLDER you designate, 5 at a time, until it hits image 5010 (the limit in the script; adjust it to match your folder size). It sends an AJAX request to the PHP file, which reports back the total KB saved. It runs the script once a minute (the 60000 ms timeout), which you can change depending on your server's performance.

Once the PHP page receives a request it makes sure it has all the required request variables, sets up the counters, then goes through all the images in the designated folder.

The compressor will also handle png and gif files, but I only had jpgs in this project, so I limited it to processing jpgs. Once it finds a jpg it sends the full URL to your image compression service; you can replace "yourdomain.com" with the URL of your service. Then it decodes the JSON, and if the compression saved any file size it downloads the new image over the old one. And once it's gone through 5 images, it reports the total KB saved back to the HTML page.

 

Simple Smusher

I wrote a post on compressing images with TinyPNG, but that 500 image limit ran out fast!  So I decided to see if I could put together my own image optimization service.

After a little digging I found the image_optim gem. It's very nice and straightforward, and the companion image_optim_pack gem installs all the compression libraries as well, making for a very hassle-free setup. After getting those installed and setting up my Rails project, it really came down to a simple interface with one controller and view.
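For reference, getting the gems in place is just:

# image_optim_pack bundles precompiled binaries for the compression tools
gem install image_optim image_optim_pack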

Controller:

require 'open-uri'
require 'image_optim'
require 'fileutils'
 
class WelcomeController < ApplicationController
  def index
    Dir.glob("public/tmp/*").
      select{|f| File.mtime(f) < (Time.now - (60*120)) }.
      each{|f| File.delete(f) }
 
    image = params[:i]
    extension = File.extname(image)
 
    acceptable_extensions = [".jpg", ".gif", ".png"]
 
    return @json = "{\"error\":\"only jpg, png, and gif can be resized\"}" if !acceptable_extensions.include?( extension.downcase )
 
    value = ""; 8.times{value  << (65 + rand(25)).chr}
    tmp_image = "public/tmp/#{value}#{extension}"
 
    open(tmp_image, 'wb') do |file|
      file << open(image).read
    end
 
    before_filesize = (File.size(tmp_image) * 0.001).floor
 
    # one ImageOptim instance; pass all options together (separate calls would overwrite each other)
    image_optim = ImageOptim.new(:pngout => false, :nice => 10)
 
    image_optim.optimize_image!(tmp_image)
 
    after_filesize = (File.size(tmp_image) * 0.001).floor
 
    tmp_image = "http://yourdomain.com/tmp/#{value}#{extension}"
 
    @json = "[{\"dest\":\"#{tmp_image}\",\"src_size\":#{before_filesize},\"dest_size\":#{after_filesize}}]"
  end
 
end

View:

<%= raw @json %>

I'm including open-uri to download the image to be compressed, image_optim to do the compressing, and fileutils for the file handling (the size check itself is just Ruby's built-in File.size).

  1. I start with the Dir.glob to delete any temporary images that are over two hours old (the 60*120 seconds in the code).
  2. Then I grab the image from the URL provided in the i param and make sure it has the right extension.
  3. Then I generate a random string, build the temporary filename, and save the image there.
  4. Then I optimize the image, saving over the temporary copy.
  5. Then I just write out the JSON I want by hand; I wrote it to resemble smush.it's syntax (see the example below).
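So a request and response end up looking something like this (the domain and file names here are made up):

# example request to the smusher service
curl "http://yourdomain.com/?i=http://example.com/images/photo.jpg"
# => [{"dest":"http://yourdomain.com/tmp/ABCDEFGH.jpg","src_size":245,"dest_size":198}]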

You will need to change "yourdomain.com" to your actual domain here; looking back, I should have done that programmatically, but it's not a big deal.

I have the whole project available on GitHub for download.