Scripting

Ramblings and links to interesting snippets


Export table in rails console

This is a small twist on the very nice snippet I found in this post.  I changed it slightly so it grabs all the attributes automatically, which makes it easy to swap in any model.  It also lists the column names in the first row.

require 'csv'

file = "#{Rails.root}/public/data.csv"

# the trailing ";0" just keeps the console from echoing the whole relation
table = User.all;0

CSV.open( file, 'w' ) do |writer|
  # first row: the column names
  writer << table.first.attributes.map { |a,v| a }
  # then one row of values per record
  table.each do |s|
    writer << s.attributes.map { |a,v| v }
  end
end
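
One caveat: User.all loads the whole table into memory. For a big table, a batched variant using Rails' find_each might look like this (just a sketch along the same lines):

require 'csv'

file = "#{Rails.root}/public/data.csv"

CSV.open( file, 'w' ) do |writer|
  # header row comes straight from the model, no records needed
  writer << User.column_names
  # find_each pulls records in batches of 1,000 instead of all at once
  User.find_each do |user|
    writer << user.attributes.values_at(*User.column_names)
  end
end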


Add tracking tokens to WordPress

I just had to do a search and replace to add tracking tokens site-wide on a WordPress site.  I tried some search & replace plugins, but I found them all a bit buggy, so I opened up phpMyAdmin to handle it. I had a list of links, so I just ran a statement like this for each:

UPDATE wp_posts SET post_content = REPLACE(post_content,'http://example.com/page.html','http://example.com/page.html?UTM=token');

Then I ran the following, because I knew a few links already had the tracking token added:

UPDATE wp_posts SET post_content = REPLACE(post_content,'?UTM=token?UTM=token','?UTM=token');

And, last but not least, I verified that all links were updated by running:

SELECT post_content FROM wp_posts WHERE (CONVERT(`post_content` USING utf8) LIKE '%http://example.com%' AND CONVERT(`post_content` USING utf8) NOT LIKE '%UTM%')

So, I’m basically just checking that every post with that domain also has the token code in it somewhere.

You can be a bit more thorough by using regex, but that should do the trick if you only have a few links to update.
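
If you do want a regex-level check, here is a rough Ruby sketch (assuming the mysql2 gem and placeholder credentials) that flags any post linking to the domain without the token:

require 'mysql2'

# placeholder credentials, substitute your own
client = Mysql2::Client.new(host: 'localhost', username: 'wp',
                            password: 'secret', database: 'wordpress')

client.query("SELECT ID, post_content FROM wp_posts").each do |row|
  # only posts that actually link to the domain are interesting
  next unless row['post_content'] =~ %r{https?://(www\.)?example\.com}
  puts "Post #{row['ID']} is missing the token" unless row['post_content'].include?('UTM=token')
end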

UPDATE 6/2016

I recently had a similar task: adding a tracking token to every link to a certain domain, which called for a bit of regex.  This time I found a nice search and replace plugin that supports regex, and I used these patterns to first add in the token:

Search pattern: (<a href="(http://)?(www\.)?example\.com/(.*?))"
Replace pattern: $1?UTM=token"

This will pick up all links, whether they have http or not, and whether they have www. or not.  And again I ran this kind of replace to fix any links that already had tracking added to them:

Search pattern: ?UTM=token?UTM=token
Replace pattern: ?UTM=token
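
To sanity-check patterns like these before running them site-wide, here is a quick Ruby sketch (illustration only) applying the same two substitutions with gsub:

html = '<a href="http://www.example.com/page.html">Link</a>'

# add the token, keeping the closing quote out of the capture group
html = html.gsub(%r{(<a href="(http://)?(www\.)?example\.com/(.*?))"}, '\1?UTM=token"')
# collapse any doubled tokens
html = html.gsub('?UTM=token?UTM=token', '?UTM=token')

puts html  # => <a href="http://www.example.com/page.html?UTM=token">Link</a>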


HTTPS subdomains

This is pretty much just ripped from this article.  But it took me forever to find a solution, so I’m writing this down!

It is possible to secure a subdomain without a separate IP or SSL certificate, but only if your certificate is a wildcard certificate!  Just Host offers these, but what it doesn't offer is decent support on the matter 😛

When you make a subdomain, say “sub.example.com”, going to http://sub.example.com will be just fine.  But going to https://sub.example.com will load up the contents of your default directory (a.k.a. public_html).  So, in order to get that to work, you have to force it to point to the right directory using .htaccess, like so:

RewriteEngine On
# only touch HTTPS requests for the subdomain
RewriteCond %{SERVER_PORT} ^443$
RewriteCond %{HTTP_HOST} ^sub\.example\.com$ [NC]
# skip requests already pointing at the sub-folder
RewriteCond %{REQUEST_URI} !^/sub-folder/
RewriteRule ^(.*) /sub-folder/$1


Compress images with simple smusher

I wrote a similar post using TinyPNG about a week ago, but the 500-image limit was a bit small, so I built my own very simple image compressor.  I've updated the code to work with my simple smusher, added a few improvements, and put together a PHP version as well.

Ruby Version

require 'open-uri'
require "json"
require "net/http"
require "uri"
require 'rubygems'
 
# start offset comes from the command line; each run covers 100 images
start = (ARGV[0] ? ARGV[0] : 0).to_i
last = start+100
i = 0
total_savings = 0
Dir.glob("public/system/rich/rich_files/rich_files/**/*.{jpg,png}") do |item|
  #only do 100 images at a time
  break if i == last
 
  #ignore directories and original files
  next if item == '.' or item == '..' or item.include? "original"
  i = i + 1
  next if i <= start
 
  url = item.sub 'public', 'http://www.jimstoppani.com'
  uri = URI.parse("http://yourdomain.com/?i=#{url}")
  http = Net::HTTP.new(uri.host, uri.port)
  http.read_timeout = 500
  request = Net::HTTP::Get.new(uri.request_uri)
  response = http.request(request)
 
  if response.code == "200"
    result = JSON.parse(response.body)
  else
    break
  end
 
  puts "Converted: #{item}"
 
  if result[0]['dest_size'] < result[0]['src_size']
    open(item, 'wb') do |file|
      file << open(result[0]["dest"]).read
    end
    puts "Original: #{result[0]['src_size']} Optimized #{result[0]['dest_size']}"
    total_savings = total_savings + (result[0]['src_size']-result[0]['dest_size'])
  else
    puts "no size difference"
  end
end
 
puts "All done! Total Savings: #{total_savings}"

This one has a few advantages over the last version – it takes a start offset as an argument, so it doesn’t run into the problem of re-compressing images that have already been compressed.  It does 100 images at a time, so you can run it in batches like “ruby tinify.rb 0”, “ruby tinify.rb 100”, “ruby tinify.rb 200”, etc.

It also checks the response from the compressor, and if there’s no file size saving it doesn’t copy over the new version. And once it’s done, it will tell you the total savings.
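
If you’d rather not kick off each batch by hand, a tiny wrapper along these lines could drive it (the 1,000-image ceiling is just an assumption):

# run the batches back to back: 0, 100, 200 ... 900
(0..900).step(100) do |offset|
  system("ruby", "tinify.rb", offset.to_s)
end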

PHP Version

HTML:

<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd">
<html xmlns="http://www.w3.org/1999/xhtml">
<head>
    <meta http-equiv="Content-Type" content="text/html; charset=utf-8" />
    <title>Photo Smusher</title>
    <script src="https://ajax.googleapis.com/ajax/libs/jquery/1.7.2/jquery.min.js"></script>
 
    <script>
        $(function(){
        var total = 0;
        var start = 0;
 
 
        function smush_it() {
            var jqxhr = $.getJSON("smush.php?img=FOLDER&p="+start, function(data) {
              total = total + parseFloat(data[0].total);
              start = start+5;
              $("#total").text(total);
              $("#start").text(start);
              if(start < 5010) {
                  smush = setTimeout(function(){smush_it()},60000);
              }
            }).fail(function( jqxhr, textStatus, error ) {
                var err = start + ' ' + textStatus + ", " + error;
                smush = setTimeout(function () {
                    smush_it()
                }, 60000);
                $("#error").text(err);
            });
        }
 
        $("#total").text(total);
        $("#start").text(start);
        smush_it();
 
        });
    </script>
</head>
 
<body>
    <p>Progress: <span id="start"></span></p>
    <p>KB saved: <span id="total"></span></p>
    <p id="error"></p>
</body>
</html>

PHP:

<?php
// fetch a URL with cURL and return the body as a string
function loadFile($url) {
    $ch = curl_init();
 
    curl_setopt($ch, CURLOPT_HEADER, 0);
    curl_setopt($ch, CURLOPT_RETURNTRANSFER, 1);
    curl_setopt($ch, CURLOPT_URL, $url);
 
    $data = curl_exec($ch);
    curl_close($ch);
 
    return $data;
}
 
 
// save the contents of $file into $path . $fn
function downloadfile($file, $path, $fn) {
    if(isset($file) && isset($path) && isset($fn)) {
        $fc = implode('', file($file));
 
        $mode = (file_exists($path . $fn) ? 'w': 'x+');
        $Files = fopen($path . $fn, $mode);
 
        if ((fwrite($Files, $fc)) != 0){
            fclose($Files);
        } else{
            echo 'Error.';
        }
    }
}
 
if(isset($_REQUEST['img']) && isset($_REQUEST['p']) && is_numeric($_REQUEST['p'])) {
   $total = 0;
   $limit = 5;
   $start = $_REQUEST['p'];
   $end = $start + $limit;
   $item = 0;
 
   $handle = opendir('../images/'.$_REQUEST['img']);
   while ($file = readdir($handle)){
      if($item >= $start && $item < $end) {
         $extension = strtolower(substr(strrchr($file, '.'), 1));
         if($extension == 'jpg'){
            $image = json_decode(loadFile('http://yourdomain.com/?i=http://yourdomain.com/images/'.$_REQUEST['img'].'/'.$file));
 
            $change = $image[0]->src_size-$image[0]->dest_size;
 
            if($change > 1) {
               $total += $change;
               downloadfile($image[0]->dest,'../images/'.$_REQUEST['img'].'/',$file);
            }
         } 
      } elseif($item >= $end) {
         break;
      }
      $item++;
   } 
 
   echo '[{"total":"'.$total.'"}]';
}
 
?>

The HTML page goes through images in the FOLDER you designate, 5 at a time, until you hit image 5010, which you can adjust based on your folder size. It sends an ajax request to the PHP file, then reports back the total KB saved.  It runs the script once a minute (the 60000 ms timeout), which you can change depending on your server’s performance.

Once the PHP page receives a request, it makes sure it has all the required request variables, sets up the counters, then goes through all the images in the designated folder.

The compressor will handle png and gif files too, but I only had jpgs in this project, so I limited it to only process jpgs.  Once it finds a jpg, it sends the full URL to your image compression service (replace “yourdomain.com” with the URL of your service).  Then it decodes the JSON, and if there was any reduction in file size, it downloads the new image over the old one.  And once it’s gone through 5 images, it reports the total KB saved back to the HTML page.

 


Simple Smusher

I wrote a post on compressing images with TinyPNG, but that 500 image limit ran out fast!  So I decided to see if I could put together my own image optimization service.

After a little digging I found the image_optim gem.  It’s very nice and straightforward, and the companion image_optim_pack gem installs all the compression binaries as well, making for a very hassle-free set-up.  After getting those installed and setting up my Rails project, it really came down to a simple interface with one controller and view.
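
For reference, the set-up amounts to two Gemfile lines (plus a bundle install):

gem 'image_optim'
gem 'image_optim_pack'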

Controller:

require 'open-uri'
require 'image_optim'
require 'fileutils'
 
class WelcomeController < ApplicationController
  def index
    Dir.glob("public/tmp/*").
      select{|f| File.mtime(f) < (Time.now - (60*120)) }.
      each{|f| File.delete(f) }
 
    image = params[:i]
    extension = File.extname(image)
 
    acceptable_extensions = [".jpg", ".gif", ".png"]
 
    return @json = "{\"error\":\"only jpg, png, and gif can be resized\"}" if !acceptable_extensions.include?( extension.downcase )
 
    value = ""; 8.times{value  << (65 + rand(25)).chr}
    tmp_image = "public/tmp/#{value}#{extension}"
 
    open(tmp_image, 'wb') do |file|
      file << open(image).read
    end
 
    before_filesize = (File.size(tmp_image) * 0.001).floor
 
    # skip pngout and run the external tools at nice 10
    image_optim = ImageOptim.new(:pngout => false, :nice => 10)
 
    image_optim.optimize_image!(tmp_image)
 
    after_filesize = (File.size(tmp_image) * 0.001).floor
 
    tmp_image = "http://yourdomain.com/tmp/#{value}#{extension}"
 
    @json = "[{\"dest\":\"#{tmp_image}\",\"src_size\":#{before_filesize},\"dest_size\":#{after_filesize}}]"
  end
 
end

View:

<%= raw @json %>

I’m including open-uri to download the image to be compressed, image_optim to do the compressing, and fileutils to check the file size.

  1. I start with the Dir.glob to first delete any temporary images that are over two hours old (the 60*120 is in seconds)
  2. Then I grab the image from the URL provided with the i param, and make sure it has the right extension
  3. Then generate a random string and create the temporary filename and save the image there
  4. Then I optimize the image and save over it
  5. Then I just write out the JSON I want by hand, modeled on smush.it’s syntax

You will need to change “yourdomain.com” to your actual domain here; looking back, I should have done that programmatically, but it’s not a big deal.
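
To exercise the service, a minimal client along these lines should work (both URLs are placeholders):

require 'json'
require 'net/http'
require 'uri'

# pass the image to compress as the "i" parameter
uri = URI.parse('http://yourdomain.com/?i=http://example.com/photo.jpg')
result = JSON.parse(Net::HTTP.get(uri))

puts "Saved #{result[0]['src_size'] - result[0]['dest_size']} KB"
puts "Optimized copy: #{result[0]['dest']}"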

I have the whole project available on GitHub for download.
