
Use ffmpeg to compress a video with multiple tracks to a specific size

I had some multi-track videos I needed to compress, and I found this awesome script for using ffmpeg to compress videos to specific sizes:
https://stackoverflow.com/questions/29082422/ffmpeg-video-compression-specific-file-size/61146975#61146975

And it works great! But it only includes the first audio track, so I made a few slight changes to make it include multiple audio tracks and factor them into the total size of the end result.

I added this part to calculate the total number of audio tracks.  Courtesy of this answer.

#number of audio streams
O_ACOUNT=$(\
 ffprobe \
 -v error \
 -select_streams a \
 -show_entries stream=index \
 -of csv=p=0 "$1" | wc -w)

And then I multiplied the audio rate by the number of tracks, since each extra audio stream takes the same bitrate out of the total size budget:

# Calculate target video rate - MB -> kbit/s
T_VRATE=$(\
 awk \
 -v size="$T_SIZE" \
 -v duration="$O_DUR" \
 -v audio_rate="$O_ARATE" \
 -v audio_count="$O_ACOUNT" \
 'BEGIN { print ( ( size * 8192.0 ) / ( 1.048576 * duration ) - (audio_rate * audio_count)) }')

And I added -map 0 to the export at the end so it will carry over every stream from the input, including all the audio tracks:

ffmpeg \
 -i "$1" \
 -c:v libx264 \
 -b:v "$T_VRATE"k \
 -pass 2 \
 -c:a aac \
 -map 0 \
 -b:a "$T_ARATE"k \
 $T_FILE

So the final script ended up as:

#!/bin/bash
#
# Re-encode a video to a target size in MB.
# Example:
# ./this_script.sh video.mp4 15
#
# Filename CANNOT contain any spaces
 
T_SIZE="$2" # target size in MB
T_FILE="${1%.*}-$2MB.mp4" # filename out
 
# Original duration in seconds
O_DUR=$(\
 ffprobe \
 -v error \
 -show_entries format=duration \
 -of csv=p=0 "$1")
 
# Original audio rate
O_ARATE=$(\
 ffprobe \
 -v error \
 -select_streams a:0 \
 -show_entries stream=bit_rate \
 -of csv=p=0 "$1")
 
# Original audio rate in kbit/s
O_ARATE=$(\
 awk \
 -v arate="$O_ARATE" \
 'BEGIN { printf "%.0f", (arate / 1024) }')
 
#number of audio streams
O_ACOUNT=$(\
 ffprobe \
 -v error \
 -select_streams a \
 -show_entries stream=index \
 -of csv=p=0 "$1" | wc -w)
 
# Minimum size (MB) the audio streams alone will take up; the target size must be larger than this
T_MINSIZE=$(\
 awk \
 -v arate="$O_ARATE" \
 -v acount="$O_ACOUNT" \
 -v duration="$O_DUR" \
 'BEGIN { printf "%.2f", ( (arate * acount * duration) / 8192 ) }')
 
# Equals 1 if target size is ok, 0 otherwise
IS_MINSIZE=$(\
 awk \
 -v size="$T_SIZE" \
 -v minsize="$T_MINSIZE" \
 'BEGIN { print (minsize < size) }')
 
# Give useful information if size is too small
if [[ $IS_MINSIZE -eq 0 ]]; then
 printf "%s\n" "Target size ${T_SIZE}MB is too small!" >&2
 printf "%s %s\n" "Try values larger than" "${T_MINSIZE}MB" >&2
 exit 1
fi
 
# Set target audio bitrate
T_ARATE=$O_ARATE
 
 
# Calculate target video rate - MB -> kbit/s
T_VRATE=$(\
 awk \
 -v size="$T_SIZE" \
 -v duration="$O_DUR" \
 -v audio_rate="$O_ARATE" \
 -v audio_count="$O_ACOUNT" \
 'BEGIN { print ( ( size * 8192.0 ) / ( 1.048576 * duration ) - (audio_rate * audio_count)) }')
 
# Perform the conversion
ffmpeg \
 -y \
 -i "$1" \
 -c:v libx264 \
 -b:v "$T_VRATE"k \
 -pass 1 \
 -an \
 -f mp4 \
 /dev/null \
&& \
ffmpeg \
 -i "$1" \
 -c:v libx264 \
 -b:v "$T_VRATE"k \
 -pass 2 \
 -c:a aac \
 -map 0 \
 -b:a "$T_ARATE"k \
 $T_FILE
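
As a quick sanity check (assuming the script is saved as compress.sh and run against a file called video.mkv with a 25 MB target), you can confirm the output size and that every stream made it across:

./compress.sh video.mkv 25
ls -lh video-25MB.mp4
# every audio stream from the source should show up here alongside the video stream
ffprobe -v error -show_entries stream=index,codec_type -of csv=p=0 video-25MB.mp4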

Change server terminal command prompt

If you deal with a lot of servers, sometimes it can get tricky to keep track of which one you’re on…so a simple way to help with that is to color code and name them in the command prompt.

All you need to do is:

  1. nano ~/.bashrc
  2. uncomment #force_color_prompt=yes
  3. Find a line like this:
    PS1='${debian_chroot:+($debian_chroot)}\[\033[01;32m\]\u@\h\[\033[00m\]:\[\033[01;34m\]\w\[\033[00m\]\$ '
  4. And change it to something like this:
    PS1='${debian_chroot:+($debian_chroot)}\[\033[01;35;40m\]Staging \u@\h\[\033[00m\]:\[\033[01;34m\]\w\[\033[00m\]\$ '

For this I made it pink and added “Staging” before the hostname.  01;32m was the default color and I just needed to change it to 01;35m (the extra ;40m in the example above sets a black background).

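If you want to browse the combinations yourself, a small loop like this prints every basic foreground/background pair along with its code (the same idea as the color-display script in the sources below):

for fg in {30..37}; do
  for bg in {40..47}; do
    printf "\033[01;%s;%sm %s;%s \033[00m" "$fg" "$bg" "$fg" "$bg"
  done
  printf "\n"
done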

Sources:

https://askubuntu.com/questions/123268/changing-colors-for-user-host-directory-information-in-terminal-command-prompt
https://askubuntu.com/questions/27314/script-to-display-all-terminal-colors

Automatic Timestamping in MySQL

Set up a table schema with automatic timestamping

CREATE TABLE `users` (
  `id`              BIGINT UNSIGNED AUTO_INCREMENT PRIMARY KEY,
  `full_name`       VARCHAR(100) NOT NULL,
 
  `created_at` TIMESTAMP NOT NULL DEFAULT CURRENT_TIMESTAMP,
  `updated_at` TIMESTAMP NOT NULL DEFAULT CURRENT_TIMESTAMP ON UPDATE CURRENT_TIMESTAMP
)

Automatic properties are specified using the DEFAULT CURRENT_TIMESTAMP and ON UPDATE CURRENT_TIMESTAMP clauses in column definitions.

The DEFAULT CURRENT_TIMESTAMP clause assigns the current timestamp as the default value of the column.
The ON UPDATE CURRENT_TIMESTAMP clause updates the column to the current timestamp when the value of any other column in the row is changed from its current value.

CURRENT_TIMESTAMP is a synonym for NOW() and returns the current date and time. It returns a constant time that indicates the time at which a statement began to execute.
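
A quick illustration of the behavior with some made-up data: both columns are filled in on INSERT, and updated_at moves forward when the row actually changes.

INSERT INTO `users` (`full_name`) VALUES ('Jane Doe');
-- created_at and updated_at are identical at this point
SELECT `created_at`, `updated_at` FROM `users` WHERE `full_name` = 'Jane Doe';

UPDATE `users` SET `full_name` = 'Jane Q. Doe' WHERE `full_name` = 'Jane Doe';
-- updated_at now reflects the time of the UPDATE; created_at is unchanged
SELECT `created_at`, `updated_at` FROM `users` WHERE `full_name` = 'Jane Q. Doe';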

 

This was shamelessly stolen :X  I really liked their formatting and just wanted to make a backup:
https://habenamare.com/guides/mysql/automatic-timestamping/

Set up Apache Varnish for a Laravel API

This is how I went about installing and configuring Varnish, assuming you already have Apache set up. Varnish only speaks plain HTTP, so you first have to set it up to listen on port 80 and then use a reverse proxy so Apache handles the SSL and passes the request along to Varnish.
So first, you have to move Apache from port 80 to 8080. Essentially you just have to change 80 to 8080 in ports.conf and your VHosts file, but here’s a simple pair of sed commands that’ll do it for you (run from /etc/apache2):

sudo sed -i -e 's/80/8080/g' ports.conf
sudo sed -i -e 's/80/8080/g' sites-available/example.com.conf
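
A blanket s/80/8080/g will hit every “80” in those files, so it’s worth a quick look to confirm only the port lines changed; they should end up like this (stock Ubuntu Apache paths assumed):

# /etc/apache2/ports.conf
Listen 8080

# /etc/apache2/sites-available/example.com.conf
<VirtualHost *:8080>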

Then restart Apache

apachectl configtest	 	 
sudo /etc/init.d/apache2 restart

Now you’re ready to set up Varnish on port 80. First, install and enable it

sudo apt install -y varnish	 	 
sudo systemctl start varnish	 	 
sudo systemctl enable varnish

Then adjust Varnish to use port 80 and increase its RAM. I’ve seen it recommended to set it to use 75% of your server’s total memory, but I set it to a more conservative 30%.

cd /etc/default/	 	 
sudo nano varnish

change the uncommented

DAEMON_OPTS="-a :6081 \	 	 
     -s malloc,256m"

to

DAEMON_OPTS="-a :80 \	 	 
     -s malloc,5G"

Make essentially the same change here

cd /lib/systemd/system/	 	 
sudo nano varnish.service

change this line

ExecStart=/usr/sbin/varnishd -j unix,user=vcache -F -a :6081 -T localhost:6082 -f /etc/varnish/default.vcl -S /etc/varnish/secret -s malloc,256m

Just change the 6081 to 80 and the 256m to 5G, so the line ends up as:
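
ExecStart=/usr/sbin/varnishd -j unix,user=vcache -F -a :80 -T localhost:6082 -f /etc/varnish/default.vcl -S /etc/varnish/secret -s malloc,5G
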
Then reload the systemd configuration and restart Varnish

sudo systemctl daemon-reload 	 	 
sudo systemctl restart varnish

Now it should be all set on port 80 and caching your HTTP content! You can make sure Varnish is running with this command

sudo netstat -plntu

Next we’ve got to make it work over SSL. First, install the necessary Apache modules

sudo a2enmod proxy_balancer 	 	 
sudo a2enmod proxy_http

In the SSL VHost, comment out the DocumentRoot line and add this

 ProxyPreserveHost On	 	 
 ProxyPass / http://127.0.0.1:80/	 	 
 ProxyPassReverse / http://127.0.0.1:80/
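
For context, the whole SSL VirtualHost ends up looking roughly like this; the domain and certificate paths are placeholders and any other directives you already have stay put:

<VirtualHost *:443>
    ServerName example.com
    # DocumentRoot /var/www/example.com/public

    ProxyPreserveHost On
    ProxyPass / http://127.0.0.1:80/
    ProxyPassReverse / http://127.0.0.1:80/

    SSLEngine on
    SSLCertificateFile /etc/letsencrypt/live/example.com/fullchain.pem
    SSLCertificateKeyFile /etc/letsencrypt/live/example.com/privkey.pem
</VirtualHost>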

This will make Apache pass the request to Varnish on port 80, so the HTTPS side of the site is now going through Varnish too.
But to get it to cache the API requests, I had to make a few changes to the Varnish configuration.

sudo nano /etc/varnish/default.vcl

First, add this to set a Hit/Miss header so you can easily debug the configuration.

sub vcl_deliver { # just add this to the existing vcl_deliver block, don't make a new one
  if (obj.hits > 0) { # debug header to see if it's a HIT or a MISS, disable when not needed
    set resp.http.X-Cache = "HIT";
  } else {
    set resp.http.X-Cache = "MISS";
  }
  return (deliver);
}
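
Once that’s in place you can check it straight from the command line; run the same request twice and the header should flip from MISS to HIT once caching is working (the URL here is a placeholder):

curl -sk -D - -o /dev/null https://example.com/api/some-endpoint | grep -i x-cache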

Add this inside of vcl_recv to strip any cookies from the request. Varnish by default will not cache requests with cookies. My API used keys, so this wasn’t an issue.

unset req.http.cookie; 	 	 
unset req.http.Accept-Encoding;

In vcl_backend_response, add this to unset the Laravel session cookie header

unset beresp.http.Set-Cookie;

Another adjustment allows purging individual cached pages via the PURGE method; just add this inside vcl_recv

#allow to purge saved URLs	 	 
if (req.method == "PURGE") {	 	 
return (purge);	 	 
}

That will allow you to remove any cached page using this command

curl -v -k -X PURGE https://url-to-purge

And to set up the headers so responses are only cached in Varnish, and not in the browser, they need to be:

Cache-Control:s-maxage=60,max-age=0,private,must-revalidate,no-cache,no-store	 	 
Surrogate-Control:*

s-maxage sets the max-age only for the cache server, and Surrogate-Control makes Varnish ignore the no-cache/no-store directives.
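
In a Laravel app, one way to do that is to set the headers on the response itself. This is just a sketch: the route path and the Item model are made up, and a middleware that adds the same headers to every API response works just as well.

// routes/api.php (hypothetical example)
use Illuminate\Support\Facades\Route;
use App\Models\Item; // made-up model, swap in your own

Route::get('/items', function () {
    // s-maxage lets Varnish keep the response for 60 seconds, while the
    // remaining directives stop the browser from caching it locally
    return response()->json(Item::all())
        ->header('Cache-Control', 's-maxage=60,max-age=0,private,must-revalidate,no-cache,no-store')
        ->header('Surrogate-Control', '*');
});
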
And, in case you haven’t done it yet, you can add a cert to your site using these commands

sudo certbot --apache -d example.com	 	 
sudo /etc/init.d/apache2 restart

If Varnish causes issues with renewing the cert, then try adding this to your crontab

0 12 * * * /usr/bin/certbot renew --cert-name example.com --http-01-port 8080 --quiet

To control how long Varnish caches a page, all you need to do is set its Cache-Control header. That essentially needs to be set to

max-age=<seconds>, public

and Varnish will do the rest.

Sources:
https://www.howtoforge.com/how-to-install-and-configure-varnish-with-apache-on-ubuntu-1804/
https://bash-prompt.net/guides/apache-varnish/
https://stackoverflow.com/questions/65013805/cant-purge-entire-domain-in-varnish-but-can-purge-individual-pages-do-i-have
https://stackoverflow.com/questions/48988162/disable-cache-on-browser-without-disabling-it-on-varnish

File Monitoring Bash Script

I wrote a very simple bash script to check and report on any PHP file changes in the past 24 hours, and to run a simple check for any suspicious files.  It doesn’t require any software to be installed, so it can be used on shared hosting with limited shell access.

It simply uses `find` to check whether any PHP files have changed and reports back if they have, and it uses Fenrir to check for suspicious files.  Fenrir is a simple IOC scanner that checks files for specific patterns that may indicate they have been compromised.
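
If you don’t have Fenrir yet, it’s just a standalone bash script, so installing it is a single clone (the /file_location path is the same placeholder used in the script below):

git clone https://github.com/Neo23x0/Fenrir.git /file_location/fenrir
chmod +x /file_location/fenrir/fenrir.sh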

The actual script is as follows; you’ll just need to swap in your actual directories and email address:

#!/bin/bash
 
#check for changed files
CHANGED=$(find /websitedirectory/* -name "*.php" -type f -ctime -1 | head -50)
 
if [[ ${CHANGED} == '' ]]; then
  echo "nothing has changed"
else
  echo "files changed"
  mail -s "Website files changed" your@email.com <<< "file has been changed: ${CHANGED}"
fi
 
#run fenrir
(cd /file_location/fenrir; ./fenrir.sh /websitedirectory/) &
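# give Fenrir (running in the background) time to finish before reading its log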
sleep 20m
 
SYSTEM_NAME=$(uname -n | tr -d "\n")
TS_CONDENSED=$(date +%Y%m%d)
 
MATCHES=$(grep "match" /file_location/fenrir/FENRIR_${SYSTEM_NAME}_${TS_CONDENSED}.log)
 
if [[ ${MATCHES} == '' ]]; then
  echo "fennrir found nothing"
else
  echo "fenrir found bad files"
  mail -s "Fenrir found suspicious files" your@email.com <<< "Fenrir found suspicious files: ${MATCHES}"
fi

After you’ve modified the script as necessary and created the file, you can set it to run daily by adding this to your crontab

0 0 * * * /file_location/site_monitor
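
And since cron runs the file directly, make sure it’s executable first:

chmod +x /file_location/site_monitor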