How to set up WordPress with NGINX on Debian testing

NGINX is a powerful web server and I had been wanting to give it a try for some time now.

Its setup is rather simple and pretty straightforward, in my humble opinion.

The current settings are for PHP 7.0 using php7.0-fpm under GNU / Linux Debian testing 64-bit.

The first thing to look for is whether cgi.fix_pathinfo is 0 or not inside our /etc/php/7.0/fpm/php.ini.

If it's not, give it the value 0, as it has been the subject of multiple discussions around security. I think the issue has been resolved for quite a while now, but it would not hurt to turn it off.
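If you want to check and flip the flag from the terminal, something along these lines works. This is a sketch on a scratch copy of the file; point the sed command at your real /etc/php/7.0/fpm/php.ini (with sudo) when doing it for real:

```shell
# Work on a scratch excerpt of php.ini for demonstration purposes;
# substitute /etc/php/7.0/fpm/php.ini on a real system.
cat > /tmp/php-demo.ini <<'EOF'
; excerpt of php.ini
;cgi.fix_pathinfo=1
EOF

# Uncomment the directive (if needed) and force it to 0.
sed -i 's/^;*cgi\.fix_pathinfo=.*/cgi.fix_pathinfo=0/' /tmp/php-demo.ini

# Verify the result.
grep '^cgi.fix_pathinfo' /tmp/php-demo.ini
```

After editing the real file, restart the service with sudo systemctl restart php7.0-fpm.service so the change takes effect.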

Next thing we do is to create our own WordPress virtual host server settings:

server {
    listen 80;
    listen [::]:80;

    root /var/www/html/wordpress;

    # Add index.php to the list if you are using PHP
    index index.php index.html index.htm;

    server_tokens off; # removes NGINX version

    try_files $uri $uri/ /index.php?$args;

    # deny access to files like .htaccess
    location ~ /\.ht {
        deny all;
    }

    location / {
        try_files $uri $uri/ /index.php?q=$uri&$args;

        # Show "Not Found" 404 errors in place of "Forbidden" 403 errors
        error_page 403 =404;

        location ~ ^/(.+\.php)$ {
            include /etc/nginx/snippets/stefanos.conf;
        }
    }

    # Secure the following files from getting accessed
    location = /xmlrpc.php { deny all; }
    location = /license.txt { deny all; }
    location = /readme.html { deny all; }
    location = /wp-config.php { deny all; }
    location = /wp-admin/install.php { deny all; }
}


I have saved the code above in a file in the /etc/nginx/sites-available/ directory; you can name it anything you want that represents you or your project.

If you haven't noticed already, I have included a custom snippet file named stefanos.conf which is located in /etc/nginx/snippets/ directory.

The reason I have done so is that the same code snippet is used elsewhere, in a different server {} block, and I wanted to avoid unnecessary duplication.

Here's the code snippet:

try_files $uri =404;
fastcgi_pass unix:/var/run/php/php7.0-fpm.sock;
fastcgi_index index.php;
fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
include /etc/nginx/fastcgi_params;

Save the code inside the aforementioned inclusion path: /etc/nginx/snippets/stefanos.conf

Of course, change the snippet name accordingly.

Now, let's see what my default server {} setup looks like:

# Default server configuration

server {
    listen 80 default_server;
    listen [::]:80 default_server;

    root /var/www/html;

    server_name _;
    server_tokens off;

    location / {
        index index.php index.html index.htm;
        try_files $uri $uri/ =404;
        autoindex on; # enabling directory listing
    }

    location ~ ^/(.+\.php)$ {
        include /etc/nginx/snippets/stefanos.conf;
    }

    location /phpmyadmin {
        root /usr/share/;
        index index.php index.html index.htm;

        # set client body size to 3M
        client_max_body_size 3M;

        location ~ ^/phpmyadmin/(.+\.php)$ {
            root /usr/share/;

            include /etc/nginx/snippets/stefanos.conf;
        }

        location ~* ^/phpmyadmin/(.+\.(jpg|jpeg|gif|css|png|js|ico|html|xml|txt))$ {
            root /usr/share/;
        }
    }

    location /phpMyAdmin {
        rewrite ^/* /phpmyadmin last;
    }
}


These settings live inside /etc/nginx/sites-enabled/default, which, by the way, is a symlink pointing to /etc/nginx/sites-available/default.

Basically, this setup serves as our standard localhost configuration; this way we are able to visit http://localhost/phpmyadmin/ or http://localhost/; you get the idea.

Now, about symlinking, we are going to do the same thing to our file so we can enable it.

Here is the command:

sudo ln -s /etc/nginx/sites-available/<your-site> /etc/nginx/sites-enabled/

Next thing we do is to add our domain name inside our /etc/hosts file and associate it with the IP address it should resolve to; for a local setup like this one, that's the loopback address 127.0.0.1.

So your /etc/hosts should look something like this:

...
127.0.0.1    <your-domain>
...

By the way, the dots above are an indication that the other existing settings that reside inside the file remain as is.

Changes to /etc/hosts normally take effect immediately, but to be on the safe side we can restart our networking service.

sudo systemctl restart networking.service

Logically, it should take a couple of seconds to apply the changes and should not complain about anything. If it does, you have most probably mistyped something; perhaps you forgot to add a space between the IP address and the domain name, or something else you will need to resolve via trial and error.

Lastly, we run a test command for NGINX so it can report back whether it liked our settings or not. If everything looks OK and behaves as expected, we restart the server and start using our website.

sudo nginx -t

If everything went well, run this command:

sudo systemctl restart nginx.service

That's it folks. We have successfully created our standard default setup and our custom WordPress virtual host server.

If anyone is facing any kind of problem around these settings, feel free to comment below so we can investigate it together.

Introducing inrep

inrep is a shell script that automates repository initialization right from your terminal.

So, what does it do?

Well, it streamlines the whole procedure you would otherwise follow with the standard methodology:

  • You go to your GitHub account.
  • You initialize a repository based on a name of your choice.
  • You clone it locally.
  • You create a file or a set of files.
  • You add them to the repository.
  • You commit them.
  • You push them back to your GitHub account.

Imagine having to repeat the procedure for a number of projects...

So, what can we do to improve this tedious set of steps?

Enter, inrep!

Here is the set of steps it runs for you, freeing your hands from an unnecessary, repetitive typing pattern:

  • It makes an empty directory for you based on the name you give it.
  • It initializes that directory for the first time.
  • It creates a file and commits it.
  • It pushes the changes on your GitHub account based on the username you either initialized at the beginning of the script or you have passed as an argument.
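To make the idea concrete, the steps above can be sketched in plain shell. This is my own simplified illustration, not inrep's actual code; the default user name, the /tmp location, and the commented-out push step are all assumptions:

```shell
#!/bin/sh
# Hypothetical sketch of what an inrep-like script automates.
user="${INREP_USER:-johndoe}"   # stand-in for the -u / --user flag
project="${1:-FooDemo}"         # stand-in for the -p / --project flag

mkdir -p "/tmp/$project"
cd "/tmp/$project" || exit 1

# Initialize the repository and make the first commit.
git init -q .
echo "# $project" > README.md
git add README.md
git -c user.name="$user" -c user.email="$user@example.com" \
    commit -q -m "Initial commit"

# The real script would then push to the matching GitHub repository:
# git push -u "https://github.com/$user/$project.git" master
echo "initialized /tmp/$project"
```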

Let's see an example:

$ sh inrep.sh -u johndoe -p FooDemo
$ sh inrep.sh --project BarDemo2

As you can see, in the second case I did not use the -u or --user flag, because I have created a username variable inside the script that you can set explicitly to your own username for the sake of automation.

Frankly, there's nothing else I should explain to you; that's how simple it is, for the sake of simplicity and automation.

I hope you guys find it useful.

Happy coding, enjoy!

Working remotely - Your thoughts please

I have tried to find an online mentor (for free, of course, as I have been unemployed for more than a year now) who would be more than willing to have a long and exhausting discussion with me about various programming languages and concepts, all the "do"s and "don't"s around those specific concepts, and who would guide me towards the right path for eventually working remotely, either from home or from anywhere in the world, as long as I have a wireless connection and a laptop.

You will say "hey, nothing is free, you know" and I agree with you; but look around you and see how many open source projects exist. If I get trained properly, or have my existing knowledge validated by an experienced, professional senior developer, I'm pretty sure I will create a few useful projects, let alone services; plus, I will be able to participate in existing projects that need an extra pair of eyes and hands.

So, without extra fuss, anyone can suggest anything?


Introducing mproj

mproj is a rather small bash script that generates C or C++ project templates depending on your needs.

Have you ever worked with an editor like Vim or Emacs, where every time you had to copy / paste a Makefile sample from some other project of yours, or one you found online, and then struggled to make it work for a simple project?

Yep, that has happened to me more than once; you cannot imagine how furious I was when I had to repeat the same procedure for every single project of mine.

I know what you are thinking right now, so let me answer immediately: there is no reason to use an IDE for the sake of a demo project; with an editor like Vim, the editor I personally use and like, a couple of megabytes of memory are more than enough and work just fine.

For heaven's sake, why would I force someone to use an IDE just because I happened to use one for a project of mine? Besides, since certain IDEs do not export Makefiles successfully, you would have to either do double work or go with the old method.

That's why I have decided to follow the good ol' traditional K.I.S.S. concept: keeping things as simple as possible.

So, how does mproj actually work? It's very easy and convenient to use. It comes with preset flags that let you choose between C and C++ standards.

The following flags are preconfigured:

  • --c89
  • --c99
  • --c11
  • --c++98
  • --c++11
  • --c++14

In other words, by choosing, let's say, the --c++11 flag, it will generate an empty C++ project with the standard flag preset to support C++11.

Pretty cool, don't you think?

Here's an example:

bash mproj/mproj.sh --c++11 /tmp/myFirstDemo

It generates a C++11 project template inside the /tmp/ directory and names it myFirstDemo.

OK, but what happens if you forget to insert a project name as your second argument?

Well, because I have done this a couple of times by mistake, I have decided to default it to [FLAG]_demo, meaning that our previous example, without a given project name, would have generated c++11_demo in the current working directory.
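The fallback logic described above can be sketched in a few lines of shell. This is my own illustration of the behavior, not mproj's actual code:

```shell
#!/bin/sh
# Derive the project name the way described above: strip the leading
# dashes from the flag and append _demo when no name is given.
default_name() {
    flag="${1#--}"               # --c++11 -> c++11
    echo "${2:-${flag}_demo}"    # fall back to <flag>_demo
}

default_name --c++11 /tmp/myFirstDemo   # -> /tmp/myFirstDemo
default_name --c++11                    # -> c++11_demo
```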

That's it.

I hope you enjoy it as much as I did while writing it.

Happy coding, enjoy!

A simple C++ Deque example

Being a polymath and a programming polyglot can be really exhausting, let alone intimidating at some point. Therefore I have decided to write a few things down to serve as a backup and a reminder for this rusty brain.

Today we will see a very simple example of how double-ended queues, also known as deques, work.

So, what is a double-ended queue? It's a queue container that lets you add or remove elements at both of its ends; bear in mind that this happens dynamically behind the scenes for you, therefore you need not worry about it.

Time for our example:

#include <iostream>
#include <deque>

int main()
{
    std::deque<int> val;

    for (int i = 0; i < 10; ++i) {
        val.push_front(i);
        std::cout << "front value: " << val.front() << '\t';
        val.push_back( (i * i) + 9 );
        std::cout << "back value: " << val.back() << '\n';
    }
    std::cout << '\n';

    while (!val.empty()) {
        std::cout << "pop front: " << val.front() << '\t';
        val.pop_front();
        if (!val.empty()) {
            std::cout << "pop back: " << val.back() << '\n';
            val.pop_back();
        }
    }

    return 0;
}

OK, what does it do?

Basically, we create a deque that accepts values of type integer and, in our first loop, we push i onto the front of the queue and (i * i) + 9 onto the back of it.

In the second loop, we drain the deque: while it still has elements, we first print the value that's about to get popped from the front and then the one from the back.

Well, that was it; pretty straightforward, I would say.

Alright, that's it for today.

If anyone has a question, please don't hesitate to ask.


GNU Makefile sample version 2.0

Hello everyone,

After a bit of experimentation, I have created the ideal Makefile that works flawlessly for me and my needs.

For now I will write it down and I will explain it to you bit by bit later on.

CXX = g++

CXXFLAGS += -pedantic
CXXFLAGS += -std=c++14

SRC = src
HEADERS = include

INC = -I $(HEADERS)

OBJDIR := obj
BINDIR := bin

TARGET = $(BINDIR)/hw_demo

SOURCES = $(wildcard $(SRC)/*.cpp)
TMPOBJ = $(patsubst %.cpp, %.o, $(notdir $(SOURCES)))
OBJECTS = $(addprefix $(OBJDIR)/, $(TMPOBJ))

all : $(TARGET)

$(TARGET) : $(OBJECTS)
	$(CXX) -o $@ $(OBJECTS)

build :
	mkdir -p $(OBJDIR) $(BINDIR)

$(OBJECTS): $(OBJDIR)/%.o: $(SRC)/%.cpp
	$(CXX) $(CXXFLAGS) $(INC) -c $< -o $@

clean :
	@echo "Cleaning target and object files..."
	@rm -rf $(TARGET) $(OBJDIR) $(BINDIR)
	$(shell find . -type f -iname "*.pch" -delete)
	@echo "All clear!"

full : clean build all

.PHONY : clean build all full

.DEFAULT_GOAL := full


So, what does it do?

First of all, we set our CXX variable to work with g++; you can choose clang for example, but don't ask me how it works as I'm not using it. Sorry people :/

Then we set our C++ flags variable, CXXFLAGS, according to our needs: things like warnings, how strict the checking should be, which standard to use if we want to be explicit about it (since modern GCC enables a GNU dialect of C++ by default if you don't say otherwise), and what kind of code optimization we could use, again according to our needs.

Now, the real show begins! Before I learned all this, I used to have my source files, both headers and actual implementation files, exposed in the project's root directory, and that was not handy at all, as it generated lots of objects and sometimes some humongous precompiled files.

That's why I decided to create src and include subfolders inside the project's root directory, so I know exactly where each file should be and where to look for generated objects and/or precompiled files at a later time.

Next, the INC variable complements the setup by pointing the compiler in the right direction for header files with -I $(HEADERS).

Right after, I have decided to create two different subfolders, named obj and bin, that would exist along src and headers to make my life easier and a lot cleaner without all these objects, binaries, and precompiled files scattered all over the project.

What's next? Assigning to my TARGET variable the binary name it should generate; just give it something useful that makes sense for the project's name. So now my TARGET contains something like bin/hw_demo.

Immediately after follows the SOURCES variable, which uses the wildcard function that comes with make by default to collect all source file names, with their extension, inside our variable.

Then we have TMPOBJ, which holds our object names; long story short, without this temporary variable I couldn't make it work. make would insist on carrying the full path of each object file, and that would have caused problems with the later OBJDIR prefixing, which you will see very soon; I had to find a workaround and gladly I did (by mistake, but shhhh... don't tell anyone)!

I used the notdir function to extract just the file names from SOURCES and then substituted each file's extension with .o with the help of the patsubst function; what follows immediately after is our OBJECTS variable, which takes our objects newly prefixed with OBJDIR. This way it points to obj/foo.o, obj/bar.o, and so on, this time without any problem.
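You can watch these variables expand without compiling anything by dumping them from a throwaway project; the directory and file names below are made up for the demonstration:

```shell
# Build a throwaway project tree with two empty source files.
mkdir -p /tmp/mkvars-demo/src
cd /tmp/mkvars-demo
touch src/foo.cpp src/bar.cpp

# $(info ...) prints the expanded value while make parses the file,
# so no recipe (and no tab character) is needed.
cat > Makefile <<'EOF'
SRC     = src
OBJDIR  = obj
SOURCES = $(wildcard $(SRC)/*.cpp)
TMPOBJ  = $(patsubst %.cpp, %.o, $(notdir $(SOURCES)))
OBJECTS = $(addprefix $(OBJDIR)/, $(TMPOBJ))

$(info $(OBJECTS))
all : ;
EOF

make
```

This prints the object list (obj/bar.o obj/foo.o) exactly as the real Makefile would see it.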

Then come our targets: all, build, clean, and full.

  • all links our objects into the final binary.
  • build makes our two directories right inside our project's root directory.
  • clean removes our object and binary files and then deletes the two directories we have built with build target.
  • full basically runs a cleanup process, then creates our folders, and then builds everything anew.

Last but not least, there's .DEFAULT_GOAL. By setting this special variable to the target of our choice, make executes that target first by default. I chose to do so because my code is rather tiny and I don't have to worry about compilation time. If the project were larger, I wouldn't have set it.
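Here is a toy demonstration of that behavior, using a scratch Makefile with echo-only recipes (the one-line `target : ; recipe` form avoids tab troubles when pasting):

```shell
# A tiny Makefile showing that .DEFAULT_GOAL decides what plain `make` runs.
mkdir -p /tmp/goal-demo
cd /tmp/goal-demo

cat > Makefile <<'EOF'
.DEFAULT_GOAL := full

all : ; @echo building
clean : ; @echo cleaning
full : clean all
.PHONY : all clean full
EOF

# Even though `all` comes first in the file, plain `make` now runs `full`,
# which cleans first and then builds.
make
```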

So, that was it everyone.

I hope this silly script makes your life a lot easier and prettier with automation.

If anyone wants to say hi, please feel free.


An ideal GNU Makefile for C and C++ newbies

It's been a year or so that I have wanted to start using a GNU Makefile in place of an IDE, and this week I did a bit of research and put together an ideal Makefile that both newbies and intermediate programmers will find handy.

The code below is taken from my C Makefile; please change the settings according to your needs:

CC = gcc
# We include the current directory
INC = -I.
CFLAGS = -Wall -pedantic -std=c99 -O2

# Enable this if you use threads in your program
# LDFLAGS = -pthread

TARGET = demo
OBJECTS = demo.o demo_main.o

all : $(TARGET)

$(TARGET) : $(OBJECTS)
# The PThread version
#	$(CC) $(LDFLAGS) -o $(TARGET) $(OBJECTS)
	$(CC) -o $(TARGET) $(OBJECTS)

demo.o : demo.c demo.h
# The PThread version
#	$(CC) $(CFLAGS) $(LDFLAGS) $(INC) -c demo.c
	$(CC) $(CFLAGS) $(INC) -c demo.c

demo_main.o : demo_main.c
# The PThread version
#	$(CC) $(CFLAGS) $(LDFLAGS) $(INC) -c demo_main.c
	$(CC) $(CFLAGS) $(INC) -c demo_main.c

full : clean all

.PHONY : clean all full

clean :
	rm -f $(TARGET) $(OBJECTS)

.DEFAULT_GOAL := full


A small clarification: I'm using GNU / Linux Debian testing 64-bit, therefore the code above should work out of the box with GNU make.

So basically what am I doing here? In simple words, I'm first cleaning my already built binary files, both executable and object files, and then recompile my project; as simple as that.

So, every time I run the make command, it first cleans the project and then rebuilds it. This behavior comes from the use of the special variable .DEFAULT_GOAL.

For more information about its behavior, please visit GNU make: Special Variables.

I hope you find it useful as much as I did.

Happy coding everyone.


Useful Linux Commands Series - Part B

I haven't posted anything since October 23rd and I kind of feel uncomfortable with myself, because it's the first time I haven't been consistent with my obligations; well, what do you know, people change indeed.

Anyhow, I will continue with the series and hopefully in the near future I will be more reliable with time scheduling.

How to convert a flash video file to an MP3

You can convert a flash video with various tools under GNU / Linux; one of them is FFmpeg, or the libav tools. The latter is a fork of the former, for political reasons that are way beyond this blog's scope.

In our case I will use the libav tools, as they come with GNU / Linux Debian testing (currently jessie) and are available in place of ffmpeg.

Here's the command:

avconv -i <flash_video_file>.flv -f mp3 <result>.mp3

That was the simplest example to show. In the past I faced a serious issue with sound and had to explicitly specify an MP3 codec in my command:

avconv -i input.flv -acodec libmp3lame -aq 4 output.mp3

How to extract an audio file from a video

In my humble opinion, extracting the original (raw) audio file from a video is the best thing you can do if you are passionate about quality. In order to do so, you need to have MediaInfo installed.

You can use it from the terminal to get all the information you need, or you can use its GUI, which is convenient for inexperienced users. Doing so helps immensely in knowing the audio codec, thus making it easier to give the extracted file the appropriate extension.

Here is the command to extract your raw audio file from a video:

avconv -i file.mp4 -vn -acodec copy "audio_file_extracted.m4a"

How to convert MusePack (MPC) music files to MP3 files

A few years ago I remember I found this new "cool" audio codec and I wanted to rip a CD just for the sake of curiosity; I wasn't disappointed. After the conversion, the original CD disappeared from my room, and frankly I can't remember where I put it LOL! So, now I had to convert it to MP3 this time to sync it with my MP3 player and enjoy my music. The quality was exactly the same, unbelievably awesome results!

Below is the command I ran to convert all my MPC files at once:

for f in *.mpc; do avconv -i "$f" -acodec libmp3lame -aq 4 "${f%.*}".mp3; done

To explain it a bit: "for every .mpc file you find, recode it as an MP3 file of quality 4; keep the original name but drop the suffix file extension; we are adding our own, that of .mp3".
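The suffix-stripping part is plain shell parameter expansion and is easy to try on its own (hypothetical file name):

```shell
# ${f%.*} removes the shortest trailing .<something>, i.e. the extension.
f="my song.mpc"
echo "${f%.*}.mp3"   # -> my song.mp3
```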

How to delete multiple Debian packages

I remember once I faced a problem with PHP5 packages. Something broke in the background while upgrading, somewhere I could not detect and fix, and it corrupted all the PHP5 config files, so I had to completely purge the problematic set and reinstall it.

The command goes as follows:

sudo apt-get purge $(dpkg --get-selections | grep php5 | awk '{print $1}')

How to remove orphaned packages

It's good to keep your system clean from orphaned packages that are stuck on your system as remainders of already removed packages. In order to do so, you need to have the deborphan package installed. Install it and do the following to clean up your system:

sudo apt-get purge $(deborphan)

Please note that you have to re-run deborphan a few more times to make sure all packages have been purged; some packages do not get purged recursively for some reason.

How to search for music files and play them on the fly with mplayer

One afternoon I was at home experimenting with shell scripting and it hit me: is there a way to search for a song at a location of my choice and, as soon as I find the desired song, send it to mplayer and play it on the fly? I started laughing at myself for "imagining silly things"; I guess that silly thought was not *that* silly after all. I figured out a way to make it happen!

Here's the "funny" command I'm talking about:

find <directory-of-your-choice> -type f -name "*.mp3" -print0 | xargs -0 mplayer

If you want to sort the songs first and then play them, do the following:

find <directory-of-your-choice> -type f -name "*.mp3" -print0 | sort -z | xargs -0 mplayer

OK, logically you are now asking yourself: what about searching for the various forms of a song's file extension? I'm sure you have songs with extensions like .Mp3, .MP3, and so forth. And if you can search for different forms, why not for other file types as well? Yes, you can!

find <directory-of-your-choice> -type f \
    \( -iname "*.mp3" -o \
       -iname "*.m4a" -o \
       -iname "*.mp4" -o \
       -iname "*.aac" \) \
    -print0 | sort -z | xargs -0 mplayer

How to determine multiple file types at once

A main characteristic of mine is my congenital curiosity about everything. I vigorously seek answers about how things work, not *why* they work the way they do. Imagine, then, my anxiety to find answers to questions like "how do the experts implement their programs, and with what tools (language, that is)" and so forth.

That's the case with the file command. Unfortunately, it does not always work properly, as it guesses the file type, and that is something I wasn't willing to accept as an excuse. Fortunately for me, I found the mimetype command; just install the libfile-mimeinfo-perl package and it will make your life easier! You can thank me later.

Here is the command you could run to find file types of important programs located in /usr/bin/ and in /usr/sbin/:

find /usr/bin/ /usr/sbin/ -type f -exec mimetype {} \;

You can grep specific file types you are looking for and even sort them, like we did with other commands above.

That's it for now. I hope you enjoyed this new series. Any new set of commands I find interesting, I will surely share with you, have no worries about that. Until then, cheers.

Useful Linux commands Series - Part A

It has been a while since I posted anything, so I thought it would be a nice idea to collect a few GNU / Linux commands I have in various textbooks and share them here with you. The main reason is that I needed a certain command in the past and could not find it, plus I did not have my notes with me. So I have decided to collect them all in one place, and from time to time I will be adding a few more here.

Okay, enough words; here are the commands that saved my day.

How to generate MD5 hashes for files

I have tested the following commands and they work just fine.

find /home/user/Documents/ -iname "*" -exec md5sum -b {} \; > md5_hashkeys.txt

Another way to accomplish the same thing is the following code:

find /home/user/Documents/ -type f -print0 | xargs -0 md5sum > md5_hashkeys.txt

Yet another, simpler way to accomplish the very same thing is the following code:

find /home/user/Documents/ -type f -exec md5sum {} + > md5_hashkeys.txt

How to backup a directory with rsync

This command saved me many times by now; that's how I backup my most important data from my machine to my external hard drive.

rsync -ahv --progress --include ".*" /home/user/ /media/external_drive_directory/

How to give root access to sudo users

CAUTION: Use it at your own risk! I'm a lazy person and I want things to be almost automated; that does not mean it's secure this way. Use it only on your desktop PC and never on a development or live server. Edit the sudoers file with sudo visudo and add the following line:

<username> ALL = NOPASSWD:ALL

How to download specific files from a website with wget

I don't know whether it is legal or not, but I had to use this command in the past to download a bunch of research documents from a university server, as they could not send me one massive package with all the PDF files in it; wget saved my day (yes, it was a long time ago).

wget --recursive --level=1 \
--no-directories --timestamping \
--no-parent --accept=.pdf \
--execute="robots=off" --verbose \
--directory-prefix=/home/user/Downloads/ \
"<url>"

How to display file permissions in octal under GNU / Linux

In the past I wondered how anyone could read access permissions like -rwxr-xr-x. I looked for an alternative way to understand what those r, w, and x meant and found the following command to convert permissions to octal values; my life with PHP got a lot easier thanks to the stat command.

stat -c '%A %a %n' *

To explain the command a bit: %A displays file permissions in human-readable form, like the aforementioned example, %a displays access rights in octal form (our desired result), and %n displays the file name. I chose to display everything just for demonstration purposes; you can be more specific if you like.
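To see the symbolic and octal forms side by side, you can try the command on a file with a known mode (the file name below is made up for the demonstration):

```shell
# Create a file, give it a known mode, and compare the two notations.
touch /tmp/perm-demo.txt
chmod 754 /tmp/perm-demo.txt
stat -c '%A %a %n' /tmp/perm-demo.txt   # -> -rwxr-xr-- 754 /tmp/perm-demo.txt
```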

I hope you enjoyed this part of the series and don't worry, I will add more content soon enough, hopefully by the end of this week. Until then, see you and enjoy your time with our commands.