Wednesday, 2 December 2015

File upload download in hapi.js

A simple solution for file upload and download in hapi.js. Here we will show you how to upload and download a file of any extension in a very simple manner.

Repo: https://github.com/pandeysoni/Hapi-file-upload-download

If the payload is ‘multipart/form-data’ and parse is true, field values are presented as text while files are provided as streams. File streams from a ‘multipart/form-data’ upload will also have a hapi property containing filename and headers properties. In the route below we set parse to false instead and hand the raw payload to the multiparty module, which parses the multipart body for us.
// requires: var multiparty = require('multiparty');
exports.uploadFile = {
    payload: {
        maxBytes: 209715200,   // allow uploads up to 200 MB
        output: 'stream',
        parse: false           // we parse the multipart body ourselves below
    },
    handler: function(request, reply) {
        // let multiparty split the raw payload into fields and files
        var form = new multiparty.Form();
        form.parse(request.payload, function(err, fields, files) {
            if (err) return reply(err);
            else upload(files, reply);
        });
    }
};
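For context, here is roughly how this route config could be wired into an older (hapi 8.x-era) server using the reply interface; the port, path and module layout are placeholders, not part of the repo above.

var Hapi = require('hapi');
var FileController = require('./fileController'); // hypothetical module exporting uploadFile

var server = new Hapi.Server();
server.connection({ port: 8000 });

server.route({
    method: 'POST',
    path: '/upload',
    config: FileController.uploadFile
});

server.start(function(err) {
    if (err) throw err;
    console.log('Server running at', server.info.uri);
});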
The upload function reads the file we just received and writes it into the configured directory:
// requires: var fs = require('fs'); Config holds the target folder paths
var upload = function(files, reply) {
    fs.readFile(files.file[0].path, function(err, data) {
        if (err) return reply(err);

        // make sure the target directory exists (see checkFileExist below)
        checkFileExist();
        fs.writeFile(Config.MixInsideFolder + files.file[0].originalFilename, data, function(err) {
            if (err) return reply(err);
            else return reply('File uploaded to: ' + Config.MixInsideFolder + files.file[0].originalFilename);
        });
    });
};
Here we use the checkFileExist function, which creates the directories if they do not exist yet:
var checkFileExist = function() {
    // create the public folder first, then the nested mix folder inside it
    fs.exists(Config.publicFolder, function(exists) {
        if (exists === false) fs.mkdirSync(Config.publicFolder);

        fs.exists(Config.MixFolder, function(exists) {
            if (exists === false) fs.mkdirSync(Config.MixFolder);
        });
    });
};
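As a side note, the parse: true behaviour quoted at the top can also be used directly: hapi then parses the multipart body itself and hands each file over as a stream whose hapi property carries the original filename, so the upload can be piped straight to disk without buffering it in memory. A minimal sketch (the route name and the 'file' form field are assumptions; fs and Config are the same as above):

exports.uploadFileStream = {
    payload: {
        maxBytes: 209715200,
        output: 'stream',
        parse: true            // let hapi parse the multipart body
    },
    handler: function(request, reply) {
        var file = request.payload.file;                          // stream for the 'file' form field
        var target = Config.MixInsideFolder + file.hapi.filename; // original filename from hapi
        var out = fs.createWriteStream(target);

        file.pipe(out);
        out.on('finish', function() {
            reply('File uploaded to: ' + target);
        });
        out.on('error', function(err) {
            reply(err);
        });
    }
};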
To download a file we need to send the correct Content-Type header, so we map the file extension to a MIME type:
switch (ext) {
    case "pdf":
        contentType = 'application/pdf';
        break;
    case "ppt":
        contentType = 'application/vnd.ms-powerpoint';
        break;
    case "pptx":
        contentType = 'application/vnd.openxmlformats-officedocument.presentationml.presentation';
        break;
    case "xls":
        contentType = 'application/vnd.ms-excel';
        break;
    case "xlsx":
        contentType = 'application/vnd.openxmlformats-officedocument.spreadsheetml.sheet';
        break;
    case "doc":
        contentType = 'application/msword';
        break;
    case "docx":
        contentType = 'application/vnd.openxmlformats-officedocument.wordprocessingml.document';
        break;
    case "csv":
        contentType = 'application/octet-stream';
        break;
    default:
        // unknown extension: let hapi work out the type itself
        reply.file(path);
}
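Putting it together, a download route might look roughly like this. The ?file= query parameter and the Config paths are assumptions for the sketch, and reply.file() needs file-serving support in your hapi version (built in up to hapi 8, the inert plugin afterwards):

exports.downloadFile = {
    handler: function(request, reply) {
        var fileName = request.query.file;                  // e.g. /download?file=report.pdf
        var path = Config.MixInsideFolder + fileName;
        var ext = fileName.split('.').pop().toLowerCase();
        var contentType;

        // ... the switch shown above sets contentType for the known extensions ...

        if (contentType) {
            // serve the file with an explicit MIME type and force a download
            reply.file(path)
                .header('Content-Type', contentType)
                .header('Content-Disposition', 'attachment; filename=' + fileName);
        } else {
            // unknown extension: let hapi pick the type
            reply.file(path);
        }
    }
};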

Sunday, 22 November 2015

Install GO language on Ubuntu (64 bit)

Go is a general-purpose language designed with systems programming in mind. It was initially developed at Google in 2007 by Robert Griesemer, Rob Pike, and Ken Thompson. It is strongly and statically typed, provides inbuilt support for garbage collection, and supports concurrent programming. Programs are constructed using packages, for efficient management of dependencies. Go implementations use a traditional compile-and-link model to generate executable binaries.

The Go programming language was announced in November 2009 and is used in some of Google's production systems.



Installation

sudo apt-get install python-software-properties
 
sudo add-apt-repository ppa:duh/golang 
 
sudo apt-get update
 
sudo apt-get install golang 


To confirm:
 
go version

Output:

go version go1.2.1 linux/amd64

Typing go on its own in the terminal prints the go tool's help output, listing the available commands.

Saturday, 12 September 2015

Bye Bye MongoDB and MySQL

Why do we want to switch?

So, first off, let's explain our reasons for the move. One of the reasons we've been running on MySQL until now has been replication, which used to be quite a hassle with PostgreSQL. But times have changed: now, not only does it offer built-in streaming replication, it can even be configured as master <--> master. Add to that PostgreSQL's excellent data consistency behaviour (more on MySQL's shortcomings later), and it's already getting interesting.
What about MongoDB then? We have used it exclusively for chat transcripts, which are actually one thing mongo handles really well. Our only gripe with it was that it tended to consume absurd amounts of memory over time. Since PostgreSQL added JSON support a while ago, we decided to switch over, the biggest upside being one less service to maintain.
At the same time, we decided to upgrade our ejabberd version, since the old version didn't play nice with PostgreSQL. By doing so, we could get rid of yet another nasty service, xmpp-bosh, since recent ejabberd versions have native support for bosh over websockets.

What was affected

So, after explaining why we set out on this journey, here is our target setup:
  • One PostgreSQL master with three slaves as hot standby (Userlike is dead serious about not losing data)
  • Upgrade ejabberd to current version
  • Drop xmpp-bosh
  • Drop MongoDB
  • Drop that MySQL

MySQL Migration

To move our data to PostgreSQL, we needed some kind of conversion process, and there are basically two different ways to go about it:
  1. Create an SQL dump and do some fixing and converting on it to make it importable into PostgreSQL
  2. Use the django ORM to dump everything as JSON, then reimport the result
We decided to go with option two since there are a few differences in how the django ORM creates and handles fields between MySQL and PostgreSQL. We wanted to dodge the problems which could potentially arise from that.
As we had to learn, straightforwardly dumping large datasets with the default dumpdata and loaddata commands isn't viable, because the utility tries to do everything in one go without chunking. So, naturally, we ran out of memory. Fortunately, we were not the first to run into this problem, so this helped us a lot: https://github.com/fastinetserver/django-dumpdata-chunk.
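For reference, the stock commands look roughly like this (the file name is just a placeholder); the chunked variant from the project linked above writes many smaller files instead of one big dump:

python manage.py dumpdata > dump.json

python manage.py loaddata dump.json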
I didn't mention it before, but we're using cachemachine, which caches django querysets using redis. It turns out dumping your whole database with such a thing in place is a terrible idea. The redis instance caught fire and exploded at a later point from the load we put on it, so we disabled cachemachine between the migration steps.
Another thing worth mentioning is that loaddata triggers django signals on your models, which could potentially lead to things like signup mails going out to already registered customers. This can be circumvented by checking if you get passed raw=True into your signal handler. We solved this by adding a decorator to our signal handlers, which skips the execution in this case.
Being done at this point would've been too easy, though: the next issue we hit was MySQL fields that violated constraints or contained values PostgreSQL is stricter about, such as django's IPv4 address field actually validating its contents.
Those issues were fixed by processing the dumpdata json files with a few custom cleanup functions before loading them again.
That leaves us with the ejabberd chat roster database, which was previously handled by MySQL, so we had to make this play along with PostgreSQL too. This by itself wasn't too difficult: we're using mod_global_roster, and apart from a few database permission restrictions that had to be ported over, we could just switch the connector settings and be done with it.

Mongo Migration

This part went a lot more smoothly, mostly because our mongo use case was quite simple. We dumped all mongo data into a JSON file and imported the chat transcripts into the JSON field for each chat. The only thing that needed handling here was datetime data, which the mongo connector had implicitly converted for us in both directions. Datetime data is now stored as an ISO-8601 date string, which is converted back and forth by our own code.
One nice feature mongo has is updating a JSON document in place; we used that to append each line in a chat transcript to an array. We lost this capability, since PostgreSQL as of yet does not support appending to an array through JSON queries; we would have to add a stored procedure to pull that off. Not being very fond of that idea, we instead simply buffer entries for the chat transcript and only update the row every X entries or after some amount of time has passed.

Enter (and exit) PgBouncer

After all that praise for PostgreSQL, there is one potential downside: connection setup is more expensive compared to MySQL. This can be mitigated by using PgBouncer, which holds already set-up connections and hands them out to the needy. During development it turned out that, for our setup, this was a case of premature optimization after all. Just giving every django worker process a persistent connection is enough for us, and since less is more, we dropped PgBouncer in the end.

Migration Branches

Time doesn't stand still for long in our development process, so after development made the first switch to PostgreSQL, we piled on new database migrations. This left us with a problem: dumping the production MySQL server's data and loading it into the current development branch doesn't work in this scenario. So we made a migration branch and forked off right at the point where MySQL turned into PostgreSQL for the first time. Having such a branch also let us put all the code from the previous steps (i.e. disabling signals, cache machine etc.) into a convenient location separate from the regular code base. So, the plan emerged to switch to the postgres-migration branch, do all the conversion work, switch back to master, apply every regular django migration that happened since then, and be done.

Will it blend?

One concern that came up during development was that the new stack might be significantly slower than the old one. This isn't even about "Is MySQL faster than PostgreSQL?"; it can simply happen due to seemingly minor differences in each database's behavior. After all, what we had before was reasonably optimized for MySQL, since we had been running on it for quite some time. Luckily for us, our friends over at StormForger were interested in more people testing out their product, so we decided to give it a spin to see where we're at. StormForger is a load testing service: you configure your website's API endpoints and set up different scenarios to stress your application.
Outlining in detail how we use StormForger now is enough material for another blog post, which we will write in the future, but here is a short rundown:
(Screenshot: overview of test cases)
(Screenshot: result of a test case)
The bottom line is, we don't actually know in hard numbers where we stand compared to before, since we had never load-tested the old system this extensively. Still, we now know our current limits very well. In response, several database queries were optimized or restructured, up to the point where people not involved in the process started to notice and gave positive feedback.

Preparing for failure

Battle-hardened developers reading this are probably thinking "That's a lot of critical things changing at once" - and they're absolutely right. Working with an established product that is in use by customers is rewarding and a lot of fun, but you cannot break it in any major way.
The important questions here are "What can go wrong?" and "What can we do about it, when it actually does go wrong?".
Since we did a lot of testing before, there was not much doubt about the migration process itself. What was worrying us the most was the major ejabberd upgrade, since we had a very fine tuned setup with the old version and couldn't load-test the new version under production conditions except by actually putting it in production.
The decision was made to keep a downgrade to the previous ejabberd version as a fallback option, which proved to be yet another challenge. Our previous ejabberd version bundled an ODBC connector which was incompatible with the PostgreSQL 9 series. Tracing the source back through git to a point in time when svn was still in use, we could locate the version of the connector with just the patch we needed. Doing it this way instead of simply using the newest version minimised the risk of incompatibilities, which is an important thing for a failsafe fallback plan. After fiddling around with different erlang compiler versions (since we used an official binary release), we got it running and eventually had our old ejabberd talking to our new PostgreSQL database.

Final words

With everything in place, we did a smooth rollout of the release. Of course it was a long night, simply due to the amount of time the migration process needed, but we took it in stride without any issues.

Tuesday, 1 September 2015

What are some downsides of MeteorJS?


Meteor is a kind of "finished product" framework, if I may call it that. It does a lot for you. For me that is kind of a downside: I don't like frameworks that do too much. When they do too much, a lot of magic happens without your knowledge, and in the end you may find yourself limited. Are you sure you want to use a method that will re-render the whole page, for example? Sometimes you don't even know that it is happening. I prefer something simpler like Backbone. For me, too much magic without my knowledge is a downside.

Another downside is the size footprint. I can't find exactly how much it weighs, but with so much magic... Backbone is 23 kB. That will cost you in load times.

In my case I use Spoonjs, or Backbone plus a plugin (one I've made to customise it further for my usage). Meteor is a closed-box solution, and in my case I don't find space for it. With big projects, Backbone + plugin / Spoonjs gives me the freedom to do whatever I like. The worst thing that can happen to a developer is to reach a point in the project where you find that the framework is holding you back. For small projects, Meteor would be overkill, so...

Meteor is a well-built framework, but it will never give you the control of a smaller one that does less. I guess the speed of development it offers on small / medium projects could be interesting.

By the way, instead of Meteor's CLI I use Grunt / Gulp, yo and similar tools.

Simply put - it is perfect for an MVP but not so good for the end-use application.

Monday, 31 August 2015

MongoDB Backup and restore (mongodump, mongorestore, mongoexport, mongoimport)

This post describes the process for creating backups and restoring data using the utilities provided with MongoDB.

The mongorestore and mongodump utilities work with BSON data dumps, and are useful for creating backups of small deployments. For resilient and non-disruptive backups, use a file system or block-level disk snapshot function, such as the methods described in the MongoDB Backup Methods document.

mongodump -d testDB
 
This command creates the backup in a dump directory under your current working directory (e.g. /home/user/dump), containing the BSON dump of testDB.


mongorestore --host localhost --port 27017 <path_of_database>
 
This command restores the database; you can then verify it in the mongo console.
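For example, after the restore you can open the mongo shell and run a few quick checks (testDB being the database dumped above):

show dbs

use testDB

db.getCollectionNames()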
 
 
 
MongoDB’s mongoimport and mongoexport tools allow you to work with your data in a human-readable Extended JSON or CSV format. This is useful for simple ingestion to or from a third-party system, and when you want to back up or export a small subset of your data. For more complex data migration tasks, you may want to write your own import and export scripts using a client driver to interact with the database.
The examples in this section use the MongoDB tools mongoimport and mongoexport. These tools may also be useful for importing data into a MongoDB database from third party applications.
If you want to simply copy a database or collection from one instance to another, consider using the copydb, clone, or cloneCollection commands, which may be more suited to this task. The mongo shell provides the db.copyDatabase() method.
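For example, db.copyDatabase() can be run from the mongo shell on the destination server; the database names and source host below are placeholders:

db.copyDatabase("testDB", "testDB_copy", "source.example.net")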
 
 
 
 
When you export in CSV format, you must specify the fields in the documents to export. The following operation exports the name and address fields:
 
 
mongoexport --db users --collection contacts --type=csv --fields name,address --out /opt/backups/contacts.csv 



Import JSON to Remote Host Running with Authentication

In the following example, mongoimport imports data from the file /opt/backups/mdb1-examplenet.json into the contacts collection within the database marketing on a remote MongoDB database with authentication enabled.
mongoimport connects to the mongod instance running on the host mongodb1.example.net over port 37017. It authenticates with the username user and the password pass.



mongoimport --host mongodb1.example.net --port 37017 --username user --password pass --collection contacts --db marketing --file /opt/backups/mdb1-examplenet.json

 

Tuesday, 27 January 2015

SmartOS Introduction

SmartOS unites four extraordinary technologies to revolutionize the datacenter:

ZFS + DTrace + Zones + KVM
These technologies are combined into a single operating system, providing an arbitrarily observable, highly multi-tenant environment built on a reliable, enterprise-grade storage stack.


SmartOS is a specialized Type 1 Hypervisor platform based on Illumos.  It supports two types of virtualization:
  • OS Virtual Machines (Zones): A lightweight virtualization solution offering a complete and secure userland environment on a single global kernel, delivering true bare-metal performance and all the features Illumos has, namely dynamic introspection via DTrace
  • KVM Virtual Machines: A full virtualization solution for running a variety of guest OSes, including Linux, Windows, *BSD, Plan 9 and more

MongoDB (NoSQL Database)

MongoDB is a powerful, flexible, and scalable data store. It combines the ability to scale out with many of the most useful features of relational databases, such as secondary indexes, range queries, and sorting. MongoDB is also incredibly featureful: it has tons of useful features such as built-in support for MapReduce-style aggregation and geospatial indexes.

There is no point in creating a great technology if it’s impossible to work with, so a lot of effort has been put into making MongoDB easy to get started with and a pleasure to use. MongoDB has a developer-friendly data model, administrator-friendly configuration options, and natural-feeling language APIs presented by drivers and the database shell. MongoDB tries to get out of your way, letting you program instead of worrying about storing data.