Sunday, December 07, 2014

Smarter Egypt Hackathon, Nov 2014

Last weekend, IBM Egypt, in cooperation with ITI and other parties, organized a four-day hackathon themed around applying big data technologies to solve national problems in Egypt.


The organizing parties involved:

  • Information Technology Institute (ITI), which hosted the event and promoted it heavily, believing in the value of such an activity for the national development of the country, and seeing it as a chance for its students, among others, to refine their skills, gain hands-on experience on real-world critical problems, and build resounding success stories.
  • IBM. Besides the national goal, IBM has an ongoing cooperation with ITI, and the event was a good chance to promote IBM technologies, especially in the national-projects domain; potential success stories would have a high ROI for IBM.
  • Government representatives from a few domains targeted by the hackathon theme. The representatives brought data samples and domain expertise from their fields, specifically:
    • Water
    • Agriculture
    • Health
I was a participant in the hackathon, along with a couple of professionals from Alexandria and four ITI students. We called our team "Waterfall" and worked on designing a product to make water-network maintenance operations more efficient, reducing wasted time, effort, and downtime. We were the 2nd-place winners, AHL.



I would like to elaborate on some notes regarding the event:
  • Tremendous effort went into organizing the event, with great results. I'm really thankful to all the entities involved.
  • While government entities are usually closed-minded when it comes to data, I am glad some were convinced to disclose some of theirs. However, declaring a "Big Data" theme for the event while the data samples don't go beyond a few thousand records (one sample in a specific domain was under 200 records) can be considered a joke. I'd call it a "data" hackathon rather than a "Big Data" one.
  • Although the hackathon was announced in advance and some resources were posted earlier, more benefit could have been gained by replacing the technical sessions on the first day with actual workshops a week or so ahead, at least for the students, who needed time to get up to speed with the proposed technologies.
  • The government could make use of similar activities on a larger scale by inviting professionals, as entities as well as individuals, over a longer time span, in a similar kind of competition (not necessarily on-site), with potential actual projects afterwards. Of course, companies can make this effort too and try to keep working on the proposed solutions to the discussed problems.

Overall, I'm highly satisfied with and enthusiastic about the event, and I'm looking forward to the next steps from both the community and the interested parties.

Sunday, November 23, 2014

Facebook "Thank you" moments

A few years ago, Facebook launched their "7 years look back" videos: any Facebook user could go to https://www.facebook.com/lookback to find a video generated for them, showing highlights of their past activities and photos, beautifully animated with nice background music. It went viral; everybody liked watching and sharing their highlights.

Nowadays, something more of that kind is up. Facebook has launched "Thank you" videos: Facebook users can visit https://www.facebook.com/thanks, where a list of friends appears on the side. Upon selecting a friend, a video is generated with a wonderful theme and soundtrack, showing highlights between you both, with generic thank-you words. The content is usually photos you are both tagged in, or posts made by one of you in which the other is tagged. Of course, it starts with your spouse by default :)

The wonderful part is that you can go through the list and see highlights of you with different people, which you will eventually want to share with some of them. Even for friends with not much interaction, Facebook stays abstract and short instead of making assumptions and looking stupid. Well done on that part.

Try it now https://www.facebook.com/thanks

Monday, April 28, 2014

Stress Testing with Siege and Bombard, know your limits

I couldn't find a clear, quick intro to getting siege and bombard into action, so I'm writing one here.

Siege is a load testing and benchmarking utility that has been available for quite a while. It allows you to hit your web application at a specific URL (or a set of URLs in a file) with specific concurrency and size settings. Siege summarizes the measurements of the test outcome, including:

  • Transaction rate (requests/sec)
  • Actual concurrency (even if you hit it with 200 concurrent connections, your server might be serving only 80 at a time)
  • Average, longest, and shortest response times
Example: triggering 200 concurrent users, each hitting the URL twice:

siege -c200 -r2 http://www.modsaid.com/

Summary:

Transactions:         400 hits
Availability:      100.00 %
Elapsed time:       38.44 secs
Data transferred:        0.26 MB
Response time:       10.09 secs
Transaction rate:       10.41 trans/sec
Throughput:        0.01 MB/sec
Concurrency:      104.96
Successful transactions:         400
Failed transactions:           0
Longest transaction:       34.19
Shortest transaction:        0.52


Siege is available in most distro repos and can be installed directly on Ubuntu/Debian via

apt-get install siege

However, in order for POST requests to work properly, I'd recommend installing the latest version (currently 3.0.5) from source:

wget http://www.joedog.org/pub/siege/siege-3.0.5.tar.gz
tar zxvf siege-3.0.5.tar.gz
cd siege-3.0.5/
./configure && make && sudo make install
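
With the source build in place, POST requests in a urls file work as expected. Siege reads targets from a file (one per line) via the -f option. A sample urls file (the /about path and the form fields below are made up for illustration):

http://www.modsaid.com/
http://www.modsaid.com/about POST name=homer&age=45

Then point siege at the file, for example at concurrency 50 for one minute:

siege -c50 -t1M -f urls.txt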

Similar tools exist, mainly Apache's ab, but ab is very basic compared to the flexibility of siege.
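
For a rough comparison, the closest ab equivalent of the test above fixes the total number of requests up front and reports far fewer metrics:

ab -n 400 -c 200 http://www.modsaid.com/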

Using siege is great for a single test, but you'll need to run it several times with different parameters in order to understand your limits. And there comes the handy wrapper, bombard.

Bombard is a wrapper around siege that allows you to start your load test with certain parameters and increase the load (concurrency) incrementally. It also draws charts that let you see how your server reacts. Plotting makes things clearer and gives you a better sense of your server's current limits.


Installing bombard requires dependencies:
  • GD2 perl module
    sudo apt-get install libgd-graph-perl
  • Chart-2.x
    wget http://www.cpan.org/authors/id/C/CH/CHARTGRP/Chart-2.4.6.tar.gz
    tar zxf  Chart-2.4.6.tar.gz && cd Chart-2.4.6 && \
    perl Makefile.PL && make && make test  && sudo make install
Then you can install bombard from source:

git clone git@github.com:allardhoeve/bombard.git
cd bombard
./configure
make && sudo make install

Now you can check all options through bombard -h

Let's try experimenting on our site, starting with:
  • starting concurrency 50
  • increment 10
  • runs 10 times  (so it will try 50, 60, 70, 80, ..., 140)
  • each run will take 1 minute
Note: Bombard expects the absolute path of the URLs file; it will not work with relative paths.
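
For this walkthrough, the file can be as simple as a single line:

echo "http://www.modsaid.com/" > /home/me/bombard-test-urls.txt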

bombard  -f /home/me/bombard-test-urls.txt -i10 -r10 -s50 -t1

(Charts generated by bombard: transaction rate vs. concurrency, and response time vs. concurrency.)

From both graphs we can see that the transaction rate increases as we increase the test concurrency, but it nearly saturates at concurrency 80. The response time starts at nearly half a second and stays there as we increase the load, until we reach concurrency 80; then it rises. The increase here is not from our web application itself; it actually comes from requests buffering at the web server and application.

The two graphs indicate that our setup can handle up to 80 concurrent requests decently; beyond that, performance degrades as the load increases.


If you find this useful, please share your experience via comments, or get in touch with @modsaid.



Friday, April 18, 2014

Linode finally back to the new baseline of VPS Hosting

Two days ago, Linode finally announced their new VPS hosting offering, doubling the RAM and moving to fast SSD storage.

SSD storage allows much faster operation across the board: setup tasks, compilation, and ongoing running services.

And the new memory upgrade amounts to cutting prices in half, since people can now get the same server RAM for half the price they used to pay.

The smallest instance offered by Linode is now the 2GB instance at $20/mo, as stated in their pricing list:

Plan        RAM    SSD      CPU       Transfer  Outbound Bandwidth  Price
Linode 2G   2 GB   48 GB    2 cores   3 TB      250 Mbps            $0.03/hr | $20/mo
Linode 4G   4 GB   96 GB    4 cores   4 TB      500 Mbps            $0.06/hr | $40/mo
Linode 8G   8 GB   192 GB   6 cores   8 TB      1 Gbps              $0.12/hr | $80/mo
Linode 16G  16 GB  384 GB   8 cores   16 TB     2 Gbps              $0.24/hr | $160/mo
Linode 32G  32 GB  768 GB   12 cores  20 TB     4 Gbps              $0.48/hr | $320/mo
Linode 48G  48 GB  1152 GB  16 cores  20 TB     8 Gbps              $0.72/hr | $480/mo
Linode 64G  64 GB  1536 GB  20 cores  20 TB     10 Gbps             $0.96/hr | $640/mo
Linode 96G  96 GB  1920 GB  20 cores  20 TB     10 Gbps             $1.44/hr | $960/mo

This counts as getting back on track for Linode; both offerings (SSD storage and this pricing) have been available from DigitalOcean for more than 15 months, causing lots of users to test the waters with it and others to actually migrate to DigitalOcean.

The cost of the 2GB and larger instances is now the same for both hosting providers.
However, DigitalOcean still offers smaller setups, starting from a 512MB server at $5/mo, which can be more suitable for lots of users. DigitalOcean's offering, since the beginning of 2013 and still today:


Plan (RAM)  SSD    CPU      Transfer  Price
512MB       20 GB  1 core   1 TB      $5/mo
1GB         30 GB  1 core   2 TB      $10/mo
2GB         40 GB  2 cores  3 TB      $20/mo
4GB         60 GB  2 cores  4 TB      $40/mo
8GB         80 GB  4 cores  5 TB      $80/mo

Linode, however, still has an edge with better processors in some of the packages, and is still known for their reliable support.



Thursday, March 06, 2014

Redmine service hook for github

GitHub services

github.com allows you to activate service hooks on your code repository. Hooks cover lots of interesting functionality, like posting email notifications about commits, among others.
At eSpace, we use github.com as our code repository and Redmine as our project management and issue tracking tool. We wanted commits to be reflected automatically in the project management tool, keeping the whole team aware of the progress directly.
All service hooks are open source; you add yours to https://github.com/github/github-services by (a rough command-line sketch follows the list):
  1. forking https://github.com/github/github-services
  2. implementing your new service or patch
  3. sending a pull request to the original repo
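
The branch name and service file path below are illustrative; check the repo layout:

git clone git@github.com:your-username/github-services.git
cd github-services
git checkout -b redmine-issue-updater
# implement your service or patch, e.g. in services/redmine.rb
git commit -am "Teach the redmine service to update issues"
git push origin redmine-issue-updater
# then open a pull request against github/github-services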

Redmine service hook

A service already existed for Redmine, covering the following need:
For code browsing to work well in Redmine, it was found that the best way to do it with git is to let Redmine watch a local clone of the repo. Upon pushing new commits to GitHub, the service hook triggers pulling those updates into the local repo on the Redmine server, so the code seen through Redmine stays up to date.
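
Conceptually, what the hook triggers on the Redmine server boils down to refreshing that local clone, something like the following (the path below is hypothetical):

cd /var/redmine/repos/myproject.git
git fetch --all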

Redmine issue updater


Great thanks to @basayel for helping me out with this. We modified the Redmine service in github-services to add another, different piece of functionality.
When a team member pushes a commit whose message includes "Fixing #1234", we wanted an update to be posted to issue #1234 on Redmine about that commit, so that from Redmine we can easily reach the commits related to a feature or bug fix.
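
A hypothetical commit that would trigger such an update:

git commit -m "Fixing #1234, detect leaks in the pump station report"
git push origin master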

Activating the plugin

To make the issue updater active, you need to be an admin of the repo on GitHub:
  1. visit the "admin" tab of your repo
  2. select Service Hooks > Redmine
  3. fill in:
    1. the Redmine URL
    2. the API key (can be generated from the account settings on Redmine)
    3. and check both "Active" and "Update Redmine Issues About Commits"
  4. save and you are good to go
Notes:
  • The updates will be posted on redmine authored by the owner of the API key
  • We created a user "github watcher" that we use for this in all our projects
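
For reference, posting such a note through Redmine's REST API looks roughly like this (the host and note text are placeholders; the service does the equivalent internally):

curl -X PUT \
  -H "Content-Type: application/json" \
  -H "X-Redmine-API-Key: YOUR_API_KEY" \
  -d '{"issue": {"notes": "Commit abc1234: Fixing #1234 ..."}}' \
  https://redmine.example.com/issues/1234.json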

The related pull request, https://github.com/github/github-services/pull/374, was merged in September 2012 and has been working well since then.
We did hit an issue, though, when working with Redmine instances that run over HTTPS.