Tuesday, December 24, 2013

Organize your reading with Pocket, an opportunity for more

I have been using Pocket for a while now to queue all the articles I want to read, so I can avoid interruptions and read them at a more convenient time.  This was actually the philosophy of the app back when it was known as Read It Later.

This lets me go through the list later on mobile while I'm commuting or waiting somewhere.

Seems perfect, but in practice things don't go perfectly.  Nothing is wrong with the app; the problem is me.  Like lots of people out there, I'm a lazy procrastinator. I'm working on fixing this, but it is a fact I have to admit.

The result is that I queue lots and lots of stuff in Pocket, and read far less than what's there.  Saving things to the Pocket queue gives you a sense that nothing will be lost and you can always come back to read it whenever you want. Well... sometimes this "whenever" doesn't work out for lazy people.

One clear option is to ignore those people, which is understandable. But there is also an opportunity to help them fix their habits, through several possible approaches. Here is one:

What about an optional feature that automatically retires/archives entries older than X days or weeks?  Articles become obsolete anyway. So instead of feeling secure that entries will never be lost, you would know the time is limited: you have to go through things, skip the unimportant ones, and read what matters within a reasonable time.

Everything online is a stream; it doesn't matter what the first article you ever saved was, you always get to capture the latest. This is actually a better analogy with life itself: everything happening around us is a stream, and life is limited.

Now, is this idea really an opportunity, or just a meaningless thought?  I guess the behavior of current users might tell. Maybe I am the only lazy person here :).   To figure this out, we can define a measure:
the percentage of "read" items to the total items queued per person, or maybe a more complex measure taking the rate of read items vs. added items per month.

There is no easy way for me to see those numbers myself, but the Pocket team can definitely tell:

X = (number of read items) / (number of queued, non-archived items)

And the question is: what's the average of that measure across all users?  And what's the value for me, "modsaid"?

Tuesday, December 03, 2013

Exporting data from a remote MySQL instance to CSV, using the mysql client and sed

MySQL allows exporting query results to a file using INTO OUTFILE, as in the following example:

SELECT a,b,a+b INTO OUTFILE '/tmp/result.csv'
  FROM test_table;

This, however, writes the data to the file system of the database server.  Sometimes you do not have access to that server and are only connected remotely from a different machine using the mysql command line client.

One way to export the data is to pass the query with -e and redirect the output to a file, as follows (host, user, and database names are placeholders):

mysql -h remote_host -u user -p db_name -e "SELECT a,b,a+b
  FROM test_table;"  > output.txt

This will do the trick, except that the resulting file is TAB-separated instead of comma-separated.

You could simply download the file, open it in any text editor, and replace all tabs with commas, but that's not practical for large data sets.  Instead, we can use sed, a powerful tool for replacing text in streams.  If you are unfamiliar with it, I recommend you check it out. The simple example below shows the usage quickly:

$ echo "this is original text"
this is original text
$ echo "this is original text" | sed -e's/original/manipulated/g'
this is manipulated text

Seems perfect, so we can just use it to replace TAB with COMMA.  But wait: sed does not reliably understand "\t" (GNU sed accepts it, but POSIX/BSD sed does not).  How can we pass a tab as part of the argument?!

Luckily, brandizzi explained the proper way to do it: to type a literal TAB on the command line, just hit CTRL+V and then TAB.

So our final step will be (the gap between the slashes is a literal TAB character, typed with CTRL+V TAB):
$ sed -i -e 's/	/,/g' output.txt
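For completeness, there are a couple of alternatives that avoid typing a literal TAB at all. A small sketch on made-up sample data (standing in for the real mysql output):

```shell
# A made-up tab-separated sample, standing in for the mysql -e output
printf 'a\tb\ta+b\n1\t2\t3\n' > output.txt

# tr translates every TAB to a comma, and understands the \t escape itself
tr '\t' ',' < output.txt > output.csv

# GNU sed (if that's what you have) also accepts \t directly:
#   sed -e 's/\t/,/g' output.txt > output.csv

cat output.csv
```

Note that neither approach quotes fields, so this only produces valid CSV when the data itself contains no tabs, commas, or newlines.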


Wednesday, May 15, 2013

Backup and Restore Git Repositories

Sometimes you need to move repositories from your local git server to github.com or bitbucket.org, or vice versa: the situation where you want to archive the whole "bare" repo, not just a clone with the master or a single branch.

Backup can be done simply by adding --bare to the clone command

git clone --bare git@github.com:modsaid/test-repo.git

This results in a local copy whose structure matches a bare repository: all branches and tags, with no working tree.

Restoring can be done by creating an empty repository on your target git server (github.com or bitbucket.org) and running:

cd test-repo.git
git push --mirror git@bitbucket.org:modsaid/test-repo.git
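The whole round trip can be sketched with local paths (the repository names below are illustrative stand-ins for the real remote URLs above):

```shell
# Build a throwaway source repository to act as the "origin"
git init -q source-repo
git -C source-repo -c user.name=demo -c user.email=demo@example.com \
    commit -q --allow-empty -m "initial commit"

# Backup: --bare grabs all branches and tags, not just one checkout
git clone -q --bare source-repo backup.git

# Restore: --mirror pushes every ref into a fresh empty repository
git init -q --bare restored.git
git -C backup.git push -q --mirror "$PWD/restored.git"
```

After the mirror push, restored.git carries the same refs as the backup, which is exactly what you want when re-hosting a repository.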


Tuesday, January 15, 2013

Migrating repos from SVN to git

It is straightforward to migrate your old SVN code to git. I highly advise everyone to do that even if they're not going to publish it on github or actively use it, because backing up a git repo is very handy.

Thanks to Kevin Menard for his svn2git, the migration can be very straightforward.  I have created a usage repo, using-svn2git, to speed things up by adding it to a Gemfile and using a specific rvm gemset (assuming you use rvm), making it as simple as:
  • git clone
  • bundle install
  • start the migration...

Of course, having the standard SVN repo structure will save you a lot of trouble. You will probably only have to maintain the proper author.txt mapping file between svn and git users (an example of the file is in the repo).
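The mapping file is just one line per SVN username in the form `svn_user = Git Name <email>`. A sketch with made-up identities (the names and emails here are hypothetical):

```shell
# Write a sample authors mapping file (identities are made up)
cat > author.txt <<'EOF'
modsaid = Mostafa Said <modsaid@example.com>
jdoe = John Doe <jdoe@example.com>
EOF

# Then, inside an empty directory, run the migration (requires svn2git
# installed and a reachable SVN URL -- both illustrative here):
#   svn2git http://svn.example.com/my-project --authors author.txt
```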

I have tried to keep the readme short and to the point. I hope you find it useful.