Registering client filters for JAX-RS ClientBuilder

I write this here so I don't forget some of the goodies in the packages below :)
import javax.ws.rs.client.Client;
import javax.ws.rs.client.ClientBuilder;
import javax.ws.rs.client.ClientRequestContext;
import javax.ws.rs.client.ClientRequestFilter;
import javax.ws.rs.client.Entity;
import javax.ws.rs.client.Invocation;
import javax.ws.rs.client.Invocation.Builder;


public class RequestFilter implements ClientRequestFilter {

    private static final Logger LOG = Logger.getLogger(RequestFilter.class.getName());

    @Override
    public void filter(ClientRequestContext requestContext) throws IOException {
        LOG.log(Level.INFO, ">> " + requestContext.getMethod() + " " + requestContext.getUri());
        LOG.log(Level.INFO, ">> (body) " + requestContext.getEntity());
        LOG.log(Level.INFO, ">> (headers) " + requestContext.getHeaders());
    }
}


public class ResponseFilter implements ClientResponseFilter {

    private static final Logger LOG = Logger.getLogger(ResponseFilter.class.getName());

    @Override
    public void filter(ClientRequestContext requestContext, ClientResponseContext responseContext) throws IOException {
        LOG.log(Level.INFO, "<< " + requestContext.getMethod() + " " + requestContext.getUri());
        LOG.log(Level.INFO, "<< (status) " + responseContext.getStatus());
        LOG.log(Level.INFO, "<< (headers) " + responseContext.getHeaders());

        InputStream entityStream = responseContext.getEntityStream();
        if (entityStream != null) {
            // Note: this logs the stream object, not its contents; to log the body,
            // buffer the stream and call responseContext.setEntityStream(...) afterwards
            LOG.log(Level.FINER, "<< (body) " + entityStream);
        }
    }
}

JSONObject body = new JSONObject();
body.put("key", "value");

Client client = ClientBuilder.newClient();
client.register(ResponseFilter.class); // Response logging
client.register(new RequestFilter()); // Request logging

Response response = client.target("https://api.example.com")
        .request(MediaType.APPLICATION_JSON)
        .post(Entity.json(body.toString()));

if (response.getStatus() == 200) { ... } else { ... }


Setup Liberty Server on Eclipse

Instructions for setting up a Liberty server in Eclipse Mars 4.5.2. The instructions below assume that the IBM WebSphere Application Server Liberty Developer Tools Beta has been installed from the Eclipse Marketplace.

Creating a new Liberty Server

  1. Create a server using the Wizard.
    Click File > New > Server
    Click Next
  2. Select WebSphere Application Server Liberty
    Click Next
  3. Select Install from an archive or repository
    Click Next
  4. Enter the Destination path (This will be the place where the server will live)
    Select Download and install a new runtime environment from ibm.com
    Select WAS Liberty V8.5.5.9 with Java EE 7 Full Support
    Click Next
  5. Click Install for the following bundles:
    • Base Bundle
    • Liberty Core Bundle
    • V9 Bundle for Java EE 7 Full Support
    • V9 Bundle for Java EE 7 Web Profile
    Click Next
  6. Accept the terms
    Click Next
  7. Select Stand-alone server
    Click Next
  8. Enter a name for your server. I usually leave it as defaultServer
    Click Finish
  9. Finally a confirmation alert will show. Now we have our Liberty server installed in Eclipse. Almost ready :)

Adding a simple keyStore

Since we have the javaee-7.0 feature enabled we need to add a keystore; otherwise we will not get rid of the following error:

The enabled features require that a keyStore element and a user registry are defined in the server configuration. Use the server configuration editor to add these items.
We could follow that advice and create a properly encoded password (Liberty's bin/securityUtility encode tool does this), but for development we can do just:
<keyStore id="defaultKeyStore" password="keyStorePwd"/>


JAX-RS Maven project for WAS Liberty

So the other day I wrote this, which is apparently not a good approach these days. This is probably the official (current) way of creating a web app project. The following instructions are specific to WebSphere Application Server Liberty Profile. Much of this information is based on the following sites/posts/answers:

Create a Maven project

This could be created manually, but Eclipse is good at this :)
  1. File > New > Maven Project
  2. Check Use default Workspace location
    Click Next
  3. Select the http://public.dhe.ibm.com/ibmdl/export/pub/software/websphere/wasdev/maven/repository Catalog
    Select the com.ibm.tools.archetype webapp-jee7-liberty
    Click Next
  4. Enter a Group Id: com.ibm.jp.myproject
    Enter an Artifact Id: myproject
    Enter a Package: com.ibm.jp.myproject
    Click Finish
At this point we have created the project, but it seems to be a bit broken. As written here, we can get rid of the following error by modifying the pom.xml:
ArtifactDescriptorException: Failed to read artifact descriptor for com.ibm.tools.target:was-liberty:pom:LATEST: VersionResolutionException: Failed to resolve version for com.ibm.tools.target:was-liberty:pom:LATEST: Could not find metadata com.ibm.tools.target:was-liberty/maven-metadata.xml in local (/Users/paulbastide/.m2/repository)

Modify the pom.xml

  1. To get rid of above error add the following to pom.xml
            <id>maven online</id>
            <name>maven repository</name>
            <id>liberty maven online</id>
            <name>liberty maven repository</name>
  2. Setup to use Dynamic Web Module 3.1. Add the following to pom.xml
    Also we need to make sure 3.1 is set up properly in web.xml:
    <?xml version="1.0" encoding="UTF-8"?>
    <web-app xmlns="http://xmlns.jcp.org/xml/ns/javaee"
        xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
        xsi:schemaLocation="http://xmlns.jcp.org/xml/ns/javaee http://xmlns.jcp.org/xml/ns/javaee/web-app_3_1.xsd"
        version="3.1">
        <display-name>Servlet 3.1 Web Application</display-name>
    </web-app>
  3. Setup to use Java 7 or 8 (needed for Dynamic Web Module 3.1)
    For Java 7
    For Java 8 we need to use version 3.5.1 of the maven-compiler-plugin
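The compiler-plugin snippets did not survive here; a typical maven-compiler-plugin entry for Java 8 (using the 3.5.1 version mentioned above; for Java 7 change source/target to 1.7) would be:

```xml
<build>
    <plugins>
        <plugin>
            <groupId>org.apache.maven.plugins</groupId>
            <artifactId>maven-compiler-plugin</artifactId>
            <version>3.5.1</version>
            <configuration>
                <source>1.8</source>
                <target>1.8</target>
            </configuration>
        </plugin>
    </plugins>
</build>
```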

Update the project

After all the changes we just right-click the project and select Maven > Update Project ...
If we check the Project Facets we should have
  • Dynamic Web Module 3.1
  • Java 1.8
  • JAX-RS(Rest Web Services) 2.0

Here is a copy of the full pom.xml and web.xml files :)


Create a JAX-RS maven project and import it into Eclipse

See updated instructions here in this other post

The following instructions work but are Not Recommended, since modifications to the pom.xml will not be reflected in the Eclipse project :(
  1. Create the skeleton
    Instructions taken from Stackoverflow - Maven 3 failed to execute goal apache ...
    mvn archetype:generate \
        -DgroupId={project-packaging} \
        -DartifactId={project-name} \
        -DarchetypeArtifactId=maven-archetype-webapp \
        -DinteractiveMode=false
    That will create a {project-name} directory with the contents:
    ├── pom.xml
    └── src
        └── main
            ├── java (This directory will contain all the source code, must be created manually)
            ├── resources
            └── webapp
                ├── WEB-INF
                │   └── web.xml
                └── index.jsp
  2. Import it into eclipse
    File > Import > Existing Projects into Workspace and then select the {project-name} directory. Once imported, go to the project properties and apply the following settings:
    1. Setup the facets
      Instructions taken from this article from OPENTONE and adapted for my needs
      • Java 1.8
      • Dynamic Web Project 3.1
      • JAX-RS 2.0
      • Javascript will be selected automatically later. It is not necessary to explicitly select it here
    2. Setup the target runtime
      Select "WebSphere Liberty Profile"


Convert flac to aac with ffmpeg

I write this so I don't forget (Original Superuser thread)
  • Install ffmpeg with libfaac support
    brew reinstall ffmpeg --with-faac;
  • Convert them
    cd flac_directory
    for f in *.flac; do ffmpeg -i "$f" -acodec libfaac -aq 400 "${f%flac}m4a"; done
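The `${f%flac}m4a` expansion in the loop strips the trailing `flac` from the filename and appends `m4a`, for example:

```shell
f="track 01.flac"
# ${f%flac} removes the shortest trailing match of "flac"
echo "${f%flac}m4a"   # → track 01.m4a
```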
Certain files will throw an error: [libx264 @ 0x2f08120] height not divisible by 2 (500x489), apparently because ffmpeg tries to re-encode the embedded cover art as video. In such cases, adding the following option did the trick for me:
-vf "scale=640:-2"
Taken from Stackoverflow - Maintaining ffmpeg aspect ratio


Swift en el servidor con Kitura

This post is 90% a translation of this page in Japanese, which explains it very well.

IBM has recently been bringing Swift to the server and has published several tools for this. Kitura is one of them: an open-source application framework based on Express.js, the popular Node.js framework.


  1. Install some dependencies using Homebrew
    brew install http-parser pcre2 curl hiredis
  2. Install the latest Swift compiler from the official Swift.org page: Swift - Latest Development Snapshots. Don't worry, it is already compiled and comes as a ready-to-install .pkg file.
    After the installation, add the following location to the PATH by modifying ~/.bash_profile
    export PATH=/Library/Developer/Toolchains/swift-latest.xctoolchain/usr/bin:$PATH

Our project

  1. Create a directory for our project and initialise it with the swift build command
    mkdir swift-sample && cd swift-sample
    swift build --init
  2. Modify the Package.swift file:
    import PackageDescription

    let package = Package(
        name: "myFirstProject",
        dependencies: [
            .Package(url: "https://github.com/IBM-Swift/Kitura-router.git", majorVersion: 0),
        ]
    )
    And with these changes in place we fetch our dependencies again:
    swift build
    Some of you will have noticed that swift build is roughly the equivalent of npm install in Node.js or bundle install in Ruby. This time more messages will show up and the build will end in an error. But don't worry: it happens because the Kitura router depends on a C library and the package manager does not support C/C++ linkage yet. This error is also described in Kitura's README

    ld: library not found for -lcurlHelpers for architecture x86_64
    :0: error: link command failed with exit code 1 (use -v to see invocation)
    :0: error: build had 1 command failures
    error: exit(1): ...
  3. Copy the Makefile from Kitura-net. In my case the version is 0.3.2; use whichever version you have :]
    cp Packages/Kitura-net-0.3.2/Makefile-client Makefile
  4. Now we are ready to write our code! Modify the file Sources/main.swift
    import KituraRouter
    import KituraNet
    import KituraSys

    let router = Router()

    router.get("/") { request, response, next in
        response.status(HttpStatusCode.OK).send("Hello, World!")
        next()
    }

    let server = HttpServer.listen(8090, delegate: router)
  5. Compile our little project with swift build
  6. Run the executable we just created (swift build leaves it under .build/debug/)
  7. Now go to the browser, or use curl
    curl http://localhost:8090/
    Hello, World!
That is quite a few steps for now; hopefully it will all get easier over time. I am already happy with this!



Suppose we want to display an SVG like this one. There are a few possible approaches.


This is the easiest one. It is not supported by IE, where it is simply ignored, which makes it handy for prototyping. Line breaks are possible with a backslash.


.move {
    background-image: url('data:image/svg+xml;utf8,\
    <svg xmlns="http://www.w3.org/2000/svg" width="8" height="8" viewBox="0 0 8 8">\
    <g transform="translate(7, 7) rotate(135)">\
    <line x1="-0.5" y1="0" x2="0.5" y2="0" stroke="rgba(255,0,0,0.5)" stroke-width="1"/>\
    <line x1="-2.5" y1="2" x2="2.5" y2="2" stroke="rgba(255,0,0,0.5)" stroke-width="1"/>\
    <line x1="-4.5" y1="4" x2="4.5" y2="4" stroke="rgba(255,0,0,0.5)" stroke-width="1"/>\
    </g></svg>');
}




<svg xmlns="http://www.w3.org/2000/svg" width="8" height="8" viewBox="0 0 8 8">
    <g transform="translate(7, 7) rotate(135)">
        <line x1="-0.5" y1="0" x2="0.5" y2="0" stroke="rgba(255,0,0,0.5)" stroke-width="1"/>
        <line x1="-2.5" y1="2" x2="2.5" y2="2" stroke="rgba(255,0,0,0.5)" stroke-width="1"/>
        <line x1="-4.5" y1="4" x2="4.5" y2="4" stroke="rgba(255,0,0,0.5)" stroke-width="1"/>
    </g>
</svg>


.move {
    background-image: url("myfile.svg");
}


Cross-browser. No request is generated. A normal human probably cannot read it, and the encoding does not make it any shorter. It is probably a useful approach when you generate the content programmatically, convert it with btoa, and want to show it as a background-image.

(At first I mistakenly left out the ;base64, part, so nothing was displayed. Thanks to @rikuo for pointing it out!)


.move {
    background-image: url("data:image/svg+xml;base64,PHN2ZyB4bWxucz0iaHR0cDovL3d3dy53My5vcmcvMjAwMC9zdmciIHdpZHRoPSI4IiBoZWlnaHQ9IjgiIHZpZXdCb3g9IjAgMCA4IDgiPg0KICAgIDxnIHRyYW5zZm9ybT0idHJhbnNsYXRlKDcsIDcpIHJvdGF0ZSgxMzUpIj4NCiAgICAgICAgPGxpbmUgeDE9Ii0wLjUiIHkxPSIwIiB4Mj0iMC41IiB5Mj0iMCIgc3Ryb2tlPSJyZ2JhKDI1NSwwLDAsMC41KSIgc3Ryb2tlLXdpZHRoPSIxIi8+DQogICAgICAgIDxsaW5lIHgxPSItMi41IiB5MT0iMiIgeDI9IjIuNSIgeTI9IjIiIHN0cm9rZT0icmdiYSgyNTUsMCwwLDAuNSkiIHN0cm9rZS13aWR0aD0iMSIvPg0KICAgICAgICA8bGluZSB4MT0iLTQuNSIgeTE9IjQiIHgyPSI0LjUiIHkyPSI0IiBzdHJva2U9InJnYmEoMjU1LDAsMCwwLjUpIiBzdHJva2Utd2lkdGg9IjEiLz4NCiAgICA8L2c+DQo8L3N2Zz4=")
}
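The base64 payload does not have to be produced with btoa in a browser; on the command line something like this works too (the inline SVG and file name here are just placeholders):

```shell
# Write a small SVG and turn it into a base64 data URI for CSS
printf '<svg xmlns="http://www.w3.org/2000/svg"/>' > tiny.svg
b64=$(base64 < tiny.svg | tr -d '\n')
echo "background-image: url(\"data:image/svg+xml;base64,${b64}\")"
```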


Setup Komodo with Go

Komodo IDE looks great, but I needed to explicitly set up some things to make it work with Go. I believe these settings are necessary because Komodo cannot read my GOPATH definition, which was done in ~/.bash_profile, and because I didn't define GOBIN, as it is not mandatory. Hence various tools are not found :(
  • Add GOPATH environment variable

    echo $GOPATH
  • Set godef and gocode

    # Install godef and gocode if needed
    go get -v github.com/rogpeppe/godef
    go get -u github.com/nsf/gocode
    • I realised I had not updated anything in a couple of months. brew update; brew upgrade gave me Go 1.5.1, so I had to re-compile godef:
      cd $GOPATH/src/github.com/rogpeppe/godef
      go clean -r -i
      go install -v
    Now we can add their paths to Komodo. If the go binary is not in the $PATH you can set it up manually too, but it should be. Their locations are ${GOPATH}/bin/godef and ${GOPATH}/bin/gocode
Now restart Komodo :)




Generic Colouriser (grc) can be installed via Homebrew:
brew install grc


We need a general configuration file, ~/.grc/grc.conf, where we write a regex and a config filename for each kind of command we call with grc. For example, to colourise:
tail -f messages.log
We will call
grc tail -f messages.log
grc will look for entries in ~/.grc/grc.conf that match tail -f messages.log and use the config file of the matching entry to colour the output

Content of ~/.grc/grc.conf

# Colorise messages.log

Content of ~/.grc/messages.log.conf

# Informative logs
# More informative logs
# Error
# Stacktraces (start with tab)
colours=none, none, red
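Since the regexp lines above got lost, here is a sketch of what the two files can look like; the patterns are my own assumptions, so check man grc and man grcat for the exact syntax:

```
# ~/.grc/grc.conf: a regex matching the command line, then the config file to use
\btail.*messages\.log
messages.log.conf

# ~/.grc/messages.log.conf: one block per pattern to colour
regexp=.*Error.*
colours=red
count=more
```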




Ternjs is an awesome tool for working with JavaScript. It provides autocompletion not only for the current file but for the entire project! Ternjs consists of a server (a daemon) that reads JavaScript files and returns completions/candidates upon client requests. Clients are usually plugins for editors like Emacs, Sublime Text, etc. Here I show the Emacs settings

Install server

Via npm is the easiest way:
npm install -g tern

Install the client

  • Install needed packages in emacs
    Install tern-mode and tern-auto-complete via package-list-packages
  • Test server communication
    This is optional. It is to check server and client can establish communication.
    Open a js file in emacs and activate tern mode with M-x tern-mode. It will start the server automatically and a message like the following will appear:
    Making url-show-status local to `http` while let-bound!
  • Make a .tern-project file
    This is optional but recommended. Ternjs uses a .tern-project file to understand the project. Ternjs can be used for nodejs or the usual client-side web development. If no .tern-project is found then a fallback will be loaded. You can find more in the manual. This is mine:
        {
            "libs": [ ... ],
            "loadEagerly": [ ... ]
        }
  • Tell Emacs to always use Tern. This is also optional, since you can trigger tern-mode manually; I just prefer to have it automatic:
    ;;; Tern for Javascript
    ;;; http://truongtx.me/2014/04/20/emacs-javascript-completion-and-refactoring/
    ;;; npm install -g tern-autocomplete
    ;;; Tern with Nodejs' Express
    ;;; https://github.com/angelozerr/tern-node-express
    ;;; npm install -g tern-node-express
    ;;; Start tern-mode automatically when starting js mode
    (add-hook 'js-mode-hook (lambda () (tern-mode t)))
    (eval-after-load 'tern
      '(progn
         (require 'tern-auto-complete)
         (tern-ac-setup)))
Now we are ready! Open a js file and you should get autocompletions right away, yay!
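For reference, a minimal .tern-project for client-side work could look like the following (the libs and paths are example values, not my actual config; see the Ternjs docs for the full format):

```json
{
    "libs": ["browser", "jquery"],
    "loadEagerly": ["js/**/*.js"]
}
```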

Working with version control systems

Every time the ternjs server starts it creates a .tern-port file that contains the port number clients should use. This is a random number and the file is recreated automatically. So, unlike .tern-project, .tern-port should NOT be committed.
If you are working with git, add this to .gitignore:
# Ignore ternjs port number
.tern-port



With the rise of NoSQL databases, I recently had the chance to play with Cloudant and found it interesting, so I decided to write some notes here. There is still a lot I don't understand, so I want to keep updating this.


  • SQL databases:
    One database can contain multiple tables, and the data lives in the tables. Tables require a schema, so the data is typed. To search, you write and execute an SQL query.

  • Cloudant:
    One database contains JSON documents. There is no schema. The data format inside each document can be completely different (but it must be JSON). To search, you need a search index, which you define yourself. The parameters you can use in a query are determined by the contents of the search index. Such a query is called a Cloudant Query, and Cloudant Query is apparently based on Lucene queries.






curl ${CLOUDANT}/_all_dbs


Prepare the following five documents in a database called car_answers. We could create the five documents using the Cloudant console, but since we are at it, let's create them via the API.
  1. First, create the database
    curl ${CLOUDANT}/car_answers -X PUT
  2. Next, prepare the array of documents to upload (the answers of document 2 are elided here, as in my notes)
    {
        "docs": [
            {
                "_id": "1",
                "created_at": "2015-05-15 00:00:00",
                "maker": "Mercedes Benz",
                "モデル": "A 180",
                "answers": {
                    "評価": "☆☆☆",
                    "その他": "特になし"
                }
            },
            {
                "_id": "2",
                "created_at": "2015-05-16 00:00:00",
                "maker": "Toyota",
                "モデル": "レクサス",
                "answers": { ... }
            },
            {
                "_id": "3",
                "created_at": "2015-05-17 00:00:00",
                "maker": "Honda",
                "モデル": "フィット",
                "answers": {
                    "評価": "☆☆☆☆☆",
                    "その他": "すぐ購入したいと思います"
                }
            },
            {
                "_id": "4",
                "created_at": "2015-05-18 00:00:00",
                "maker": "Mercedes Benz",
                "モデル": "E 250 CABRIOLET",
                "answers": {
                    "評価": "★★",
                    "その他": "イメージしていたものと少し違った"
                }
            },
            {
                "_id": "5",
                "other": "他とちがうもの"
            }
        ]
    }
  3. Finally, upload the prepared JSON array in one go with the bulk API
    curl ${CLOUDANT}/car_answers/_bulk_docs -X POST -H "Content-Type: application/json" -d "$json"


curl ${CLOUDANT}/car_answers/_all_docs
curl ${CLOUDANT}/car_answers/_all_docs?include_docs=true
curl "${CLOUDANT}/car_answers/_all_docs?include_docs=true&limit=200"


curl ${CLOUDANT}/car_answers/3
{"_id":"3","_rev":"1-fd7d17addf8149d0c522368e69bda27c","created_at":"2015-05-17 00:00:00","maker":"Honda","モデル":"フィット","answers":{"評価":"☆☆☆☆☆","その他":"すぐ購入したいと思います"}}





First, let's create one in the Cloudant console. Select the car_answers database, click the ➕ button next to All Design Docs and choose New Search Index. The Create Search Index screen will then be shown.

Explanation of the input fields of Create Search Index

  • Save to Design Document
  • Index name
  • Search index function
    function (doc) {
        index("brand", doc.maker);
    }
  • Analyzer
    For example, when document 1 is processed the string "Mercedes Benz" gets indexed. The analyzer decides whether "Mercedes Benz" is treated as a single string ("Keyword"), split into words using English grammar, split using Japanese grammar, split with the Unicode Text Segmentation algorithm ("Standard"), and so on. Depending on the analyzer, a given search may or may not hit a document. For more details see [Cloudant/For Developers/Search Indexes/Analyzers].


curl ${CLOUDANT}/car_answers/_design/search/_search/car?q=brand:Honda
 car_answers      name of the database
 _design/search   ID of the design document
 _search          fixed path segment used when searching
 car              name of the search index
 q                parameter holding the Cloudant Query
 brand:"Honda"    the Cloudant Query itself

curl ${CLOUDANT}/car_answers/_design/search/_search/car?q=brand:Mercedes%20Benz
curl -v ${CLOUDANT}/car_answers/_design/search/_search/car?q=brand:Mercedes%20Benz\&limit=1\&include_docs=true
> GET /car_answers/_design/search/_search/car?q=brand:Mercedes%20Benz&limit=1&include_docs=true HTTP/1.1
{"total_rows":2,"bookmark":"g2wAAAABaANkAB5kYmNvcmVAZGI1LmlibTAwOS5jbG91ZGFudC5uZXRsAAAAAm4EAAAAAIBuBAD___-famgCRj-czh4AAAAAYQBq","rows":[{"id":"1","order":[0.028130024671554565,0],"fields":{},"doc":{"_id":"1","_rev":"1-a408ef0c09d5b0aaec96b87e04e049de","created_at":"2015-05-15 00:00:00","maker":"Mercedes Benz","モデル":"A 180","answers":{"評価":"☆☆☆","その他":"特になし"}}}]}
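Instead of escaping each & with a backslash as above, the whole query string can be quoted; bash parameter substitution can also take care of the %20 encoding of the space (only spaces are handled here, not full URL encoding):

```shell
q='q=brand:Mercedes Benz&limit=1&include_docs=true'
# replace every space with %20
echo "${q// /%20}"   # → q=brand:Mercedes%20Benz&limit=1&include_docs=true
```

and then: curl "${CLOUDANT}/car_answers/_design/search/_search/car?${q// /%20}"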


function (doc) {
    if (typeof doc.maker === 'undefined') {
        return;
    }
    if (doc.created_at) {
        index("created_at", doc.created_at, {"store": true});
    }
    if (doc["モデル"]) {
        index("model", doc["モデル"], {"store": true});
    }
    if (doc.answers && doc.answers["評価"] && doc.answers["その他"]) {
        var stars = doc.answers["評価"];
        index("stars", stars, {"store": true});
        index("stars_num", (stars.match(/☆/g) || []).length);
        index("comment", doc.answers["その他"], {"store": true, "index": false});
    }
}
The third parameter of the index function is optional, so I had not been using it, but it has four keys, summarised in the following table. For details see [Cloudant/For Developers/Search Indexes/Options]
 store  whether the value is included in the search results. Defaults to false
 index  whether the value goes into the search index. Defaults to true
 facet  whether to enable faceting. Defaults to false. Not used this time
 boost  a factor that raises the priority within the search results. Defaults to 1.0. Not used this time


First, searching by model (which has {"store": true}):
curl -v ${CLOUDANT}/car_answers/_design/search/_search/car2?q=model:CABRIOLET
> GET /car_answers/_design/search/_search/car2?q=model:CABRIOLET HTTP/1.1
{"total_rows":1,"bookmark":"g2wAAAABaANkAB5kYmNvcmVAZGI4LmlibTAwOS5jbG91ZGFudC5uZXRsAAAAAm4EAAAAAOBuBAD_____amgCRj_Do3oAAAAAYQBq","rows":[{"id":"4","order":[0.1534264087677002,0],"fields":{"stars":"★★","model":"E 250 CABRIOLET","comment":"イメージしていたものと少し違った","created_at":"2015-05-18 00:00:00"}}]}

* Write about searching with more complex functions, index:false, etc.
* Write about searching with more complex Cloudant Queries: ranges, [], (), AND, OR, etc.
* Write about escaping in Cloudant Query: \/ etc.


Cloudant basics

Find a document by id: I want to look for a design document whose name is 'search', so the id of the document is '_design/search'. Search for the document using the primary index with a parameter:
$ curl -u accountname:password 'accountname.cloudant.com/qdef/_all_docs?key="_design/search"&include_docs=true'
{"id":"_design/search","key":"_design/search","value":{"rev":"14-1826c1455da23e06b926e9a29ac9ec63"},"doc":{"_id":"_design/search","_rev":"14-1826c1455da23e06b926e9a29ac9ec63","views":{"names":{"map":"function(doc) {...}"}}}}
Search for the document with its id directly:
$ curl -u accountname:password 'accountname.cloudant.com/qdef/_design/search'

{"_id":"_design/search","_rev":"14-1826c1455da23e06b926e9a29ac9ec63","views":{"names":{"map":"function(doc) {...}"}},"language":"javascript","indexes":{"doc":{"analyzer":"standard","index":"function (doc) {...}"}}}

Useful Links

Lucene stuff
Escaping Lucene Characters using Javascript Regex
CouchDB API documentation (Basis of Cloudant)
View collation (selecting stuff using views)
Cloudant library for nodejs on Github
Search with complex keys (this is too advanced for me probably)
Find documents within a range (me)
URL encoder/decoder - Useful tool for working with non ascii strings in below tutorials

Official IBM Cloudant sites

Cloudant API Documentation. (With very simple examples at the right)
Cloudant.com - Primary Indexes (_all_docs)
Cloudant.com - Secondary Indexes (views)
Cloudant.com - Search Indexes (Queries)
Cloudant.com - Examples. Full Text indexing is quite easy to understand
Cloudant.com - Authentication
Cloudant.com - Introducing Cloudant Query


Curl + Charles

As explained in this blog post, when using Charles to debug requests made by cURL:

When using libCurl (programmatically):

curl_setopt($ch, CURLOPT_PROXY, "");
curl_setopt($ch, CURLOPT_PROXYPORT, 8888);

// And to avoid certificate trust errors...
// WARNING: Do not use this in production environments
curl_setopt($ch, CURLOPT_SSL_VERIFYPEER, 0);

When using curl from the terminal

The option --proxy, or its short form -x, can be used (Charles listens on port 8888, as seen above):
curl --proxy 127.0.0.1:8888 'http://my.host.com/api/path?param=value'
For example:
curl \
  -X GET \
  -H 'Authorization: Basic MYTOKEN' \
  --proxy 127.0.0.1:8888 \
  -v \
  'http://my.host.com/api/path?param=value'


Software development is an engineering discipline

I don't usually post thoughts, but rather very objective and technical things; this time, however, is kind of an exception.

On O'Reilly Software Architecture: Glenn Vanderburg of LivingSocial on why software development is an engineering discipline.

I couldn't agree more.
Maths do not define an engineering discipline; they are there to economically aid or demonstrate a particular hypothesis, and they were introduced after the engineering itself. In software development it is (economically and time-wise) cheap to build the product, so demonstrating things with maths is not really a must.

Software engineering is pragmatic and iterative like many other engineering disciplines. However, it is immature.

This cartoon and what follows right after it is what I think is the best moment in the session.

Why is this so funny? It is funny because it is absurd. Nobody would ever do that.


Yeah, software developers will never do that ...


Swift stuff

I started with Swift recently, so this is going to be the place where I put all the gotchas, tricks and everything, including complaints
  • Private functions are private:
    self.addTarget(self, action: Selector("valueChanged"), forControlEvents: .ValueChanged)
    private func valueChanged() { ... }
    Leads to: unrecognized selector sent to instance. The function must not be private:
    func valueChanged() { ... }
  • Downcasting:
    These two are basically equal (cell is a UITableViewCell). In Objective-C there is no problem accessing nil stuff: it is a NOP, which has good and bad sides. In Swift it must be declared explicitly, and accessing nil leads to crashes.
    // objc
    [[cell viewWithTag:102] setText:@"Something"];
    // or something syntactically closer to the swift code below
    ((UILabel *)[cell viewWithTag:102]).text = @"Something";

    // swift
    (cell.viewWithTag(102) as? UILabel)?.text = "Something"
    // to refer to `label` more than once, create a variable and access its members if not nil
    if let label = cell.viewWithTag(102) as? UILabel {
        label.text = "Something"
    }


Change computer name

Just after installing a new version of OS X, you realize you don't like the name of your computer? No problem, just go to Preferences > Sharing and change it:

But then you go to Terminal.app and realize that your prompt still shows the old name :(. That is because the default terminal prompt uses the hostname, and the change above does not change the hostname.

You could change the prompt by modifying the $PS1 variable, but that would not solve the problem, as the kernel hostname would still have the old name.

Change the Hostname

To change it we use sysctl kern.hostname=NEW_NAME. To confirm the current value we use the hostname command:
$ hostname
Ignacio-no-MacBook-Pro.local

$ sudo sysctl kern.hostname=IgnacioMBP
kern.hostname: Ignacio-no-MacBook-Pro.local -> IgnacioMBP

$ hostname
IgnacioMBP
Now we have to restart the terminal so the new hostname is read again, populating $PS1 as intended :)

See also:
MDLog:/sysadmin - How to change the hostname of a Linux system


Setup Go in Windows and Mac

Some notes on how to set up the computer to get started with the Go language. Very simple notes. Installation notes for Mac are at the end

On Windows

  1. Installed it via the msi installer from the downloads page
  2. Open the command line (cmd.exe) and check everything is fine
    > go version
    go version go1.4 windows/386
    Go installer sets up GOROOT (although I think it is not needed to define it explicitly since the go tool can find it)
    > echo %GOROOT%
    We can also check other variables
    > go env GOOS
    > go env GOARCH

  3. The only environment variable that needs to be set is GOPATH

    Add GOPATH as a new User Environment Variable; its value should be somewhere we are going to put all our go code, and it should not be the same as GOROOT.
    > echo %GOPATH%
    As explained in the docs, GOPATH directory needs to have 3 directories in it: src, pkg and bin
    cd %GOPATH%
    mkdir src
    mkdir pkg
    mkdir bin
  4. Optionally, we could add GOPATH\bin to our PATH so we can run executables more easily after each go install something

On Mac OS X

  1. Install it via homebrew
    brew install go
  2. Create a workspace (a directory with the 3 subdirectories src, pkg and bin) so we can set the $GOPATH variable in the next step. I created it inside the ~/Documents directory
    cd ~/Documents
    mkdir goworkspace
    cd goworkspace
    mkdir src
    mkdir pkg
    mkdir bin
  3. Set $GOPATH in ~/.bash_profile. Optionally we can also add its bin directory to the PATH
    # Go stuff                                                                                                          
    export GOPATH="${HOME}/Documents/goworkspace"
    export PATH=$PATH:"${GOPATH}/bin"
  4. Also get go vet and godoc
    go get golang.org/x/tools/cmd/vet
    go get golang.org/x/tools/cmd/godoc
Done! Now I am ready to start with hello world or a little library :)
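The three mkdir calls from the workspace steps can be collapsed into one line with brace expansion (the path here is just an example):

```shell
export GOPATH="$HOME/goworkspace"
mkdir -p "$GOPATH"/{src,pkg,bin}
ls "$GOPATH"   # bin  pkg  src
```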


Cleaning my Mac

The other day I spent about 4 hours removing unneeded stuff from my Mac. It took me this long because I had several old files like unused VirtualBox kernel extensions (I remember installing them when trying to learn some WebOS programming), numerous useless plugins all over the computer, other kinds of extensions, trash, etc. I googled each one of them to make sure I didn't delete important things by mistake.
This is a list of things/places to check so in the future I don't have to google everything again. :)
  1. First, EtreCheck is a tool I like very much because it is not as offensive or intrusive as Onyx or similar tools, and unlike CleanMyMac 2 it has no ads (CleanMyMac is crap). EtreCheck is free, relatively fast and open source.
    EtreCheck attempts to just report the facts, without making any judgements or recommendations. Debugging is not a task that can be automated. It needs humans, ...
    Just run the app and it will tell you what extensions, plugins, etc. are loaded in the system. These are good places to start looking.

  2. Login items Go to Preferences > Users & Groups
    Delete all unneeded items from the list, for example Dropbox and Google Drive: all those things I don't use anymore. I only use OneDrive now :).

    Also read this: Stackexchange - What is the yellow warning sign in the login items tab....

  3. Daemons and/or agents.
    Look for things like Google Talk, ancient Adobe stuff, etc., plus legacy stuff (prior to 10.4)
    /Library/LaunchDaemons: items loaded at Mac startup time. Run as the root user.
    /Library/LaunchAgents: items loaded when any user logs in. Run as that user.
    ~/Library/LaunchAgents: items loaded only when ${USER} logs in. Run as that user.
    Useful commands:
    $ launchctl list
    $ launchctl list | grep -v "com.apple"
  4. Kernel extensions. These places are quite sensitive. Make sure you don't delete the wrong things from here or the entire OS might break.
    Cnet - Increasing system stability by pruning the kernel extension folder

  5. Internet plugins
    Make sure you have read this article:Unsupported third-party add-ons may cause Safari to unexpectedly quit or have performance issues and don't delete Apple stuff incorrectly.
    /Library/Internet Plug-Ins/
    /Library/Input Methods/
    ~/Library/Internet Plug-Ins/
    ~/Library/Input Methods/


SIMBL Plugins

What is SIMBL ?

As the wiki says, SIMBL is an application enhancement (InputManager bundle) loader for Mac OS X developed by Mike Solomon. Nowadays SIMBL is not maintained anymore, but there is a new version of it called EasySIMBL, maintained by Nomura-san.
In short, it is a plugin loader for any Cocoa application on OSX. So, if you want to hack an app (let's say add some extra functionality) and that app does not bring plugin functionality, then EasySIMBL is your option.

First Steps

  1. Download and Install EasySIMBL.
  2. Create an Xcode project
    File > New > Project > OS X > Framework & Library > Bundle
  3. Follow further instructions from "Creating A SIMBL Plugin Bundle" from old SIMBL wiki page.
    Basically, make sure your Info.plist has SIMBLTargetApplications with appropriate values in BundleIdentifier, MaxBundleVersion and MinBundleVersion.

    For the more curious, there is an undocumented parameter: RequiredFrameworks. See details here. It says it has never been used, but I personally think it could be useful when your plugin requires a non-standard framework which is embedded in the target app. Using this key loads your plugin only if the required frameworks exist.
  4. Start by implementing the load method as explained in the old wiki.
  5. Once you build your plugin you can move it manually onto EasySIMBL and set the Debug level to "Notice + Info + Debug" to see a bit more information about the loading process of your plugin.
  6. Open Console.app and select "All Messages", then open the target app and check what happens...


Obviously the process of manually copying the .bundle to ~/Library/Application\ Support/SIMBL/Plugins is tedious. We can do it a bit better:

# Copy the product to SIMBL plugins dir
# (assumes this runs as an Xcode Run Script build phase, where
# BUILT_PRODUCTS_DIR and FULL_PRODUCT_NAME are defined)
SIMBL_PLUGIN_DIR="${HOME}/Library/Application Support/SIMBL/Plugins"
cp -R "${BUILT_PRODUCTS_DIR}/${FULL_PRODUCT_NAME}" "${SIMBL_PLUGIN_DIR}/"

However, in the end this is not what we want. We want to be able to run the target app with the plugin and be able to set breakpoints, etc!.

Setting Xcode for plugin development

This step is not specific to EasySIMBL. Basically every plugin in OSX can be developed/debugged this way:
  1. Tell Xcode where to place product after build.
    In our case the location is: "~/Library/Application Support/SIMBL/Plugins". Do this by editing CONFIGURATION_BUILD_DIR on your target:
    Note that some apps (that support plugins without the need of EasySIMBL) will require the plugin to be in ~/Library/Application Support/TheApp/Plugins/ or sometimes in /Applications/TheApp/Contents/Plugins. Just make sure you have enough access permissions so Xcode can write there (Hint: use chmod if needed).

    For example iPhoto.app
  2. Tell Xcode what application to start on "Run".
    In my case I just created an example plugin for Writer.App (that awesome app!). So I edit my scheme: Product > Scheme > Edit Scheme...
  3. Specific to EasySIMBL. EasySIMBL seems not to load the latest version of the plugin correctly on sandboxed apps. Nomura-san says he is aware of this issue, and it can be worked around by un-checking and re-checking "Use SIMBL" before each time we run our plugin.
  4. This is basically all. This is my 1 cent and comments are welcome :)


Recent Posts Widget for Blogger

I got a little bored of the standard "Blog Archive" widget. So I searched for a plain list of posts on the widget site without luck, but I found this useful helperblogger page which provides a script that uses the API below to get the entire list of posts, parse it and show it nicely.
GET https://www.blogger.com/feeds/blogID/posts/default
(Full documentation of Blogger's API can be found here)
So, what I did was to modify the code a bit for visual purposes and then brought it into my blog. Just set the code below as an "HTML/JS Widget":
<script type="text/javascript">
var numposts = 20;
var showpostdate = false;
var showpostsummary = false;
var numchars = 100;
function showrecentposts(json) {
    document.write('<ul id="archive-list">');
    for (var i = 0; i < numposts; i++) {
        if (i == json.feed.entry.length) break;
        var entry = json.feed.entry[i];
        var posttitle = entry.title.$t;
        var posturl;
        for (var k = 0; k < entry.link.length; k++) {
            if (entry.link[k].rel == 'alternate') {
                posturl = entry.link[k].href;
                break;
            }
        }
        posttitle = posttitle.link(posturl);
        var readmorelink = "»»";
        readmorelink = readmorelink.link(posturl);
        var postdate = entry.published.$t;
        var cdyear = postdate.substring(0, 4);
        var cdmonth = postdate.substring(5, 7);
        var cdday = postdate.substring(8, 10);
        var monthnames = ["", "Jan", "Feb", "Mar", "Apr", "May", "Jun",
                          "Jul", "Aug", "Sep", "Oct", "Nov", "Dec"];
        var postcontent = "";
        if ("content" in entry) {
            postcontent = entry.content.$t;
        } else if ("summary" in entry) {
            postcontent = entry.summary.$t;
        }
        // Strip any HTML tags from the content
        postcontent = postcontent.replace(/<\S[^>]*>/g, "");
        document.write('<li>' + posttitle);
        if (showpostdate == true) {
            document.write(' - ' + cdday + ' ' + monthnames[parseInt(cdmonth, 10)] + ' ' + cdyear);
        }
        if (showpostsummary == true) {
            if (postcontent.length < numchars) {
                document.write('<br>' + postcontent);
            } else {
                postcontent = postcontent.substring(0, numchars);
                var quoteEnd = postcontent.lastIndexOf(" ");
                postcontent = postcontent.substring(0, quoteEnd);
                document.write('<br>' + postcontent + '... ' + readmorelink);
            }
        }
        document.write('</li>');
    }
    document.write('</ul>');
}
</script>
<script src="http://nacho4d-nacho4d.blogspot.com/feeds/posts/default?orderby=published&alt=json-in-script&callback=showrecentposts"></script>

My changes

  • Removed all custom UI modifications and put the posts in a ul. I added a custom bullet by setting the ul's id to #archive-list and adding its style in the template:
    #archive-list {
     list-style: none;
     margin-left: 0;
     padding-left: 1em;
     text-indent: -1em;
    }
    #archive-list li:before {
     content: "\0BB \020";
    }
  • I thought of using feeds/posts/default instead of the full path http://nacho4d-nacho4d.blogspot.com/feeds/posts/default, however it does not work on post pages.
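For reference, here is a hedged sketch in Python of the same extraction the widget script does (the sample_feed dict below is made up for illustration; the real feed returned by the API above carries many more fields per entry):

```python
import re

# Minimal, made-up sample of the Blogger JSON feed shape the widget consumes
sample_feed = {
    "feed": {
        "entry": [
            {
                "title": {"$t": "My first post"},
                "published": {"$t": "2013-07-19T00:00:01.000+09:00"},
                "link": [
                    {"rel": "self", "href": "https://example.com/feeds/1"},
                    {"rel": "alternate", "href": "https://example.com/2013/07/first.html"},
                ],
                "content": {"$t": "<p>Hello world</p>"},
            }
        ]
    }
}

def recent_posts(feed, numchars=100):
    """Extract (title, url, date, summary) tuples, mirroring the JS widget."""
    posts = []
    for entry in feed["feed"]["entry"]:
        title = entry["title"]["$t"]
        # The post URL is the link whose rel is "alternate"
        url = next(l["href"] for l in entry["link"] if l["rel"] == "alternate")
        date = entry["published"]["$t"][:10]  # YYYY-MM-DD
        body = entry.get("content", entry.get("summary", {"$t": ""}))["$t"]
        body = re.sub(r"<\S[^>]*>", "", body)[:numchars]  # strip HTML tags
        posts.append((title, url, date, body))
    return posts
```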


Ubuntu server setup

Rackspace stuff... I have rebuilt my system many times, so these are some useful notes to begin with

First time login and another user creation

$ ssh root@ww.xx.yy.zz
$ sudo adduser username

Some additional steps:

$ sudo apt-get update
$ sudo apt-get install git-core curl build-essential openssl libssl-dev

Color $PS1

In ~/.bashrc find and uncomment: force_color_prompt=yes

Install Emacs24

add-apt-repository ppa:cassou/emacs
apt-get update
apt-get install emacs24 emacs24-el
A piece of my ~/.emacs file
;; Add ~/.elisp directory to my load path.
;; Not needed since .emacs.d is read by default
(add-to-list 'load-path "~/.emacs.d")

;; Show trailing white spaces
(setq-default show-trailing-whitespace t)
(set-face-background 'trailing-whitespace "#191970")

;; Adds Other Package manager repositories
(require 'package)
(add-to-list 'package-archives
             '("elpa" . "http://tromey.com/elpa/"))
(add-to-list 'package-archives
             '("marmalade" . "http://marmalade-repo.org/packages/"))
(add-to-list 'package-archives
             '("melpa" . "http://melpa.milkbox.net/packages/"))

Install Go

apt-get install golang

Install Nodejs

I stopped trying to make a web server with node, but these are the steps I used to follow:
$ git clone https://github.com/joyent/node.git && cd node
$ git checkout -b v0.8.1 v0.8.1
$ ./configure
$ make
$ sudo make install
$ node -v

# Install npm
$ curl http://npmjs.org/install.sh | sudo sh
$ npm -v

# If perl complains:
# perl: warning: Please check that your locale settings:
#  LANGUAGE = (unset),
#  LC_ALL = (unset),
#  LC_CTYPE = "UTF-8",
#  LANG = "en_US.UTF-8"
#   are supported and installed on your system.
# Do:
$ export LANGUAGE=en_US.UTF-8
$ export LANG=en_US.UTF-8
$ export LC_ALL=en_US.UTF-8
$ locale-gen en_US.UTF-8
# and run it again


Package xcb-shm was not found in the pkg-config search path

I tried running a hello world program in GTK from the official tutorial page

They basically say:

clang++ `pkg-config --cflags gtk+-2.0` hello.cpp `pkg-config --libs gtk+-2.0` -o hello

This assumes that gtk is installed (brew install gtk+). Also for easiness, they recommend to use pkg-config to get the entire list of paths to compile something that uses gtk.

The problem is that pkg-config complains about "xcb-shm blah blah blah" and it does not work.

The answer from here is to set PKG_CONFIG_PATH appropriately. Apparently it is a misconfiguration of gtk + x11 + brew.

$ pkg-config --cflags gtk+-2.0
Package xcb-shm was not found in the pkg-config search path.
Perhaps you should add the directory containing `xcb-shm.pc'
to the PKG_CONFIG_PATH environment variable
Package 'xcb-shm', required by 'cairo', not found
PKG_CONFIG_PATH should be set so pkg-config reads from the XQuartz path, so we add it to ~/.profile:
$ echo "export PKG_CONFIG_PATH=/opt/X11/lib/pkgconfig" >> ~/.profile

Now it works :)

$ pkg-config --cflags gtk+-2.0
-D_REENTRANT -I/opt/X11/include/cairo -I/opt/X11/include/pixman-1 -I/opt/X11/include/libpng15 -I/opt/X11/include -I/opt/X11/include/libpng15 -I/opt/X11/include -I/opt/X11/include/freetype2 -I/opt/X11/include -I/opt/X11/include/freetype2 -I/opt/X11/include -I/usr/local/Cellar/gtk+/2.24.22/include/gtk-2.0 -I/usr/local/Cellar/gtk+/2.24.22/lib/gtk-2.0/include -I/usr/local/Cellar/pango/1.36.1/include/pango-1.0 -I/usr/local/Cellar/atk/2.10.0/include/atk-1.0 -I/usr/local/Cellar/gdk-pixbuf/2.30.1/include/gdk-pixbuf-2.0 -I/usr/local/Cellar/pango/1.36.1/include/pango-1.0 -I/usr/local/Cellar/harfbuzz/0.9.25/include/harfbuzz -I/usr/local/Cellar/pango/1.36.1/include/pango-1.0 -I/usr/local/Cellar/glib/2.38.2/include/glib-2.0 -I/usr/local/Cellar/glib/2.38.2/lib/glib-2.0/include -I/usr/local/opt/gettext/include 


LLDB from the command line

Before IDEs and all current eye-candy stuff, everything was done like this:
  • Compile a sample.cpp program with debug symbols
    $ clang++ -g sample.cpp
  • Start the debugger
    $ lldb
  • Create a target to start the debug session
    $ target create a.out
  • Setup your breakpoints
    # breakpoint set --file sample.cpp --line 7
    $ b sample.cpp:7
  • List breakpoints
    $ breakpoint list
  • Set a condition on a breakpoint (conditional breakpoint). Use list to get the breakpoint number and replace it with N.
    $ condition N (int)[[myObj name] isEqualToString:@"Bar"]
  • Run the target
    $ run
  • Step in
    $ s
  • Step over (next)
    $ n
  • Print the stack trace
    $ bt
From now do as usual...
More Commands on lldb-gdb table

2016/01/23 Example: Debug Nodejs

We download nodejs from github and build it with the DEBUG configuration
git clone https://github.com/nodejs/node.git
cd node/
make -C out BUILDTYPE=Debug # I got this information by reading the Makefile :)
At this point we have a binary with debug information in out/Debug/. Let's start lldb on it:
cd out/Debug
On lldb, create a target, set some breakpoints and run it
(lldb) target create node
(lldb) b /Users/ignacio/Documents/github/node/src/node.cc:4127
(lldb) run node /Users/ignacio/Downloads/node-js-bug--debug-gets-stuck-on-two-c-commands/perlito5.js

Now keep on debugging :)



I've been wanting to use clang-format for a while already. It turns out that the installation instructions are kind of outdated. I had to read various commits in the clang repository to realize the clang-format command was moved from the clang to the llvm repository.

2015/01/18 Update: Use brew!

Now clang-format is on brew! Just do:
brew install clang-format

2014/08/06 Update: Use prebuilds!

Now it is even simpler: just download the prebuilds for your system, decompress and use them!
# Download
$ curl -O http://llvm.org/releases/3.4.2/clang+llvm-3.4.2-x86_64-apple-darwin10.9.xz
# Decompress
$ tar xvfJ clang+llvm-3.4.2-x86_64-apple-darwin10.9.xz 
# Locate clang-format
$ ./clang+llvm-3.4.2-x86_64-apple-darwin10.9/bin/clang-format --help
Just one thing: I couldn't find the git hook python script available. Still, it is in the source and it does not need to be compiled, just copy the script:
# Download only (not the entire llvm project) the repository that contains the hook script
$ svn co http://llvm.org/svn/llvm-project/clang-tools-extra/trunk extra
# Voila!
$ find . -name git-clang-format
# Now just copy it somewhere into your $PATH

Old approach: Build llvm

We need to compile LLVM! I wish brew provided something like --with-tools or --with-clang-format. For now we have to do the entire thing ourselves. Some instructions were taken from Clang's getting started page:

Getting LLVM + Clang

$ svn co http://llvm.org/svn/llvm-project/llvm/trunk llvm
$ cd llvm/tools
$ svn co http://llvm.org/svn/llvm-project/cfe/trunk clang
$ cd ../..
$ cd llvm/tools/clang/tools
$ svn co http://llvm.org/svn/llvm-project/clang-tools-extra/trunk extra
$ cd ../../../..
$ cd llvm/projects
$ svn co http://llvm.org/svn/llvm-project/compiler-rt/trunk compiler-rt
$ cd ../..

Building LLVM + Clang

$ ./llvm/configure
$ make
In my case, make didn't succeed: it takes forever and fails somewhere after building clang-format, but that is ok. clang-format was built, and I wasn't planning to install llvm/clang from source either; I am fine with Xcode/brew stuff :p

Installing clang-format

If it is not clear where make put all the compiled stuff we can always use find:
$ find . -name "clang-format"
We need to copy clang-format command from above location to somewhere in our PATH or make an alias. Also we want to install git-clang-format and maybe other scripts.
# copy clang-format command to your $PATH
$ cp ./Release+Asserts/bin/clang-format /somewhere/in/your/PATH

# install (copy) git-clang-format subcommand to your $PATH
$ cd ./tools/clang/tools/clang-format
$ cp git-clang-format ~/.bin

Using clang-format command

Now, lets say we are inside a directory which contains c/cpp/objc sources which we want to reformat. We simply create a .clang-format file which should be of the form:
Key: Value
Key: Value
Key: Value
Key: Value
And then we would run clang-format file.c. Some commands to know more about the command and the keys available:
# dump all configuration keys
clang-format -dump-config 
clang-format -dump-config -style=webkit

# help
clang-format --help
clang-format --help-list-hidden

# format a file (it will read .clang-format file if available)
clang-format filename.c

# format a file with a explicit style
clang-format -style=google filename.c
Check clang's official style documentation page for more info.
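For example, a minimal .clang-format could look like the following (these keys are real, documented style options; the particular values are just an illustration):

```yaml
BasedOnStyle: LLVM
IndentWidth: 4
ColumnLimit: 100
```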

Using it with git

Since we installed git-clang-format file to our PATH, if we are in a git repository, we can do:
git clang-format
And it will reformat all staged code, pretty awesome!

Are you wondering where the clang-format subcommand came from?! Answer: custom git subcommands are simply executables in the PATH; the name should have the "git-" prefix and no extension.

Hope this info is helpful to somebody :)


Internet connection "Limited" on a Windows tablet

I bought a new wireless router and was looking forward to the stronger signal, but my Windows tablet's internet connection shows "Limited"??!







  1. Open Device Manager.
  2. Double-click [Network adapters].
  3. Double-click [Broadcom 802.11abgn Wireless SDIO Adapter].
  4. The properties appear; click the [Advanced] tab.
  5. On the [Advanced] tab, change the following settings:
    • Change [20/40 Coexistance] from [Auto] to [Enable].
    • Change [40MHz Intolerant] from [Disabled] to [Enable].
    • Change [802.11n Preamble] from [Auto] to [Mixed Mode].
    • Change [Afterburner] from [Disabled] to [Enable].
    • Change [Bandwidth Capability] from [11a:20/40;11bg:20MHz] to [11a/b/g:20/40MHz].
    • That completes the steps.

Also, it seems better to update the driver after all. Acer is useless and never ships updates, so I borrowed the driver from Sony. For now, I could download v5.93.98.4 from DriverIdenfier.com. 93.97.113 didn't work, and neither did the version before it.




I wish this were a tar tutorial, however there are plenty of them already. Maybe some people find this useful too, especially when downloading and opening tarballs... :)
A ".tar" file is a collection of files within a single file, in uncompressed form. If the file is a ".tar.gz" (usually called a tarball) or ".tgz" file, then it is a collection of files in compressed form.
To compress a file you could create the tar file with the z option. Alternatively, you could create the file with any other tool and then use gzip to compress it.

Some Tar Options

  • c : create a new archive
  • d : delete from the archive
  • r : append files to the end of the archive
  • t : list contents
  • u : update (append files if newer)
  • x : extract
  • f : read the archive from / write it to the given file
  • v : verbose
  • z : gzip

Uncompressing files:

# target file is a zipped tar ball
$ tar -zxvf file.tar.gz
# target file is simply a tar ball
$ tar -xvf file.tar

Listing contents of the file

$ tar -tvf myfile.tar

Compressing files

# tar contents of folder foo in foo.tar
$ tar -cvvf foo.tar foo/
That is all, maybe I add more options as needed.
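The same operations can also be scripted; here is a small sketch using Python's tarfile module (the file names are made up):

```python
import os
import tarfile
import tempfile

# Work in a scratch directory with one sample file (names are made up)
workdir = tempfile.mkdtemp()
src = os.path.join(workdir, "hello.txt")
with open(src, "w") as f:
    f.write("hello tar\n")

# "c" + "z" + "f": create a gzipped tarball (like tar -czvf foo.tar.gz hello.txt)
archive = os.path.join(workdir, "foo.tar.gz")
with tarfile.open(archive, "w:gz") as tar:
    tar.add(src, arcname="hello.txt")

# "t": list contents (like tar -tvf foo.tar.gz)
with tarfile.open(archive, "r:gz") as tar:
    names = tar.getnames()

# "x": extract (like tar -xzvf foo.tar.gz)
outdir = os.path.join(workdir, "out")
with tarfile.open(archive, "r:gz") as tar:
    tar.extractall(outdir)
```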


Installing Emacs from the source

This VM I am in is a bit old and somehow apt-get is not pulling the latest packages... so I need to get it manually.
As with many other open source programs built with make: get a release, from http://ftp.jaist.ac.jp/pub/GNU/emacs/ for example, then compile and install.
$ wget http://ftp.jaist.ac.jp/pub/GNU/emacs/emacs-24.3.tar.gz
$ tar -xzvf emacs-24.3.tar.gz
$ cd emacs-24.3
$ less INSTALL; # Just take a glance of the install notes
$ ./configure; # Check the output is not something weird
$ make; # No errors should appear here
$ sudo make install; # Gets moved to somewhere in the PATH
If in the way perl complains about LANG, etc variables:
 perl: warning: Setting locale failed.
 perl: warning: Please check that your locale settings:
  LANGUAGE = (unset),
  LC_ALL = (unset),
  LANG = "en_US.utf8"
    are supported and installed on your system.
 perl: warning: Falling back to the standard locale ("C").
Follow instructions: Perl warning Setting locale failed in Debian
$ export LANGUAGE=en_US.UTF-8
$ export LANG=en_US.UTF-8
$ export LC_ALL=en_US.UTF-8
$ sudo /usr/sbin/locale-gen en_US.UTF-8
$ sudo /usr/sbin/dpkg-reconfigure locales
# now choose the appropriate language from the list to generate
Finally re-run from the last step, install in this case:
$ sudo make install


SQL stuff

I am just a beginner in SQL, so I write some trivial stuff here. Let's say I have some data in my table of logs:
SELECT *, stime - time AS delay FROM logs;
| usid | time | stime | delay |
|    1 |   10 |    10 |     0 |
|    1 |   10 |    12 |     2 |
|    1 |   15 |    17 |     2 |
|    1 |   13 |    15 |     2 |
|    1 |   18 |    21 |     3 |
|    1 |   19 |    22 |     3 |
|    1 |   20 |    25 |     5 |
|    1 |   21 |    26 |     5 |
|    1 |   22 |    26 |     4 |
|    1 |   23 |    26 |     3 |
  • usid is some random id
  • time is record time in the client side
  • stime is record time in the server side
  • delay is simply the diff of server time and client time

Some exercises:

Count the number of records with a particular delay:
SELECT stime - time AS delay,
       COUNT(*) FROM logs
GROUP BY stime - time;
| delay | count(*) |
|     0 |        1 |
|     2 |        3 |
|     3 |        3 |
|     4 |        1 |
|     5 |        2 |
Too much detail (too granular), so we want to group delays 2 and 3 into 2, 4 and 5 into 4, etc. To make sure I am doing the right calculations, here are the delay and the rounded delay:
SELECT *,
       stime - time AS delay,
       FLOOR((stime - time) / 2) * 2 AS rounded_delay
FROM   logs;
| usid | time | stime | delay | rounded_delay |
|    1 |   10 |    10 |     0 |             0 |
|    1 |   10 |    12 |     2 |             2 |
|    1 |   15 |    17 |     2 |             2 |
|    1 |   13 |    15 |     2 |             2 |
|    1 |   18 |    21 |     3 |             2 |
|    1 |   19 |    22 |     3 |             2 |
|    1 |   20 |    25 |     5 |             4 |
|    1 |   21 |    26 |     5 |             4 |
|    1 |   22 |    26 |     4 |             4 |
|    1 |   23 |    26 |     3 |             2 |
So now group by the rounded delay (named just "delay" here)
SELECT FLOOR((stime - time) / 2) * 2 AS delay,
       COUNT(*) FROM logs
GROUP BY FLOOR((stime - time) / 2) * 2;
| delay | count(*) |
|     0 |        1 |
|     2 |        6 |
|     4 |        3 |
Add a percentage. I know I have 10 rows in my table, so I can do this:
SELECT c.delay,
       c.cnt,
       c.cnt / 10 AS percentage
FROM (
      SELECT FLOOR((stime - time) / 2) * 2 AS delay,
             COUNT(*) AS cnt
      FROM   logs
      GROUP BY FLOOR((stime - time) / 2) * 2
) c;
| delay | cnt | percentage |
|     0 |   1 |     0.1000 |
|     2 |   6 |     0.6000 |
|     4 |   3 |     0.3000 |
Although it is probably better not to hard-code the count:
SELECT c.delay,
       c.cnt,
       c.cnt / ( SELECT COUNT(*) FROM logs ) AS percentage
FROM (
       SELECT FLOOR((stime - time) / 2) * 2 AS delay,
              COUNT(*) AS cnt
       FROM   logs
       GROUP BY FLOOR((stime - time) / 2) * 2
) c;
| delay | cnt | percentage |
|     0 |   1 |     0.1000 |
|     2 |   6 |     0.6000 |
|     4 |   3 |     0.3000 |
There should be much better ways (better performance) of doing this... I am glad to receive some comments :)
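To double-check the arithmetic, the same histogram can be reproduced with Python's sqlite3 (a sketch: integer division stands in for FLOOR, which stock SQLite does not ship; the rows are copied from the table above):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE logs (usid INTEGER, time INTEGER, stime INTEGER)")
rows = [(1, 10, 10), (1, 10, 12), (1, 15, 17), (1, 13, 15), (1, 18, 21),
        (1, 19, 22), (1, 20, 25), (1, 21, 26), (1, 22, 26), (1, 23, 26)]
conn.executemany("INSERT INTO logs VALUES (?, ?, ?)", rows)

# (stime - time) / 2 is integer division here because the columns are
# INTEGER, so it plays the role of FLOOR((stime - time) / 2)
histogram = conn.execute("""
    SELECT (stime - time) / 2 * 2 AS delay,
           COUNT(*) AS cnt,
           COUNT(*) * 1.0 / (SELECT COUNT(*) FROM logs) AS percentage
    FROM logs
    GROUP BY (stime - time) / 2 * 2
    ORDER BY delay
""").fetchall()
```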


Converting to and from unix time

I write this because I always forget the right formats, options...

BSD Date - Mac OSX

The f option is for the input format. The j option is to not actually set the system time. The last argument is optional: it is the output format, and it requires a + as a prefix.
  • Convert to unix time
    $ date -jf "%Y-%m-%d %H:%M:%S %Z" "2013-07-19 00:00:01 JST" "+%s"
  • Convert from unix time
    $ date -jf "%s" 1374159601 "+%Y-%m-%d %H:%M:%S %Z"
    2013-07-19 00:00:01 JST
    $ date -jf "%s" 1374159601
    Fri Jul 19 00:00:01 JST 2013

GNU Date - Linux, etc

The -d is for the input date. The format is recognized automatically. The last parameter, same as with BSD date, is the optional output format. It must be prefixed with a +.
  • Convert to unix time
    $ date -d "2013-07-19 00:00:00 JST" "+%s"
  • Convert from unix time
    $ date -d @1374159600 "+%Y-%m-%d %H:%M:%S %Z"
    2013-07-19 00:00:00 JST
    Also :
    $ date -d @1374159600
    Fri Jul 19 00:00:00 JST 2013
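The same conversions can be cross-checked in Python (JST is UTC+9; the epoch value matches the date examples above):

```python
from datetime import datetime, timezone, timedelta

JST = timezone(timedelta(hours=9), "JST")

# To unix time: like  date -d "2013-07-19 00:00:00 JST" "+%s"
dt = datetime(2013, 7, 19, 0, 0, 0, tzinfo=JST)
epoch = int(dt.timestamp())

# From unix time: like  date -d @1374159600 "+%Y-%m-%d %H:%M:%S"
back = datetime.fromtimestamp(1374159600, JST).strftime("%Y-%m-%d %H:%M:%S")
```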



Wikis with Gollum

Gollum is a simple wiki system built on top of Git that powers GitHub Wikis. It is super easy to start working on your wiki.


gem install gollum redcarpet rdiscount
gollum --help


git clone https://github.com/allending/Kiwi.wiki.git
cd Kiwi.wiki/
gollum --page-file-dir Kiwi.wiki/ TestWiki

Editing changes

As documented, changes made directly in the source should be committed to trigger gollum to reflect them.


If Coda is your editor, then the Markdown mode below might come in handy: Markdown.mode by bobthecow - Github


Installing GLEW in MacOS X

The old way

The easy way is at the bottom of the post :)
  1. Download the sources from SourceForge: http://glew.sourceforge.net/
  2. Unzip/Untar the file (some browsers will do this automatically).
  3. Before compiling: usually you would just run make, but the shell script will fail to find/guess the "$system" variable, so we need to set it manually (as of version 1.7.0 [08-26-11]). Open the Makefile with some editor and change system to be darwin as shown in the image.
  4. Compile, install and clean:
    make
    sudo make install
    make clean
    Some harmless warnings appear but... let them be.
  5. We are done! GLEW's headers are in /usr/include/GL and the dynamic and static libraries are in /usr/lib/ as usual :)

The easy way

  1. Use homebrew and save yourself all these little hassles... :)
    brew install glew


  1. Installing GLEW in Mac OS X (Leopard) - Julian Villegas
  2. GLEW on Mac OS - 175 CS Forum


CoreImage and UIKit coordinates

Recently I found a very nice CoreImage tutorial for iOS: Easy face detection with core animation in iOS5. After downloading the source I found that this quite popular tutorial was teaching something wrong: coordinate system conversions!

I can't remember how many bugs/problems I had at uni because of not converting coordinates properly or using the wrong coordinate system. So I decided to write this little post so my great audience (approx. 6 viewers per day) can benefit from it :)

CoreImage Coordinate system

(0,0) x-axis y-axis

In CoreImage, each image has its own coordinate space with its origin at the bottom left corner of the image. Each image's coordinate system is device independent.

UIKit Coordinate System

(From View Programming Guide for iOS)

The origin of a UIView's frame is at the top left corner and its coordinates are in its superview's coordinate space. They are not independent.

This means that in CoreImage each image has its origin at {0,0}, while in UIKit views are not necessarily like that.

Converting coordinates

We should convert CoreImage coordinates to the UIKit coordinate system, not the other way around: chances are that most of your code is in UIKit coordinates, and other iOS developers will expect {0,0} to be at the top left corner. Be nice and don't change everything just because of CoreImage. :)

This is easily done with an affine transform:
    x_ui = x_ci
    y_ui = h - y_ci
Where the ui and ci subindexes mean UIKit and CoreImage coordinates respectively, and h is the height of the image in question.

We could do this manually, but happily there are a bunch of functions for this task like CGAffineTransformMakeScale, CGAffineTransformTranslate, CGPointApplyAffineTransform and even CGRectApplyAffineTransform! Thanks to Apple for making our life easier.
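Numerically, the flip is easy to sanity-check. Here is a small sketch in plain Python (not the CG functions) of what the conversion does to a rect, given an image of height img_h:

```python
def ci_rect_to_uikit(x, y, w, h, img_h):
    """Convert a CoreImage rect (origin at bottom-left) to UIKit (origin
    at top-left). The x axis is untouched; the y axis is flipped about
    the image height, so the new origin y is measured from the top."""
    return (x, img_h - y - h, w, h)

# A 20x10 rect sitting at the bottom-left of a 100pt-tall image ends up
# near the *bottom* in UIKit coordinates: y = 100 - 0 - 10 = 90
rect = ci_rect_to_uikit(0, 0, 20, 10, 100)
```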

The code

// Create the image and detector
CIImage* image = [CIImage imageWithCGImage:imageView.image.CGImage];
CIDetector* detector = [CIDetector detectorOfType:CIDetectorTypeFace 
                                          context:... options:...];

// CoreImage coordinate system origin is at the bottom left corner
// and UIKit is at the top left corner. So we need to translate
// features positions before drawing them to screen. In order to do
// so we make an affine transform
CGAffineTransform transform = CGAffineTransformMakeScale(1, -1);
transform = CGAffineTransformTranslate(transform,
                                    0, -imageView.bounds.size.height);

// Get features from the image
NSArray* features = [detector featuresInImage:image];
for(CIFaceFeature* faceFeature in features) {

  // Get the face rect: Convert CoreImage to UIKit coordinates
  const CGRect faceRect = CGRectApplyAffineTransform(faceFeature.bounds, transform);

  // create a UIView using the bounds of the face
  UIView* faceView = [[UIView alloc] initWithFrame:faceRect];


  if(faceFeature.hasLeftEyePosition) {
    // Get the left eye position: Convert CoreImage to UIKit coordinates
    const CGPoint leftEyePos = CGPointApplyAffineTransform(faceFeature.leftEyePosition, transform);
  }
}

You can download the sample from here and see the result is pretty much the same as the original tutorial. Only this time we didn't scramble with the coordinate system :)

In case you didn't notice, the original example changes the whole window's coordinate system, causing its origin to be at the bottom left (like Cocoa on the Mac), hence the imageView appears at the bottom.