Behind $.ajax

I believe most of you are familiar with code like this:

doSomethingBefore();

$.ajax({
  url: "test.html",
  success: function() {
	doSomethingWhenSucceed();
  }
});

doSomethingAfter();

The execution order of the code above is:

  1. The main program first calls doSomethingBefore.
  2. Next, the main program fires the ajax call, then goes on to execute doSomethingAfter, and the main program ends.
  3. When the ajax request gets a response, and the response is successful, doSomethingWhenSucceed runs.

If you understand javascript's asynchronous programming model, this result will not surprise you. But what exactly happens behind this code? Here I'll share my understanding.

Programmers spoiled by jQuery may have forgotten how to fire an ajax call with the native javascript api, so let's refresh our memory with a simple example:

var xmlhttp = new XMLHttpRequest();

xmlhttp.onreadystatechange = function() {
  if (xmlhttp.readyState === 4){
	console.log(xmlhttp.responseText);
  }
};

xmlhttp.open("GET","https://api.github.com/users/tater/events",true);
xmlhttp.send();

XMLHttpRequest is the most important concept in ajax. It is an API the browser exposes to browser scripting languages (e.g. javascript), through which a script can issue HTTP and HTTPS requests and consume the responses.

An ajax call usually goes through the following steps:

  1. Create an XMLHttpRequest object.
  2. Register a listener for the object's onreadystatechange event, i.e. the action to run as the request's state changes (for example, when it succeeds).
  3. Call the object's open method (followed by send) to start the asynchronous http call.
  4. The browser issues the http request, updating the object's readyState as the request progresses and firing readystatechange events (i.e. readystatechange events enter the javascript event queue).
  5. When the javascript engine's thread polls the event queue and reaches a readystatechange event, it invokes the event's listener, completing the call.

The detailed steps are shown in the figure below:

[Figure: Ajax workflow]

These steps also show how ajax works within javascript's single-threaded execution model. For the details of single-threaded execution in javascript, my former colleague 四火 recently wrote an article that explains how javascript runs tasks on a single thread.

Create a scala project with sbt

In the ruby world I'm used to bundle for dependency management and Rake for build automation. So in the scala world, when I run into a fun open source library such as akka, how can I start playing with it quickly? My old approach was:

  • Open IntelliJ and create a scala project
  • Download the library to my machine
  • Write code

Is there a more convenient, lightweight way? There is. Here is a quick walkthrough of using sbt to manage dependencies and build scala programs automatically:

  • Install sbt.
  • Install the sbt plugin for idea.
  • Create a project directory.
  • In the project directory, create a file named build.sbt with the following content:
name := "<your project name>"

version := "0.1"

scalaVersion := "2.11-M3"

libraryDependencies += "org.scalatest" % "scalatest_2.10" % "1.9.1" % "test"

This file is scala syntax. It specifies the project name, the version, the scala version to build against, and the project's dependencies. For example, to use akka, add the following line to the file:

libraryDependencies += "com.typesafe.akka" % "akka-actor_2.11.0-M3" % "2.2.0"
  • Run sbt update in the project directory to download the project's dependencies.
  • Run sbt gen-idea to generate the idea project files.
  • Open IntelliJ and import the generated project.

Now, start writing scala.

Since I often create small scala projects, I wrote a ruby script to generate them.
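
A minimal sketch of what such a generator might look like (the defaults and file layout here are illustrative, not my actual script):

#!/usr/bin/env ruby
# Illustrative sbt project generator: create the directory, write build.sbt,
# then download dependencies and generate the idea project files.
require 'fileutils'

project = ARGV.fetch(0) { abort "usage: scala-new <project-name>" }

FileUtils.mkdir_p(project)
File.write(File.join(project, "build.sbt"), <<~SBT)
  name := "#{project}"

  version := "0.1"

  scalaVersion := "2.11.0-M3"
SBT

Dir.chdir(project) do
  system("sbt update") && system("sbt gen-idea")
end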

Ruby clean code: block and instance_eval

Prologue

Ever since I came to the ruby world, I have been fascinated by ruby's free-flowing syntax, elegant object model, and beautiful dsls. Knowing more ruby features helps you build prettier, more fluent apis. In this post, I'll walk through an example of using ruby's block and instance_eval to build a more expressive api.

The requirement

This example comes from a real project requirement, simplified for this post. The program's input is a json string in a fixed format; its output is an object of a given type, created from attribute values picked out of the json. Unlike ordinary serialization and deserialization between json and objects, the correspondence between values in the json and the object's attributes involves some logic. The correspondences come in the following flavors:

  1. A json attribute maps directly to an object attribute.
  2. A json attribute maps directly to an object attribute, but when the attribute is missing from the json, a given default value is used.
  3. The object attribute's type is not a plain type; when the json holds a value for it, an object of the corresponding type must be created from that value.
  4. And so on.

Let's look at the first version of the implementation:

Note: the json below is not a string but the nested hash obtained from JSON.parse; the same applies throughout.
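
For illustration, the parsed input assumed by the code below might look like this (the values are made up):

json = {
  'author' => { 'name' => 'tater' },
  'date'   => '2013-09-10',
  'tags'   => ['music', 'IT']
}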

class Post
  attr_accessor :author_name, :date, :tags

  def initialize(json)
    init_author_name(json)
    init_date(json)
    init_tags(json)
    #init_xxx...
  end
  
  # omit some code here...
  
  # author.name in the json maps directly: case 1 above
  def init_author_name(json)
    @author_name = json['author']['name']
  end

  # case 2 above: default value when the attribute is missing
  def init_date(json)
    @date = json['date'].blank? ? "1970-01-01" : json['date']
  end

  # case 3 above: build Tag objects from the json values
  def init_tags(json)
    @tags = []
    unless json['tags'].nil?
      json['tags'].each do |tag|
        @tags << Tag.new(tag)
      end
    end
  end
end

Bad smell

Looking at this code, do you smell anything bad? Duplicated code? Not quite, yet all those init_xxx methods feel somewhat unnatural.

In my view, this code has two problems:

First, converting json into a Post object should not be Post's responsibility; the code violates the single responsibility principle.

Second, because the mapping rules between json values and object attributes are not well modeled, we are forced to create one init_xxx method per attribute and call them one by one in initialize. These init_xxx methods carry structural duplication.

How to improve?

First, separate the responsibilities: move the json-to-Post conversion into a new class, PostBuilder.

Second, abstract over the mapping rules.

Improvement

Analyzing the mapping rules between json values and object attributes again, there is a pattern: every mapping consists of three parts, a json attribute, an object attribute name, and a conversion rule (absent by default). We can identify the json attribute with a jsonpath, express the conversion rule with a block, and create a MappingRule class to model the relationship.

That gives us the following code:

class Post
  attr_accessor :author_name, :date, :tags
end

class MappingRule
  attr_accessor :json_path, :attr_name, :converter

  def initialize(json_path, attr_name, converter)
    @json_path, @attr_name, @converter = json_path, attr_name, converter
  end

  def apply(obj, json)
    value = JSONPath.new(@json_path).on(json)
    unless value.nil?
      obj.send("#{@attr_name}=", @converter.call(value))
    end
  end
end

class PostBuilder
  def initialize
    @rules = []
  end
  
  def rule(json_path, attr_name, converter = ->(v) { v })
    @rules << MappingRule.new(json_path, attr_name, converter)
  end
  
  def build(json)
    post = Post.new
    @rules.each do |rule|
      rule.apply(post, json)
    end
    post
  end
end

# create the builder
builder = PostBuilder.new
builder.rule("author name", :author_name)
builder.rule("date", :date, -> (date) { date.nil? ? "1970-01-01" : date} )
builder.rule("tags", :tags, -> (tags) { tags.map {|tag| Tag.new(tag)} })

# build a Post from json with the builder

post = builder.build({"date" => "2013-09-10", "tags" => ["music", "IT"] })
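
A note on JSONPath: the code above assumes a small helper that resolves a space-separated path such as "author name" against the nested hash. This is an assumption for this post, not the jsonpath gem; a minimal sketch of the behavior assumed here:

# Minimal JSONPath sketch: resolve a space-separated path against a nested
# hash, returning nil as soon as a segment is missing.
class JSONPath
  def initialize(path)
    @segments = path.split
  end

  def on(json)
    @segments.reduce(json) { |node, key| node.is_a?(Hash) ? node[key] : nil }
  end
end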

Review

Compared with the first version, we introduced jsonpath and blocks to model the conversion rules (the new MappingRule class), and PostBuilder#build applies each rule in a loop to build the object, eliminating the duplicated init_xxx methods. The code has reached a satisfying state. But can we make PostBuilder's interface even nicer?

Improving again: a more expressive api

Let's look at how a PostBuilder is used:

  1. Create a PostBuilder object.
  2. Add some conversion rules to it.
  3. Use it to build objects from json.

So, before rules have been added to a PostBuilder, it is incomplete and unusable; steps one and two should really be one atomic operation. We can hide the builder's constructor and add a config class method that accepts a block: inside config we create a builder instance and hand it to the block, which adds the rules and completes the builder's construction. The code looks like this:

#add a config class method
class PostBuilder
  def self.config
    builder = new   # call new without a receiver, since it is hidden below
    yield(builder) if block_given?
    builder
  end

  #...

  # hide new so a PostBuilder can only be created through config; note that
  # marking initialize private is not enough, because initialize is already
  # private in ruby
  private_class_method :new
end

#create the builder
builder = PostBuilder.config do |builder|
    builder.rule("author name", :author_name)
    builder.rule("date", :date, -> (date) { date.nil? ? "1970-01-01" : date} )
    builder.rule("tags", :tags, -> (tags) { tags.map {|tag| Tag.new(tag)} })
end

#use the builder
post = builder.build({"date" => "2013-09-10", "tags" => ["music", "IT"] })

Improving again: a more concise api

At this point the api PostBuilder provides is already very clean. Yet there is still room for improvement: inside the block, the word builder shows up at every place a rule is added. Can we eliminate that repetition too? We can; enter instance_eval. Modify PostBuilder.config as follows:

  def self.config(&block)
    builder = PostBuilder.new
	builder.instance_eval(block)
	builder
  end

Creating a builder is then simplified to:

builder = PostBuilder.config do
    rule("author name", :author_name)
    rule("date", :date, -> (date) { date.nil? ? "1970-01-01" : date} )
    rule("tags", :tags, -> (tags) { tags.map {|tag| Tag.new(tag)} })
end

PostBuilder.config evaluates the block with instance_eval, which is equivalent to running the block's code on the newly created builder, so it achieves the same effect of adding rules to the builder.

instance_eval makes the code more concise, but the risk that comes with it is that you also give your api's callers a chance to run arbitrary code on the newly created object. You need to weigh conciseness against that risk.

Abstracting further

Looking back at PostBuilder, with only small changes we can build objects of any type from json, which gives us an InstanceBuilder class:

post_builder = InstanceBuilder.config do
    instance_class Post
    rule("author name", :author_name)
    rule("date", :date, -> (date) { date.nil? ? "1970-01-01" : date} )
    rule("tags", :tags, -> (tags) { tags.map {|tag| Tag.new(tag)} })
end

Try implementing the InstanceBuilder#instance_class method yourself before reading on.
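
If you want to compare notes, here is one possible sketch (my own take, assuming MappingRule stays as defined above):

# One possible InstanceBuilder sketch: instance_class records which class to
# instantiate, and build uses it instead of a hard-coded Post.
class InstanceBuilder
  def self.config(&block)
    builder = new
    builder.instance_eval(&block)
    builder
  end

  def instance_class(clazz)
    @instance_class = clazz
  end

  def rule(json_path, attr_name, converter = ->(v) { v })
    @rules << MappingRule.new(json_path, attr_name, converter)
  end

  def build(json)
    obj = @instance_class.new
    @rules.each { |rule| rule.apply(obj, json) }
    obj
  end

  private_class_method :new

  def initialize
    @rules = []
  end
end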

Closing thoughts

Throughout this example, ruby's block and instance_eval let us turn complex, ugly code into code that is clean, well layered, and easier to extend. Let me close with a couple of thoughts on writing code, for your reference:

  1. Before you start writing the implementation, think about how to provide a clean, expressive api that callers will enjoy using (sinatra sets a great example).
  2. Used appropriately, block and instance_eval make it easy to build an internal dsl.

Reference

To learn more about block, instance_eval, and internal dsls, see the following two articles:

How do I build DSLs with yield and instance_eval?

Creating a ruby dsl

Rails application deployment automation with mina

TLDR:

In this post, I will introduce a really fast deployment tool - Mina, and show you how to deploy a rails application that runs on unicorn behind nginx. I'll also show you how to organize your mina deployment tasks.

Note:

All the code in this post can be found here.

About mina

"Mina is a really fast deployer and server automation tool" is how the team who built it describes it. The concept behind Mina is to connect to a remote server and execute a set of shell commands that you define in a local deployment file (config/deploy.rb). One of its outstanding features is speed: all bash instructions are performed in one SSH connection.

Init

To automate deployment with mina, you need to get the following things ready:

  1. a remote server
  2. a user created for deployment (e.g. deployer), added to the sudoer list
  3. an ssh key pair, with the generated public key added to GitHub
  4. a deployment target folder on the remote server (e.g. ‘/var/www/example.com’)

Once you have these done, run mina init in your project directory; this generates the deployment file, config/deploy.rb. Then set the server address, deployment user, deployment target, and other settings in the deployment file, like the following:

set :user, 'deployer'
set :domain, ENV['on'] == 'prod' ? '<prod ip>' : '<qa ip>'
set :deploy_to, '/var/www/example.com'
set :repository, 'git@github.com:your_company/sample.git'
set :branch, 'master'

Setup

Then run mina setup; this creates the deployment folders, which look like this:

/var/www/example.com/     # The deploy_to path
|-  releases/              # Holds releases, one subdir per release
|   |- 1/
|   |- 2/
|   |- 3/
|   '- ...
|-  shared/                # Holds files shared between releases
|   |- logs/               # Log files are usually stored here
|   `- ...
'-  current/               # A symlink to the current release in releases/

Provision

It is very common to set up a new server and deploy the application on it, and it is good to automate this process as well; here comes the provision task:

    task :provision do
      # add nginx repo
      invoke :'nginx:add_repo'

      queue  "sudo yum install -y git gcc gcc-c++* make openssl-devel mysql-devel curl-devel nginx sendmail-cf ImageMagick"

      #install rbenv
      queue  "source ~/.bash_profile"
      queue  "#{exists('rbenv')} || git clone https://github.com/sstephenson/rbenv.git ~/.rbenv"
      queue  "#{exists('rbenv')} || git clone https://github.com/sstephenson/ruby-build.git ~/.rbenv/plugins/ruby-build"
      queue  "#{exists('rbenv')} || echo 'export PATH=\"$HOME/.rbenv/bin:$PATH\"' >> ~/.bash_profile && source ~/.bash_profile"

      #install ruby
      queue  "#{ruby_exists} || RUBY_CONFIGURE_OPTS=--with-openssl-dir=/usr/bin rbenv install #{ruby_version}"

      #install bundler
      queue  "#{ruby_exists} || rbenv local #{ruby_version}"
      queue  "#{exists('bundle')} || gem install bundler --no-ri --no-rdoc"

      #set up deploy to
      queue "sudo mkdir -p #{deploy_to}"
      queue "sudo chown -R #{user} #{deploy_to}"

    end

    #helper methods
    def ruby_exists
      "rbenv versions | grep #{ruby_version} >/dev/null 2>&1"
    end

    def exists cmd
      "command -v #{cmd} >/dev/null 2>&1"
    end

To be able to run this task multiple times, I created helper methods that detect whether an executable exists, e.g. ruby_exists and exists('bundle'). When the executable already exists, the helper's command succeeds and short-circuits the rest of the line; otherwise the next command runs and installs it.

With this task, you can get a server ready for deployment in several minutes.

Deploy

Once the server is provisioned, you can deploy your application with mina deploy. Here is a typical deploy task:

    desc "Deploys the current version to the server."
    task :deploy => :environment do
      deploy do
        invoke :'git:clone'   #clone code from github
        invoke :'deploy:link_shared_paths' #link shared files into the release we just cloned
        invoke :'bundle:install' #install gem dependencies with bundler
        invoke :'rails:db_migrate' #run database migration
        invoke :'rails:assets_precompile' #compile assets
        invoke :'unicorn_and_nginx' #setup nginx and unicorn config
        to :launch do
          queue '/etc/init.d/unicorn_myapp.sh reload' #reload unicorn after deployment succeeds
        end
      end
    end

Unicorn and nginx

To run our application on unicorn behind nginx, we need to create our own unicorn and nginx configuration files and start nginx and unicorn with them. Here is the task:

    desc "Setup unicorn and nginx config"
    task :unicorn_and_nginx do
      queue! "#{file_exists('/etc/nginx/nginx.conf.save')} || sudo mv /etc/nginx/nginx.conf /etc/nginx/nginx.conf.save"

      queue! "#{file_exists('/etc/nginx/nginx.conf')} || sudo ln -nfs #{deploy_to}/current/config/nginx.conf /etc/nginx/nginx.conf"

      queue! "#{file_exists('/etc/init.d/unicorn_avalon.sh')} || sudo ln -nfs #{deploy_to}/current/scripts/unicorn.sh /etc/init.d/unicorn_myapp.sh"
    end

Conclusion

With these tasks (mina init, mina provision, mina deploy), mina helps you deploy easily and with fewer mistakes. Have fun with mina!

Thoughts on designing a RESTful API on the RoR stack

Recently I have been working on a RoR-stack RESTful API project, where I was involved in proposing the tech stack, the architecture, and the deployment pipeline. Many thoughts came up along the way, so I'm writing them down here in case they help you when you meet similar questions.

Tech Stack

There are a bunch of API frameworks in the ruby stack, such as Grape, rails-api, and Sinatra. I'll share my understanding here:

Sinatra

Sinatra provides a lightweight, simple DSL for creating web applications; we can create a web/api application with sinatra within 15 seconds! The downside is that it is not a full stack framework, so it requires combining sinatra with other frameworks. For example, if we have a backend database for storing and retrieving information, we need to integrate sinatra with an orm framework (e.g. ActiveRecord or DataMapper); if we want to render info on a web page, we need to integrate a view template engine.

Grape

Grape is a ruby framework designed for creating RESTful api services. It has several great features for RESTful apis, for example api versioning, automatic api doc generation, etc. Similar to sinatra, it is not a full stack framework and requires some integration work. BTW, Grape can be easily integrated into any Rack application (e.g. sinatra, rails).

Rails::API

Rails::API is a subset of a normal Rails application, created for applications that don’t require all functionality that a complete Rails application provides. It is a bit more lightweight, and consequently a bit faster than a normal Rails application.

In the end, we chose Rails::API as our tech stack for the following reasons:

  • it is a full stack framework: ORM, db migration, validation, etc. all in one place.
  • we can leverage rails features, e.g. generators and db migrations.
  • it is a subset of rails, designed for creating API applications.
  • rails's REST convention.

API Design

Content Type Negotiation

One of the most important parts of designing a RESTful API is content type negotiation, which can happen both in the request/response headers and in the url suffix:

In the request headers, Content-Type indicates the content type of the data in the request body, and Accept tells the server what content type the client expects.

In the response headers, Content-Type indicates the content type of the data in the response body.

Also, a request to /users/2.json expects the server to return the user info in JSON format, while a request to /users/2.xml expects an XML response.

There are several standard content types, e.g. application/json and application/xml, and people can define their own, e.g. application/vnd.google-apps.script+json. My feeling is that if your api is a public endpoint, you'd better define your own content type.

Let's take an example: an authentication api expects the following request data:

{
  "email": "sample@gmail.com",
  "password": "my_password"
}

You have two content type choices: application/json and application/vnd.mycompany.credential+json. If this were a public api, I would choose the customized content type, application/vnd.mycompany.credential+json; if it were an internal api, I would choose the standard content type, application/json. I made this choice by weighing the following:

  • Customized content type. Pros: you can define a json schema that the api server and client both use to ensure a request is processable. Cons: adds complexity.
  • Standard content type. Pros: simple and straightforward. Cons: no validation of request data; any unexpected message can be sent to the server.

Code convention

I struggled with workflow management when I played the very first story in this project. The problem is that a business workflow commonly has more than two exit points. E.g. for a login workflow, the possible exit points are:

  1. username and password matched, login succeeds
  2. username and password mismatched, login fails
  3. user is inactive, login fails
  4. user is locked because of too many login failures, login fails

In a rails project it is very important to keep your controllers clean; a controller's only responsibility is passing parameters and routing, so the business logic is better placed in a Model or a Service. Here comes the problem: how can the controller know the exact reason for a failure without holding the business logic itself? A return value cannot fulfill this requirement, so here is the solution we came up with: model these exceptional exit points with ruby exceptions, and handle the different exceptions in the controller with different exception handlers. We found this makes both the controller and the model much cleaner. Let's look at an example:

Before, the controller was messy:

      #in controller
      def create
        user = User.authorize(params[:email], params[:password])
        if user.nil?
          render :status => 401, :json => {:error => "Unauthorized"}
        elsif !user.activated?
          render :status => 403, :json => {:error => "user hasn't been activated"}
        else
          response.headers["Token"]= Session.generate_token(user)
          render :status => 201, :nothing => true
        end
      end

After, the controller is much cleaner:

      #in controller
      rescue_from InactiveUserException, with: :user_is_inactive
      rescue_from InvalidCredentialException, with: :invalid_credential
      rescue_from UserNotFoundException, with: :user_not_found

      def create
        login_session = User.login(params[:email], params[:password])
        response.headers["Token"]= login_session.token
        render :status => :created, :nothing => true
      end
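
The model side is not shown above; here is a hedged sketch of how User.login might raise these exceptions (the Session class and the authenticate call are assumptions for illustration):

      #illustrative sketch: each abnormal exit point becomes an exception
      class UserNotFoundException < StandardError; end
      class InvalidCredentialException < StandardError; end
      class InactiveUserException < StandardError; end

      class User < ActiveRecord::Base
        def self.login(email, password)
          user = find_by(email: email) or raise UserNotFoundException
          raise InvalidCredentialException unless user.authenticate(password) # assumes has_secure_password
          raise InactiveUserException unless user.activated?
          Session.new(user) # hypothetical Session that exposes #token
        end
      end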

Error code

It is very common to return the failure reason when an API call fails. Even though we could return the failure reason in plain english, as an API provider we shouldn't assume the API client will use the error message we provide; it is better to return a structured failure reason that the API client can parse. Let's look at an example from the Github API:

    {
      "message": "Validation Failed",
        "errors": [
        {
          "resource": "Issue",
          "field": "title",
          "code": "missing_field"
        }
      ]
    }

The failure reason is returned as a JSON object: resource represents what kind of resource was requested, field indicates which field failed the api call, and code indicates the exact failure reason.

Another thing I want to highlight: do not define numeric error codes; they will be a nightmare for you and your clients. A better solution is to define meaningful error codes, like missing_field, too_long, etc.

Documentation

A RESTful api has no frontend, so it is very important to make your documentation friendly. Also, it is very common for a developer to change code but forget to update the api doc, so it would be great if we could generate the api documentation automatically. Considering we already have a well formed test suite (or spec) for the api, why can't we just extract information from those tests/specs and generate the documentation automatically? Actually there are some gems trying to solve this problem: apipie-rails and swagger-ui. We're using apipie-rails, but we've found some features missing from it; e.g. it cannot record the exact request and response headers, while headers play an important role in a RESTful api.

Testing

We have two kinds of tests in this project: integration tests and unit tests.

Integration tests exercise the api from its endpoints; they are end-to-end tests. We use rspec request specs to define them.

Unit tests cover each layer separately. Hint: only stub methods on the boundary of each layer.
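
As an example of stubbing on a boundary, a controller spec can stub the model's entry point and assert only on the http behavior. Here is a sketch reusing the login example above (rspec controller-spec style; the controller name is assumed):

    # Sketch: stub only the model boundary (User.login), assert on http behavior.
    describe SessionsController do
      it "returns 201 and a token on successful login" do
        session = double("login_session", token: "abc123")
        allow(User).to receive(:login).with("sample@gmail.com", "my_password").and_return(session)

        post :create, params: { email: "sample@gmail.com", password: "my_password" }

        expect(response.status).to eq(201)
        expect(response.headers["Token"]).to eq("abc123")
      end
    end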

Integration test

Make tests less flaky by using contract-based testing instead of hitting live endpoints.

Versioning

We could specify the api version in either the url or an http request header. In theory, a version number in the url is not restful; if you have any thoughts, please let me know.
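
For illustration, a header-based scheme could carry the version in the Accept header and match it with a routing constraint. A sketch (the ApiVersion constraint class is my own, not a Rails built-in):

    # Sketch: route by the api version carried in the Accept header.
    class ApiVersion
      def initialize(version)
        @version = version
      end

      def matches?(request)
        request.headers['Accept'].to_s.include?("version=#{@version}")
      end
    end

    Rails.application.routes.draw do
      scope module: :v2, constraints: ApiVersion.new(2) do
        resources :users
      end
    end

A client would then send something like Accept: application/vnd.mycompany+json; version=2.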

Deployment

We deploy our api on Amazon EC2.

  • Provision: create a new node from a customized AMI (with the required dependencies pre-installed).

  • Build pipeline: the CI builds an rpm package once all tests pass; the package is then pushed to S3 and installed on the provisioned node.