Share: High performance Rails

blacktulip · June 02, 2013 · Last reply by aptx4869 on November 19, 2013 · 6108 hits
This topic has been selected as an excellent topic by the admin.

Nothing really new in it, but it's quite systematic.

let me see see

Great post, reading it carefully.

Amazingly, it matches the components we use almost exactly: unicorn, oob, memory killer, slim...

> Benchmark.ms { Article.last(100).to_a }
Article Load (8.1ms)  SELECT * FROM articles ...

The query is fast enough, but creating 100 AR objects is slow
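A minimal sketch of that point, assuming an Article model with id and title columns (the pluck call and column names are illustrative, not from the slides): the SQL is cheap either way, but pluck skips instantiating 100 ActiveRecord objects.

# run in a Rails console
Benchmark.ms { Article.last(100).to_a }                                   # builds 100 AR objects
Benchmark.ms { Article.order(id: :desc).limit(100).pluck(:id, :title) }  # returns raw values only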

Good reminder, that one!

I don't get why they GC.disable inside Unicorn and then run GC every 10 requests.

#6 @huacnlee This way no GC happens while a user request is being handled; GC runs after the request completes, so GC doesn't add to the user-perceived request time.
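A minimal sketch of that setup, using the OobGC middleware bundled with unicorn; the 10-request interval is from the reply above, everything else is illustrative rather than the configuration from the slides.

# config/unicorn.rb -- keep automatic GC from firing mid-request
after_fork do |server, worker|
  GC.disable   # only automatic GC is disabled; an explicit GC.start still runs
end

# config.ru -- run GC out of band, after the response has been written,
# once every 10 requests
require 'unicorn/oob_gc'
use Unicorn::OobGC, 10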

#8 @huacnlee Combined with the unicorn-worker-killer mentioned in the article, and with the per-worker upper and lower limits tuned to the server's memory and the number of unicorn workers, our app's average response time dropped from 140ms to 120ms in real measurements; the effect is quite noticeable.
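A hedged sketch of what that tuning looks like with unicorn-worker-killer; the 192-256 MB bounds here are illustrative, not the poster's actual values.

# config.ru
require 'unicorn/worker_killer'
# restart a worker once its memory use leaves this range; a random threshold
# between the bounds is used so all workers don't restart at the same time
use Unicorn::WorkerKiller::Oom, (192 * (1024 ** 2)), (256 * (1024 ** 2))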

#9 @quakewang unicorn-worker-killer should just be about stability, right? It's very handy for guarding against unexpected memory leaks, haha.

#6 @huacnlee Because they don't want GC to interrupt end users' requests; otherwise the 200ms response-time target would be broken.

#10 @huacnlee Without the memory-monitoring self-kill, I've found that with GC disabled and OOB GC in use, memory inexplicably balloons; very strange.

This is very useful.

IO problems like caching and the database hit every language; use caching well and the language itself matters much less.

So strange: I tried using cache digests in the json, then ran ab against it, and the response time actually got noticeably slower... Does that mean that if the generated json isn't big, there's simply no point in caching it? (Numbers below; see the sketch after the ab output.)

json.cache! @users do |json|
  json.users @users do |user|
    json.id         user.id
    json.user_name  user.user_name
    ...
  end
end
Concurrency Level:      100
Time taken for tests:   7.412 seconds
Complete requests:      5000
Failed requests:        0
Write errors:           0
Total transferred:      6368120 bytes
HTML transferred:       4215000 bytes
Requests per second:    674.61 [#/sec] (mean)
Time per request:       148.235 [ms] (mean)
Time per request:       1.482 [ms] (mean, across all concurrent requests)
Transfer rate:          839.06 [Kbytes/sec] received

Connection Times (ms)
              min  mean[+/-sd] median   max
Connect:        0    0   0.5      0       5
Processing:    12  146  31.5    144     417
Waiting:       12  146  31.5    144     417
Total:         17  146  31.4    144     417

Percentage of the requests served within a certain time (ms)
  50%    144
  66%    152
  75%    158
  80%    161
  90%    170
  95%    184
  98%    232
  99%    291
 100%    417 (longest request)

vs

json.users @users do |user|
  json.id         user.id
  json.user_name  user.user_name
  ...
end
Concurrency Level:      100
Time taken for tests:   5.761 seconds
Complete requests:      5000
Failed requests:        0
Write errors:           0
Total transferred:      6368028 bytes
HTML transferred:       4215000 bytes
Requests per second:    867.98 [#/sec] (mean)
Time per request:       115.210 [ms] (mean)
Time per request:       1.152 [ms] (mean, across all concurrent requests)
Transfer rate:          1079.55 [Kbytes/sec] received

Connection Times (ms)
              min  mean[+/-sd] median   max
Connect:        0    0   0.6      0       6
Processing:    18  113  27.5    109     407
Waiting:       18  113  27.5    109     407
Total:         23  113  27.4    109     407

Percentage of the requests served within a certain time (ms)
  50%    109
  66%    118
  75%    125
  80%    129
  90%    137
  95%    145
  98%    166
  99%    243
 100%    407 (longest request)
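A rough sketch of why caching tiny fragments can lose out, assuming the redis cache store configured further down this thread; the probe key and payload are made up, the point is that the per-fragment cache round trip and key/digest work have a fixed cost that a small render never amortizes.

# cost of one cached-fragment round trip (read-or-write) against the cache store
Benchmark.ms { Rails.cache.fetch("probe-fragment") { "x" * 1_000 } }

# cost of just rendering a tiny JSON fragment with no cache involved
Benchmark.ms { Jbuilder.encode { |json| json.id 1; json.user_name "probe" } }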

By the way, a gripe about the new nginx: it now has built-in ETag support and forcibly hides the upstream ETag, so the ETag set by Rails can't be used anymore...

add_header ETag $upstream_http_etag;

doesn't help either...
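For context, a minimal sketch of the Rails-side ETag being lost here; the UsersController and the page size are assumptions, not taken from the thread.

class UsersController < ApplicationController
  def index
    @users = User.limit(8)
    # sets a strong ETag on the response; with gzip enabled, newer nginx
    # drops strong ETags from the upstream response
    fresh_when etag: @users
  end
end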

#17 @aptx4869 Are you sure this problem exists? You could file an issue with Nginx to report it.

#15 @aptx4869 Is the cache a disk cache? Is the generated json content very small? Try switching to memcached or redis.

#18 @quakewang The json is indeed small, so I just tried making it bigger by raising per_page in the index from 8 to 25, and the gap actually got larger...

per_page 8:  Time per request: 115.210 [ms] (without json.cache!) vs 148.235 [ms] (with json.cache!)
per_page 25: Time per request: 150.185 [ms] (without json.cache!) vs 212.686 [ms] (with json.cache!)

The cache uses redis:

config.cache_store = :redis_store, "redis://localhost:6379/0/cache", { expires_in: 90.minutes }

Is json.cache! something that ships with Rails 4's json builder? I've never used it; I've always cached with caches_page in the controller or fragment caching in the view. This performance comparison looks like the cache isn't actually taking effect.
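One hedged way to check whether json.cache! is writing anything at all, assuming the redis-store URL quoted above; the "cache*" key pattern is a guess based on the /cache namespace in that URL.

# from a rails console, after hitting the endpoint once
require 'redis'
redis = Redis.new(url: "redis://localhost:6379/0")
redis.keys("cache*").first(5)   # should list fragment keys if the cache is being hit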

They've released recordings from RubyKaigi 2013, though they seem incomplete.

Hall A: http://www.ustream.tv/channel/rubykaigi1
Hall B: http://www.ustream.tv/channel/rubykaigi2

#17 @lgn21st Recently found out this isn't an nginx bug but a feature... http://forum.nginx.org/read.php?2,240120,240127#msg-240127 Because gzip is enabled, nginx strips the strong ETag that Rails generates...
