martini makes route configuration very convenient, but the form is rigid. Take Get as an example:

m.Get("/", func() string {
    return "hello world"
})

This handles a request to the root path, returning the string "hello world" to the user. My question is: can the second argument be passed in the style of a Ruby lambda, wrapping the handler up as a callable function?
What I tried:

type Person struct {
    name string
}

func (p Person) say_hi() string {
    return "hello " + p.name
}

func main() {
    m := martini.Classic()
    p := Person{name: "xiaoxiao"}
    m.Get("/", p.say_hi())
}

But it panics with: "handler must be a callable func"
Is my approach wrong?
#5 @tnt I know little about pypy, so I won't comment on it. Saying "mux is twice as fast" is a bit of an overstatement — see go-web-benchmark. Honestly I haven't read much of martini's source either; it was just an experiment at building "make Martini base on MVC", and I'm not a devoted user. But judging performance by how heavily a framework wraps things and how deep its abstractions go (if that's what you mean by black magic) isn't very convincing.
Your numbers are a bit stale; martini's router and related code have changed quite a lot over the past few months, and Go 1.3's performance gains give a big boost to every framework except black-magic-heavy ones like martini.
In these two examples, to avoid interference, I used neither martini-contrib's render for the JSON output nor Classic's middleware for serving static files and request logging.
The environment is the Go 1.3 development build, running:
ab -n 100000 -c 1000 http://host/ and ab -n 100000 -c 1000 http://host/helloworld.json
With the current martini:
/ => Requests per second: 9463.05 [#/sec] (mean)
/helloworld.json => Requests per second: 7103.08 [#/sec] (mean)
With the current mux:
/ => Requests per second: 17820.39 [#/sec]
/helloworld.json => Requests per second: 17788.92 [#/sec]
The test code:
package main

import (
    "encoding/json"
    "fmt"
    "github.com/go-martini/martini"
    "io"
    "net/http"
)

// json2writer encodes result as JSON to w, setting the Content-Type
// header when w is an http.ResponseWriter.
func json2writer(w io.Writer, result interface{}) error {
    if resp, ok := w.(http.ResponseWriter); ok {
        resp.Header().Set("Content-Type", "application/json")
    }
    encoder := json.NewEncoder(w)
    if err := encoder.Encode(result); err != nil {
        return err
    }
    return nil
}

func main() {
    m := martini.New()
    r := martini.NewRouter()
    r.Get("/", func(w http.ResponseWriter, r *http.Request) {
        fmt.Fprint(w, "helloworld")
    })
    r.Get("/helloworld.json", func(w http.ResponseWriter, r *http.Request) {
        json2writer(w, map[string]interface{}{
            "message": "helloworld",
            "okay":    true,
        })
    })
    m.Action(r.Handle)
    http.Handle("/", m)
    http.ListenAndServe(":5000", nil)
}
package main

import (
    "encoding/json"
    "fmt"
    "github.com/gorilla/mux"
    "io"
    "net/http"
)

func json2writer(w io.Writer, result interface{}) error {
    if resp, ok := w.(http.ResponseWriter); ok {
        resp.Header().Set("Content-Type", "application/json")
    }
    encoder := json.NewEncoder(w)
    if err := encoder.Encode(result); err != nil {
        return err
    }
    return nil
}

func main() {
    m := mux.NewRouter()
    m.HandleFunc("/", func(w http.ResponseWriter, r *http.Request) {
        fmt.Fprint(w, "helloworld")
    })
    m.HandleFunc("/helloworld.json", func(w http.ResponseWriter, r *http.Request) {
        json2writer(w, map[string]interface{}{
            "message": "helloworld",
            "okay":    true,
        })
    })
    http.Handle("/", m)
    http.ListenAndServe(":5000", nil)
}
@tnt Actually your benchmark corresponds to the first test type at https://github.com/cypriss/golang-mux-benchmark — "Simple - A single route, GET /action. Renders 'hello'." On that comparison martini is indeed much slower than mux, so your claim holds there. But look at the second type, "RouteN": as N grows, mux falls further and further behind martini.
Tried it — true, but it takes nearly 3,000 routes before martini overtakes mux. In the more realistic range (under 500 routes), mux is still nearly twice as fast as martini. And mux's route matching can presumably still be optimized, whereas martini emulates dynamic behavior through its injection mechanism, which means heavy runtime reflection that is probably hard to speed up.
package main

import (
    "encoding/json"
    "fmt"
    "github.com/go-martini/martini"
    "io"
    "net/http"
)

const (
    N = 1000
)

func json2writer(w io.Writer, result interface{}) error {
    if resp, ok := w.(http.ResponseWriter); ok {
        resp.Header().Set("Content-Type", "application/json")
    }
    encoder := json.NewEncoder(w)
    if err := encoder.Encode(result); err != nil {
        return err
    }
    return nil
}

func main() {
    m := martini.New()
    r := martini.NewRouter()
    for i := 1; i < N; i++ {
        r.Get(fmt.Sprintf(`/%d/(?P<id>[1-9]\d*)`, i), func(i int) func(params martini.Params, w http.ResponseWriter) {
            return func(params martini.Params, w http.ResponseWriter) {
                json2writer(w, map[string]interface{}{
                    "result": true,
                    "i":      i,
                    "params": params,
                })
            }
        }(i))
    }
    m.Action(r.Handle)
    http.Handle("/", m)
    http.ListenAndServe(":5000", nil)
}
package main

import (
    "encoding/json"
    "fmt"
    "github.com/gorilla/mux"
    "io"
    "net/http"
)

const (
    N = 1000
)

func json2writer(w io.Writer, result interface{}) error {
    if resp, ok := w.(http.ResponseWriter); ok {
        resp.Header().Set("Content-Type", "application/json")
    }
    encoder := json.NewEncoder(w)
    if err := encoder.Encode(result); err != nil {
        return err
    }
    return nil
}

func main() {
    m := mux.NewRouter()
    for i := 1; i < N; i++ {
        m.HandleFunc(fmt.Sprintf(`/%d/{id:[1-9]\d*}`, i), func(i int) func(w http.ResponseWriter, r *http.Request) {
            return func(w http.ResponseWriter, r *http.Request) {
                json2writer(w, map[string]interface{}{
                    "result": true,
                    "i":      i,
                    "params": mux.Vars(r),
                })
            }
        }(i))
    }
    http.Handle("/", m)
    http.ListenAndServe(":5000", nil)
}
ab -n 10000 -c 100 http://localhost:5000/(N/2)/10000
N = 500
martini => Requests per second: 5286.49 [#/sec] (mean)
mux => Requests per second: 9058.32 [#/sec] (mean)
N = 1000
martini => Requests per second: 3949.45 [#/sec] (mean)
mux => Requests per second: 6380.75 [#/sec] (mean)
N = 1500
martini => Requests per second: 3288.06 [#/sec] (mean)
mux => Requests per second: 4732.49 [#/sec] (mean)
N = 2000
martini => Requests per second: 2673.03 [#/sec] (mean)
mux => Requests per second: 3229.62 [#/sec] (mean)
N = 2500
martini => Requests per second: 2224.42 [#/sec] (mean)
mux => Requests per second: 2472.72 [#/sec] (mean)
N = 3000
martini => Requests per second: 1807.92 [#/sec] (mean)
mux => Requests per second: 1750.74 [#/sec] (mean)