Apparently the neural network tooling Facebook has been working on has been open-sourced.
Torch | Scientific computing for LuaJIT.
Environment: Ubuntu 14.04.
Installation works exactly as described on the project homepage.
$ curl -sk https://raw.githubusercontent.com/torch/ezinstall/master/install-deps | bash
$ curl -sk https://raw.githubusercontent.com/torch/ezinstall/master/install-luajit+torch | PREFIX=~/torch bash
$ echo "export PATH=~/torch/bin:\$PATH; export LD_LIBRARY_PATH=~/torch/lib:\$LD_LIBRARY_PATH; " >>~/.bashrc && source ~/.bashrc
$ ~/torch/bin/luarocks install nn
nn/training.md at master · torch/nn · GitHub
I'll try the XOR example as described there.
Start the interpreter.
ubuntu@ip-172-31-27-81:~$ th

  ______             __   |  Torch7
 /_  __/__  ________/ /   |  Scientific computing for Lua.
  / / / _ \/ __/ __/ _ \  |  https://github.com/torch
 /_/  \___/_/  \__/_//_/  |  http://torch.ch
Creating the training data
th> dataset={};
	[0.0000s]
th> function dataset:size() return 100 end -- 100 examples
	[0.0001s]
th> for i=1,dataset:size() do
..>   local input = torch.randn(2);     -- normally distributed example in 2d
..>   local output = torch.Tensor(1);
..>   if input[1]*input[2]>0 then       -- calculate label for XOR function
..>     output[1] = -1;
..>   else
..>     output[1] = 1
..>   end
..>   dataset[i] = {input, output}
..> end
	[0.0008s]
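The labeling rule in that loop is effectively XOR on the signs of the two coordinates: inputs with the same sign get -1, inputs with opposite signs get 1. As a plain-Lua sketch (no Torch required; `xorLabel` is a name I'm introducing here, not part of the nn API):

```lua
-- Same-sign inputs -> -1, opposite-sign inputs -> 1,
-- mirroring the dataset-generation loop above.
local function xorLabel(a, b)
  if a * b > 0 then
    return -1  -- both positive or both negative
  else
    return 1   -- signs differ (or one coordinate is zero)
  end
end

print(xorLabel( 0.5,  0.5))  -- -1
print(xorLabel( 0.5, -0.5))  --  1
```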
Creating the network
th> require "nn"
true
	[0.0107s]
th> mlp = nn.Sequential();  -- make a multi-layer perceptron
	[0.0001s]
th> inputs = 2; outputs = 1; HUs = 20; -- parameters
	[0.0000s]
th> mlp:add(nn.Linear(inputs, HUs))
nn.Sequential {
  [input -> (1) -> output]
  (1): nn.Linear(2 -> 20)
}
	[0.0001s]
th> mlp:add(nn.Tanh())
nn.Sequential {
  [input -> (1) -> (2) -> output]
  (1): nn.Linear(2 -> 20)
  (2): nn.Tanh
}
	[0.0001s]
th> mlp:add(nn.Linear(HUs, outputs))
nn.Sequential {
  [input -> (1) -> (2) -> (3) -> output]
  (1): nn.Linear(2 -> 20)
  (2): nn.Tanh
  (3): nn.Linear(20 -> 1)
}
	[0.0001s]
Training.
th> criterion = nn.MSECriterion()
	[0.0001s]
th> trainer = nn.StochasticGradient(mlp, criterion)
	[0.0001s]
th> trainer.learningRate = 0.01
	[0.0000s]
th> trainer:train(dataset)
# StochasticGradient: training
# current error = 0.94303028750855
# current error = 0.81505805303687
# current error = 0.69978081138196
# current error = 0.60082425903428
# current error = 0.52480236545174
# current error = 0.47201262208585
# current error = 0.43779499334206
# current error = 0.41628235012982
# current error = 0.40265411888491
# current error = 0.39364373859055
# current error = 0.38724902974158
# current error = 0.38230491009015
# current error = 0.37814883526018
# current error = 0.37440484332827
# current error = 0.3708558692019
# current error = 0.36737167831577
# current error = 0.36386944118989
# current error = 0.36029291775182
# current error = 0.35660217366475
# current error = 0.352769265749
# current error = 0.34877731247394
# current error = 0.34462141986871
# current error = 0.34031036603419
# current error = 0.33586796009716
# current error = 0.33133289503198
# StochasticGradient: you have reached the maximum number of iterations
# training error = 0.33133289503198
	[0.1410s]
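The "current error" values printed here are the loss reported by MSECriterion averaged over the dataset; for this network's single scalar output, the per-example loss is just the squared difference between prediction and target. A quick pure-Lua check of that formula (`mse` is my own helper, not part of nn):

```lua
-- Per-example squared-error loss for a 1-dimensional output.
local function mse(pred, target)
  local d = pred - target
  return d * d
end

print(mse(-0.8682, -1))  -- small: prediction close to its target
print(mse( 0.2253,  1))  -- larger: prediction still far from 1
```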
Testing
th> x = torch.Tensor(2)
	[0.0001s]
th> x[1] = 0.5; x[2] = 0.5; print(mlp:forward(x))
-0.8682
[torch.DoubleTensor of dimension 1]
	[0.0002s]
th> x[1] = 0.5; x[2] = -0.5; print(mlp:forward(x))
 0.2253
[torch.DoubleTensor of dimension 1]
	[0.0001s]
th> x[1] = -0.5; x[2] = 0.5; print(mlp:forward(x))
 0.2538
[torch.DoubleTensor of dimension 1]
	[0.0001s]
th> x[1] = -0.5; x[2] = -0.5; print(mlp:forward(x))
-0.9561
[torch.DoubleTensor of dimension 1]
	[0.0002s]
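The network outputs real values rather than exact ±1, so to read these four results as class labels you can threshold at zero, i.e. take the sign of the output. A minimal pure-Lua sketch (`toLabel` is my own name, not part of the nn API):

```lua
-- Map a raw network output to the nearest target label (-1 or 1).
local function toLabel(y)
  if y >= 0 then return 1 else return -1 end
end

-- The four forward-pass outputs from the session above:
print(toLabel(-0.8682))  -- -1  (inputs 0.5, 0.5: same sign)
print(toLabel( 0.2253))  --  1  (inputs 0.5, -0.5: opposite signs)
print(toLabel( 0.2538))  --  1  (inputs -0.5, 0.5: opposite signs)
print(toLabel(-0.9561))  -- -1  (inputs -0.5, -0.5: same sign)
```

All four predictions land on the correct side of zero, which is what the closing remark below is judging.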
It gets the answers right and seems reasonably fast. I'm not used to Lua yet, but it looks easier to work with than FANN.