Implicitly-defined neural networks for sequence labeling
July 31, 2017
Conference Paper
Published in:
Annual Meeting of the Association for Computational Linguistics, 31 July 2017.
Summary
In this work, we propose a novel, implicitly defined neural network architecture and describe a method to compute its components. The proposed architecture forgoes the causality assumption used to formulate recurrent neural networks and instead allows the hidden states of the network to be coupled together, potentially improving performance on problems with complex, long-distance dependencies. Initial experiments demonstrate that the new architecture outperforms both the Stanford Parser and a baseline bidirectional network on the Penn Treebank part-of-speech tagging task, and outperforms a baseline bidirectional network on an additional artificial random biased walk task.
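
To give a rough sense of the coupling described above, the NumPy sketch below contrasts with a causal recurrence by letting each hidden state depend on both of its neighbours and solving the resulting coupled system by simple fixed-point iteration. The weight names (W, U, V, b), the tanh update, and the fixed-point solver are illustrative assumptions for this sketch, not the paper's exact formulation or solution method.

# A minimal sketch of an implicitly defined recurrence: every hidden state
# satisfies h_t = tanh(W x_t + U h_{t-1} + V h_{t+1} + b), so the whole
# sequence of states is solved jointly rather than computed left-to-right.
# Names, shapes, and the fixed-point solver are illustrative assumptions.
import numpy as np

def implicit_hidden_states(X, W, U, V, b, n_iters=50, tol=1e-6):
    """Solve for hidden states H (T x d) of the coupled system by
    fixed-point iteration. X is the input sequence (T x d_in)."""
    T, d = X.shape[0], b.shape[0]
    H = np.zeros((T, d))
    for _ in range(n_iters):
        # previous / next states, with zero padding at the sequence ends
        H_prev = np.vstack([np.zeros((1, d)), H[:-1]])
        H_next = np.vstack([H[1:], np.zeros((1, d))])
        H_new = np.tanh(X @ W.T + H_prev @ U.T + H_next @ V.T + b)
        if np.max(np.abs(H_new - H)) < tol:
            return H_new
        H = H_new
    return H

# Tiny usage example with random parameters.
rng = np.random.default_rng(0)
T, d_in, d = 6, 4, 8
X = rng.normal(size=(T, d_in))
W = rng.normal(scale=0.3, size=(d, d_in))
U = rng.normal(scale=0.3, size=(d, d))
V = rng.normal(scale=0.3, size=(d, d))
b = np.zeros(d)
H = implicit_hidden_states(X, W, U, V, b)
print(H.shape)  # (6, 8)

In a standard bidirectional network the forward and backward state sequences are each computed causally and only concatenated afterwards; in the coupled system sketched here, information from both directions interacts while the states are being solved for, which is the property the summary points to for long-distance dependencies.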