
Dodrio

An interactive visualization system designed to help NLP researchers and practitioners analyze and compare attention weights in transformer-based models with linguistic knowledge.

For more information, check out our manuscript:

Dodrio: Exploring Transformer Models with Interactive Visualization. Zijie J. Wang, Robert Turko, and Duen Horng Chau. ACL 2021 System Demonstrations. arXiv:2103.14625.

Live Demo

For a live demo, visit: http://poloclub.github.io/dodrio/

Running Locally

Clone or download this repository:

git clone git@github.com:poloclub/dodrio.git

# use degit if you don't want to download commit histories
degit poloclub/dodrio

Install the dependencies:

npm install

Then run Dodrio:

npm run dev

Navigate to localhost:5000. You should see Dodrio running in your browser :)

To see how we trained the Transformer, or to customize the visualization with a different model or dataset, see the ./data-generation/ directory.
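
To give a feel for the kind of data that step produces, here is a minimal sketch of extracting per-layer, per-head attention weights with the Hugging Face transformers library. The model name, output shape handling, and JSON layout are illustrative assumptions, not Dodrio's exact pipeline; the scripts in ./data-generation/ are the authoritative reference for the format Dodrio actually loads.

```python
# Minimal sketch (not Dodrio's exact pipeline): extract attention weights
# for one sentence. The model choice and JSON layout are assumptions.
import json

import torch
from transformers import AutoModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModel.from_pretrained("bert-base-uncased", output_attentions=True)
model.eval()

inputs = tokenizer("Dodrio visualizes attention weights.", return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)

# outputs.attentions is a tuple with one tensor per layer,
# each shaped (batch, num_heads, seq_len, seq_len).
attentions = torch.stack(outputs.attentions).squeeze(1)  # (layers, heads, seq, seq)
tokens = tokenizer.convert_ids_to_tokens(inputs["input_ids"][0])

# Dump tokens and weights to JSON so a visualization front end can read them.
with open("attention-example.json", "w") as f:
    json.dump({"tokens": tokens, "attentions": attentions.tolist()}, f)
```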

Credits

Dodrio was created by Jay Wang, Robert Turko, and Polo Chau.

Citation

@inproceedings{wangDodrioExploringTransformer2021,
title = {Dodrio: {{Exploring Transformer Models}} with {{Interactive Visualization}}},
shorttitle = {Dodrio},
booktitle = {Proceedings of the 59th {{Annual Meeting}} of the {{Association}} for {{Computational Linguistics}} and the 11th {{International Joint Conference}} on {{Natural Language Processing}}: {{System Demonstrations}}},
author = {Wang, Zijie J. and Turko, Robert and Chau, Duen Horng},
year = {2021},
pages = {132--141},
publisher = {{Association for Computational Linguistics}},
address = {{Online}},
language = {en}
}

License

The software is available under the MIT License.

Contact

If you have any questions, feel free to open an issue or contact Jay Wang.
