Title | Venue | Year | Citations
Why are Sensitive Functions Hard for Transformers? | ACL | 2024 | 0
More frequent verbs are associated with more diverse valency frames: Efficient principles at the lexicon-grammar interface. | ACL | 2024 | 0
How Do Syntactic Statistics and Semantic Plausibility Modulate Local Coherence Effects. | Cognitive Science | 2023 | 0
Explaining patterns of fusion in morphological paradigms using the memory-surprisal tradeoff. | Cognitive Science | 2022 | 0
Modeling Fixation Behavior in Reading with Character-level Neural Attention. | Cognitive Science | 2022 | 0
An Information-Theoretic Characterization of Morphological Fusion. | EMNLP | 2021 | 3
RNNs can generate bounded hierarchical languages with optimal memory. | EMNLP | 2020 | 21
Character-based Surprisal as a Model of Reading Difficulty in the Presence of Errors. | Cognitive Science | 2019 | 3
An Information-Theoretic Explanation of Adjective Ordering Preferences. | Cognitive Science | 2018 | 34
Modeling Human Reading with Neural Attention. | EMNLP | 2016 | 44