2021
Conference article  Open Access

How do BERT embeddings organize linguistic knowledge?

Puccetti G., Miaschi A., Dell'Orletta F.

NLP  Interpretability  Deep Learning 

Several studies have investigated the linguistic information implicitly encoded in Neural Language Models. Most of these works focused on quantifying the amount and type of information available within their internal representations and across their layers. In line with this scenario, we proposed a different study, based on Lasso regression, aimed at understanding how the information encoded by BERT sentence-level representations is arranged within its hidden units. Using a suite of several probing tasks, we showed the existence of a relationship between the implicit knowledge learned by the model and the number of individual units involved in encoding this competence. Moreover, we found that it is possible to identify groups of hidden units that are more relevant for specific linguistic properties.
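The general procedure described in the abstract can be illustrated with a minimal sketch (not the authors' code): sentence-level representations are extracted from a pretrained BERT model, a Lasso regression probe is fit to predict a linguistic property, and the hidden units that receive nonzero coefficients are read off as the ones most relevant for that property. The model name, the mean-pooling strategy, the toy probing target (sentence length) and the alpha value are illustrative assumptions, not details taken from the paper.

# Sketch: Lasso probing of BERT sentence embeddings (illustrative, not the authors' implementation)
import numpy as np
import torch
from transformers import AutoTokenizer, AutoModel
from sklearn.linear_model import Lasso

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModel.from_pretrained("bert-base-uncased")
model.eval()

# Toy data: a real probing suite would use many annotated sentences per task.
sentences = [
    "The cat sat on the mat .",
    "Colorless green ideas sleep furiously .",
    "She gave him the book that he had asked for .",
    "Rain fell .",
]
# Toy probing target: sentence length (one of the simplest linguistic properties).
targets = np.array([len(s.split()) for s in sentences], dtype=float)

def sentence_embedding(sentence):
    # Mean-pool the last-layer token representations into one sentence vector.
    inputs = tokenizer(sentence, return_tensors="pt")
    with torch.no_grad():
        hidden = model(**inputs).last_hidden_state  # shape (1, seq_len, 768)
    return hidden.mean(dim=1).squeeze(0).numpy()

X = np.stack([sentence_embedding(s) for s in sentences])

# The L1 penalty drives most coefficients to zero, so the surviving (nonzero)
# weights point to the hidden units most involved in encoding the property.
probe = Lasso(alpha=0.1)
probe.fit(X, targets)

relevant_units = np.nonzero(probe.coef_)[0]
print(f"{len(relevant_units)} of {X.shape[1]} hidden units selected:", relevant_units)

In a full probing setup, this fit would be repeated for each linguistic property (and, if desired, for each layer's representations), so that the number and identity of the selected units can be compared across properties.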



BibTeX entry
@inproceedings{oai:it.cnr:prodotti:454440,
	title = {How do BERT embeddings organize linguistic knowledge?},
	author = {Puccetti G. and Miaschi A. and Dell'Orletta F.},
	year = {2021}
}