Recent work in Emergent Communication has shown that deep neural networks can develop successful
strategies of referential communication. Arguably, the extent to which they can do so
depends on the properties of the communication channel that is provided to them. Here, we test
this hypothesis by analyzing the protocols emerging from visual referential games with increasingly
complex communication channels. More specifically, we increase the complexity by varying the
vocabulary size and the length of the messages exchanged. We show that this directly influences
communication success and profoundly impacts the chosen referential strategy. Our results evidence
a tendency of deep nets to perform better with greater vocabulary sizes. Moreover, we find
that deep nets prefer symbolic communication, i.e. messages consisting of a single symbol, over
combinatorial languages, i.e. messages consisting of sequences of two or more symbols. Finally,
we show that the most successful communication strategy is achieved when the nets converge
to a protocol that approximates a one-to-one mapping between messages and images.