As deep networks begin to be deployed as autonomous agents, the issue of how
they can communicate with each other becomes important. Here, we train two
deep nets from scratch to perform large-scale referent identification through unsupervised
emergent communication. We show that the partially interpretable
emergent protocol allows the nets to successfully communicate even about object
classes they did not see at training time. Moreover, the visual representations induced as
a by-product of our training regime, when re-used as generic visual features, are of
comparable quality to those of a recent self-supervised learning model. Our
results provide concrete evidence of the viability of (interpretable) emergent deep
net communication in a more realistic scenario than previously considered, and
they establish an intriguing link between this field and self-supervised visual
learning.