Collaborative spatial reuse in wireless networks via selfish multi-armed bandits

  • dc.contributor.author Wilhelmi Roca, Francesc
  • dc.contributor.author Cano Bastidas, Cristina
  • dc.contributor.author Neu, Gergely
  • dc.contributor.author Bellalta, Boris
  • dc.contributor.author Jonsson, Anders, 1973-
  • dc.contributor.author Barrachina Muñoz, Sergio
  • dc.date.accessioned 2019-05-13T08:23:29Z
  • dc.date.issued 2019
  • dc.description.abstract Next-generation wireless deployments are characterized by being dense and uncoordinated, which often leads to inefficient use of resources and poor performance. To solve this, we envision the utilization of completely decentralized mechanisms to enable Spatial Reuse (SR). In particular, we focus on dynamic channel selection and Transmission Power Control (TPC). We rely on Reinforcement Learning (RL), and more specifically on Multi-Armed Bandits (MABs), to allow networks to learn their best configuration. In this work, we study the exploration-exploitation trade-off by means of the ε-greedy, EXP3, UCB and Thompson sampling action-selection strategies, and compare their performance. In addition, we study the implications of selecting actions simultaneously in an adversarial setting (i.e., concurrently), and compare it with a sequential approach. Our results show that optimal proportional fairness can be achieved even when no information about neighboring networks is available to the learners and Wireless Networks (WNs) operate selfishly. However, there is high temporal variability in the throughput experienced by the individual networks, especially for ε-greedy and EXP3. These strategies, contrary to UCB and Thompson sampling, base their operation on the absolute experienced reward rather than on its distribution. We identify the cause of this variability to be the adversarial setting of our setup, in which the set of most played actions provides intermittently good or poor performance depending on the neighboring decisions. We also show that learning sequentially, even with a selfish strategy, contributes to minimizing this variability. The sequential approach is therefore shown to effectively deal with the challenges posed by the adversarial settings typically found in decentralized WNs.
  • dc.description.sponsorship This work has been partially supported by the Spanish Ministry of Economy and Competitiveness under the Maria de Maeztu Units of Excellence Programme (MDM-2015-0502), by a Gift from CISCO University Research Program (CG#890107) & Silicon Valley Community Foundation, by the European Regional Development Fund under grant TEC2015-71303-R (MINECO/FEDER), and by the Catalan Government under grant SGR-2017-1188.
  • dc.format.mimetype application/pdf
  • dc.identifier.citation Wilhelmi F, Cano C, Neu G, Bellalta B, Jonsson A, Barrachina-Muñoz S. Collaborative spatial reuse in wireless networks via selfish multi-armed bandits. Ad Hoc Netw. 2019 May 15;88:129-41. DOI: 10.1016/j.adhoc.2019.01.006
  • dc.identifier.issn 1570-8705
  • dc.identifier.uri http://hdl.handle.net/10230/37212
  • dc.language.iso eng
  • dc.publisher Elsevier
  • dc.relation.ispartof Ad Hoc Networks. 2019 May 15;88:129-41
  • dc.relation.projectID info:eu-repo/grantAgreement/ES/1PE/TEC2015-71303-R
  • dc.rights © Elsevier http://dx.doi.org/10.1016/j.adhoc.2019.01.006
  • dc.rights.accessRights info:eu-repo/semantics/embargoedAccess
  • dc.subject.keyword High-Density wireless networks
  • dc.subject.keyword Spatial reuse
  • dc.subject.keyword Resource allocation
  • dc.subject.keyword Decentralized learning
  • dc.subject.keyword Multi-Armed Bandits
  • dc.title Collaborative spatial reuse in wireless networks via selfish multi-armed bandits
  • dc.type info:eu-repo/semantics/article
  • dc.type.version info:eu-repo/semantics/acceptedVersion
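The abstract's exploration-exploitation trade-off can be illustrated with a minimal ε-greedy bandit sketch. This is not the authors' simulator: in the paper each arm would be a channel/transmission-power configuration and the reward a throughput measurement, whereas here the arms are hypothetical Bernoulli reward sources, purely for illustration.

```python
import random

def epsilon_greedy(arm_means, n_steps, epsilon, seed=0):
    """Run epsilon-greedy on a stationary bandit with Bernoulli arms.

    arm_means: hypothetical mean reward of each arm (stand-in for the
               throughput of each channel/power configuration).
    Returns the empirical value estimates and pull counts per arm.
    """
    rng = random.Random(seed)
    n_arms = len(arm_means)
    q = [0.0] * n_arms   # empirical mean reward per arm
    n = [0] * n_arms     # number of times each arm was played
    for _ in range(n_steps):
        if rng.random() < epsilon:
            a = rng.randrange(n_arms)                    # explore: random arm
        else:
            a = max(range(n_arms), key=lambda i: q[i])   # exploit: best arm so far
        r = 1.0 if rng.random() < arm_means[a] else 0.0  # Bernoulli reward draw
        n[a] += 1
        q[a] += (r - q[a]) / n[a]                        # incremental mean update
    return q, n
```

With a stationary reward, the greedy step concentrates plays on the best arm; in the adversarial setting described in the abstract, the reward of an arm instead shifts with the neighbors' concurrent choices, which is what causes the throughput variability the paper reports for ε-greedy and EXP3.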