
Learning without recall in directed circles and rooted trees

Rahimian, MA and Jadbabaie, A (2015) Learning without recall in directed circles and rooted trees. In: Proceedings of the American Control Conference.

Abstract

This work investigates a network of agents that attempt to learn an unknown state of the world from among finitely many possibilities. At each time step, all agents receive random, independently distributed private signals whose distributions depend on the unknown state of the world. However, some or all of the agents may be unable to distinguish between two or more of the possible states based on their private observations alone, as when several states induce the same distribution of private signals. In our model, the agents form an initial belief (probability distribution) over the unknown state and then refine their beliefs in accordance with their private observations as well as the beliefs of their neighbors. An agent learns the unknown state when her belief converges to a point mass concentrated at the true state. A rational agent would use Bayes' rule to incorporate her neighbors' beliefs and her own private signals over time. Such repeated applications of Bayes' rule in networks, however, can become computationally intractable. In this paper, we show that in the canonical cases of directed star, circle, or path networks and their combinations, one can derive a class of memoryless update rules that replicate the update of a single Bayesian agent but replace the agent's self-belief with the beliefs of her neighbors. In this way, one can realize an exponentially fast rate of learning, similar to the case of Bayesian (fully rational) agents. The proposed rules are a special case of the Learning without Recall approach developed in a companion paper, and they have the advantage of being computationally tractable while preserving essential features of Bayesian inference. In particular, agents can rely on the observational abilities of their neighbors, their neighbors' neighbors, and so on, to learn the unknown state, even when they themselves cannot distinguish the truth.
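To make the idea concrete, the following is a minimal Python sketch of a memoryless ("without recall") belief update on a directed circle, assuming a finite state space and discrete private signals. The function names, the toy likelihoods, and the three-agent circle are illustrative assumptions for this sketch, not the paper's exact specification; the only feature carried over from the abstract is that each agent applies a Bayes-style update in which the neighbor's reported belief stands in for the agent's own prior.

import numpy as np

def lwr_update(neighbor_belief, likelihoods, signal):
    """Bayes-style update that uses the neighbor's reported belief as the prior
    in place of the agent's own past belief (hence no recall of self-history)."""
    # likelihoods[theta, signal] = P(signal | state theta)
    posterior = likelihoods[:, signal] * neighbor_belief
    return posterior / posterior.sum()

# Toy example (hypothetical numbers): two states, two signal values,
# a directed circle of 3 agents where only agent 0 has informative signals.
rng = np.random.default_rng(0)
true_state = 0
likelihoods = [
    np.array([[0.8, 0.2], [0.3, 0.7]]),  # agent 0: informative
    np.array([[0.5, 0.5], [0.5, 0.5]]),  # agent 1: cannot distinguish the states
    np.array([[0.5, 0.5], [0.5, 0.5]]),  # agent 2: cannot distinguish the states
]
beliefs = [np.array([0.5, 0.5]) for _ in range(3)]

for t in range(200):
    signals = [rng.choice(2, p=likelihoods[i][true_state]) for i in range(3)]
    # Synchronous update: each agent uses the current belief of its single
    # in-neighbor on the circle together with its own private signal.
    beliefs = [lwr_update(beliefs[(i - 1) % 3], likelihoods[i], signals[i])
               for i in range(3)]

print([np.round(b, 3) for b in beliefs])

In this toy run the uninformative agents end up concentrating their beliefs on the true state as well, since the informative agent's evidence propagates around the circle, which mirrors the abstract's point that agents can learn from their neighbors' observational abilities even when their own signals are uninformative.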



Details

Item Type: Conference or Workshop Item (UNSPECIFIED)
Status: Published
Creators/Authors:
Rahimian, MA (Email: RAHIMIAN@pitt.edu; Pitt Username: RAHIMIAN; ORCID: 0000-0001-9384-1041)
Jadbabaie, A
Date: 28 July 2015
Date Type: Publication
Journal or Publication Title: Proceedings of the American Control Conference
Volume: 2015-J
Page Range: 4222 - 4227
Event Type: Conference
DOI or Unique Handle: 10.1109/acc.2015.7171992
Schools and Programs: Swanson School of Engineering > Industrial Engineering
Refereed: Yes
ISBN: 9781479986842
ISSN: 0743-1619
Date Deposited: 17 Aug 2020 17:03
Last Modified: 08 Sep 2024 11:55
URI: http://d-scholarship.pitt.edu/id/eprint/39620
