There are 5 tutorials presented in 2 parallel tracks on Saturday, 18 June 2016. Tracks start at 9 a.m. and end at 5.30 p.m.
Track 1 (meeting room: Ella Fitzgerald 1+2): T1.A | Coffee break | T1.A | Lunch | T1.B | Coffee break | T1.C
Track 2 (meeting room: Ella Fitzgerald 3+4): T2.A | Coffee break | T2.A | Lunch | T2.B | Coffee break | T2.B
Abstract: It is well known that in a queueing system, customers who mind only their selfish interests join the queue at a rate higher than is socially desirable. The reason is that when customers assess the costs and rewards associated with joining, they consider only their own, while ignoring the additional costs, known as externalities, that their joining inflicts on others. A central planner, who acts on behalf of society at large, minds these extra costs. In particular, he usually wishes that some (but of course not all) of those who plan to join will not do so. As reaching this goal by force, or by brutally pushing some of the customers out of the queue, is usually undesirable, we look for other means to achieve the same effect. Moreover, when this goal is achievable, we prefer means that involve measures as minimal as possible.
The tutorial will commence by describing the simplest queueing model, M/M/1, where all arriving customers are homogeneous with respect to their linear waiting cost parameter and their reward associated with service completion. We look into two cases which vary in the information possessed by the customers: (1) the unobservable case, where the queue length upon arrival is not known to the decision maker, and (2) the observable case, where it is. Customers need to decide whether or not to join. When customers decide by themselves, they are in fact engaged in a non-cooperative game among themselves. The social planner faces an optimization problem, but he is aware of the fact that any decision he takes will result in some customer behavior, which in turn will determine the social utility.
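To make the observable case concrete, the sketch below (our illustration, not the tutorial's material; all parameter values are arbitrary) computes the selfish joining threshold in Naor's observable M/M/1 model and searches numerically for the threshold a social planner would prefer:

```python
# A minimal numerical sketch of Naor's observable M/M/1 model (our
# illustration, not the tutorial's material). A customer who sees n others
# present joins iff the reward exceeds the expected waiting cost:
# R - C * (n + 1) / mu >= 0, giving the equilibrium threshold
# n_e = floor(R * mu / C). All parameter values below are arbitrary.
from math import floor

def social_welfare(n, lam, mu, R, C):
    """Welfare per unit time when customers join iff fewer than n are
    present, so the system behaves as an M/M/1/n queue."""
    rho = lam / mu
    if rho == 1.0:
        probs = [1.0 / (n + 1)] * (n + 1)
    else:
        norm = (1.0 - rho ** (n + 1)) / (1.0 - rho)
        probs = [rho ** k / norm for k in range(n + 1)]
    throughput = lam * (1.0 - probs[n])      # accepted customers per unit time
    mean_number = sum(k * p for k, p in enumerate(probs))
    return throughput * R - C * mean_number

lam, mu, R, C = 0.8, 1.0, 10.0, 1.0          # made-up parameters
n_e = floor(R * mu / C)                      # equilibrium (selfish) threshold
n_star = max(range(1, n_e + 1),
             key=lambda n: social_welfare(n, lam, mu, R, C))
print(f"equilibrium threshold n_e = {n_e}, socially optimal n* = {n_star}")
```

Naor's classic result is that the socially optimal threshold never exceeds the equilibrium one, which is precisely the over-joining phenomenon described above.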
Speaker’s biography: Moshe Haviv is a Professor of Statistics and Department Head at the Hebrew University of Jerusalem, Israel. He received his B.Sc. in Mathematics at Tel Aviv University, and his M.A. in Administrative Sciences and Ph.D. in Operations Research/Management Science, both at Yale University. His research interests include operations research, queueing models, decision making and strategic behavior in queues, Markov decision processes, and large Markov chains. He is a member of the Center for Rationality at the Hebrew University, and is a visiting professor (summers) in Operations Management and Econometrics at the University of Sydney.
Abstract: “What are they doing with our data?” is the question transparency ought to answer. The web giants – e.g., Google, Amazon, and Facebook, but also many other companies – leverage user data for personalization (recommendations, ad targeting, and increasingly often price adjustment). Users currently have little insight, and at best coarse information, with which to monitor how and for which purposes their data are being used. What if instead one could tell you exactly which item – whether an email you wrote, a search you made, or a webpage you visited – has been used to decide on a targeted ad or a recommended product? But can we track our data at scale, and in environments we do not control?
This tutorial is a quick introduction to the emerging topic of web transparency. A few scenarios – unfortunately inspired by real examples – will be shown to argue how personalization can lead to various forms of discrimination and predatory practices. The tutorial will then overview recent progress in making personalization transparent, and contrast its objectives and working assumptions with those of related areas (privacy and fair machine learning). We will briefly cover task-specific tools (e.g., $herif, Bobble, AdScape, AdReveal, Mobile, floodwatch) before describing recent attempts at building general tools (Ad-Fisher, XRay, Sunlight) grounded in scalable algorithms and statistics. We will conclude with a list of open problems.
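As a flavor of how such general tools work, here is a toy sketch, written for this summary and not taken from XRay or Sunlight, of the differential-correlation idea they build on: populate shadow accounts with subsets of a user's inputs, then score each input by how well its presence predicts where an ad appears. All accounts and observations below are fabricated.

```python
# Toy sketch of differential correlation for targeting inference (ours,
# not the actual XRay/Sunlight algorithms). Each shadow account holds a
# subset of inputs; an input is a plausible target if its presence lines
# up with where the ad was shown. All data below are fabricated.

inputs = ["email_vacation", "search_shoes", "page_fitness"]

# (inputs held by a shadow account, whether the ad was shown there)
accounts = [
    ({"email_vacation", "search_shoes"}, True),
    ({"email_vacation"}, True),
    ({"search_shoes", "page_fitness"}, False),
    ({"page_fitness"}, False),
]

def score(candidate):
    """Fraction of accounts consistent with 'the ad targets this input'."""
    return sum((candidate in held) == shown for held, shown in accounts) / len(accounts)

scores = {i: score(i) for i in inputs}
print(scores, "-> likely target:", max(scores, key=scores.get))
```

The real tools replace this naive scoring with statistically sound, scalable estimators, but the shadow-account idea is the same.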
Speaker’s biography: Augustin Chaintreau is an Assistant Professor of Computer Science at Columbia University. His research, shaped by experience in industry, is centered on real-world impact and emerging computing trends, while his training in mathematics and theoretical computer science keeps it focused on guiding principles. He designed and proved the first reliable, scalable, and network-fair multicast architecture while working at IBM during his Ph.D. He conducted the first measurement study of human mobility as a communication transport tool while working for Intel and, as a member of the Technical Staff of Technicolor (formerly Thomson), showed that opportunistic caching in mobile networks can optimally take advantage of social properties.
He is now working on internetworking social network services through distributed algorithms and opportunistic architectures, to vastly expand how your data and the web deal with everyday objects and your social environment. A former student of the École Normale Supérieure in Paris, he earned a Ph.D. in mathematics and computer science in 2006. He has been an active member of the networking research community, serving on the program committees of ACM SIGCOMM, ACM CoNEXT, ACM SIGMETRICS, ACM MobiCom, ACM MobiHoc, ACM IMC, and IEEE INFOCOM. He is also an editor for IEEE TMC, ACM SIGCOMM CCR, and ACM SIGMOBILE MC2R.
Abstract: Most goods analyzed in traditional economic settings are exclusive and private. Only one person can eat an apple entirely: the utility I obtain from eating that apple remains only my own. But some goods are not exclusive; they benefit all members of society, such as a park in your neighborhood, investment to reduce pollution in an area, or gifts made to charity that improve the human environment in which you live. Starting in the 1960s, economic analysis predicted that such “public” goods are typically undersupplied by voluntary contributions under utility-maximizing strategies. This theory was recently revived following the observation that most non-exclusive goods are not globally available; rather, they affect the outcomes of various agents differently. A (local, or networked) public good is one in which the goods produced by others affect me only when these others are my neighbors in a given graph. In such a setting, one must take into account not only the various utilities of the nodes, but also the topological properties which dictate who benefits from whose efforts.
This tutorial has three objectives: (1) through a set of examples, train the audience to spot networked public goods, which are shown to pervade multiple problems (exchange economies with substitution and complementarity, auctions) and applications (tax design, crowdsourcing, privacy); (2) introduce them to a recent analytical formalism (based on response functions, Bonacich centrality, and a network normality condition) that provides the most general conditions characterizing the outcome of networked public good games; (3) provide a brief overview of related open problems and current research.
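As a minimal illustration of the kind of game underlying objective (2), the sketch below (ours, with an invented 4-node line graph, not the tutorial's formalism) iterates best responses in the classic linear networked public-good game, where each agent contributes only the effort its neighbors leave uncovered; the fixed point shows who contributes and who free-rides:

```python
# A minimal sketch (ours, invented 4-node line graph) of the linear
# networked public-good game: agent i's best response supplies whatever
# effort its neighbors' total effort leaves uncovered,
#   x_i = max(0, 1 - sum_j A[i][j] * x_j).
# Sequential best-response updates are iterated to a fixed point.

A = [[0, 1, 0, 0],        # adjacency: who benefits from whose effort
     [1, 0, 1, 0],        # (line graph 1 - 2 - 3 - 4)
     [0, 1, 0, 1],
     [0, 0, 1, 0]]

x = [0.5, 0.5, 0.5, 0.5]  # initial effort levels
for _ in range(100):
    for i in range(4):    # sequential (agent-by-agent) best responses
        x[i] = max(0.0, 1.0 - sum(A[i][j] * x[j] for j in range(4)))

print([round(v, 3) for v in x])  # -> [1.0, 0.0, 0.5, 0.5]: agent 2 free-rides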
Abstract: The commoditization of key processing components, coupled with the virtualization of infrastructure functions, will lead to a radical change in the economics of mobile networks. These trends will help network providers (e.g., MNOs, MVNOs) move from proprietary hardware and software platforms towards open and flexible cellular systems based on general-purpose cloud infrastructures. In this context, 5G systems will see a paradigm shift in three planes – the data plane, control plane, and management plane – in support of higher performance, efficient signaling, and flexible and intelligent control and coordination in heterogeneous networks.
This tutorial discusses all of these topics, identifying key challenges in software-defined 5G networks for future research as well as the relevant standardization activities, while providing a comprehensive overview of the current literature. It is organized in four technical parts covering principles, challenges, key technologies, proof-of-concept prototypes, and field trials of software-defined 5G systems.
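To ground the control-plane/data-plane split in something concrete, here is a deliberately tiny sketch, invented for this summary rather than drawn from the tutorial or any 5G codebase, of the software-defined pattern underlying it: switches keep only match-action tables, while a logically centralized controller with a global view installs the rules.

```python
# A deliberately tiny illustration (ours) of the control/data-plane split
# underlying software-defined networking. Switches hold only match-action
# tables; a logically centralized controller computes and installs rules.

class Switch:
    def __init__(self, name):
        self.name = name
        self.table = {}                      # data plane: flow table only

    def forward(self, dst):
        return self.table.get(dst, "drop")   # pure match-action, no local routing logic

class Controller:
    def __init__(self, routes):
        self.routes = routes                 # control plane: global view of routes

    def push_rules(self, switches):
        # install each switch's slice of the global routing decision
        for sw in switches:
            sw.table = {dst: port for (name, dst), port in self.routes.items()
                        if name == sw.name}

s1, s2 = Switch("s1"), Switch("s2")
ctrl = Controller({("s1", "10.0.0.2"): "port2", ("s2", "10.0.0.1"): "port1"})
ctrl.push_rules([s1, s2])
print(s1.forward("10.0.0.2"), s2.forward("10.0.0.9"))   # -> port2 drop
```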
Speakers’ biography: Navid Nikaein has been an assistant professor in the Mobile Communications Department at EURECOM since 2009, where he explores ideas stemming from experimental system research related to the radio access network (RAN) (L1.5/L2/L3) in cellular, ad-hoc/mesh, and cloud settings with realistic use cases. He leads the development of the 4G->5G radio access layer (L2) of the OpenAirInterface wireless technology platform, as well as its system-level emulation platform. He received his Ph.D. degree in communication systems from the Swiss Federal Institute of Technology (EPFL) in 2003, and his HDR (Habilitation) from UNSA. After a postdoc at EURECOM, he joined the founding team of 3ROAM, a startup company in Sophia-Antipolis, France, pioneering a range of intelligent wireless backhaul routing products for private and public networks.
Raymond Knopp is a professor at EURECOM. His research interests are in wireless communications. He received his Master’s degree from McGill University, Montréal, Canada, in 1992 and his Ph.D. in communication systems from EPFL in 1997.
Abstract: There is currently an urgent need for novel technologies that can partially mitigate the ongoing explosion of wireless traffic volumes. While many existing communication technologies fail to scale with increasing network sizes, recent developments have revealed that caching, when properly transformed and boosted, can go a long way in augmenting the performance and efficiency of wireless networks. Our tutorial will seek to present, in an insightful manner, the fundamental ingredients behind some recent breakthroughs, and to describe the key challenges that remain in turning caching into a key ingredient of future wireless networks.
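As a hint of why caching helps at all, the following back-of-the-envelope sketch (ours; catalog size, cache size, and Zipf exponent are arbitrary) computes the hit ratio of the simplest possible policy, caching the most popular files, under the Zipf request popularity commonly assumed in this literature. The breakthroughs alluded to above go well beyond this baseline, but the calculation is the standard starting point.

```python
# A back-of-the-envelope sketch (ours; all numbers are arbitrary): the hit
# ratio of caching the M most popular files out of a catalog of N, when
# requests follow a Zipf popularity law with exponent alpha.

N, M, alpha = 1000, 50, 0.8          # catalog size, cache size, Zipf exponent

weights = [1.0 / (k ** alpha) for k in range(1, N + 1)]
total = sum(weights)
hit_ratio = sum(weights[:M]) / total  # P(request finds its file in the cache)

print(f"caching {M} of {N} files yields a hit ratio of {hit_ratio:.2f}")
```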
Speakers’ biography: Since Nov 2014, Georgios Paschos has been a principal researcher at Huawei Technologies, Paris, France, leading the Network Control and Resource Allocation team. Previously, he spent two years at MIT in the team of Prof. Eytan Modiano. From June 2008 to Nov 2014 he was affiliated with the Center of Research and Technology Hellas – Informatics & Telematics Institute (CERTH-ITI), Greece, working with Prof. Leandros Tassiulas. He also taught at the University of Thessaly, Department of Electrical and Computer Engineering, as an adjunct lecturer from 2009 to 2011. In 2007-2008 he was an ERCIM Postdoc Fellow at VTT, Finland, working in the team of Prof. Norros. He received his diploma in Electrical and Computer Engineering (2002) from the Aristotle University of Thessaloniki, and his Ph.D. degree in Wireless Networks (2006) from the ECE department of the University of Patras (supervisor: Prof. Stavros Kotsopoulos), both in Greece. Two of his papers won best paper awards, at GLOBECOM ’07 and IFIP Wireless Days ’09 respectively. He serves as an associate editor for IEEE/ACM Transactions on Networking, and as a TPC member of IEEE INFOCOM.
Petros Elia is an assistant professor at EURECOM. His research focuses mainly on wireless communications and signal processing. Particular emphasis is placed on understanding different elusive tradeoffs: the complexity-performance tradeoff (complexity here refers to computational algorithmic complexity, say, flops), the feedback-performance tradeoff, the largely unexplored feedback-complexity tradeoff, and of course the holy grail of understanding how these three crucial elements (complexity, feedback, and performance) are tied together. Towards this, he employs approaches from areas such as mathematics, physics, information theory, complexity theory, lattices, and optimization theory, and in the process designs algorithms for centralized or decentralized communication networks.