A standard result from auction theory is that bidding truthfully in a second price auction is a weakly dominant strategy. The result, however, does not apply in the presence of Cost Per Action (CPA) constraints. Such constraints exist, for instance, in digital advertising, where some buyers may try to maximize the total number of clicks while keeping the empirical Cost Per Click (CPC) below a threshold. More generally, the CPA constraint implies that the buyer has a maximal average cost per unit of value in mind.
We discuss how such constraints change some traditional results from auction theory.
Following the usual textbook narrative on auction theory, we focus specifically on the symmetric setting. We formalize the notion of CPA constrained auctions and derive a Nash equilibrium for second price auctions. We then extend this result to combinations of first and second price auctions. Further, we expose a revenue equivalence property and show that the seller's revenue-maximizing reserve price is zero.
In practice, CPA-constrained buyers may target an empirical CPA on a given time horizon, as the auction is repeated many times. Thus, their bidding behavior depends on past realizations. We show that the resulting dynamic optimization problem faced by the buyer can be formalized with stochastic control tools and solved numerically with available solvers.
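To make the repeated setting concrete, the sketch below simulates a CPA-constrained bidder in repeated second price auctions who paces a bid multiplier toward a target cost per click. The uniform competing bids, learning rate, and additive pacing rule are all invented for illustration; the abstract's stochastic control formulation is more general and is not reproduced here.

```python
import random

def simulate_paced_bidding(n_rounds=20000, value=1.0, cpc_target=0.3,
                           lr=0.02, seed=1):
    """Heuristic pacing for a CPA-constrained bidder in repeated
    second price auctions (an illustrative sketch, not the paper's
    solution). The bidder shades its bid by a multiplier mu, lowered
    after expensive clicks and raised after cheap ones, steering the
    empirical cost per click toward cpc_target."""
    rng = random.Random(seed)
    mu = 1.0
    clicks, cost = 0, 0.0
    for t in range(n_rounds):
        competitor = rng.random()           # highest competing bid ~ U(0,1)
        bid = mu * value
        if bid > competitor:                # win and pay the second price
            mu = min(1.0, max(0.0, mu + lr * (cpc_target - competitor)))
            if t >= n_rounds // 2:          # measure after a burn-in phase
                clicks += 1
                cost += competitor
    return mu, cost / clicks
```

With uniform competition, a winning bid b pays b/2 in expectation, so the multiplier settles near 2 * cpc_target / value and the realized CPC hovers around the target.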
When the performance of a system is dictated by the behaviour of its users, self-interested choices can result in sub-optimal system operation, as is the case in road traffic networks. The inefficiency resulting from such selfish behaviour is commonly measured by the ratio between the emergent worst-case system cost and the minimum system cost, termed the price of anarchy. As the degree of inefficiency can be significant even for relatively simple systems (e.g., affine congestion games), researchers have proposed a variety of approaches to align the emergent selfish behaviour with the desired system objective. A well-studied and promising method is that of altering users' perceived costs by means of taxes.
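The textbook two-link Pigou network illustrates the price of anarchy for affine congestion games; this classic example is included only for illustration and is not drawn from the abstract itself.

```python
def pigou_costs(n_grid=100001):
    """Pigou's two-link network: unit traffic from s to t, one link with
    constant latency 1 and one with latency x (equal to its own flow).
    At the selfish equilibrium everyone takes the variable link, since
    its latency never exceeds 1, so the equilibrium cost is 1."""
    equilibrium_cost = 1.0
    # Social optimum: route a fraction x on the variable link,
    # minimising x*x + (1-x)*1 over a fine grid (optimum at x = 1/2).
    optimal_cost = min(x * x + (1 - x)
                       for x in (i / (n_grid - 1) for i in range(n_grid)))
    return equilibrium_cost, optimal_cost
```

The ratio of the two costs is 1 / (3/4) = 4/3, the worst-case price of anarchy for affine latencies.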
We propose a market design solution for a market for distributed data. The main challenges addressed by our solution are (1) different data providers produce different databases that can be joined to produce answers for users' queries; (2) data providers have high fixed costs for producing their databases; and (3) buyers and sellers can arrive dynamically to the market. Our design relies on using a Markov chain with states corresponding to different numbers of allocated databases. The transition probabilities between different states are governed by the payments suggested by the market platform to the data providers. The main challenge in this setting is to guarantee dynamic incentive compatibility, i.e., to ensure that buyers and sellers are not incentivized to arrive late to the market or to misreport their costs or values. To achieve this, we disentangle the payments suggested by the market platform to the sellers from the posted prices exposed to the buyers. We prove that the buyer-optimal payments that are exposed to sellers are non-increasing, which prevents late arrivals of sellers. Further, we demonstrate that the posted prices exposed to buyers constitute a martingale process (i.e., late arrivals lead to the same expected price). Finally, we show that our design guarantees a zero expected average budget deficit, and we perform a number of simulations to validate our model.
Personal information and other types of private data are valuable for both data owners and institutions interested in providing targeted and customized services that require analyzing such data. In this context, privacy is sometimes seen as a commodity: institutions (data buyers) pay individuals (or data sellers) in exchange for private data. In this study, we examine the problem of designing such data contracts, through which a buyer aims to minimize his payment to the sellers for a desired level of data quality, while the latter aim to obtain adequate compensation for giving up a certain amount of privacy. Specifically, we use the concept of differential privacy and examine a model of linear and nonlinear queries on private data. We show that conventional algorithms that introduce differential privacy via zero-mean noise fall short for the purpose of such transactions, as they do not provide a sufficient degree of freedom for the contract designer to negotiate between the competing interests of the buyer and the sellers. Instead, we propose a biased differentially private algorithm which allows us to customize the privacy-accuracy tradeoff for each individual. We use a contract design approach to find the optimal contracts when using this biased algorithm to provide privacy, and show that under this combination the buyer can achieve the same level of accuracy with a lower payment compared to using the unbiased algorithms, while incurring a lower privacy loss for the sellers.
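As a toy illustration of the bias knob described above (not the paper's actual algorithm), the sketch below shifts the standard zero-mean Laplace mechanism by a deterministic bias. Adding a constant shift leaves the epsilon-differential-privacy guarantee intact, since it is a post-processing of the private release, while giving the designer a second parameter besides the noise scale.

```python
import math
import random

def biased_laplace(true_value, sensitivity, epsilon, bias=0.0, rng=None):
    """Release true_value with Laplace noise of scale sensitivity/epsilon,
    shifted by a deterministic bias. This is only an illustration of a
    biased private release; the paper's algorithm and its
    privacy-accuracy analysis may differ."""
    rng = rng or random.Random()
    scale = sensitivity / epsilon
    # Laplace noise sampled as the difference of two i.i.d. exponentials.
    noise = scale * (math.log(1 - rng.random()) - math.log(1 - rng.random()))
    return true_value + bias + noise
```

Averaged over many releases, the output concentrates around true_value + bias, so the bias trades systematic error against the privacy loss needed for a given accuracy.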
We consider an InterDependent Security (IDS) game with networked agents and positive externality, where each agent chooses an effort/investment level for securing itself. The agents are interdependent in that the state of security of one agent depends not only on its own investment but also on the other agents' effort/investment. Due to the positive externality, the agents under-invest in security, which leads to an inefficient Nash equilibrium (NE). While much has been analyzed in the literature on the under-investment issue, in this study we take a different angle. Specifically, we consider the possibility of allowing agents to pool their resources, i.e., allowing agents to have the ability to both invest in themselves as well as in other agents. We show that the interaction of strategic and selfish agents under resource pooling (RP) improves the agents' effort/investment level as well as their utility as compared to a scenario without resource pooling. We show that the social welfare (total utility) at the NE of the game with resource pooling is higher than the maximum social welfare attainable in a game without resource pooling, even when the latter uses an optimal incentive mechanism. Furthermore, we show that while voluntary participation is not generally guaranteed in this latter scenario, it is guaranteed under resource pooling.
The consumption of several modern products is marked by strong network effects, i.e., the utility to a consumer improves with the overall adoption in the market. Examples include mobile platforms that significantly rely on user-generated content, social media platforms, massively multiplayer games, cryptocurrencies, etc. For a firm that intends to launch a new product in this space, the biggest initial challenge is to stimulate sufficient adoption through appropriately designed incentives - a problem that is exacerbated by the presence of strategic consumers who anticipate the informational benefits of delaying their adoption decisions.
This paper explores conditions under which players cooperate in a dynamic network game. Historically, folk theorems have offered a mixed perspective by showing that there exist equilibria in which players cooperate, equilibria in which they do not, as well as a myriad of equilibria between these extremes. Our main contribution is to identify a necessary and sufficient equilibrium refinement such that, for all equilibria, all players cooperate in order to reach a strictly Pareto dominant graph. We base our results on a class of games that subsume forward-looking extensions of exchange economies with indivisible goods.
Cloud computing providers must handle heterogeneous customer workloads for resources such as (virtual) CPU or GPU cores. This is particularly challenging if customers, who are already running a job on a cluster, scale their resource usage up and down over time. The provider therefore has to continuously decide whether she can add additional workloads to a given cluster or if doing so would impact existing workloads' ability to scale. Currently, this is often done using simple threshold policies to reserve large parts of each cluster, which leads to low average utilization of the cluster. In this paper, we propose more sophisticated policies for controlling admission to a cluster and demonstrate that they significantly increase cluster utilization. We first introduce the cluster admission problem and formalize it as a constrained Partially Observable Markov Decision Process (POMDP). As it is infeasible to solve the POMDP optimally, we then systematically design heuristic admission policies that estimate moments of each workload's distribution of future resource usage. Via simulations we show that our admission policies lead to a substantial improvement over the simple threshold policy. We then evaluate how much further this can be improved with learned or elicited prior information and how to incentivize users to provide this information.
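A minimal sketch of the admission trade-off described above: admitting workloads against their peak usage wastes capacity, while admitting against a smaller per-workload headroom raises utilization at a small overload risk. All numeric parameters and the uniform usage model are invented for illustration; the paper's POMDP formulation and moment-based heuristics are not reproduced here.

```python
import random

def simulate(policy_headroom, capacity=100.0, mean=10.0,
             spread=5.0, steps=5000, seed=7):
    """Admit identical workloads until the per-workload headroom is
    exhausted, then measure average cluster utilization and overload
    frequency. Each workload's usage fluctuates uniformly in
    [mean - spread, mean + spread] per step (a toy model)."""
    rng = random.Random(seed)
    n = int(capacity // policy_headroom)       # admission decision
    used, overloads = 0.0, 0
    for _ in range(steps):
        demand = sum(rng.uniform(mean - spread, mean + spread)
                     for _ in range(n))
        used += min(demand, capacity)          # cap at cluster capacity
        overloads += demand > capacity
    return used / (steps * capacity), overloads / steps
```

Reserving the peak (headroom 15) admits 6 workloads and caps utilization near 60%; reserving closer to the mean (headroom 12) admits 8 and reaches roughly 80% utilization with rare overloads.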
In social networks, agents use information from (a) private signals (knowledge) they have, (b) observing past agents' actions (observations), and (c) comments from contactable experienced agents (experts) to guide their own decisions. With fully observable history and a bounded signal likelihood ratio, an Information Cascade occurs almost surely once it is optimal for agents to ignore their private signals when making decisions after observing the history. Though individually optimal, this may lead to a socially sub-optimal outcome. Literature studying social learning, i.e., making socially optimal decisions, is mainly focused on using channels (a) and (b) above for Bayes-rational agents, by either relaxing the assumption of bounded signal strength or allowing the distortion of the history. In this work, we endow Bayes-rational agents with a limited communication capacity to query their predecessors, motivated by the real-world behavior that people usually consult several friends before making decisions. We allow each Bayes-rational agent to ask a single, private and finite-capacity (for the response) question of each agent among a subset of predecessors. Note that the Maximum A Posteriori Probability (MAP) rule is still individually optimal and will be used by each agent for her decision. With an endowed communication capacity, we want to answer the following two questions: 1) What is a suitable framework for modeling how questions aid information aggregation? 2) Can we construct a set of questions that achieves learning while querying the minimum set of agents with the minimum capacity requirements (in terms of bits)?
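The baseline cascade mechanism referenced above can be sketched as follows, assuming binary signals with accuracy p and the conventional tie-break of following one's own signal; the querying framework the abstract proposes is not modeled here.

```python
import math

def cascade_point(signals, p=0.7):
    """Sequential MAP decisions with fully observed history and signal
    accuracy p (bounded likelihood ratio). Returns the actions taken and
    the index at which an information cascade starts (actions stop
    depending on private signals), or None if no cascade occurs."""
    llr = math.log(p / (1 - p))        # private-signal log-likelihood ratio
    public = 0.0                       # public belief, as a log-likelihood ratio
    actions, start = [], None
    for i, s in enumerate(signals):    # each signal s is +1 or -1
        if abs(public) > llr + 1e-9:   # history outweighs any private signal
            a = 1 if public > 0 else -1
            if start is None:
                start = i              # cascade: the action ignores s
        else:
            total = public + s * llr   # MAP combines history and signal
            a = s if abs(total) < 1e-9 else (1 if total > 0 else -1)
            public += a * llr          # outside a cascade, actions reveal signals
        actions.append(a)
    return actions, start
```

Two net agreeing signals push the public log-likelihood ratio past the private bound, after which every later agent herds regardless of its own signal.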
There has been much recent interest in the online bipartite matching problem of Karp, Vazirani and Vazirani, and in variations of it, due to its applicability to allocation problems in certain economic settings. A prominent example is online advertising; for more details, see the survey by Mehta. The new problems are both theoretically elegant and practically relevant.
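For reference, the basic online setting can be sketched with the simple greedy rule, which matches each arriving vertex to an arbitrary free neighbour and is 1/2-competitive; the RANKING algorithm of Karp, Vazirani and Vazirani improves this to a 1 - 1/e competitive ratio in expectation.

```python
def greedy_online_matching(neighbors, n_offline):
    """Greedy online bipartite matching: each arriving online vertex is
    matched to its lowest-index unmatched offline neighbour (an
    arbitrary choice). Online vertices arrive in list order and each
    decision is irrevocable, as in the online model."""
    matched_offline = [False] * n_offline
    matching = []
    for v, nbrs in enumerate(neighbors):       # online arrival order
        for u in nbrs:
            if not matched_offline[u]:
                matched_offline[u] = True
                matching.append((v, u))
                break                          # irrevocable match
    return matching
```

For example, with neighbors = [[0, 1], [0]] greedy matches the first arrival to offline vertex 0 and then strands the second arrival, finding 1 edge where the offline optimum has 2, which is exactly the 1/2 worst case.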