We provide algorithms that learn simple auctions whose revenue is approximately optimal in multi-item multi-bidder settings. We obtain our learning results in two settings. The first is the commonly studied setting in which sample access to the bidders' distributions over valuations is given; here, our algorithms require polynomially many samples in the number of items and bidders. The second setting assumes only approximate, indirect knowledge of the distributions. These results are more general in that they imply the sample-based results, and they are also applicable when we have no sample access to the underlying distributions but have estimated them indirectly via market research or by observing bidder behavior in previously run, potentially non-truthful auctions.
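As a toy illustration of sample-based mechanism learning, in the simplest single-item, single-bidder case rather than the paper's multi-item setting, the sketch below (the helper name `best_empirical_price` and the uniform distribution are illustrative assumptions) picks the posted price that maximizes empirical revenue on a set of valuation samples:

```python
import random

def best_empirical_price(samples):
    """Posted price (among the sampled values) maximizing empirical
    revenue: price * (fraction of sampled valuations >= price)."""
    n = len(samples)
    best_price, best_rev = 0.0, 0.0
    for i, p in enumerate(sorted(samples)):
        rev = p * (n - i) / n   # exactly n - i sorted values are >= p
        if rev > best_rev:
            best_price, best_rev = p, rev
    return best_price, best_rev

random.seed(0)
samples = [random.random() for _ in range(2000)]  # valuations ~ U[0, 1]
price, rev = best_empirical_price(samples)
print(price, rev)  # near the true optimum: price 0.5, expected revenue 0.25
```

With polynomially many samples the empirical optimum concentrates around the true revenue-maximizing price; the multi-item, multi-bidder guarantees of the talk are far beyond this one-dimensional warm-up.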
Our results are enabled by new uniform convergence bounds for hypothesis classes under product measures. These bounds yield exponential savings in sample complexity compared to bounds derived by bounding the VC dimension, and are of independent interest.

Bio: He received his Ph.D.
He was a postdoctoral researcher at UC Berkeley. Yang's research interests include economics and computation, learning, statistics and probability, as well as online algorithms.

We present a general framework for stochastic online maximization problems with combinatorial feasibility constraints. The framework establishes prophet inequalities by constructing price-based online approximation algorithms, a natural extension of threshold algorithms to settings beyond binary selection.
Our analysis takes the form of an extension theorem. Our framework unifies and simplifies much of the existing literature on prophet inequalities and posted-price mechanisms, and is used to derive new and improved results for combinatorial markets with and without complements, multi-dimensional matroids, and sparse packing problems. Finally, we highlight a surprising connection between the smoothness framework for bounding the price of anarchy of mechanisms and our framework, and show that many smooth mechanisms can be recast as posted-price mechanisms with comparable performance guarantees.
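A minimal sketch of the price-based viewpoint in the simplest single-item prophet setting (the uniform value distributions and the helper name `posted_price_run` are assumptions for illustration; this is the classic half-of-expected-maximum threshold rule, not the paper's general framework):

```python
import random

def posted_price_run(dists, trials=20000, seed=1):
    """Single-item prophet setting: values arrive one by one, and the
    gambler accepts the first value clearing a single posted price of
    (estimated) E[max] / 2, which guarantees half the prophet's value."""
    rng = random.Random(seed)
    draws = [[rng.uniform(lo, hi) for lo, hi in dists] for _ in range(trials)]
    e_max = sum(max(row) for row in draws) / trials  # prophet benchmark
    price = e_max / 2.0  # estimated from the same draws; fine for a sketch

    def first_above(row):
        for v in row:
            if v >= price:
                return v
        return 0.0

    alg = sum(first_above(row) for row in draws) / trials
    return alg, e_max

alg, e_max = posted_price_run([(0, 1), (0, 2), (0, 3)])
print(alg / e_max)  # comfortably above the 1/2 guarantee on this instance
```

The framework in the talk generalizes exactly this move, replacing a single threshold by a vector of item prices for combinatorial feasibility constraints.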
Her research focuses on the intersection of computer science, game theory, and microeconomics. She received her Ph.D. She was a faculty member in the School of Business Administration and the Center for the Study of Rationality at the Hebrew University, and a visiting professor at Harvard University and Microsoft Research New England.

Most proposed fairness measures for machine learning are observational: they depend only on the joint distribution of the features, predictor, and outcome.
I will highlight a few useful observational criteria, before arguing why observational criteria in general are unable to resolve questions of fairness conclusively.
Moving beyond observational criteria, I will outline a causal framework for reasoning about discrimination based on sensitive characteristics. Hardt's research aims to make the practice of machine learning more robust, reliable, and aligned with societal values.

We investigate how information goods are priced and diffused over links in a network.
Buyers have idiosyncratic consumption values for information and, after acquiring it, can replicate it and resell copies to uninformed neighbors. A partition of the network captures the effects of network architecture and locations of information sellers on player profits and the structure of competing diffusion paths.
Sellers indirectly appropriate profits over intermediation chains from buyers in their block of the partition. Links within blocks are critical for connecting the network and constitute bottlenecks for information diffusion. Links bridging distinct blocks are redundant for diffusion and impose negative externalities on sellers. Information enters each block not containing a seller via a single node - the dealer of the block. Dealers can receive information over redundant links from multiple neighbors and benefit from competitive pricing.
Every non-dealer buyer can acquire information from a single neighbor via a bottleneck link and is subject to monopoly pricing. In dense networks, competition limits the scope of indirect appropriability, and intellectual property rights foster innovation. Mihai Manea is a game theorist focusing on social and economic networks.

Algorithmic decision making is increasingly being used to assist or replace humans in many online as well as offline settings. These systems often rely on historical decisions, often made by humans, to optimize functionality, end-user satisfaction, and profitability.
However, there is a growing concern that these automated decisions can lead, even in the absence of intent, to a lack of fairness, i.e., to outcomes that disproportionately hurt or benefit particular groups. In this talk, I will first discuss several notions of fairness in the context of supervised learning and then introduce a flexible mechanism to design fair boundary-based classifiers by leveraging an intuitive measure of decision-boundary unfairness.
I will then show that this mechanism can easily be incorporated into the formulation of several well-known margin-based classifiers as convex or convex-concave constraints, and that it allows fine-grained control over the degree of fairness, often at a small cost in accuracy.
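One way such a boundary-based fairness mechanism can look in code, as a rough sketch only (the penalty form, the synthetic data, and all names such as `train` and `boundary_cov` are illustrative assumptions, not the speaker's exact formulation): penalize the covariance between the sensitive attribute and the signed distance to the decision boundary.

```python
import math
import random

def score(w, x):
    return w[0] * x[0] + w[1] * x[1] + w[2]

def boundary_cov(w, data):
    """Covariance between the sensitive attribute z and the signed
    distance to the decision boundary: the unfairness proxy."""
    n = len(data)
    zbar = sum(z for _, z, _ in data) / n
    return sum((z - zbar) * score(w, x) for x, z, _ in data) / n

def train(data, lam, steps=2000, lr=0.5):
    """Logistic regression plus lam * cov^2 penalty; lam = 0 recovers
    the ordinary (unconstrained) classifier."""
    w = [0.0, 0.0, 0.0]
    n = len(data)
    zbar = sum(z for _, z, _ in data) / n
    for _ in range(steps):
        grad = [0.0, 0.0, 0.0]
        cov = boundary_cov(w, data)
        for x, z, y in data:
            s = max(min(score(w, x), 30.0), -30.0)
            p = 1.0 / (1.0 + math.exp(-s))
            for j, xj in enumerate((x[0], x[1], 1.0)):
                # logistic-loss gradient plus fairness-penalty gradient
                grad[j] += ((p - y) + lam * 2.0 * cov * (z - zbar)) * xj / n
        w = [wj - lr * g for wj, g in zip(w, grad)]
    return w

# Hypothetical synthetic data: the label is correlated with the
# sensitive attribute z, and feature x0 is a noisy copy of z.
rng = random.Random(0)
data = []
for _ in range(400):
    z = rng.randint(0, 1)
    y = 1 if rng.random() < (0.8 if z else 0.2) else 0
    x = (z + rng.gauss(0, 0.5), y + rng.gauss(0, 0.5))
    data.append((x, z, y))

w_free = train(data, lam=0.0)
w_fair = train(data, lam=10.0)
print(abs(boundary_cov(w_free, data)), abs(boundary_cov(w_fair, data)))
```

Turning the penalty weight up shrinks the boundary covariance, trading a little accuracy for fairness, which mirrors the fine-grained control described in the talk.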
Manuel develops machine learning and large-scale data mining methods for the analysis, modeling, and control of large online social and information systems. You can find more about him at http:

We discuss the framework of Blind Regression (also known as the Latent Variable Model), motivated by the problem of matrix completion for recommendation systems: we posit that each user and movie is associated with a latent feature, and that the rating of a user for a movie equals a noisy version of a latent function applied to the associated latent features.
Therefore, completing the matrix boils down to predicting the latent function's value for user-movie pairs whose ratings are unknown, just as in the classical regression setting. Unlike regression, however, the features are not observed here, hence "Blind" Regression. Such a model arises as a canonical characterization via the multi-dimensional exchangeability property à la Aldous and Hoover (early 1980s).
In this talk, drawing inspiration from the classical Taylor expansion for differentiable functions, we propose a prediction algorithm that is consistent for all Lipschitz-continuous functions. We provide a finite-sample analysis suggesting that, even when observing a vanishing fraction of the matrix, the algorithm produces accurate predictions.
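A stripped-down sketch of a first-order, Taylor-expansion-style prediction (all names are hypothetical, the variance-based neighbor weighting of the actual algorithm is omitted, and a noiseless additive model is chosen so the estimate comes out exact):

```python
def predict(R, u, i):
    """First-order estimate of the missing rating R[u][i]: average
    R[v][i] + R[u][j] - R[v][j] over users v who rated item i and items
    j rated by both u and v. (The full algorithm also weights neighbors
    by the empirical variance of these differences; omitted here.)"""
    estimates = []
    for v in range(len(R)):
        if v == u or R[v][i] is None:
            continue
        for j in range(len(R[0])):
            if j == i or R[u][j] is None or R[v][j] is None:
                continue
            estimates.append(R[v][i] + R[u][j] - R[v][j])
    return sum(estimates) / len(estimates) if estimates else None

# Hypothetical additive latent model f(x_u, y_i) = x_u + y_i, for which
# the first-order estimate is exact.
x = [0.0, 1.0, 2.0]   # latent user features
y = [0.5, 1.5, 2.5]   # latent item features
R = [[xu + yi for yi in y] for xu in x]
R[0][2] = None         # hide one rating and predict it
print(predict(R, 0, 2))  # 2.5, matching x[0] + y[2]
```

For general Lipschitz functions the same difference-based estimate is only first-order accurate, which is where the variance weighting and the finite-sample analysis of the talk come in.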
We also discuss the relationship with spectral algorithms for matrix completion and with collaborative filtering. His current research interests are at the interface of statistical inference and social data processing. He is a distinguished young alumnus of his alma mater, IIT Bombay.

This talk will give an overview of a range of data-driven products that the Core Data Science group helped build, mostly by applying machine learning and statistical methods to large-scale data.
We'll talk about analysis of cascades and product adoption, identifying trends in real-time, fighting scams, understanding the true meanings behind emoji, and figuring out how people laugh online. The group helps product teams across Facebook to tackle difficult product problems and deliver new features by leveraging vast amounts of data together with a range of machine learning techniques.
Before Facebook, Udi was a senior researcher at Technicolor, working on privacy in machine learning using cryptographic methods.

Large-scale machine learning requires blending computational thinking with statistical frameworks.
Designing fast, efficient and distributed learning algorithms with statistical guarantees is an outstanding grand challenge. I will present perspectives from theory and practice.
I will demonstrate how spectral optimization can reach the globally optimal solution for many learning problems despite their non-convexity. This includes unsupervised learning of latent variable models, training neural networks, and reinforcement learning of partially observable Markov decision processes. In practice, tensor methods yield enormous gains in both running time and learning accuracy over traditional methods such as variational inference.
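As a small, self-contained instance of spectral optimization beating non-convexity (a generic textbook example, not any specific method from the talk): maximizing x^T M x over unit vectors is a non-convex problem, yet power iteration provably reaches its global optimum, the top eigenvector, for a symmetric matrix.

```python
import random

def power_iteration(M, steps=200, seed=0):
    """Power iteration on a symmetric matrix M: repeatedly apply M and
    renormalize. The unit vector converges to the top eigenvector, the
    global optimum of the non-convex problem max_{||x||=1} x^T M x."""
    rng = random.Random(seed)
    n = len(M)
    x = [rng.gauss(0.0, 1.0) for _ in range(n)]
    for _ in range(steps):
        x = [sum(M[r][c] * x[c] for c in range(n)) for r in range(n)]
        norm = sum(v * v for v in x) ** 0.5
        x = [v / norm for v in x]
    # Rayleigh quotient at the converged point
    eigval = sum(x[r] * M[r][c] * x[c] for r in range(n) for c in range(n))
    return eigval, x

# Hypothetical 2x2 symmetric example with eigenvalues 3 and 1.
M = [[2.0, 1.0], [1.0, 2.0]]
eigval, x = power_iteration(M)
print(eigval)  # converges to the top eigenvalue, 3
```

Tensor versions of this idea, iterating a tensor power map instead of a matrix product, are what underlie the guaranteed latent-variable learning results mentioned above.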
I will then talk about recent advances in large-scale deep learning methods, including a highly flexible and developer-friendly open-source deep learning framework designed for both efficiency and flexibility and built on the distributed parameter-server architecture. I will conclude with outstanding challenges in bridging the gaps between theory and practice, and in designing and analyzing large-scale learning algorithms.
Anima Anandkumar is currently a principal scientist at Amazon Web Services. Her research interests are in the areas of large-scale machine learning, non-convex optimization, and high-dimensional statistics. In particular, she has been spearheading the development and analysis of tensor algorithms. She is the recipient of several awards, such as the Alfred P. Sloan Fellowship.
She received her B.Tech. She was a postdoctoral researcher at MIT, an assistant professor at UC Irvine, and a visiting researcher at Microsoft Research New England.

Selfish behavior can often lead to suboptimal outcomes for all participants, a phenomenon illustrated by many classical examples in game theory. Over the last decade we have developed a good understanding of how to quantify the impact of strategic user behavior on overall performance in many games, including traffic routing as well as online auctions.
In this talk we will focus on games where players use a form of learning that helps them adapt to the environment, and consider two closely related questions: what are broad classes of learning behaviors that guarantee that game outcomes converge to the quality guaranteed by the price of anarchy, and how fast is this convergence?
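To make the learning-in-games setup concrete, here is a toy simulation under illustrative assumptions (the game, parameters, and names are all hypothetical): players repeatedly choose between two parallel links whose cost equals their load, and each player updates with multiplicative weights, a standard no-regret dynamic; play drifts from a deliberately unbalanced start toward the balanced outcome.

```python
import random

def mw_routing(n_players=10, rounds=400, eta=0.1, seed=0):
    """Each of n players repeatedly picks one of two parallel links;
    the cost of a link is its load. Every player runs multiplicative
    weights, starting heavily biased toward link 1."""
    rng = random.Random(seed)
    weights = [[1.0, 9.0] for _ in range(n_players)]  # biased start
    history = []
    for _ in range(rounds):
        choices = [0 if rng.random() < w[0] / (w[0] + w[1]) else 1
                   for w in weights]
        loads = [choices.count(0), choices.count(1)]
        history.append(loads[0])
        for pid, w in enumerate(weights):
            for link in (0, 1):
                # counterfactual cost of `link` given everyone else's play
                cost = loads[link] if choices[pid] == link else loads[link] + 1
                w[link] *= (1.0 - eta) ** cost
            s = w[0] + w[1]
            w[0] /= s
            w[1] /= s
    return history

hist = mw_routing()
late = sum(hist[-100:]) / 100.0  # average load on link 0 in late rounds
print(hist[0], late)  # play moves from the crowded link toward a 5/5 split
```

The balanced split is the equilibrium here, so the late-round loads illustrate the convergence-to-price-of-anarchy-quality question; the rate at which `hist` settles is exactly the "how fast" question of the talk.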
We will also consider these questions more broadly. She joined the faculty at Cornell. Tardos's research interest is algorithms and algorithmic game theory. She is best known for her work on network-flow algorithms, approximation algorithms, and quantifying the efficiency of selfish routing.

Exploiting the exogenous and regional nature of the Great East Japan Earthquake of 2011, this paper provides a systematic quantification of the role of input-output linkages as a mechanism for the propagation and amplification of shocks.
We document that the disruption caused by the earthquake and its aftermath propagated both upstream and downstream along supply chains, affecting the direct and indirect suppliers and customers of disaster-stricken firms. We then use our empirical findings to estimate the overall macroeconomic impact of the shock by taking these propagation effects into account. We find that the propagation of the shock over input-output linkages can account for a sizable decline in aggregate output. We interpret these findings in the context of a general equilibrium model that explicitly takes firm-to-firm linkages into account.
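The propagation logic can be illustrated with a textbook Leontief input-output calculation (a hypothetical three-sector economy, not the paper's estimated firm-level model): a one-unit final-demand shock to a single sector translates into a strictly larger total output effect once indirect input requirements are summed over the network.

```python
def leontief_impact(A, f, terms=200):
    """Total output response to final demand f in an input-output
    economy with technical-coefficients matrix A, via the Neumann
    series (I - A)^(-1) f = f + A f + A^2 f + ...; each successive
    term is one more round of propagation up the supply chain."""
    n = len(f)
    total = list(f)
    v = list(f)
    for _ in range(terms):
        v = [sum(A[r][c] * v[c] for c in range(n)) for r in range(n)]
        total = [t + w for t, w in zip(total, v)]
    return total

# Hypothetical 3-sector economy: A[r][c] = units of sector r's output
# needed per unit of sector c's output.
A = [[0.1, 0.2, 0.0],
     [0.0, 0.1, 0.3],
     [0.2, 0.0, 0.1]]
shock = [1.0, 0.0, 0.0]   # one-unit change in final demand for sector 0
impact = leontief_impact(A, shock)
print(sum(impact))  # total effect exceeds the direct one-unit shock
```

The gap between the direct shock and the total effect is the amplification through linkages; the paper's contribution is estimating this mechanism empirically at the firm level rather than assuming it.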
In this talk, I will discuss our recent efforts to formalize a particular notion of "fairness" in online decision-making problems, and to study its cost in terms of the algorithm's achievable learning rate. Our focus for most of the talk will be the "contextual bandit" problem, which models the following scenario. Every day, applicants from different populations submit loan applications to a lender, who must select a subset of them to receive loans.
Each population has a potentially different mapping from applications to creditworthiness that is initially unknown to the lender. The fairness constraint we impose roughly translates to: a less creditworthy applicant should never be favored over a more creditworthy one. Despite the fact that this constraint seems consistent with the profit motivation of the bank, we show that imposing it provably slows down learning, sometimes only mildly but sometimes substantially, depending on the structure of the problem.
Time permitting, we will mention recent extensions to the reinforcement learning setting in which the actions of the learner can affect its environment, and to economic settings in which the learner must be incentivized by a principal to act fairly. His research focuses on the algorithmic foundations of data privacy, algorithmic fairness, game theory and mechanism design, learning theory, and the intersections of these topics.
Growing availability of data has enabled practitioners to tailor decisions at the individual level. This involves learning a model of decision outcomes conditional on individual-specific covariates, or contexts.
Recently, contextual bandits have been introduced as a framework to study these online and sequential decision making problems. This literature predominantly focuses on algorithms that balance an exploration-exploitation tradeoff, since greedy policies that exploit current estimates without any exploration may be sub-optimal in general.
However, exploration-free greedy policies are desirable in many practical settings where experimentation may be prohibitively costly or unethical.
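A small simulation, with hypothetical parameters, of why exploration-free greedy policies can be suboptimal in general: in a plain two-armed bandit (the context-free special case), the greedy rule can lock onto the worse arm after an unlucky start, while a little epsilon-greedy exploration recovers on average.

```python
import random

def run(eps, rounds=2000, seed=0):
    """Two-armed bandit with true means 0.4 and 0.6. eps = 0 is the
    greedy policy; eps > 0 adds uniform exploration. Returns the
    average reward per round."""
    rng = random.Random(seed)
    means = [0.4, 0.6]
    counts, sums, total = [0, 0], [0.0, 0.0], 0.0
    for _ in range(rounds):
        if 0 in counts or rng.random() < eps:
            arm = rng.randrange(2)          # explore / initialize
        else:
            est = [sums[a] / counts[a] for a in (0, 1)]
            arm = 0 if est[0] > est[1] else 1   # exploit current estimates
        reward = 1.0 if rng.random() < means[arm] else 0.0
        counts[arm] += 1
        sums[arm] += reward
        total += reward
    return total / rounds

# Average over many instances: greedy sometimes locks onto the worse
# arm after an unlucky early draw, so a little exploration wins.
greedy = sum(run(0.0, seed=s) for s in range(300)) / 300
explore = sum(run(0.05, seed=s) for s in range(300)) / 300
print(greedy, explore)
```

The interesting question raised by the talk is when the natural diversity of contexts supplies this exploration for free, making the pure greedy policy safe after all.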