Monday, March 29, 2010

Repast: An Agent Simulation Toolkit, March 17, 2010

In this meeting Milan gave an introductory presentation about the Repast Simphony (Repast S) agent-based modeling toolkit. During the meeting, Milan showcased the official Repast S Predator Prey tutorial and provided links for software download, documentation, and tutorials. A brief introduction to the agent-based modeling (ABM) paradigm and potential application domains was also given.


The main advantage of a specialized toolkit for ABM, compared with a general (usually object-oriented) programming language, is its collection of libraries for agent-based design, covering features such as scheduling, communication mechanisms, interaction topologies (networks, grids, GIS), and facilities for storing and displaying agent states. Another advantage of an agent-based toolkit such as Repast S is the runtime environment, which allows us to set up simulation parameters, execute and visualize simulations, probe agent states, generate graphical displays of the outputs, and so on. Repast S has particularly rich support for linking results to various external programs, as well as libraries for advanced computational techniques (genetic algorithms, neural networks, regression models, the Monte Carlo method, etc.).

One of the most interesting features of Repast S, however, is the graphical development environment, which allows us to create agents by drawing (drag-and-drop) flowcharts, from which Groovy code is then automatically generated. Also, by setting properties of various model elements and by using wizards, it is possible to make further adjustments to the model with little or no actual coding, which seems an interesting feature for those who would like to develop agent-based models but possess limited programming skills. Of course, an experienced Java programmer can also write Java code from scratch, without using this graphical user interface.
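To make the scheduling idea concrete, here is a minimal Python sketch, in the spirit of the Predator Prey tutorial, of the agent/tick loop that a toolkit like Repast S provides out of the box. This is not Repast or Groovy code; all class names and parameters are invented for illustration.

```python
import random

# Toy predator-prey agents. In Repast S, the scheduling, agent registry,
# and display handled here by the Model class are provided by the runtime.

class Sheep:
    def __init__(self):
        self.energy = 5

    def step(self, model):
        self.energy += 1                  # graze
        if self.energy >= 10:             # reproduce at an energy threshold
            self.energy = 5
            model.sheep.append(Sheep())

class Wolf:
    def __init__(self):
        self.energy = 5

    def step(self, model):
        self.energy -= 1                  # metabolism
        if model.sheep and random.random() < 0.5:   # hunt with 50% success
            model.sheep.pop()
            self.energy += 3

class Model:
    """Plays the role of the toolkit's context plus scheduler."""
    def __init__(self, n_sheep, n_wolves, seed=42):
        random.seed(seed)
        self.sheep = [Sheep() for _ in range(n_sheep)]
        self.wolves = [Wolf() for _ in range(n_wolves)]

    def tick(self):
        # Copy the agent list so births during the tick step next time.
        for agent in self.sheep + self.wolves:
            agent.step(self)
        self.wolves = [w for w in self.wolves if w.energy > 0]

model = Model(n_sheep=20, n_wolves=5)
for _ in range(10):
    model.tick()
print(len(model.sheep), len(model.wolves))
```

In Repast S the modeler mostly writes the agents' step behavior (via flowcharts or code), while the runtime owns the schedule, the parameter sweeps, and the visualization.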

Wednesday, March 3, 2010

Modeling Competitive Bidding: a Critical Essay, March 3rd 2010

Today I (Meditya Wasesa) presented a paper review in the LARGE group. The paper, from Management Science (1994), is entitled "Modeling Competitive Bidding: A Critical Essay," written by Michael Rothkopf (RIP) and Ronald Harstad. These two prominent auction economists argue that there is a large discrepancy between existing auction theory and real practice (though it is a 1994 paper, I think it is still relevant to the current research context), so that the developed theory has limited application to real-world conditions. My main interpretation of the whole paper is summarized in the figure shown below:



From this paper, we learn that the theories which try to model competitive bidding behavior in auctions (both decision-theoretic and game-theoretic models) are mostly written for a single, isolated auction context. Moreover, the assumptions which frame these theories (e.g., single isolated auctions, a fixed number of bidders, symmetric Nash equilibrium, risk-neutral bidders) also limit their real-world applicability. The authors suggest that researchers adjust and enrich the elegant but simplified existing models so that bidding models can be used in real-world situations.

There was a discussion about how we should develop an analytic model that fits our research interest, the Dutch Flower Auction. Our research context is unique, since we are dealing with multi-unit, multi-bidder, multi-supplier, multi-attribute, and highly interdependent auctions. We agreed that this paper will serve as our main reference, since it offers many insights on how to build a realistic and applicable model, not just an elegant model that is good for theoretical purposes but cannot be applied in reality.

Thursday, February 4, 2010

Why (and When) Are Preferences Convex? Threshold Effects and Uncertain Quality: February 3rd, 2010

In the LARGE meeting, Yixin presented the paper by Trenton G. Smith and Attila Tasnadi.

The authors discuss the circumstances under which convexity of preferences is beneficial. In particular, they investigate a setting in which goods possess some hidden quality with a known distribution, and the consumer chooses a bundle of goods that maximizes the probability of attaining some threshold level of this quality. It is shown that if the threshold is small relative to consumption levels, preferences will tend to be convex, whereas the opposite holds if the threshold is large.
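A quick Monte Carlo experiment illustrates the threshold effect. This is my own toy model, not the authors' formal setup: one unit of budget is split between two goods whose per-unit quality is uniform on [0, 1], and the consumer maximizes the probability that total quality reaches a threshold t.

```python
import random

def success_prob(x, t, n=20000):
    """Estimate P(x*Z1 + (1-x)*Z2 >= t) by simulation, where x is the
    budget share of good 1 and Z1, Z2 are iid uniform on [0, 1]."""
    rng = random.Random(int(1000 * x) * 100 + int(100 * t))  # deterministic per call
    hits = 0
    for _ in range(n):
        if x * rng.random() + (1 - x) * rng.random() >= t:
            hits += 1
    return hits / n

def best_alloc(t):
    """Budget share of good 1 that maximizes the success probability."""
    grid = [i / 10 for i in range(11)]
    return max(grid, key=lambda x: success_prob(x, t))

# Low threshold: an interior allocation (diversifying) wins, which is what
# convex preferences produce. High threshold: a corner allocation
# (specializing) wins, i.e., the non-convex case.
print(best_alloc(0.3), best_alloc(0.9))
```

With t = 0.3 the optimum is near an even split, since spreading the budget reduces the variance of total quality and a modest target is then met more reliably; with t = 0.9 only a concentrated bet leaves any real chance of clearing the bar.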

The proposed theory helps explain, to some extent, a broad spectrum of economic behavior (including, in particular, certain common commercial advertising strategies). However, some of the assumptions used in developing the theory are often violated in real-world problems. Further, the cases discussed in the paper are extremely simple: the consumer only needs to decide on his/her consumption of two products with respect to a single-attribute quality measure. Therefore, the generalizability of the theory to more complicated real-world cases is not clear.

Nevertheless, this paper does add some insights to the existing work on preferences and suggests that we should think twice before taking the convexity of preferences for granted.

Wednesday, November 18, 2009

Representing, Eliciting, and Reasoning with Preferences: Nov. 11 and Nov. 18, 2009

In the LARGE meeting, Yixin presented the tutorial paper about preference handling by Brafman & Domshlak.

In the first session (Nov. 11, 2009), we focused on models and languages for representing preferences. Starting with the quantitative languages, we discussed two important classes of value functions, Additively Independent (AI) and Generalized Additively Independent (GAI) functions, as well as some nice properties of the corresponding representation structures. Despite their computational efficiency for preference comparison and ordering, these quantitative languages put too much cognitive burden on the users' side and are usually difficult for users to reflect upon. Hence we need somewhat "easier" languages which, hopefully, carry desirable properties from both the cognitive and the computational perspectives. Given these considerations, a natural choice is a qualitative language built from generalizing preference statements. When equipped with CP-nets, such a qualitative language can handle different kinds of queries very efficiently.
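As a small illustration of the GAI idea (the attributes and numbers below are invented), a GAI value function sums local value tables defined over possibly overlapping subsets of attributes, so comparing two outcomes reduces to a few table lookups:

```python
# A GAI decomposition over two factors: {main} and {main, wine}.
# The overlap on "main" is what distinguishes GAI from plain additive (AI)
# value functions, which would require disjoint singleton factors.
factors = [
    (("main",), {("fish",): 3, ("meat",): 5}),
    (("main", "wine"), {("fish", "white"): 4, ("fish", "red"): 0,
                        ("meat", "white"): 1, ("meat", "red"): 4}),
]

def gai_value(outcome):
    """Total value of an outcome (a dict attribute -> value): the sum of
    the local table entries, one per factor."""
    total = 0
    for attrs, table in factors:
        key = tuple(outcome[a] for a in attrs)
        total += table[key]
    return total

a = {"main": "fish", "wine": "white"}
b = {"main": "meat", "wine": "white"}
print(gai_value(a), gai_value(b))  # prints "7 6": outcome a is preferred
```

The cognitive burden mentioned above shows up here in the tables: a user must supply a calibrated number for every entry, whereas a qualitative statement like "with fish, I prefer white wine to red" is far easier to elicit.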

In the second session (Nov. 18, 2009), we started with some discussion of preference compilation techniques, namely structure-based and structure-free ones. Both aim to combine the quantitative and qualitative models in such a way that we can map diverse statements into a single, well-understood representation. Then we had a quick look at uncertainty and utility functions. During the rest of the meeting, we focused on preference specification and elicitation, and discussed different methods, including "prior-based" ones such as maximum likelihood and Bayesian reasoning, as well as the "prior-free" minimax regret method. Since elicitation usually proceeds sequentially, we need either good heuristics to determine the sequence of queries or, if the computational price is not a big concern, the elegant partially observable Markov decision process (POMDP) model.
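A toy sketch of the prior-free minimax regret idea (the options and utility bounds are made up): when partial elicitation has only pinned each option's utility down to an interval, we can recommend the option whose worst-case loss relative to the best alternative is smallest.

```python
# option -> (lower bound, upper bound) on its utility, as constrained by
# the preference statements elicited so far. Numbers are illustrative.
bounds = {"A": (2, 10), "B": (6, 8), "C": (1, 5)}

def max_regret(x):
    """Worst-case regret of recommending x: an adversary picks a feasible
    utility function that drives x to its lower bound and some rival to
    its upper bound. Intervals are assumed independent."""
    lo_x = bounds[x][0]
    return max(bounds[y][1] - lo_x for y in bounds if y != x)

# Minimax regret recommendation: minimize the worst case over options.
best = min(bounds, key=max_regret)
print(best, max_regret(best))  # prints "B 4"
```

This also suggests the elicitation heuristic mentioned above: ask next about the quantities that most tighten the intervals responsible for the current max regret.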

The paper provides a very nice framework for preference handling. In the next few months, we will study the existing methods for preference elicitation and representation, and ideally construct a more concrete framework for modeling preferences in the energy market.

Tuesday, November 3, 2009

Dutch Flower Auction Recommender Agent

At the last LARGE meeting, I (Meditya Wasesa) presented the status of the Recommender Agent development in the Dutch Flower Auction domain.

I reported that we have finalized the first step of data exploration, which is meant to elicit all the determining factors that significantly influence the revenue of a good in the Dutch Flower Auction domain. Our exploratory research indicates that there are several auction design parameters which the auctioneer can adjust to control the level of revenue.

Knowing these auction design parameters, we have developed an analytical model that will drive the logic of the future recommender agent. In the future we will implement our analytical model in a recommendation agent and test it with simulations, lab experiments, and field experiments.

Tuesday, October 20, 2009

Decision coordination in a supply-chain agent

Why is decision coordination an important problem? Decision support is especially critical when human decision makers are faced with combinatorial problems, uncertain and partially-visible data, and their own built-in cognitive biases. When the results of decisions in different domains interact with each other, we have an additional level of complexity that is easy to ignore.

We use the TAC SCM simulation to model complex, interacting decisions, and the fully-autonomous MinneTAC trading agent to test our ideas for decision support. The agent must trade in two competitive markets simultaneously, while managing its own inventory and production facility. A successful agent must buy the parts it needs to manufacture finished products, and it must sell those products at a profit. Inventory is relatively costly, competition is stiff, and agents that cannot effectively coordinate their decisions are easily defeated.

We reviewed the coordination strategies employed by a number of successful agents. These include a "sales pull" method, inventory-centric approaches, a production-centric approach that fills its future production schedule with the products that are expected to give the highest marginal profit, an approach that projects a "target demand" into the future that is expected to satisfy profit targets, and a multi-layer system of internal "markets" in which projected customer demand bids on products, which in turn bid on parts and production capacity, etc.
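As a rough illustration of the production-centric idea (my own simplification, not any team's actual algorithm), a greedy scheduler can fill limited production capacity with the products offering the highest expected marginal profit per production cycle:

```python
# product -> (expected unit profit, production cycles per unit,
#             maximum expected demand). All numbers are invented.
products = {
    "pc_low":  (20, 4, 100),
    "pc_mid":  (35, 5, 60),
    "pc_high": (50, 6, 30),
}

def fill_schedule(capacity):
    """Greedily allocate production capacity, ranking products by expected
    profit per cycle of capacity consumed, capped by expected demand."""
    plan = {}
    ranked = sorted(products, key=lambda p: products[p][0] / products[p][1],
                    reverse=True)
    for p in ranked:
        profit, cycles, demand = products[p]
        qty = min(demand, capacity // cycles)
        plan[p] = qty
        capacity -= qty * cycles
    return plan

print(fill_schedule(500))
```

The coordination problem shows up in where the numbers come from: the expected profits depend on the sales side's price predictions, and the feasible quantities depend on the procurement side's parts supply, so no single decision can be optimized in isolation.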

We wrapped up by looking in some detail at the behaviors of the two top agents in the 2009 competition, DeepMaize from the University of Michigan and TacTex from the University of Texas. They were very nearly tied, although TacTex bought and sold considerably more volume and carried much larger inventories. We looked at an example where TacTex built up a large finished-goods inventory during a period of low customer demand, when parts are inexpensive, and used that inventory to keep prices depressed during a later period of high demand, when other agents were competing for parts and were consequently squeezed for profits. This is clearly a very risky strategy, but we assume that the TacTex team has used its machine-learning expertise to recognize the market signal patterns that indicate a reasonable probability of such a situation occurring. The regime model used by MinneTAC could presumably predict such a situation as well, but so far it is not being used to drive strategic decisions.

Friday, May 29, 2009

Meeting Minutes - 27 May 2009

On 27 May 2009, we had two guest researchers, Xiaoyu Mao and Ducco Ferro, from Almende, a research company that focuses its research on multi-agent systems. Both discussed the papers they will present at AAMAS.

Xiaoyu Mao presented an application of agent technology in airport operations entitled "Heterogeneous MAS Scheduling for Airport Ground Handling", while Ducco presented his paper entitled "The Windmill Method for Setting up Support for Resolving Sparse Incidents in Communication Networks". It was a great meeting with a great crowd as well.