Wednesday, November 18, 2009

Representing, Eliciting, and Reasoning with Preferences: Nov. 11 and Nov. 18, 2009

In the LARGE meeting, Yixin presented the tutorial paper on preference handling by Brafman & Domshlak.

In the first session (Nov. 11, 2009), we focused on models and languages for representing preferences. Starting with the quantitative languages, we discussed two important classes of value functions, Additively Independent (AI) and Generalized Additively Independent (GAI) functions, as well as some nice properties of the corresponding representation structures. Despite their computational efficiency for preference comparison and ordering, these quantitative languages put too much cognitive burden on the users' side and are usually difficult for users to reflect upon. Hence we need somewhat "easier" languages which, hopefully, carry desirable properties from both the cognitive and the computational perspective. Given these considerations, a natural choice is to use qualitative, generalizing preference expressions. When equipped with CP-nets, such a qualitative language can handle different kinds of queries very efficiently.
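As a quick reminder of the two quantitative classes mentioned above, here is a minimal sketch of the decompositions; the attribute and factor names are generic placeholders, not notation from the paper:

```latex
% Additively independent (AI) value function: one sub-value per attribute
V(x_1,\dots,x_n) = \sum_{i=1}^{n} v_i(x_i)

% Generalized additively independent (GAI) value function: sub-values over
% (possibly overlapping) attribute subsets Z_1,\dots,Z_k
V(x_1,\dots,x_n) = \sum_{j=1}^{k} v_j\!\left(x[Z_j]\right)
```

The GAI form trades some of the simplicity of the AI form for the ability to capture interactions among the attributes within each factor Z_j.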

In the second session (Nov. 18, 2009), we started with a discussion of preference compilation techniques, namely structure-based and structure-free compilation. Both aim to combine the quantitative and qualitative models in such a way that we can map diverse statements into a single, well-understood representation. Then we had a quick look at uncertainty and utility functions. During the remainder of the meeting, we focused on preference specification and elicitation and discussed different methods, including "prior-based" ones such as maximum likelihood and Bayesian reasoning, as well as the "prior-free" minimax regret method. Since elicitation usually works sequentially, we either need good heuristics to determine the optimal sequence of queries, or, if the computational price is not a big concern, we can use the elegant partially observable Markov decision process (POMDP) model.
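To make the "prior-free" idea a bit more concrete, here is a minimal minimax-regret sketch, assuming the only knowledge elicited so far is an interval of possible utilities per item; the items and bounds are invented for illustration, and this is not the exact formulation from the paper:

```python
# Minimax-regret recommendation sketch: each item's utility is only known to lie
# in an interval, and we recommend the item whose worst-case loss against the
# best alternative is smallest. Items and bounds are invented for illustration.

def max_regret(item, bounds):
    """Worst-case loss of choosing `item`: an adversary sets utilities within the
    bounds so that some other item looks as good as possible."""
    lo_item, _ = bounds[item]
    return max(0.0, max(hi - lo_item for other, (_, hi) in bounds.items() if other != item))

def minimax_regret_choice(bounds):
    """Item minimizing the worst-case regret; also a natural target for the next query."""
    return min(bounds, key=lambda item: max_regret(item, bounds))

if __name__ == "__main__":
    bounds = {"A": (0.6, 0.9), "B": (0.4, 1.0), "C": (0.2, 0.5)}  # item -> (lower, upper)
    print(minimax_regret_choice(bounds))
```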

The paper provides a very nice framework for preference handling. In the next few months, we will study the existing methods for preference elicitation and representation, and ideally we will be able to construct a more concrete framework to model preferences in the energy market.

Tuesday, November 3, 2009

Dutch Flower Auction Recommender Agent

At the last LARGE meeting, I (Meditya Wasesa) presented the status of the Recommender Agent development in the Dutch Flower Auction domain.

I reported that we have finalized the first step of data exploration, which is meant to elicit all determining factors that significantly influence the revenue of a good in the Dutch Flower Auction domain. Our exploratory research suggests that there are several auction design parameters which the auctioneer can adjust to control the level of revenue.

Knowing these auction design parameters, we have developed an analytical model that will drive the logic of the future recommender agent. In the future, we will implement our analytical model in a recommendation agent and test it with simulations, lab experiments, and field experiments.

Tuesday, October 20, 2009

Decision coordination in a supply-chain agent

Why is decision coordination an important problem? Decision support is especially critical when human decision makers are faced with combinatorial problems, uncertain and partially-visible data, and their own built-in cognitive biases. When the results of decisions in different domains interact with each other, we have an additional level of complexity that is easy to ignore.

We use the TAC SCM simulation to model complex, interacting decisions, and the fully-autonomous MinneTAC trading agent to test our ideas for decision support. The agent must trade in two competitive markets simultaneously, while managing its own inventory and production facility. A successful agent must buy the parts it needs to manufacture finished products, and it must sell those products at a profit. Inventory is relatively costly, competition is stiff, and agents that cannot effectively coordinate their decisions are easily defeated.

We reviewed the coordination strategies employed by a number of successful agents. These include a "sales pull" method, inventory-centric approaches, a production-centric approach that fills its future production schedule with the products that are expected to give the highest marginal profit, an approach that projects a "target demand" into the future that is expected to satisfy profit targets, and a multi-layer system of internal "markets" in which projected customer demand bids on products, which in turn bid on parts and production capacity, etc.
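As a toy illustration of the production-centric idea, here is a greedy sketch that fills a limited production schedule with the products expected to give the highest marginal profit per production cycle; the product data, field names, and capacity are invented, and real agents solve a much richer optimization:

```python
# Toy sketch of production-centric coordination: fill a limited production
# schedule with the products expected to yield the highest marginal profit
# per production cycle. Product data and capacity are invented for illustration.

def fill_schedule(products, capacity_cycles):
    """Greedily allocate production cycles to products by expected profit per cycle."""
    schedule = {}
    remaining = capacity_cycles
    ranked = sorted(products,
                    key=lambda p: (p["exp_price"] - p["part_cost"]) / p["cycles"],
                    reverse=True)
    for p in ranked:
        if remaining <= 0:
            break
        units = min(p["expected_demand"], remaining // p["cycles"])
        if units > 0:
            schedule[p["sku"]] = units
            remaining -= units * p["cycles"]
    return schedule

if __name__ == "__main__":
    products = [
        {"sku": "pc-1", "exp_price": 1600, "part_cost": 1350, "cycles": 5, "expected_demand": 40},
        {"sku": "pc-2", "exp_price": 2100, "part_cost": 1700, "cycles": 6, "expected_demand": 25},
    ]
    print(fill_schedule(products, capacity_cycles=300))
```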

We wrapped up by looking in some detail at the behaviors of the two top agents in the 2009 competition, DeepMaize from the University of Michigan and TacTex from the University of Texas. They were very nearly tied, although TacTex bought and sold considerably more volume and carried much larger inventories. We looked at an example where TacTex built up a large finished-goods inventory during a period of low customer demand, when parts are inexpensive, and used that inventory to keep prices depressed during a later period of high demand, when other agents were competing for parts and were consequently squeezed for profits. This is clearly a very risky strategy, but we assume that the TacTex team has used its machine-learning expertise to recognize the market signal patterns that indicate a reasonable probability of such a situation occurring. The regime model used by MinneTAC could presumably predict such a situation also, but so far it's not being used to drive strategic decisions.

Friday, May 29, 2009

Meeting Minutes - 27 May 2009

On the 27th of May 2009 we had two guest researchers, Xiaoyu Mao and Ducco Ferro, from Almende, a research company that focuses its research on multi-agent systems. Both discussed the papers they will present at AAMAS.

Xiaoyu Mao presented an application of agent technology in airport operations, entitled "Heterogeneous MAS Scheduling for Airport Ground Handling", while Ducco presented his paper entitled "The Windmill Method for Setting up Support for Resolving Sparse Incidents in Communication Networks". It was a great meeting with a great crowd.

Thursday, May 7, 2009

Meeting Minutes - 6 May 2009

On the 6th of May we had a guest speaker from TU Delft, Mathijs de Weerdt. He presented his ACM paper on the Qualitative Vickrey Auction concept, in which bidders and auctioneers (centers, in his terminology) reach agreement not on the basis of monetary value (highest or lowest price) but on the basis of how well the offered attributes match the center's preferences.

He explained the concept with a reverse-auction example. The auctioneer first defines a preference ranking over offers (an offer consists of many attributes), and the bidder who submits the highest-ranked offer wins. The winning bidder may then choose to deliver any offer ranging from his own bid down to the level of the second-highest bidder's offer, analogous to a Vickrey auction.
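A minimal sketch of the winner-determination idea as I understood it; the ranking function and the offers below are invented for illustration, so see the paper for the precise mechanism:

```python
# Sketch of the qualitative Vickrey idea: the center ranks offers by its own
# preferences, the best-ranked bidder wins, and the winner only has to deliver
# something at least as good (for the center) as the second-best bidder's offer.
# The rank function and offers are invented for illustration.

def run_qualitative_vickrey(offers, rank):
    """offers: dict bidder -> offer; rank: lower value = more preferred by the center."""
    ordered = sorted(offers.items(), key=lambda kv: rank(kv[1]))
    winner, best_offer = ordered[0]
    runner_up_offer = ordered[1][1] if len(ordered) > 1 else best_offer
    # The winner may deliver any offer whose rank is no worse than the runner-up's.
    delivery_threshold = rank(runner_up_offer)
    return winner, best_offer, delivery_threshold

if __name__ == "__main__":
    # Toy multi-attribute offers: (delivery days, quality score)
    offers = {"bidder1": (2, 0.9), "bidder2": (5, 0.8), "bidder3": (3, 0.6)}
    rank = lambda o: o[0] - 5 * o[1]  # the center prefers fast delivery and high quality
    print(run_qualitative_vickrey(offers, rank))
```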

A nice concept, based on ranked preferences over attributes. For more explanation, you are welcome to browse our site and download the paper directly (http://large.rsm.nl/meetings.xml).

Wednesday, April 22, 2009

Meeting Minutes - 22 April 2009

Today I (Meditya) presented Herbert A. Simon's paper entitled "Machine as Mind", which appeared in "Android Epistemology" by C. Glymour, K. Ford, and P. Hayes.

Defining the primitives of mind as symbols, complex structures of symbols, and processes that operate on symbols (Newell & Simon, 1976), the central thesis of his writing states: "Conventional computers can be, and have been programmed to represent symbol structures and carry out processes on those structures in a manner that parallels, step by step, the way human brain does it."

In his writing, Simon basically argued that every aspect of the human mind is translatable into definable representations, and that these definitions can be translated and embedded in a thinking machine. In order to defend his thesis, he explained several important points of discussion, such as:

- The Concept of Decomposable System
- Two Approaches to Artificial Intelligence (humanoid and non-humanoid)
- The Concept of Mind from Psychological Perspective
- Selective Heuristic Search
- Recognition: The Indexed Memory
- Seriality: The Limits of Attention (short term memory)
- The Architecture of Expert Systems

In addition, he responded to and defended his thesis against the objections people usually raise regarding aspects of thinking that a machine supposedly could not copy, such as:

- Semantics
- Intention
- "Ill Structured" Tasks
- Language Processing
- Intuition
- Insight
- Creativity

At the end of this meeting we had a nice discussion about the state of the art at the time of this writing (1995) versus the current situation, and about the extent to which Simon's vision applies in the present and the future.

Wednesday, April 15, 2009

Meeting Minutes - 15 April 2009

Today I presented the paper “Computational Intelligence in Economic Games and Policy Design” by Herbert Dawid, Han La Poutré, and Xin Yao (IEEE Computational Intelligence Magazine, 3(4), 22–26, 2008). The paper provides an overview of applications of computational intelligence techniques in economics. Both strong and weak points of the use of computational intelligence techniques are discussed. According to the authors, the two most important weak points are the issue of empirical validation and the issue of robustness.

I also discussed my own view on the relation between mainstream economics on the one hand and agent-based computational economics on the other hand. Due to the increasing popularity of experimental economics and evolutionary game theory, mainstream economics focuses more and more on bounded rationality and dynamic (rather than static) analysis. From this perspective, the difference between mainstream economics and agent-based computational economics is smaller than is sometimes thought. I argued that the main difference is between following a mathematical approach (as mainstream economics does) and following a simulation approach (as agent-based computational economics does). There is much to be gained by combining these two approaches. Today’s meeting ended with a discussion of the difference between mathematical analysis and computer simulation, and how this difference relates to the difference between deduction and induction in science. We also discussed the issue of implicit assumptions that are hidden in technical details in agent-based computational economics research.

Thursday, April 9, 2009

Meeting Minutes - 25 March 2009

In the presentation, Romke and Otto presented the outcomes of the master seminar in computational economics. The research focused on improving the price prediction mechanism of the MinneTAC agent. The MinneTAC agent is an agent-based computer model developed by the University of Minnesota in cooperation with Erasmus University.

The MinneTAC agent competes with other agent-based computer models in the Trading Agent Competition for Supply Chain Management (TAC SCM). The TAC SCM game was designed to arrive at the best design for an agent-based computer model capable of dealing with the problems of a dynamic supply chain.

In the MinneTAC agent, there is an ensemble consisting of multiple price predictors used to predict future market prices. The function of the model selection mechanism is to determine the most accurate price based on the predictions from all the individual predictors making up the ensemble. The advantage of using multiple predictors is the ability to capture more features in the data than a single predictor can. The disadvantage of using multiple predictors is that different features are captured, which causes different predictions. A second disadvantage is that not every price predictor performs optimally for every time horizon and quantity of training data. To overcome these disadvantages, a dynamic weighting mechanism with adaptive weights was developed for the MinneTAC agent. This weighting mechanism has to find the optimal weights for every price predictor for every time horizon. The weights are learned during the game, while the agent is competing with its competitors for customer orders. When the agent starts, every price predictor has an equal weight, and only as the game progresses does the MinneTAC agent converge to the optimal weights. This means that the price prediction mechanism is not working with the optimal weights during the first phase of the game.

In our seminar research, we found the optimal weights for every price predictor during the game. These data are used to bootstrap the agent and increase its performance in the first phase of the game.
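To make the weighting mechanism concrete, here is a minimal sketch, assuming a simple exponential update of the weights based on each predictor's error; this update rule is an illustrative choice of ours, not necessarily the one used in the MinneTAC agent:

```python
import math

# Minimal sketch of a dynamically weighted ensemble of price predictors.
# The exponential update rule is an illustrative choice, not necessarily
# the scheme actually used in the MinneTAC agent.

class WeightedEnsemble:
    def __init__(self, predictors, eta=0.5):
        self.predictors = predictors            # list of callables: day -> predicted price
        self.weights = [1.0] * len(predictors)  # equal weights at the start of the game
        self.eta = eta                          # learning rate of the weight update

    def predict(self, day):
        total = sum(self.weights)
        return sum(w * p(day) for w, p in zip(self.weights, self.predictors)) / total

    def update(self, day, observed_price):
        # Down-weight predictors in proportion to their squared error on the observed price.
        for i, p in enumerate(self.predictors):
            err = (p(day) - observed_price) ** 2
            self.weights[i] *= math.exp(-self.eta * err)
```

In this picture, bootstrapping simply means initializing the weights from values learned in earlier games instead of starting them all equal.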

R.J. Romke de Vries
O.B. ter Haar

Thursday, March 12, 2009

Meeting Minutes - 11 March 2009

At this meeting Milan presented his paper "Overconfident Investors in the LLS Agent-Based Artificial Financial Market," as a preparatory talk for the upcoming IEEE SSCI CIFEr 2009 conference.

This paper is part of the relatively new research stream in which agent-based models of financial markets are used to study various topics in behavioral finance. It has been recognized in the literature that such models could be very suitable for linking the behavioral biases of individual investors to aggregate market phenomena, such as the dynamics of market prices.

In this paper we focus on overconfident investors, and model overconfidence as miscalibration, in such a way that investors are too certain in their predictions of future returns of a risky asset. In the methodological sense, this paper follows an incremental approach, where an existing model (Levy, Levy, Solomon, 2000) is modified to study the consequences of introducing a variation in investor behavior, namely the overconfidence bias.
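One minimal way to write down "overconfidence as miscalibration" is to let an overconfident investor underestimate the spread of future returns; the symbol names and the linear form below are our own illustration, not necessarily the exact formulation in the paper:

```latex
% Perceived (subjective) standard deviation of future returns of the risky asset,
% for an investor with overconfidence level k (k = 0: unbiased; larger k: more miscalibrated)
\hat{\sigma} = (1 - k)\,\sigma, \qquad 0 \le k < 1
```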

We find that overconfident investors create less frequent but more extreme bubbles and crashes in the market, compared to the original model. Furthermore, more overconfident investors introduce more excess volatility of the market price (over the volatility of the fundamental price), and also reduce the trading volume (as they are heavily invested in stock during bubbles). Since this is a rising market, overconfident investors tend to take a larger share of the total wealth of all the market participants.

Furthermore, we study the emergence of overconfidence through biased self-attribution. Investors who attribute successful predictions to their own skill and unsuccessful predictions to bad luck quickly learn to be overconfident, and they remain at such a high level of overconfidence. Under unbiased self-attribution, the level of overconfidence varies greatly depending on the success of predictions.
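A toy sketch of how biased self-attribution can push the overconfidence level up and keep it there; the step sizes and the bias parameter are invented for illustration and are not the rule used in the paper:

```python
# Toy sketch of overconfidence emerging through biased self-attribution:
# successes are credited to skill and raise the overconfidence level k,
# while failures are largely blamed on bad luck and barely lower it.
# Step sizes and the bias parameter are illustrative only.

def update_overconfidence(k, prediction_successful, step=0.05, bias=0.8):
    """Return the new overconfidence level k, kept within [0, 0.95]."""
    if prediction_successful:
        k += step * (1.0 - k)          # success attributed to own skill
    else:
        k -= (1.0 - bias) * step * k   # failure mostly attributed to bad luck
    return min(max(k, 0.0), 0.95)

if __name__ == "__main__":
    k = 0.0
    for success in [True, False, True, True, False, True]:
        k = update_overconfidence(k, success)
    print(round(k, 3))
```

With bias close to 0 the downward corrections matter and the level fluctuates with prediction success, while with bias close to 1 the level climbs and stays high.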

During the meeting we had a fruitful discussion about the implications of these results, and there were also very interesting suggestions for future research. In light of the recent developments in real-world financial markets, it would be particularly interesting to study the consequences of investor overconfidence in a declining market.

Thursday, February 26, 2009

Meeting Minutes - 25 February 2009

Today I presented my research done in the context of my master's thesis. I presented a novel product pricing approach for the TAC SCM game, where products are sold through reverse auctions with sealed bids (i.e., traders bid on customer requests for quotes and cannot observe their competitors' pricing behavior).

The new approach is based on price distribution estimations, where the relation between on-line available data and distribution parameters is dynamically modeled using economic regimes (characterizing market conditions) and error terms (accounting for customer feedback). Given the parametric approximations of price distributions, acceptance probabilities are estimated using a closed-form mathematical expression. These probabilities that a customer accepts a price offered by a trader can be used to determine the price yielding a desired quota. The approach has been implemented in the MinneTAC agent and tested against a price-following product pricing method in the TAC SCM game. The novel approach significantly improves performance; more orders are obtained against higher prices. Profits more than double.
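To illustrate the acceptance-probability idea with a concrete, simplified example: if the relevant order price for a product is approximated by a normal distribution with regime-dependent mean and spread, then the probability that an offer price still wins the order, and the price achieving a target acceptance probability, follow directly. The normal assumption and the parameter values below are illustrative, not necessarily the distribution family or numbers used in the thesis:

```python
from math import erf, sqrt

# Simplified sketch: approximate the competing order price by a normal
# distribution N(mu, sigma^2) estimated from the current regime, and use its CDF
# to get an acceptance probability and to invert a target probability into a price.
# The normal assumption and the numbers are illustrative only.

def normal_cdf(x, mu, sigma):
    return 0.5 * (1.0 + erf((x - mu) / (sigma * sqrt(2.0))))

def acceptance_probability(p, mu, sigma):
    """Probability that offering price p wins the order, i.e., the competing price exceeds p."""
    return 1.0 - normal_cdf(p, mu, sigma)

def price_for_target(target_prob, mu, sigma, lo=0.0, hi=5000.0, iters=60):
    """Invert acceptance_probability by bisection to hit a desired order quota."""
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        if acceptance_probability(mid, mu, sigma) > target_prob:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

if __name__ == "__main__":
    mu, sigma = 1700.0, 120.0  # regime-dependent estimates (illustrative)
    print(acceptance_probability(1600.0, mu, sigma))
    print(price_for_target(0.3, mu, sigma))
```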

During the presentation, (adaptations to) the game specifications of the TAC SCM game were discussed in the group. We briefly discussed the possible impact of new entrants during a game, more competitors, and a randomized game length. Apparently, according to game theory, end-of-game effects would be reduced when the game end is randomized.

Alexander

Thursday, February 19, 2009

Meeting Minutes - 18 February 2009

Today I discussed the main results of my master's thesis, in which I try to incorporate procurement information into an economic regime model based on sales information. This model is used in the MinneTAC agent, which is an artificial trading agent that competes in the TAC SCM game.

First, I gave a brief description of the TAC SCM game, after which I introduced the regime model as it is currently used in the MinneTAC agent. Then, I introduced a new procurement variable, i.e., offer prices, after which I elaborated on both regime identification and regime prediction. Finally, I discussed some experimental results.

During the last part of the presentation, where I introduced some experimental results, we had a discussion on the causes of these results. As it seems, incorporating procurement information into the regime model does not lead to better performance of the MinneTAC agent in TAC SCM games. In fact, the agent gets more orders from customers, but generates lower profits. Other competitors seem to take advantage of the situation, since their profits increase when MinneTAC uses the new regime model.

A suggested cause is that there could be a delay regarding the procurement information. Procurement information might be a leading indicator for regimes, so perhaps creating a regime model based on the sales price of yesterday and the procurement offer price of, for instance, three days earlier could improve the performance of the agent. However, this depends on the cost allocation of the agent. Furthermore, there is a lack of adaptivity in the regime model, and therefore the model cannot adjust properly during a game. Finally, regime information is not used for price setting in the experiments discussed today. Establishing this connection between regimes and price setting could improve the overall results as well.
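As a tiny sketch of the suggested fix, assuming daily price series indexed by game day; the three-day lag and the function below are illustrative only:

```python
# Tiny sketch of the lagged-feature idea: combine yesterday's sales price with the
# procurement offer price from a few days earlier when building the regime input.
# The three-day lag and the data layout are illustrative only.

def regime_features(sales_prices, procurement_offer_prices, day, procurement_lag=3):
    """Feature vector for `day`: yesterday's sales price and the lagged procurement offer price."""
    return (sales_prices[day - 1], procurement_offer_prices[day - procurement_lag])
```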

Frederik

Thursday, February 12, 2009

Meeting Minutes – 11 February 2009

Today I presented some preliminary results of a research project that is intended to be part of my PhD thesis. I discussed an economic model of price competition between spatially organized firms. I discussed the Nash equilibrium solution of the model, but most of the time I focused on the outcomes achieved by so-called evolutionary dynamics. These outcomes were analyzed mathematically (partly analytically, partly numerically) but also through computer simulations. Interestingly, on average firms turned out to cooperate (or collude) with each other under the evolutionary dynamics.

We had a very fruitful discussion about the conditions necessary/sufficient to achieve the cooperative outcome, focusing in particular on the plausibility of various assumptions. We also discussed on a general level the methodology typically used in economic/game-theoretic research.

Ludo

Monday, January 26, 2009

Meeting Minutes - 21 January 2009

On the 21st of January, Peter Berends presented "The Impact of Clock Speeds on Bidders’ Arousal in Dutch Auctions", a paper by M. Adam, J. Krämer, and C. Weinhardt (2008).

In this study, an experiment was conducted to show the impact of clock speed on "competitive arousal" and seller revenue. This paper was of interest to us because, in order to create intelligent agents that support human actors in the Dutch Flower Network, we need to investigate the influence of speed in Dutch auctions.

When considering Adam et al.'s study, the results regarding seller revenue in this experiment somewhat contradict prior experiments. It was found that seller revenue in a slower auction was not significantly different from that in fast auctions. Furthermore, differences in the variance of seller revenue between the slow and fast auctions were observed: faster auctions have greater variance. This led the authors to believe that competitive arousal is not a function of elapsed time alone. This was previously asserted by Katok and Kwasnica (2008). The results clearly show that higher levels of skin conductivity are found in fast auctions compared to slow ones, which means that arousal is higher in fast auctions.

In the discussion among the participants at LARGE, we considered whether the difference in variance between the slow and the fast auctions could be explained by the fact that subjects might become bored while participating in the auction experiment. They might become bored because they play against only one other participant, and in the slow auction the maximum length is theoretically 16:40 min.

Thursday, January 15, 2009

Meeting Minutes - 14 January 2009

Today I presented an overview of the previously discussed articles on Dutch auctions. The following articles were discussed:
  • Van den Berg and Van der Klauw, 2008. A Structural Empirical Analysis of Dutch Flower Auctions
  • Katok and Kwasnica, 2008. Time is money: The effect of clock speed on seller’s revenue in Dutch auctions
  • Carare and Rothkopf, 2005. Slow Dutch Auctions

There was a discussion on what we can learn from the theoretic models in these articles when building a theoretic model for super fast auctions such as those in the DFA.
  • It was noted that the DFA has sequential auctions, which means that the cost of returning to the auction exists but not at the level of an individual auction.
  • It was noted that 'Competitive Arousal' as discussed in Katok and Kwasnica exists in a different form in super fast auctions, since it is unlikely that within an individual auction levels of arousal differ.
  • It was agreed that 'competitive arousal' in fast auctions could very well be a function of time, possibly not linear per se, but this could be a good first start for building a model. Furthermore, the issue of arousal versus experience was discussed, because bidders who have been trading for years might not be influenced very much by emotions.
Peter

Wednesday, January 7, 2009

Meeting Minutes - 7 January 2009

Today I discussed the paper entitled "Slow Dutch Auctions" by Octavian Carare and Michael Rothkopf (Management Science, 51(3), 365-373, 2005). These are a few of the issues that were raised during the discussion:
  • Is it realistic to think that the results of Lucking-Reiley (1999) may be due to transaction costs? Aren't there other more plausible explanations (e.g., some form of bounded rationality)? Aren't transaction costs too low to have a significant effect?
  • Why do the authors assume randomly arriving bidders in their game-theoretic analysis? What are the implications of this assumption?
  • How would the analysis in the paper change if we take into account that there can be many auctions (simultaneously or sequentially) of similar objects? This may create competition among auctioneers. What effect can be expected from this?
  • Should we regard the results in the paper as surprising or not? Do they make sense intuitively? Can the analysis be generalized?
  • In what way can agent-based research contribute to our understanding of auction mechanisms? What could be the added value of agent-based research over game-theoretic and experimental research?

Ludo