Friday, May 27, 2005

Integration Trick

I have not posted for a few months now, but this has nothing to do with my university workload; the workload has been a joke. I am now alive and well in Halifax, stumbling through a 16-week work term in the Dahn Group. I will be posting more of those long-winded IPhO stories shortly.

Before that, however, let me digress and tell you about a neat integration trick I learned from working in the research group.

I was plotting some voltage-current graphs, and wanted to integrate them. These are not your simple Ohmic V-I graphs where the slope is constant, so the task is quite a bit more difficult.

Anyway, the lab here has many computer programs that can do numerical integration, but I needed to select only a portion of the graph, and editing the data file proved to be too much trouble. So, the professor walks in and tells me about a trick.

You see, we have these extremely accurate electronic balances - they are accurate to +/- 1 microgram. Jeff, the professor, explains how I can use this fantastic piece of lab equipment to integrate “as he did back in his day.” Basically, I was to print two copies of the graph in question, cut out a rectangle covering the entire plot and the axes from the first copy, and cut out just the area of interest from the second. After weighing both cutouts on the balance and calculating a ratio, the result emerges! Much better than using Simpson’s method, right?
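The arithmetic behind the trick is just a proportion: the paper is uniform, so mass is proportional to area. Here is a minimal sketch in Python, with hypothetical masses and axis ranges (none of these numbers come from the actual experiment):

```python
# Sketch of the cut-and-weigh integration, with made-up numbers.
# The full rectangle spans the plot's axis ranges, so its "area" in
# data units is known exactly; the region of interest is then
# recovered by a simple mass ratio (uniform paper density assumed).

def area_by_weighing(m_region_g, m_rect_g, x_range, y_range):
    """Area of the cut-out region, in data units (e.g. volt-amperes)."""
    rect_area = (x_range[1] - x_range[0]) * (y_range[1] - y_range[0])
    return (m_region_g / m_rect_g) * rect_area

# Hypothetical weighing: full rectangle 1.2000 g, region of interest
# 0.3450 g, with axes running 0-4 V and 0-2 A.
print(area_by_weighing(0.3450, 1.2000, (0.0, 4.0), (0.0, 2.0)))
# -> 2.3 (volt-amperes)
```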

Now I just have to petition for the university to let me bring an electronic balance to my final exam instead of a calculator…

Posted by Tout Wang, 03:31 PM / Comments (4)

Tuesday, March 22, 2005

Logic: Deductive and Inductive

I have never bothered to understand the fine line between inductive and deductive logic, but my physics and philosophy teachers forced me to reconsider… So what do official definitions say?

“Deductive logic = this refers to the application of general rules to specific cases. Begins with general statements and moves to the particulars. Inductive Logic = Examines the particulars or specifics in an attempt to develop generalization.”

However, this misses paradoxes, probabilistic statements and specific cases. A better definition of the two would be as follows: deductive logic = you lose information; inductive logic = you gain information. Part of the appeal of this definition is that it dispels the myth that deductive logic is somehow “better” than inductive logic. Read on for my analysis.

In deductive logic, you lose information. Deductive logic relies on the validity of its base propositions. Many claim that deductive arguments are the only “foolproof” ones, but this is a red herring, for we are concerned with the truth and not with the validity of the argument. Deductive logic may be a localization of a well-established fact, but it can also be seen as a logical interpolation of two (or more) assertions into one. This is itself a tricky process which may require typographical properties of logical quantifiers (and, or, for all) as well as jumps outside the system (which cannot be definitively explained within the system). It is not merely inductive logic in reverse, where the premises follow the conclusion.

In inductive logic, you gain information. You extrapolate the qualities of the particulars onto a more general subset of this universal. To do this, you make a leap forward. Even when you have a base object and an ordering schema (as in mathematical induction), you may still suffer from ω-incompleteness: some may reject arguments based on indefinite existence, meaning that a successive pyramid of true statements need not lead to a true statement in general. This gets much more complicated with unprovable statements from Gödelian incompleteness.
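To pin down the ω-incompleteness point: mathematical induction needs both a base case and the successor step, and even then a theory can prove every individual instance of a statement without proving the universal statement itself. In notation (a sketch, not a formal treatment):

```latex
% Mathematical induction: base case plus successor step yield the universal claim.
\[
  P(0), \qquad \forall n\,\bigl(P(n) \rightarrow P(n+1)\bigr)
  \;\Longrightarrow\; \forall n\, P(n)
\]
% omega-incompleteness: a theory T may prove each instance separately...
\[
  T \vdash P(0), \quad T \vdash P(1), \quad T \vdash P(2), \;\ldots
\]
% ...and yet fail to prove the single universal statement:
\[
  T \nvdash \forall n\, P(n)
\]
```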

Another foundation of induction is Limit Theory, which states that objects tend to change little over little time. For instance, “if the sun rises today, most likely it will rise tomorrow.” In fact, it definitely will, for the “sun rising today” is not a specific event, meaning that the line between it happening and not happening is blurred (the famous chicken-and-egg paradox). But the two propositions are different, and perhaps even conflicting: one describes timeless attributes; the other describes a state of being. Yet (it’s arguable that) both are able to provide the same certainty of knowing the truth as deductive logic.

Posted by Oleg Ivrii, 12:13 PM / Comments (2)

Saturday, January 1, 2005

Alpha-beta Pruning

The idea behind alpha-beta pruning is that instead of looking for good moves, we look for “good-enough” moves. Initially, “good-enough” is negative infinity. This lower bound is the alpha. At this point, any move is good for Sean. However, once Sean finds better moves, this alpha increases. Sean doesn’t need moves worse than what he has already found, and he doesn’t need to know their exact values (examine them fully) to decide that they are inferior.

Good moves for Sean are bad for Vlad. Vlad keeps a bound of his own - the beta, an upper bound on the value, which drops as Vlad finds better replies. If Vlad ever finds a reply so strong that beta falls to alpha or below, then Sean already has a better option elsewhere: the position cannot result from optimal play and no longer needs to be searched. This is called an alpha-beta cutoff.

The only “interesting” moves are the ones whose values lie between alpha and beta. As the search progresses, this “window” between alpha and beta shrinks. Alpha-beta pruning saves a lot of the search: with good move ordering, it typically reduces the effective branching factor to roughly its square root - hence, doubling the depth that can be searched in the same time.

Another bonus of alpha-beta pruning is that it has a decent move to return even after an incomplete search. However, the efficiency of the algorithm depends heavily on how quickly we find the moves which turn out to be good (so that we achieve cutoffs early). Some moves just “look” better than others - thus it makes sense to search them first. This concept is called move ordering heuristics, but more on this later.
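To make the bookkeeping concrete, here is a minimal Python sketch of the search. The game interface (`moves`, `apply`, `is_terminal`, `evaluate`) is a made-up stand-in, not from any particular library:

```python
import math

def alphabeta(pos, depth, alpha, beta, maximizing):
    """Alpha-beta search: alpha is the best value Sean (max) has
    guaranteed so far, beta the best (lowest) value Vlad (min) has."""
    if depth == 0 or pos.is_terminal():
        return pos.evaluate()          # heuristic guess at the leaf
    if maximizing:                     # Sean's turn: raise alpha
        value = -math.inf
        for move in pos.moves():
            value = max(value, alphabeta(pos.apply(move), depth - 1,
                                         alpha, beta, False))
            alpha = max(alpha, value)
            if alpha >= beta:          # Vlad would never allow this line
                break                  # alpha-beta cutoff
        return value
    else:                              # Vlad's turn: lower beta
        value = math.inf
        for move in pos.moves():
            value = min(value, alphabeta(pos.apply(move), depth - 1,
                                         alpha, beta, True))
            beta = min(beta, value)
            if alpha >= beta:          # Sean already has something better
                break                  # alpha-beta cutoff
        return value

# Root call starts with the full window:
# best = alphabeta(start, 6, -math.inf, math.inf, True)
```

Every cutoff taken inside the recursion is a subtree that plain Minimax (see the post below) would have searched in full.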

Posted by Oleg Ivrii, 05:03 PM /

Friday, December 31, 2004

The Minimax Algorithm

A while ago, I built a home-made chess engine. It was not very strong, but it was a “worthy” opponent. In a few posts, I will describe the basics of how it’s done (and more!). Chess is a “zero-sum” game, meaning that a player can only gain at someone else’s expense. The core of the chess engine is the famous Minimax algorithm. In chess, there are two players - let’s call them Sean and Vlad.

How does Sean evaluate a position? If he has mate, he evaluates the position to positive infinity. If he is getting mated, however, he evaluates the position to negative infinity. Sean wants to maximize this value - but Vlad also gets a say in this. Vlad, clearly, wants to minimize it. Hence, Sean’s value is the minimum of the values resulting from each of Vlad’s replies. Here, Sean is what’s known as the maximizing player, and Vlad as the minimizing player.

However, Sean is unable to calculate all the possibilities to the end of the game, so at some point he has to “guess” the value of a position: he looks at it intuitively and decides how good it is. He examines factors such as material, king safety and pawn structure and eventually decides: “Okay, I am winning by +2.33 pawns.” This is known as a heuristic evaluation. Of course, it is still guessing, and Sean has to hope that he is guessing well.
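For illustration only, here is a toy evaluation in that spirit - material count alone, in pawns, from Sean’s point of view. The piece values and the `board` mapping are made up; a real engine would also score king safety, pawn structure, and so on:

```python
# Toy heuristic evaluation: material only, measured in pawns.
# `board` is assumed to map squares to piece codes like 'P' or 'n'
# (uppercase = Sean's pieces, lowercase = Vlad's) -- a made-up convention.
PIECE_VALUE = {'P': 1.0, 'N': 3.0, 'B': 3.0, 'R': 5.0, 'Q': 9.0, 'K': 0.0}

def evaluate(board):
    score = 0.0
    for piece in board.values():
        value = PIECE_VALUE[piece.upper()]
        score += value if piece.isupper() else -value
    return score  # e.g. +2.33 means Sean is up about 2.33 pawns

# Example: Sean has an extra pawn.
print(evaluate({'e1': 'K', 'e8': 'k', 'd4': 'P', 'a7': 'p', 'b2': 'P'}))
# -> 1.0
```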

In this case, Sean looks only so many moves ahead. This is called the depth. The number of possibilities increases exponentially with each half-move, or ply; the rate of increase is called the effective branching factor. However, Minimax all by itself is not very efficient. One could greatly increase the depth by means of a pruning algorithm. But more on that next time.
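In code, the mutual recursion is short. Here is a minimal sketch (the position interface - `moves`, `apply`, `is_terminal`, `evaluate` - is a hypothetical stand-in, the same one assumed in the alpha-beta post above):

```python
import math

def minimax(pos, depth, maximizing):
    """Plain minimax: Sean maximizes the value, Vlad minimizes it."""
    if depth == 0 or pos.is_terminal():
        return pos.evaluate()              # heuristic guess at the leaf
    if maximizing:                         # Sean picks his best reply
        return max(minimax(pos.apply(m), depth - 1, False)
                   for m in pos.moves())
    else:                                  # Vlad picks Sean's worst
        return min(minimax(pos.apply(m), depth - 1, True)
                   for m in pos.moves())

# Root call from Sean's perspective:
# value = minimax(start, 4, True)
```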

Posted by Oleg Ivrii, 01:12 PM / Comments (2)

   Copyright © 2004-2005 Oleg Ivrii, licensed under Creative Commons.