John and I attended the morning technical sessions on the COIN-OR project. I enjoyed the overview talk by Robert Fourer (“COIN-OR Triennial Update”). It helped me understand what is necessary to get a project into COIN (both technical and legal requirements). This made me wonder if that’s the right way to go with SIMPL. My first reaction is to say “yes”, but I’ll have to think about it and talk to my co-authors. The second talk was given by Laszlo Ladanyi, and it was entitled “Bigger, Better, Faster: Update on the World’s Fastest Open-Source MIP Solver”. Although the latest results I’ve seen place CBC behind SCIP, I guess it’s OK for them to say that because, although SCIP’s code is freely available, it’s only free for academic use. This means SCIP cannot call itself “open-source” (Ah…the world of legal terminology). The improvements to CBC have indeed been impressive: new heuristics, a 2x increase in pre-processing speed, and a 55% improvement on the geometric mean of solution times for problems in the Mittelmann benchmarks, to name a few. The final talk in the COIN-OR session was given by Bjarni Kristjansson of Maximal Software. He talked about new developments to CoinMP, which is a library that gives a high-level user easy access to many COIN solvers from both Windows and Linux applications.

An added (positive) side effect of attending the COIN-OR session was that I had a chance to visit the Gleacher Center at the University of Chicago. Until Thursday, all the sessions I attended were held at the Marriott hotel. The talks were in what appeared to be an MBA classroom. Pretty neat room, nice tables and nice chairs, but no leather. If you want an excellent MBA degree and classrooms with leather chairs, you’ll have to come to the University of Miami :-)

Unfortunately, there was also a negative side effect: I had to miss Michel Goemans’s talk on the “Smallest Compact Formulation of the Permutahedron”. The permutahedron in n dimensions is the convex hull of the points that represent all possible permutations of the numbers from 1 to n. Michel found a formulation with O(n log n) variables and O(n log n) constraints and proved that no smaller compact formulation exists (up to constant factors). This may have some implications for hybrid methods because the permutahedron is related to the alldifferent global constraint of Constraint Programming. I’ll have to see if I can find Michel’s paper online somewhere, or ask him for a copy. In his abstract, he mentions that this result generalizes easily to variants of the permutahedron, so I’m curious to see what those variants are.
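To make the object concrete, here is a small sketch of my own (not from Michel’s paper) that enumerates the permutahedron’s vertices for n = 3 and checks the standard facet description — the one with exponentially many inequalities, which is exactly why a compact extended formulation is interesting:

```python
from itertools import permutations, combinations

# The permutahedron for n = 3: convex hull of all permutations of (1, 2, 3).
n = 3
vertices = list(permutations(range(1, n + 1)))

# Every vertex lies on the hyperplane sum(x) = n(n+1)/2.
assert all(sum(v) == n * (n + 1) // 2 for v in vertices)

# Standard facet description: for each nonempty proper subset S of the
# coordinates, sum_{i in S} x_i >= |S|(|S|+1)/2.  There are 2^n - 2 such
# subsets, hence the interest in compact (extended) formulations.
for k in range(1, n):
    for S in combinations(range(n), k):
        for v in vertices:
            assert sum(v[i] for i in S) >= k * (k + 1) // 2

print(len(vertices))  # 3! = 6 vertices
```

The brute-force check above already needs 2^n − 2 subset inequalities in the original space; Goemans’s result says an O(n log n)-sized description is possible (and optimal) once you allow extra variables.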

During the lunch break I took some time off to go to Millennium Park. It’s gorgeous. My trip to Chicago wouldn’t have been complete without a visit to the Cloud Gate (a.k.a. the Bean), the Crown Fountain, and the Jay Pritzker Pavilion. I’ve included some photos below. On my way back, I stopped for a quick lunch (panini + soup) and bought a 3-pack of Starbucks Via (their brand of instant coffee; only sold in Chicago, Seattle and London). I know what you’re going to say: instant coffee tastes terrible. However, there’s been so much talk about Via that I wanted to give it a try.

After lunch I went to François Soumis’s talk on “Dantzig-Wolfe Decomposition Putting in the Subproblem the Degenerated Constraints of a Linear Problem”. He explained the Improved Primal Simplex (IPS) algorithm and how it can be used in a column-generation style. The results are really impressive. For highly degenerate problems, the LPs solve 10x faster (compared to CPLEX’s primal simplex) when only lower bounds are taken into account, and 20x faster when both lower and upper bounds are used. For a class of degenerate set partitioning problems, LPs solve 20 to 40 times faster, while the IPs solve 50 to 100 times faster. They’re working on a number of extensions, including generalizations of the same technique to non-linear problems (through active set methods).
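For readers who haven’t run into degeneracy before, here is a toy example of my own (not from the talk): at a degenerate vertex, more constraints are tight than the dimension of the space, which is what causes the simplex method to waste iterations on zero-length pivot steps — the phenomenon IPS is designed to exploit.

```python
# Constraints a·x <= b for: x <= 1, y <= 1, x + y <= 2 (in 2 dimensions).
A = [(1, 0), (0, 1), (1, 1)]
b = [1, 1, 2]

# The point (1, 1) is a vertex of the feasible region (e.g. the optimum
# of max x + y), but all three constraints are tight there.
vertex = (1, 1)
tight = [i for i, (a, bi) in enumerate(zip(A, b))
         if a[0] * vertex[0] + a[1] * vertex[1] == bi]

# Three tight constraints at a 2-dimensional vertex: degenerate.
print(len(tight))
```

In a basic solution this shows up as basic variables stuck at zero; IPS, roughly speaking, separates those degenerate constraints into a subproblem so the master iterates only on the “useful” ones.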

The last two talks I attended had one thing in common: both dealt with scheduling problems with long time horizons. Although the typical time-indexed MIP formulation for machine scheduling (x_jt = 1 if job j starts at time t) provides strong lower bounds for short time horizons, it is ineffective on long-horizon problems because the number of variables grows too quickly. In the first talk of that session, Christos Maravelias talked about “The Polyhedral Properties of a Discrete-Time MIP Formulation for Production Planning”. Christos’s models were suffering from the “curse of long time horizons” and his group tried many variants of the model to no avail. Their successful idea consists of looking at the problem as the intersection of two polyhedra P1 and P2 and trying to strengthen the individual P1 and P2 formulations. It turns out that the coefficient matrix of P1 was totally unimodular (hence no strengthening was needed), but the matrix of P2 wasn’t. Nevertheless, by playing around with the basic time unit of their time horizon (and properly adjusting the right-hand-side coefficients), they managed to rescale the coefficient matrix of P2 into a k-regular matrix (k-regularity is a generalization of total unimodularity). Interestingly, the rank-1 closure (w.r.t. mod-k cuts) can be obtained easily for k-regular matrices. In addition, that k-regularity had other positive effects on the way CPLEX treats the problem (I didn’t fully get that part of the talk). The result: problems that used to take hundreds of thousands of nodes to solve were being solved with just a few nodes, and sometimes at the root node.

The final talk, entitled “Single-Machine Scheduling over Long Time Horizons by Logic-based Benders Decomposition”, was given by Elvin Coban. She implemented three different models for the problem (MIP, CP and Benders) and looked at three objectives: feasibility only, min makespan, and min tardiness. The MIP and CP models were pretty simple and straightforward.
The Benders decomposition model consisted of a MIP master problem that assigned groups of jobs to time intervals in the scheduling horizon, and a subproblem for each interval that took care of scheduling the jobs assigned to it. Both the master and the subproblems had their relaxations enriched with extra constraints. The Benders cuts were strengthened by looking at smaller-than-obvious causes of infeasibility (or bounds, for the makespan and tardiness objectives) through multiple solves of each subproblem. The results were interesting: for problems in which the jobs have tight time windows, CP is the best approach, followed by Benders and then (far, far away, 100 times slower) MIP. As the time horizon increases and/or job time windows become looser, the Benders approach remains robust and becomes better than CP, which goes all over the place (MIP is still far, far away). Here’s an idea of the sizes of their problems: # jobs: 60 to 120, time horizon: 170 to 4000 units. I was surprised to learn that they simply divided the time horizon into 10 equal intervals. I suspect that a more careful choice of intervals can make a significant difference (consider, for instance, the release dates and due dates as possible breakpoints).
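The master/subproblem interplay described above can be sketched in a few lines. Everything below (the job data, the capacity-only feasibility check, the brute-force master) is my own toy assumption, not the authors’ method — their master is a MIP, their subproblems are CP schedulers, and their cuts are much stronger — but the loop structure is the same:

```python
from itertools import product

jobs = {"a": 3, "b": 2, "c": 4, "d": 1}   # job -> processing time (toy data)
intervals = [(0, 5), (5, 10)]             # (start, end) of each horizon interval

def subproblem_ok(group, interval):
    # Toy subproblem: the jobs fit iff their total processing time does not
    # exceed the interval's length (a real subproblem would actually
    # schedule them, respecting release dates and deadlines).
    start, end = interval
    return sum(jobs[j] for j in group) <= end - start

def master(cuts):
    # Toy master: return the first assignment (job -> interval) violating no
    # Benders cut.  A cut (i, bad) forbids assigning any superset of `bad`
    # to interval i, since if `bad` does not fit, nothing containing it does.
    for choice in product(range(len(intervals)), repeat=len(jobs)):
        groups = {i: frozenset(j for j, k in zip(jobs, choice) if k == i)
                  for i in range(len(intervals))}
        if all(not bad <= groups[i] for i, bad in cuts):
            return groups
    return None

def benders():
    cuts = []
    while True:
        groups = master(cuts)
        if groups is None:
            return None, cuts             # no assignment left: infeasible
        violated = [(i, g) for i, g in groups.items()
                    if not subproblem_ok(g, intervals[i])]
        if not violated:
            return groups, cuts           # every subproblem feasible: done
        cuts.extend(violated)             # add Benders feasibility cuts
```

The cut-strengthening idea from the talk would correspond, in this sketch, to cutting off a small infeasible subset of the assigned jobs rather than the whole group, so each cut eliminates many more master solutions.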

That was the end of the conference. I headed back to my hotel, picked up my bag and took the Red Line -> Orange Line -> Midway. Total cost of ground transportation for the whole trip = $4.50. (The folks paying my reimbursement will be proud of me.) Unfortunately, upon arrival at the airport, I found out that my Southwest flight was delayed by 2 hours! Thankfully, I managed to arrive home safe and sound (and tired). I had a great time in Chicago, but I’m happy to be back home in sunny south Florida. I hope these ISMP blog posts have been useful to my blog readers. It was a pleasure to contribute my two cents to the O.R. community.

By the way, if you want a view of the ISMP more focused on Interior Point Methods, take a look at Nathan Brixius’s blog.