# Category Archives: Research

## Semantic Typing: When Is It Not Enough To Say That X Is Integer?

Andre Cire, John Hooker, and I recently finished a paper on an interesting, and somewhat controversial, topic that relates to high-level modeling of optimization problems. The paper is entitled “Modeling with Metaconstraints and Semantic Typing of Variables”, and its current version can be downloaded from here.

Here’s the abstract:

Recent research in the area of hybrid optimization shows that the right combination of different technologies, which exploits their complementary strengths, simplifies modeling and speeds up computation significantly. A substantial share of these computational gains comes from better communicating problem structure to solvers. Metaconstraints, which can be simple (e.g. linear) or complex (e.g. global) constraints endowed with extra behavioral parameters, allow for such richer representation of problem structure. They do, nevertheless, come with their own share of complicating issues, one of which is the identification of relationships between auxiliary variables of distinct constraint relaxations. We propose the use of additional semantic information in the declaration of decision variables as a generic solution to this issue. We present a series of examples to illustrate our ideas over a wide variety of applications.

Optimization models typically declare a variable by giving it a name and a canonical type, such as real, integer, binary, or string. However, stating that variable $x$ is integer does not indicate whether that integer is the ID of a machine, the start time of an operation, or a production quantity. In other words, variable declarations say little about what the variable means. In the paper, we argue that giving a more specific meaning to variables through semantic typing can be beneficial for a number of reasons. For example, let’s say you need an integer variable $x_j$ to represent the machine assigned to job $j$. Instead of writing something like this in your modeling language (e.g. AMPL):

```
var x{j in jobs} integer;
```

it would be beneficial to have a language that allows you to write something like this:

```
x[j] is which machine assign(job j);
```

To see why, take a look at the paper ;-)
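For readers who think in programming-language terms, the flavor of the idea can be sketched with Python’s `typing.NewType`, which attaches a semantic label to a plain type much as the declaration above attaches one to an integer variable. This is only an analogy to static typing, not the paper’s proposal, and all the names below are made up:

```python
from typing import NewType

# Hypothetical semantic types: each is just an int at runtime,
# but a type checker treats them as distinct, meaningful kinds.
JobId = NewType("JobId", int)
MachineId = NewType("MachineId", int)

def assign(job: JobId) -> MachineId:
    """Toy rule: even-numbered jobs go to machine 0, odd ones to machine 1."""
    return MachineId(job % 2)

# x[j] plays the role of "which machine assign(job j)".
x = {JobId(j): assign(JobId(j)) for j in range(4)}
print(x)  # {0: 0, 1: 1, 2: 0, 3: 1}
```

A type checker would now flag an attempt to use a `MachineId` where a `JobId` is expected, which is exactly the kind of meaning a bare `integer` declaration cannot convey.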

Filed under Modeling, Research

## Improving Traveling Umpire Solutions the Miami Heat Way: Not one, not two, not three…

Those who know me are aware of my strong passion for basketball, so I had to find a way to relate this post to my favorite sport. Fans of basketball in general, and of the Miami Heat in particular, might be familiar with this video clip in which LeBron James makes a bold prediction. Back in 2010, when asked how many titles the Heat’s big three would win together, he replied: “Not one, not two, not three, not four, not five, not six, not seven, …” While I’d love to see them win 8 titles, it sounds a bit (a lot) unlikely. But I can’t complain about their record so far. Winning two titles in three Finals appearances isn’t bad at all. But what does this have to do with baseball umpires? Let’s get back to OR for a moment.

A couple of years ago, I wrote a post about scheduling baseball umpires. In that same article I co-authored with Hakan Yildiz and Michael Trick, we talked about a problem called the Traveling Umpire Problem (TUP), which doesn’t include all the details from the real problem faced by MLB but captures the most important features that make the problem difficult. Here’s a short description (detailed description here):

Given a double round-robin tournament with 2N teams, the traveling umpire problem consists of determining which games will be handled by each one of N umpire crews during the tournament. The objective is to minimize the total distance traveled by the umpires, while respecting constraints that include visiting every team at home, and not seeing a team or venue too often.

And when I say difficult, let me tell you something: it’s really hard to solve. For example, there are 16-team instances (requiring only 8 umpire crews) for which no feasible solution is known.

Two of my Brazilian colleagues, Lucas de Oliveira and Cid de Souza, got interested in the TUP and asked me to join them in an effort to try to improve the quality of some of the best-known solutions in the TUP benchmark. There are 25 instances in the benchmark for which we know a feasible solution (upper bound) and a lower bound, but not the optimal value. Today, we’re very happy to report that we managed to improve the quality of many of those feasible solutions. How many, you ask? I’ll let LeBron James himself answer that question:

“Not one, not two, not three, … not ten, … not eighteen, … not twenty-three, but 24 out of 25.”

OK, LeBron got a bit carried away there. And he forgot to say we improved 25 out of the 25 best-known lower bounds too. This means those pesky optimal solutions are now sandwiched between numbers much closer to each other.

Here’s the approach we took. First, we strengthened a known optimization model for the TUP, making it capable of producing better bounds and better solutions in less time. Then, we used this stronger model to implement a relax-and-fix heuristic. It works as follows. Waiting for the optimization model to find the optimal solution would take forever because there are too many binary decision variables (they tell you which venues each umpire visits in each round of the tournament). At first, we require that only the decisions in round 1 of the tournament be binary (i.e. which games the umpires will be assigned to in round 1) and solve the problem. This solves pretty fast, but allows for umpires to be figuratively cut into pieces and spread over multiple venues in later rounds. Not a problem. That’s the beauty of math models: we test crazy ideas on a computer and don’t slice people in real life. We fix those round-1 decisions, require that only round-2 variables be binary, and solve again. This process gets repeated until the last round. In the end, we are not guaranteed to find the very best solution, but we typically find a pretty good one.
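The round-by-round structure of the heuristic can be sketched in a few lines of Python. To keep the sketch self-contained, it makes a big simplification: instead of keeping future rounds in the model as a continuous relaxation, it ignores them entirely, so each step just solves the current round exactly (by brute force over assignments) and fixes it. The distance matrix and round schedule below are made-up numbers, not a real TUP instance:

```python
from itertools import permutations

# Toy distance matrix between 4 venues (hypothetical numbers).
dist = [
    [0, 10, 20, 30],
    [10, 0, 15, 25],
    [20, 15, 0, 12],
    [30, 25, 12, 0],
]

# rounds[r] = venues hosting a game in round r (2 umpire crews, 2 games per round).
rounds = [(0, 1), (2, 3), (1, 3), (0, 2)]

def fix_round(positions, venues):
    """Solve one round's subproblem exactly: try every assignment of
    crews to this round's venues and keep the cheapest. (Future rounds
    are ignored here -- a simplification of the true relax-and-fix step,
    where they would remain in the model as a continuous relaxation.)"""
    best_cost, best_assign = min(
        (sum(dist[p][v] for p, v in zip(positions, assign)), assign)
        for assign in permutations(venues)
    )
    return best_cost, list(best_assign)

positions = [0, 1]            # crews start at their round-1 venues
total = 0
schedule = [list(positions)]
for venues in rounds[1:]:
    cost, positions = fix_round(positions, venues)  # fix this round's decisions
    total += cost
    schedule.append(positions)

print(schedule, total)
```

In this degenerate form the heuristic is just a greedy rolling horizon; the variations mentioned below (fixing two or more rounds at a time, starting from the middle or the end) slot naturally into the same loop.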

Some possible variations of the above would be to work with two (or more) rounds of binary variables at a time, start from the middle or from the end of the tournament, etc. If you’re interested in more details, our paper can be downloaded here. Our best solutions and lower bounds appear in Table 10 on page 22.

We had a lot of fun working on the TUP, and we hope these new results can help get more people excited about working on this very challenging problem.

## There and Back Again: A Thank You Note

There were sub-freezing temperatures, there were snow flurries, there was a hail storm, and there was a tornado watch. No, I’m not claiming that my visit to Pittsburgh last week was as full of adventures as Bilbo Baggins’s journey, but it was very nice indeed.

I had the great pleasure of being invited by John Hooker and Willem-Jan van Hoeve to give a talk at the Operations Research seminar at the Tepper School of Business. Since John, André Ciré, and I are working together on some interesting things, I took the opportunity to spend the entire week (Mon-Fri) at CMU; and what a joy it was.

The Tepper School was kind enough to have a limo service pick me up from, and take me back to, the airport. I guess this is how the top business schools roll. It’s a great way to make a speaker feel welcome. Besides, my driver turned out to be an extremely friendly and easy-to-talk-to fellow. Thanks to him (and his knowledge of off-the-beaten-path roads), I managed to catch my return flight. Otherwise, a cab driver would have sat through miles of Friday rush hour, and I’d certainly have missed the flight.

I walked to campus every day and actually enjoyed the few minutes of cold weather (wow! I can’t believe I just said that!). Stopping at the Kiva Han to grab an almond biscotto and a small coffee, right across the street from Starbucks, was a daily treat. Walking around campus brought back great memories from my PhD-student days. It’s nice to see all the improvements, and all the good things that remain good. Upon leaving Miami, I had the goal of having Indian food for 10 out of my 10 meals (excluding breakfast). Although I managed to do it only 4 times, I’m pretty happy with my gastronomic adventures in Pittsburgh. The delicious semolina gnocchi served at Eleven is definitely praiseworthy.

Work-wise, it was a very productive week. We had interesting ideas and conversations. I’m very grateful to all of those who took time out of their busy schedules to meet with me, be it to catch up on life, talk about research (including some excellent feedback on my talk), or both. Thank you (in no particular order) to Alan Scheller-Wolf, Javier Peña, Michael Trick, Egon Balas, Sridhar Tayur, Masha Shunko, Valerie Tardif, Lawrence Rapp, and of course John and Willem. Many thanks also go to André, David, and all the other PhD students who joined me for lunch on Friday. I really enjoyed meeting all of you and learning a bit about your current projects.

I noticed that John got rid of his chalk board and painted two of his office walls with some kind of glossy white-board paint. It’s pretty cool because it allows you to literally write on your wall and erase everything with a regular white-board eraser. Now I want to do the same in my office! (My white board is pretty small.) But I’m not sure if they’ll let me. Gotta check on that!

Overall, it was an awesome week and I hope I can do this again some time.


Filed under People, Research, Travel

## Better Traffic Networks Through Vehicle and Signal Coordination

My friend Phil Spadaro pointed me to two interesting articles on traffic management techniques being studied by BMW and Audi here and here. The idea is to allow traffic lights and cars to communicate, which would yield better traffic flow, reducing time spent at red lights and, as a consequence, reducing fuel consumption. From the Audi article:

The results obtained during the first travolution project in 2006 were immediate and dramatic: reduced waiting times at traffic signals cut fuel consumption by 17 percent…The secret of this success: the traffic signals in Ingolstadt are controlled by a new, adaptive computing algorithm that Audi developed in cooperation with partners at colleges of advanced engineering and in business and industry. Audi has now developed travolution still further, by enabling vehicles to communicate directly with traffic light systems, using wireless LAN and UMTS links…The traffic signals transmit data that are processed into graphic form and shown on the car’s driver information display screen. The graphics tell the driver for instance what speed to adopt so that the next traffic light changes to green before the car reaches it. This speed, which keeps the traffic flowing as smoothly as possible, can then be selected at the adaptive cruise control (ACC) – but the driver can also delegate this task to the car’s control system.

The savings are significant:

When the car is part of a network in this way, the driver can reduce the amount of time spent at a standstill and cut fuel consumption by 0.02 of a litre for every traffic-light stop and subsequent acceleration phase that can be avoided. The potential is enormous: if this new technology were applied throughout Germany, exhaust emissions could be lowered by about two million tonnes of CO2 annually, equivalent to a reduction of approximately 15 percent in CO2 from motor vehicles in urban traffic.

I am sure that there are many parts of this whole coordination process that involve some OR. It must be really cool to work on a project like this. On a different, but related, note: I also believe that a lot of traffic jams have psychological causes. People’s curiosity and lack of advance planning can severely influence their driving behavior. One great example of this is the Golden Glades fork here in Miami:

People going north on I-95 (traffic pattern on the right, going upward in the picture) have to decide among one of three directions: taking the Turnpike (letter A in the picture), continuing on I-95 (letter B), or taking the rightmost exit (letter C). It just so happens that many drivers realize only at the last minute that they have to change lanes, so this fork is constantly congested (even without accidents). There’s a very simple simulation experiment I always wanted to run but never found the time for: simulate the traffic flow when most people decide to change lanes very close to the fork versus the situation in which people change lanes at uniformly distributed points well ahead of the fork. I bet you’d see much better flow in the second case. I hope that one day, when computers can drive for us, the driving algorithms will take care of these issues.
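That experiment is easy to mock up. The toy model below (all numbers are hypothetical, and this is nowhere near a real traffic simulation) treats the target lane as a server that can absorb one merging car per road cell; cars that cannot merge yet queue up and accumulate delay. Late lane changers concentrate all their merges in the last stretch before the fork:

```python
import random

def simulate(n_cars, road_len, late, capacity=1, seed=0):
    """Toy merge model: each car picks a cell where it changes lanes; only
    `capacity` cars can merge per cell, the rest spill over to the next
    cell and accumulate delay. Returns total delay (in cells of waiting)."""
    rng = random.Random(seed)
    lo = int(0.9 * road_len) if late else 0   # late changers use the last 10%
    changes = [0] * road_len
    for _ in range(n_cars):
        changes[rng.randrange(lo, road_len)] += 1
    delay, backlog = 0, 0
    for cell in range(road_len):
        backlog += changes[cell]
        backlog -= min(backlog, capacity)
        delay += backlog                  # every queued car waits one more cell
    return delay + backlog * road_len     # cars still queued at the fork wait longest

late_delay = simulate(80, 100, late=True)
early_delay = simulate(80, 100, late=False)
print(late_delay > early_delay)  # spreading lane changes should reduce delay
```

With these made-up parameters, spreading the change points out yields far less total delay than last-minute merging, in line with the intuition above.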