Category Archives: Applications

Did You See Any OR During Apple’s iPhone 5 Announcement? I did!

On September 12, Apple finally announced its much-awaited iPhone 5. I didn’t have time to watch the keynote speech, but I watched the shorter 7-minute video that’s posted on Apple’s web site featuring Jony Ive, Apple’s Senior Vice President, Design. In that video, at around the 5-minute, 26-second mark, something they said caught my attention: the way they put parts together during the assembly process. I encourage you to watch that part of the video before reading on.

Jony Ive says:

Never before have we built a product with this extraordinary level of fit and finish. We’ve developed manufacturing processes that are our most complex and ambitious.

And on Apple’s web site, they say this:

During manufacturing, each iPhone 5 aluminum housing is photographed by two high-powered 29MP cameras. A machine then examines the images and compares them against 725 unique inlays to find the most precise match for every single iPhone.

So let’s see if I understood this correctly. In a typical manufacturing operation, the multiple parts that make up a product are put together without much fuss. A machine makes part A, another machine makes part B, and perhaps a robotic arm or a third machine takes any one of the many part A’s coming down a conveyor belt and attaches it to any one of the many part B’s coming down another conveyor belt. What Apple did was improve on the “any one” choice. I don’t know if Apple pioneered this idea (probably not), but this is the first time I’ve heard of something like it. If you’ve seen this before, let me know in the comments.

Before OR comes into play, Computer Science does its job in the form of computer vision / image processing algorithms. The photographs of the parts are analyzed and (I’m guessing) a fitness score is calculated for every possible matching pair of parts A (the housing) and B (the inlay). What happens next? How do they pick the winning match? Here are some possibilities:

  1. Each part A is matched with the part B, among the 725 candidates, that produces the best matching score.
  2. A 725 by 725 matrix of fitness scores is created between 725 parts of type A and 725 parts of type B, and the best 725 matches are chosen so as to maximize the overall fitness score (i.e. the sum of the fitness scores of all the chosen matches).
  3. Proceed as in the previous case, but pick the 725 matches that maximize the minimum fitness score. That is, we worry about the worst case and don’t let the worst match be too bad when compared to the best match.

After these 725 pairs are put together, new sets of parts A and B come down the conveyor belt and the matching process is repeated. Possibility number 1 is the fastest (e.g., keep the scores sorted or in a priority queue), but not necessarily the best, because every now and then a bad match will have to be made. Possibilities 2 (an assignment problem) and 3 (an assignment problem with a max-min objective) are better, in my opinion, with the third one being my favorite. They are, however, more time consuming than possibility 1. Jony Ive says the choice is made “instantaneously”, which doesn’t preclude something fancier than possibility 1 from being used, given that the assignment problems are pretty small.
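To make possibility 2 a bit more concrete, here’s a small sketch in Python of how such an assignment problem could be solved. The fitness scores below are random stand-ins (I obviously don’t have Apple’s data), and SciPy’s linear_sum_assignment is just one convenient solver for the assignment problem, not a claim about what Apple actually runs:

import numpy as np
from scipy.optimize import linear_sum_assignment

# Hypothetical fitness scores: entry (a, b) says how well housing a fits inlay b.
# In reality these would come from the 29MP photographs; here they are random.
rng = np.random.default_rng(0)
fitness = rng.random((725, 725))

# Possibility 1: each housing greedily takes its best-scoring inlay
# (ignoring the fact that two housings might want the same inlay).
greedy = fitness.argmax(axis=1)
print("worst match (greedy):", fitness[np.arange(725), greedy].min())

# Possibility 2: assignment problem maximizing total fitness,
# so every housing gets exactly one inlay and vice versa.
rows, cols = linear_sum_assignment(fitness, maximize=True)
print("total fitness (assignment):", fitness[rows, cols].sum())
print("worst match (assignment):", fitness[rows, cols].min())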

The result? In the words of Jony Ive:

The variances from product to product, we now measure in microns.

It is well known that OR plays a very important role in manufacturing (facility layout, machine/job scheduling, etc.), but it’s not every day that people stop to think about what happens in a manufacturing plant. This highly popular announcement, watched by so many people around the world, painted a very clear picture of the kinds of problems high-tech manufacturing facilities face. I think it’s a great example of what OR can do, and how relevant it is to our companies and our lives.



Filed under Applications, iPhone, Motivation, Promoting OR

Adjusting Microwave Cook Times with OR: Inspired By An xkcd Comic

I’m a big fan of xkcd, a webcomic written by Randall Munroe. Last Monday’s comic, entitled “Nine”, became the inspiration for this post. Here it is:

The alt-text reads:

FYI: If you get curious and start trying to calculate the time adjustment function that minimizes the gap between the most-used and least-used digit (for a representative sample of common cook times) without altering any time by more than 10%, and someone asks you what you’re doing, it’s easier to just lie.

It seems Randall is trying to find (or has already found) a closed-form function to accomplish this task. I don’t endorse number discrimination either (unlike my wife, who insists on adjusting restaurant tips so that the cents portion of the total amount is either zero or 50), but I digress… I’m not sure exactly how to find these adjusted cook times with a closed-form function, but I can certainly do it with OR, more specifically with an integer program. So here we go. For simplicity, I’ll restrict myself to cook times under 10 minutes.

Let’s begin with an example. As I think about the microwave usage in my house, I end up with the following cook times and how often each one is used:

\begin{tabular}{c|c}
{\bf Cook Time} & {\bf Usage Frequency (\%)} \\
\hline
:30 & 20 \\
1:00 & 30 \\
1:30 & 10 \\
2:00 & 30 \\
4:30 & 10
\end{tabular}

So let’s first calculate the usage frequency of each digit from zero to nine. The above table can be interpreted in the following way. For every 100 times I use the microwave, 20 of those times I type a 3 followed by a zero (to input a 30-second cook time), 30 of those times I type a 1 followed by two zeroes, etc. Therefore, during these 100 uses of the microwave, I type a total of 280 digits. Out of those 280 digits, 160 are zeroes, 40 are 1’s, 30 are 2’s, and so on. Hence, the usage frequency of zero—the most-used digit—is \frac{160}{280} \approx 57.1\%. (Usage frequencies for the remaining digits can be calculated in a similar way.) Digits 5 through 9 are apparently never used in my house, so the current difference in usage between the most-used and least-used digit is 57.1\% - 0\% = 57.1\%.
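(If you want to verify these numbers, here’s a quick Python tally; the dictionary keys are the digits you’d actually type on the keypad for each cook time:)

from collections import Counter

# cook times as typed on the keypad, with usage frequencies per 100 uses
usage = {"30": 20, "100": 30, "130": 10, "200": 30, "430": 10}

digit_counts = Counter()
for time, freq in usage.items():
    for d in time:
        digit_counts[d] += freq

total = sum(digit_counts.values())  # 280 digits per 100 uses
for d in "0123456789":
    print(d, round(100 * digit_counts[d] / total, 1), "%")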

If, as Randall suggests, I’m allowed to adjust cook times by no more than 10% (up or down) and I want to minimize the difference in usage between the most-used and least-used digit, here’s one possible adjustment table:

\begin{tabular}{c|c}
{\bf Original Cook Time} & {\bf Adjusted Cook Time} \\
\hline
:30 & :31 \\
1:00 & 0:58 \\
1:30 & 1:36 \\
2:00 & 2:09 \\
4:30 & 4:47
\end{tabular}

Now let’s compare the usage frequency of each digit before and after the adjustment:

\begin{tabular}{c|cc}
 & \multicolumn{2}{c}{\bf Usage Frequency (\%)} \\
{\bf Digit} & {\bf Before Adjustment} & {\bf After Adjustment} \\
\hline
0 & 57.1 & 12 \\
1 & 14.3 & 12 \\
2 & 10.7 & 12 \\
3 & 14.3 & 12 \\
4 & 3.6 & 8 \\
5 & 0 & 12 \\
6 & 0 & 4 \\
7 & 0 & 4 \\
8 & 0 & 12 \\
9 & 0 & 12
\end{tabular}

After the adjustment, the most frequently used digits are 0, 1, 2, 3, 5, 8, and 9 (12% of the time), whereas the least frequently used digits are 6 and 7 (4% of the time). The difference now is 12-4=8%, which is significantly less than 57.1%. In my household, there’s absolutely no way to do better than that. Guaranteed! (Note: this doesn’t mean there aren’t other adjustment tables that achieve the same 8% difference. In fact, there are many other ways to achieve the 8% difference.)

If you’re curious about how I computed the time-adjustment table, read on. I’ll explain the optimization model that was run behind the scenes and I’ll even provide you with an Excel spreadsheet that allows you to compute your own adjusted cook times. Let the fun begin!

Let T be the set of typical cook times for the household in question. In my case, T=\{\text{:30, 1:00, 1:30, 2:00, 4:30}\}. For each i \in T, let R(i) be the set of cook times that fall within 10%—or any other range you want—of i. For example, R(\text{:30})=\{\text{:27, :28, :29, :30, :31, :32, :33}\}. In addition, for each i \in T, let f(i) be the usage frequency of cook time i. In my example, f(\text{:30}) = 20, f(\text{1:00})=30, and so on.

For each i \in T and j \in R(i), create a binary variable y_{ij} that is equal to 1 if cook time i is to be adjusted to cook time j, and equal to zero otherwise. There are \sum_{i \in T} |R(i)| such variables; in my example, 119 of them. Because each original cook time has to be adjusted to (i.e., mapped to) exactly one cook time in its allowed range, the first constraints of our optimization model are

\displaystyle \sum_{j \in R(i)} y_{ij} = 1, \enspace \text{for all} \; i \in T

To be able to calculate the difference between the most-used and least-used digit (in order to minimize it), we need to know how many times each digit is used. Let this quantity be represented by variable z_d, for all d \in \{0,1,\ldots,9\}. In my house, before the adjustment, z_0=160 and z_1=40. We now need to relate variables z_d and y_{ij}.

For each d \in \{0,\ldots,9\} and j \in \bigcup_{i \in T} R(i), let c_d(j) equal the number of times digit d appears in cook time j. For example, c_0(\text{1:00})=2, c_3(\text{1:30})=1, and c_2(\text{4:30})=0. We are now ready to write the following constraint

\displaystyle z_d = \sum_{i \in T} \sum_{j \in R(i)} f(i) c_d(j) y_{ij}, \enspace \text{for all} \; d \in \{0,\ldots,9\}

Once the adjusted cook times are chosen by setting the appropriate y_{ij} variables to 1, the above constraint will count the total number of times each digit d is used, storing that value in z_d.

Our goal is to minimize the maximum difference, in absolute value, between all distinct pairs of z_d variables. Because the absolute value function is not linear, and we want to preserve linearity in our optimization model (why?), I’m going to use a standard modeling trick. Let w be a new variable representing the maximum difference between any distinct pair of z_d variables. The objective function is simple: \text{minimize} \; w. To create a connection between z_d and w, we include the following constraints in the model

\displaystyle z_{d_1} - z_{d_2} \leq w, \enspace \text{for all} \; d_1 \neq d_2 \in \{0,\ldots,9\}

With 10 digits, we end up with 90 such constraints, for a grand total of 105 constraints, plus 130 variables. This model is small enough to be solved with the student version of Excel Solver. I would, however, recommend using OpenSolver, if you can.
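If you’d rather experiment in code than in a spreadsheet, here’s a rough sketch of the same model in Python using the PuLP library. It mirrors the formulation above with my own cook times; it is not the spreadsheet implementation I describe next:

from pulp import LpProblem, LpVariable, LpMinimize, lpSum

# typical cook times in seconds and their usage frequencies (per 100 uses)
f = {30: 20, 60: 30, 90: 10, 120: 30, 270: 10}

def keypad(sec):
    """Digits typed on the microwave keypad, e.g. 90 seconds -> '130'."""
    return f"{sec // 60}{sec % 60:02d}" if sec >= 60 else str(sec)

# R(i): cook times within 10% of i (integer ceil/floor to avoid rounding issues)
R = {i: range((9 * i + 9) // 10, (11 * i) // 10 + 1) for i in f}

y = {(i, j): LpVariable(f"y_{i}_{j}", cat="Binary") for i in f for j in R[i]}
z = {d: LpVariable(f"z_{d}", lowBound=0) for d in range(10)}
w = LpVariable("w", lowBound=0)

model = LpProblem("microwave_digits", LpMinimize)
model += w  # objective: minimize the largest pairwise difference

for i in f:  # each cook time is mapped to exactly one adjusted time
    model += lpSum(y[i, j] for j in R[i]) == 1

for d in range(10):  # z_d counts how often digit d gets typed
    model += z[d] == lpSum(f[i] * keypad(j).count(str(d)) * y[i, j]
                           for i in f for j in R[i])

for d1 in range(10):  # w bounds every pairwise difference z_{d1} - z_{d2}
    for d2 in range(10):
        if d1 != d2:
            model += z[d1] - z[d2] <= w

model.solve()
print("max pairwise difference:", w.value())
for (i, j), var in y.items():
    if var.value() > 0.5:
        print(keypad(i), "->", keypad(j))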

Here’s my Excel sheet that implements the model described above. It shouldn’t be too hard to understand, but feel free to ask me questions about it in the comments below. Variable cells are painted gray. Variables y_{ij} are in column E, variables z_d are in the range K3:T3, and variable w is in cell K96. The c_d(j) values and the formulas relating z_d with y_{ij} are calculated with the help of SUMIF functions in the range K3:T3. The differences between all pairs of z_d variables are in column U. All Solver parameters are already typed in. The final values assigned to z_d variables (range K3:T3) represent absolute counts. To obtain the percentages I list above, divide those values by the sum of all z_d’s. (The same applies to the value of w in cell K96.)

Feel free to modify this spreadsheet with your own favorite cook times and help end number discrimination in the microwaving world! Enjoy!


Filed under Applications, Integer Programming, Modeling

Enforcing a Restricted Smoking Policy on the UM Campus: a TSP Variant

The Coral Gables campus of the University of Miami is slowly becoming smoke-free. (I can’t wait for that to happen.) At present, there are a number of designated smoking areas (DSAs) around campus, and nobody is supposed to smoke anywhere else. Here’s a map of campus with red dots representing DSAs (right-click on it and open it in a new tab to see a larger version):

Unfortunately, enforcement of this smoking policy is nowhere to be seen. The result? Lots of students smoking wherever they want and, even worse, smoking while walking around campus, which is a great way to maximize their air pollution effect. Don’t you love people who live in the universe of me, myself, and I? But let me stop ranting and return to operations research…

As someone who does not enjoy (and is allergic to) cigarette smoke, I started thinking about how to use OR to help with the enforcement effort. Let’s say there will be an enforcer (uniformed official) whose job is to walk around campus in search of violators. Based on violation reports submitted by students, faculty and staff, the University can draw a second set of colored dots, say black, on the above map. These black dots represent the non-smoking areas in which violations have been reported most often. For simplicity, let’s call them violation areas, or VAs.

In possession of the VA map, what is the enforcer supposed to do? You probably answered “walk around campus visiting each VA”. If you’re now thinking about the Traveling Salesman Problem (TSP), you’re on the right track. The enforcer has to visit each VA and return to his/her starting point. However, this is not quite like a pure TSP. Let me explain why. First of all, unlike the pure TSP, the enforcer has to make multiple passes through the VAs on a single day. Secondly, it’s also likely that some VAs are more popular than others. Therefore, we’d like the enforcer to visit them more often. Finally, we want the multiple visits to each VA to be spread throughout the day.

With these considerations in mind, let me define the Smoking Policy Enforcement Problem (SPEP): We are given a set of n locations on a map. For each location i, let v_i be the minimum number of times the enforcer has to visit i during the day, and let s_i be the minimum separation between consecutive visits to location i. In other words, each time the enforcer visits i, he/she has to visit at least s_i other locations before returning to i. The goal is to find a route for the enforcer that satisfies the visitation requirements (v_i and s_i) while minimizing the total distance traveled.
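To make the visitation requirements concrete, here’s a small Python sketch that checks whether a given daily route satisfies the v_i and s_i requirements and computes its total distance. The route, distances, and requirements below are made up for illustration; this is only a feasibility check, not a method for solving the SPEP:

from collections import Counter

def tour_length(route, dist):
    """Total distance of a closed route (the last location connects back to the first)."""
    return sum(dist[route[k]][route[(k + 1) % len(route)]] for k in range(len(route)))

def is_feasible(route, v, s):
    """Check the SPEP visitation requirements on a closed route."""
    counts = Counter(route)
    # each location i must be visited at least v[i] times
    if any(counts.get(i, 0) < v[i] for i in v):
        return False
    # between consecutive visits to i there must be at least s[i] other locations
    n = len(route)
    for idx, loc in enumerate(route):
        next_visit = next(k for k in range(1, n + 1) if route[(idx + k) % n] == loc)
        if next_visit - 1 < s[loc]:
            return False
    return True

# toy instance with three violation areas
v = {"A": 2, "B": 1, "C": 1}   # minimum number of daily visits
s = {"A": 2, "B": 1, "C": 1}   # minimum separation between consecutive visits
dist = {"A": {"A": 0, "B": 4, "C": 3},
        "B": {"A": 4, "B": 0, "C": 2},
        "C": {"A": 3, "B": 2, "C": 0}}
route = ["A", "B", "C", "A", "C", "B"]
print(is_feasible(route, v, s), tour_length(route, dist))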

After a few Google searches, I discovered that the SPEP is not a new problem. This shouldn’t have come as a surprise, given the TSP is one of the most studied problems in the history of OR. The article I found, written by R. Cheng, M. Gen, and M. Sasaki, is entitled “Film-copy Deliverer Problem Using Genetic Algorithms” and appeared in Computers and Industrial Engineering 29(1), pp. 549-553, 1995. Here’s how they define the problem:

There are a few minor differences with respect to the SPEP. In the above definition, s_i=1 for every location i. What they call d_i is what I call v_i, and they require exactly d_i visits, whereas I require at least v_i visits.

I wasn’t aware of this TSP variant and I think it’s a very interesting problem. I’m happy to have found yet another application for it. Can you think of other contexts in which this problem appears? Let me know in the comments.


Filed under Applications, Motivation, Traveling Salesman Problem

Installing iPhone Apps: Apple Doesn’t Care About Average Completion Time

Having recently spent some time outside the US, I found, upon my return, that many of the Apps on my iPhone needed to be updated. No big deal: I clicked “Update all”, typed in my password, and let the operating system finish the job. After watching the update process for a minute or so, I noticed one interesting fact: Apps were updated (i.e. downloaded+installed) in the order the updates became available (chronologically increasing), rather than by increasing order of the size of the update (in bytes). If you’re asking “why does it matter?”, read on.

During my grad school years at Carnegie Mellon, I had the pleasure of taking the Sequencing and Scheduling class with Egon Balas (I later sat in the class again as his TA, and finally taught it to the MBA students in 2004). So let’s start with some terminology: given a machine (the iPhone) and a set of tasks (Apps) to be executed (updated) on the machine, the completion time of a task is the time at which it is finished. For example, let’s assume the Concorde TSP App needs an update. If time zero is the moment I enter my iTunes password, the completion time of task “Concorde TSP” is the earliest time at which this App’s latest version is ready to run on my iPhone. The total completion time of a set of tasks is simply the sum of the completion times of all tasks, and the makespan is the completion time of the task that gets updated last.

The App update process has no release dates (all outdated Apps are ready to be updated at time zero), is non-preemptive (once started, the update of an App continues until it’s finished; ignoring crashes and other issues), and doesn’t involve sequence-dependent setup times (as soon as an App finishes updating, the next App in line can start its update right away). Under these circumstances, the makespan of a group of outdated Apps is always the same, regardless of the order in which they get updated. For example, if App A takes 15 seconds to download and install, and App B takes 10 seconds to download and install, it will take me 25 seconds to update both of them, regardless of which one is updated first. So far, so good. But let’s see what happens with the total (or average) completion time.

Continuing with the two-App example above, if I update the Apps in the order A, B, the completion time of A is 15, and the completion time of B is 25. The total completion time is 15+25=40, and the average completion time is 40/2=20 seconds. If the Apps are updated in the reverse order B, A, the completion time of B is 10, and the completion time of A is 25. The total completion time now is 10+25=35, and the average completion time decreases to 35/2=17.5 seconds. If there were other Apps being updated before or after the pair A, B, swapping them to make sure that B goes before A (because B takes less time) would have the same effect (the A, B swap doesn’t change the completion time of other Apps). What I just explained is called an exchange argument. It proves that whenever two Apps are out of order (in the sense that a smaller one is placed after a larger one), swapping them reduces the total/average completion time. Therefore, the minimum total/average completion time is obtained when the Apps are sorted by increasing order of duration. In the scheduling literature, this is called the SPT rule (Shortest Processing Time first).
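Here’s a tiny Python illustration of the two-App example above and of what sorting by duration (SPT) buys you:

from itertools import accumulate

def completion_times(durations):
    """Completion time of each task when they run back-to-back in the given order."""
    return list(accumulate(durations))

chronological = [15, 10]     # App A (15s) first, then App B (10s)
spt = sorted(chronological)  # Shortest Processing Time first: B, then A

print(completion_times(chronological), sum(completion_times(chronological)))  # [15, 25] 40
print(completion_times(spt), sum(completion_times(spt)))                      # [10, 25] 35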

I still haven’t answered the question of whether any of this matters, given that the makespan doesn’t depend on the order of the Apps (updating A and B always takes 25 seconds). The answer is: I don’t know! It’s a psychological effect. Shorter completion times may give the user the impression that the update process is going fast because a bunch of small Apps get updated quickly. By updating larger Apps first, the user may get the impression that the process is taking longer because, after a while, there are still many Apps to go. Should Apple worry about this? I’ll leave that question to my colleagues in the Marketing department who specialize in Consumer Behavior. If the answer turns out to be “yes”, then you now know what to do.

P.S. I’d like to know what other mobile operating systems do. Do they use SPT? Please let me know in the comments.


Filed under Applications, iPhone

Fourth of July Logistics in Coral Gables: No OR, No Glory

After a six-year hiatus, the city of Coral Gables and the Biltmore Hotel decided to host the Fourth of July celebrations once again, including, of course, a very nice fireworks display on the Biltmore’s 18-hole golf course. My wife and I had watched the Independence Day fireworks at Biscayne Bay and on the beach the past two years, so we thought this would be a nice change.

At the outset, the event seemed to be very well organized with buses and trolleys departing from four different places in the city to take people to the hotel, as shown in the map below.

So we parked our car at the Andalusia garage (Garage 4 on the map) and took the 6pm trolley. There was going to be a concert starting at 7pm, while the fireworks would go off at 9pm. We found a nice spot to place our chairs and my wife’s camera tripod, so we sat down and relaxed. Numerous food trucks offered plenty of tasty choices, the concert was entertaining and, most importantly, we loved the fireworks. All in all, we were very pleased with the whole thing. The problems started once the fireworks ended. Take a look at this map.

The red arrows indicate the flow of people trying to exit the golf course through a single narrow path (people coming from all directions were converging to that point). The yellow arrows start at the trolley/bus stop (a single stop) and show the path the trolleys/buses would take to go back to the garages in the previous map.

By now you’ve already guessed what happened, but I’ll list some of the main problems:

  1. Large congestion trying to exit the golf course (a bottleneck).
  2. No organized lines were formed by the police; people simply aggregated as a large mass at the bus stop (forget about FIFO).
  3. Tons of people actually drove their cars and parked not only in the parking lot depicted above, but also all around the neighborhood surrounding the hotel. As a result, the yellow bus path was full of pedestrians walking to their cars (or walking home), and the police did not allow trolleys/buses to come in or out while there were pedestrians on the road (that is, forever).
  4. We were given no indication as to which would be the destination of the incoming trolley/bus until it was parked at the stop (crowd left in the dark = annoyed crowd).

After standing there for a while, my wife and I decided that it would be much faster and less stressful if we simply walked back to Garage 4 (a 1.3-mile, 25-minute walk). Yes, it was very hot that day, and we had to carry some heavy chairs and equipment, but it was better than suffering through the chaos.

As an Operations Research person, I couldn’t stop thinking of all the bad decisions made by the organizers of this event. I know they meant well, but everyone’s experience would have been much more enjoyable if they had done a few things differently. Some of my suggestions below require conveying information to the attendees ahead of time, but this could have been accomplished by handing out flyers to people as they arrived. (Arrivals were not a problem because they were spread out over 3.5 hours, between 5 and 8:30pm.)

  • Divide the crowd by telling people to exit the golf course through different paths depending on where they’re headed: those walking home exit through gate A, those walking to the Biltmore parking lot exit through gate B, those wishing to catch a trolley/bus exit through gate C, etc.
  • Have multiple bus stops, reasonably away from each other.
  • Have barricades set up so that: (1) lines are properly formed at the bus stops, (2) pedestrians do not walk on the road and impede the flow of trolleys/buses.
  • Schedule the return trips of trolleys/buses in advance and tell people to come to the bus stop at their assigned time based on desired destination (à la Disney fast pass).

These are just some ideas that came to mind right away, but I bet more improvements are possible (what would you have done, dear reader?). Judging by how many of my friends who did not attend the event already knew about its chaotic ending before I told them, I’m sure the city received plenty of feedback. I expect next year’s event to run much more smoothly. However, just in case they need a little extra help, I’d like to write a quick letter to the City of Coral Gables:

Dear City of Coral Gables:

I’m a professor at the University of Miami who specializes in using advanced analytical methods to help with decision making. If you need help with the logistics of your Fourth of July Fireworks or any other city-sponsored activity, I’m available. Here’s my contact information.


Tallys Yunes.

To end this post on a happy note, here are some beautiful photos of the fireworks taken by my favorite photographer. Enjoy!


Filed under Applications, Holidays, Promoting OR

The “Real” Reason Bill Cook Created the TSP App

By now, most people are aware of the latest Internet meme, Texts from Hillary, which is, by the way, hilarious. You’re also probably aware that Bill Cook created an iPhone App that allows one to solve traveling salesman problems (TSP) on a mobile phone! If you like optimization, you have to give this App a try; and make sure to check out the Traveling Salesman book too!

Inspired by Texts from Hillary, I finally figured out the “real” reason why Bill Cook created the App. Here it is:


Filed under Applications, Books, iPhone, Meme, People, Promoting OR, Traveling Salesman Problem

How Should Santa Pair Up His Reindeer?

It’s almost Christmas time and Santa is probably very busy with some last-minute preparations before his longer-than-7.5-million-kilometer trip around the world. One of the many things he has to worry about is how to pair up his reindeer in front of the sleigh. We all know that Rudolf goes right in front of everyone else because of his shiny nose, but what about his other eight four-legged friends? The traditional Christmas carols tell us that the reindeer are typically arranged in four pairs, front to back, as follows:

Dasher, Dancer

Prancer, Vixen

Comet, Cupid

Donner, Blitzen

Therefore, we are going to assume that this is an arrangement that works pretty well (after all, it’s been working since 1823). As someone with a degree in a STEM field (he wouldn’t reveal which, though), Santa can’t stop thinking about this interesting question: “Are there other good ways to pair up my reindeer?”

Before we can answer that question, we need to define what a “good” pairing of reindeer is. After working tirelessly on Christmas eve, Santa’s reindeer have all the other 364 days of the year to hang out and get to know each other. As in every group of friends who spend a lot of time together, some friendships become closer than others. So it’s reasonable to expect that Rudolf’s eight friends will have a favorite companion for side-by-side galloping, a second favorite, a third favorite, etc. In addition, there’s one more important detail when it comes to reindeer pairings, according to Mrs. Claus: some of them like to be on the left side (Dasher, Prancer, Comet, and Donner), while others prefer to ride on the right side in front of Santa’s sleigh (Dancer, Vixen, Cupid, and Blitzen). Before you mention that I should also consider that male reindeer would rather be side-by-side with female reindeer, there’s scientific evidence that all of Santa’s reindeer are female, so we don’t have to worry about that.

After a nice conversation in front of his cozy fireplace, Santa was kind enough to provide me with the following lists of pairing preferences for each of his reindeer; though he vehemently asked me not to show any of this to his furry friends. I’m counting on you, my readers, to keep these lists to yourselves! The names in each list are sorted in decreasing order of pairing preference. The lefties appear in blue, while the righties appear in red (any resemblance to US political parties is a mere coincidence):

Dasher: Dancer, Cupid, Vixen, Blitzen

Prancer: Vixen, Blitzen, Dancer, Cupid

Comet: Cupid, Dancer, Blitzen, Vixen

Donner: Blitzen, Vixen, Dancer, Cupid

Dancer: Prancer, Comet, Dasher, Donner

Vixen: Dasher, Donner, Prancer, Comet

Cupid: Prancer, Dasher, Comet, Donner

Blitzen: Comet, Prancer, Donner, Dasher

Note that if we were to adhere to the lefties’ first picks, we’d end up with the traditional line-up. We are now ready to define what a good pairing is: a pairing is good (a.k.a. stable) if no one has an incentive to change pairs. In other words, if A is paired up with B, and A prefers C to B, it so happens that C, who is paired up with D, prefers D to A. (Note: this problem is known in the literature as the stable marriage problem and it arises in real life, for example, in the context of the National Resident Matching Program, which pairs up medical residents with hospitals every year in the United States.) Obviously, the traditional pairing shown above satisfies these goodness/stability conditions, given the reindeer’s preferences.

What Santa would like to know is whether or not there are other good pairings in addition to the traditional one. If so, he can add some variety to his line-up and the reindeer won’t get so bored by galloping side-by-side with the same companion every year. How can we help Santa answer this question? Using Operations Research, of course! More precisely, Constraint Programming (CP).

Constraint Programming is a modeling and solution paradigm for feasibility and optimization problems that allows one to represent complicated requirements (such as the stability condition above) in ways that are often easier and simpler than using traditional O.R. techniques such as Integer Programming. For example, indexing variables with variables and expressing logical constraints such as implications are a piece of cake in CP. Here’s a CP model written in the Comet language (not to be confused with Comet the reindeer) that answers Santa’s question. It essentially enforces the stability condition for every choice of A, B, C, and D.
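The Comet model itself isn’t reproduced here, but because the problem is tiny (only 4! = 24 possible pairings), here’s a brute-force sketch in Python that checks the same stability condition. It’s meant only to illustrate the condition on Santa’s data, not to replace the CP model:

from itertools import permutations

lefties = ["Dasher", "Prancer", "Comet", "Donner"]
righties = ["Dancer", "Vixen", "Cupid", "Blitzen"]

# preference lists from the post, most preferred first
pref = {
    "Dasher":  ["Dancer", "Cupid", "Vixen", "Blitzen"],
    "Prancer": ["Vixen", "Blitzen", "Dancer", "Cupid"],
    "Comet":   ["Cupid", "Dancer", "Blitzen", "Vixen"],
    "Donner":  ["Blitzen", "Vixen", "Dancer", "Cupid"],
    "Dancer":  ["Prancer", "Comet", "Dasher", "Donner"],
    "Vixen":   ["Dasher", "Donner", "Prancer", "Comet"],
    "Cupid":   ["Prancer", "Dasher", "Comet", "Donner"],
    "Blitzen": ["Comet", "Prancer", "Donner", "Dasher"],
}

def prefers(x, a, b):
    """True if reindeer x prefers galloping next to a rather than b."""
    return pref[x].index(a) < pref[x].index(b)

def is_stable(match):
    """match maps each lefty to a righty; stable = no two reindeer prefer each other to their partners."""
    partner = {r: l for l, r in match.items()}
    for l in lefties:
        for r in righties:
            if r != match[l] and prefers(l, r, match[l]) and prefers(r, l, partner[r]):
                return False
    return True

for perm in permutations(righties):
    match = dict(zip(lefties, perm))
    if is_stable(match):
        print(match)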

The good news is that, in 3 milliseconds, that CP model finds all of the five different stable pairings. Here they are:

Update (1/1/2012): Here’s an AIMMS version of the CP model, kindly created and provided by Chris Kuip. Look for this reindeer example, including an accompanying graphical user interface, in an upcoming update to the set of examples in AIMMS.

I hope Santa reads this blog post before Christmas eve, but in case he doesn’t, please tell him to check this out if you run into him this holiday season. I’m sure his reindeer would appreciate a little change after 189 years.


Filed under Applications, Constraint Programming, Holidays, Modeling, Traveling Salesman Problem