Theses
Are you looking for a B.Sc. or M.Sc. thesis topic in operations research or mathematical optimization? Below you will find useful information.
At a Glance
You are a student of mathematics, computer science, data science, business and economics, or industrial engineering. You have profound knowledge of operations research or mathematical optimization, e.g., from our courses or seminars. Typically, your implementation skills are good or better. You write your thesis in English or German, and of course using LaTeX.
Topics range from the theoretical/structural analysis or improvement of optimization problems and algorithms, through their modeling and implementation, to their experimental evaluation. Should you work on a practically motivated problem, you often deal with the entire OR/analytics process. In that case you have typically participated in our course "practical optimization" and maybe also in an "OR lab."
If you are specifically interested in branch-and-price, in particular in GCG, or you like the combination of optimization and machine learning, we will always find a challenging and interesting topic for you. If you already have a concrete idea for your thesis or are looking for something different from what you find here, please contact Marco Lübbecke.
Open Thesis Topics
In their paper “Feasibility Jump: an LP-free Lagrangian MIP heuristic,” Luteberget and Sartor propose a primal heuristic for obtaining feasible solutions to general mixed-integer programs. We want to take their ideas further and develop conceptually similar heuristics that exploit prior knowledge about a suitable decomposition of the problem. In this thesis, you would (a code sketch of the basic idea follows the list):
- Develop and implement one or multiple variants of Feasibility Jump that use knowledge about a decomposition.
- Research and compare previously proposed decomposition-based heuristics from the literature, see, for example, here and here.
- Compare the performance of your implementation with the original Feasibility Jump and other state-of-the-art heuristics.
- Investigate the influence of different decompositions when solving the same problem with your implementation.
- Optional: Integrate your heuristic with the generic decomposition solver GCG, making use of the fact that partial or complete solutions might be available, subproblems can be solved exactly, etc.
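To convey the flavor, here is a minimal, purely illustrative sketch (our simplification, not the authors' implementation): Feasibility Jump keeps a weighted violation score and greedily "jumps" variables to violation-minimizing values, bumping the weights of violated constraints when stuck. A decomposition-aware variant can run these passes block by block, sharing only the linking constraints. All names, the <=-only constraint form, and the small candidate domains are simplifying assumptions:

```python
from itertools import chain

class Constraint:
    """A linear <= constraint with a violation weight; coeffs maps var -> coefficient."""
    def __init__(self, coeffs, rhs):
        self.coeffs, self.rhs, self.weight = coeffs, rhs, 1.0

    def violation(self, x):
        return max(0.0, sum(a * x[j] for j, a in self.coeffs.items()) - self.rhs)

def weighted_violation(cons, x):
    return sum(c.weight * c.violation(x) for c in cons)

def jump_pass(x, cons, domains):
    """Move each variable to its violation-minimizing value; True if anything moved."""
    improved = False
    for j, dom in domains.items():          # dom: small set of candidate values
        related = [c for c in cons if j in c.coeffs]
        best = min(dom, key=lambda v: weighted_violation(related, {**x, j: v}))
        if weighted_violation(related, {**x, j: best}) < weighted_violation(related, x):
            x[j], improved = best, True
    return improved

def decomposition_aware_fj(x, blocks, linking, domains, max_iters=1000):
    """blocks: list of (block variables, block constraints); linking couples them."""
    all_cons = list(chain(linking, *(cs for _, cs in blocks)))
    for _ in range(max_iters):
        for block_vars, block_cons in blocks:
            sub_domains = {j: domains[j] for j in block_vars}
            if not jump_pass(x, block_cons + linking, sub_domains):
                for c in block_cons + linking:   # stuck: escape the local minimum
                    if c.violation(x) > 0:
                        c.weight += 1.0
        if weighted_violation(all_cons, x) == 0.0:
            return x                             # feasible for all constraints
    return None
```

In a GCG-style setting, the blocks would come from the detected decomposition, and exact subproblem solves could replace the simple per-block jump passes.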
Knowledge of combinatorial optimization (at least two master's courses from our chair or similar, preferably “Column Generation and Branch-and-Price”) and programming skills (preferably in C/C++, Python, or Julia) are necessary. If you are interested, please briefly describe your prior experience as well as topics and methods that you are interested in, and attach your grade transcript.
We develop the open-source solver GCG for structured integer programs (and if you want to embark on this topic, you will probably already have worked with it). GCG is based on the SCIP framework, for which there is (among others) a C/C++ API and a Python API called PySCIPOpt. In this thesis project, you will extend this Python interface to GCG. You need solid knowledge of integer programming, Dantzig-Wolfe reformulation, column generation, and branch-and-price. Yet, foremost, this is a topic of good software development and craftsmanship. The existing C/C++ API needs some love before the most important methods can be exposed in the (extension of the) PySCIPOpt Python API. As documentation of the new API, you will develop an example branch-and-price project.
The Python API is the most wanted GCG feature, and the scientific community will be forever in your debt.
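To see what the existing interface feels like, here is a minimal PySCIPOpt model (using only the real, existing API); the thesis would add analogous entry points for GCG, e.g., for registering pricing solvers or querying the detected decomposition, whose names and signatures are precisely what the project has to design:

```python
from pyscipopt import Model

model = Model("toy")
x = model.addVar(vtype="I", name="x", ub=10)   # integer variable
y = model.addVar(vtype="C", name="y", ub=10)   # continuous variable
model.addCons(2 * x + y <= 10)
model.setObjective(x + 3 * y, sense="maximize")
model.optimize()
print("x =", model.getVal(x), "y =", model.getVal(y))
```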
We are supervising this thesis together with PADS. Please apply as stated on their site, putting us in CC.
With rising shares of renewable energy production, the planning processes of transmission system operators have become more difficult over the last decade. Current research focuses on using configurable components within the grid to reconfigure the topology and consequently allow for increased throughput of renewable energy. The security-constrained optimal transmission switching problem (SCOTS) comprises switching decisions and generator unit commitment, together with constraints that ensure safe network operation. To our knowledge, no recent surveys on methods for solving the SCOTS have been published.
The goals of this thesis can include (a sketch of typical switching constraints follows the list):
- Conducting a thorough literature review on methods to solve the power flow and transmission switching problems from an OR point of view,
- Summarizing and categorizing approaches from the literature,
- Designing a comprehensive way to compare different approaches, potentially implementing some of them,
- Deriving possible next steps for a state-of-the-art implementation to solve the SCOTS.
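To give a flavor of the core modeling difficulty, here is a standard big-M linearization from the DC-OTS literature (stated for orientation, not as a prescription for the thesis): with a binary switching variable $z_{ij}$, flow $f_{ij}$, susceptance $B_{ij}$, and voltage angles $\theta$, each line $(i,j)$ is modeled as

$$-M(1 - z_{ij}) \;\le\; f_{ij} - B_{ij}(\theta_i - \theta_j) \;\le\; M(1 - z_{ij}), \qquad -\overline{F}_{ij}\, z_{ij} \;\le\; f_{ij} \;\le\; \overline{F}_{ij}\, z_{ij},$$

so that Ohm's law is enforced only on closed lines. SCOTS additionally replicates such network constraints for each contingency and couples them with the unit commitment decisions.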
Proficiency in combinatorial optimization (e.g., OR1) and programming (preferably in Python or Julia) is necessary. Prior knowledge of transmission systems is beneficial. If you are interested, please briefly describe your prior experience and attach your grade transcript.
Integer programs are used as models in many publications about optimization problems in science and applications. We often see an alternative modeling approach that would lead to column generation and branch-and-price, but that the authors did not pursue. In your thesis, you take such an approach: formulate an alternative model, state the master and pricing problems, decide which algorithms to apply or design or adapt new ones, think about performance enhancements, implement everything in SCIP or ideally directly in GCG, and compare your approach to the one in the literature. Theoretical components like analyzing computational complexity, exploiting combinatorial structure, etc. are most welcome, too.
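As a canonical illustration of what "stating the master and pricing problems" means (the textbook cutting stock example, not one of the topics itself): the master selects cutting patterns $p \in P$ with item multiplicities $a_{ip}$,

$$\min \sum_{p \in P} \lambda_p \quad \text{s.t.} \quad \sum_{p \in P} a_{ip}\, \lambda_p \ge d_i \;\; \forall i, \qquad \lambda_p \ge 0,$$

and the pricing problem is a knapsack $\min\; 1 - \sum_i \pi_i a_i$ subject to $\sum_i w_i a_i \le W$, $a_i \in \mathbb{Z}_{\ge 0}$, over the duals $\pi_i$ of the demand constraints; any pattern with negative reduced cost is added to the master as a new column.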
We develop the generic branch-and-price solver GCG. GCG implements several classic branching rules, such as branching on original variables, Ryan-Foster branching for set partitioning problems, and Vanderbeck's generic branching. There are no general computational comparisons of branching rules for branch-and-price in the literature. You will conduct such an experimental study in this thesis. You will design and implement rules from the literature, such as strong branching, run experiments to evaluate the performance of the rules, and investigate, e.g., the influence of the choice of row pairs in Ryan-Foster branching and what an intelligent choice can look like. You conclude with a major computational study of branching rules across several applications.
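To make the row-pair question concrete, here is a small, hedged sketch (illustrative data structures, not GCG's internal interface) of the classic selection rule: pick the pair of rows whose joint fractional coverage is farthest from integrality.

```python
from itertools import combinations

def ryan_foster_pair(columns, lam, rows):
    """Return the row pair (r, s) whose joint coverage rho is most fractional.

    columns: dict mapping a column id to the set of rows it covers.
    lam:     dict mapping a column id to its fractional master value.
    rows:    iterable of row ids of the set-partitioning master.
    """
    best_pair, best_score = None, 0.0
    for r, s in combinations(rows, 2):
        rho = sum(lam[p] for p, cover in columns.items()
                  if r in cover and s in cover)
        score = min(rho, 1.0 - rho)        # distance from integrality
        if score > best_score + 1e-9:
            best_pair, best_score = (r, s), score
    return best_pair                        # None if the solution is integral
```

Branching then forces $r$ and $s$ to be covered by the same column in one child ("together") and by different columns in the other ("apart"); the thesis would investigate smarter scores than this simple one.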
In some freight network applications, so-called load plans determine the possible paths along which each commodity may be routed to ensure operational simplicity. Commonly, only a single path per commodity is allowed. In their paper “The Value of Limited Flexibility in Service Network Designs,” Baubaid, Boland, and Savelsbergh showed that allowing commodities to be forwarded to two instead of only one location significantly decreases costs, especially when faced with uncertain demands. We can think of alternative methods to ensure operational simplicity, such as limiting the number of possible paths for each shipment, as in k-splittable flow problems.
The goals of this thesis depend on the type (Bachelor/Master); a sketch of a path-limiting constraint follows the list:
- Conducting literature research on how limited flexibility is achieved for various related problem classes.
- Implementing mixed-integer programming models for two or more limited flexibility variants of some basic network flow or design problem under multiple demand scenarios (Bachelor) or stochastic demands (Master).
- Comparing the approaches with respect to their computational performance, cost reductions versus traditional load plans and complexity of the resulting load plans.
- (Master) Developing exact or heuristic algorithms to solve larger instances of the implemented models.
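For illustration (our notation, not the paper's): with candidate paths $P_k$ for commodity $k$ with demand $d_k$, flow variables $f_{kp} \ge 0$, and binary path-activation variables $y_{kp}$, restricting each commodity to at most $\kappa$ paths reads

$$\sum_{p \in P_k} f_{kp} = d_k, \qquad f_{kp} \le d_k\, y_{kp} \;\; \forall p \in P_k, \qquad \sum_{p \in P_k} y_{kp} \le \kappa.$$

For $\kappa = 1$ this recovers the classical single-path load plan; $\kappa = 2$ is in the spirit of the flexibility studied by Baubaid et al.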
Knowledge of combinatorial optimization (the more the better, at a minimum QM/OR1 or similar) and programming (preferably in C/C++ or Python) are necessary. If you are interested, please briefly describe your prior experience as well as topics and methods that you are interested in and attach your grade transcript.
In power transmission systems, the power flow can only be modeled accurately by AC power flow formulations. However, these formulations are usually non-convex and thus difficult to solve to global optimality. Hence, a variety of approximations and relaxations exist that aim at simplifying the problem. While this is already well-studied for the optimal power flow problem, less literature exists for optimal transmission switching (OTS), and almost no computational comparisons of AC network formulations for the SCOTS problem have been published so far.
The goal of this thesis is to conduct computational experiments on publicly available instances or variations thereof. The thesis can include (the AC equations and their DC approximation are sketched after the list):
- Research and description of suitable formulations as described above,
- Implementation of missing formulations and their extension to OTS or SCOTS as necessary (via the PowerModels framework in Julia),
- Procurement of testing instances (some of which are already available to us),
- Conducting reproducible computational experiments,
- Evaluation of the conducted experiments.
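The contrast in question, in standard polar form and neglecting shunt elements (textbook equations, stated here only for orientation): for a line with series admittance $g_{ij} + \mathrm{j}\, b_{ij}$ and angle difference $\theta_{ij} = \theta_i - \theta_j$, the AC branch flows are

$$p_{ij} = g_{ij} v_i^2 - v_i v_j \left( g_{ij} \cos\theta_{ij} + b_{ij} \sin\theta_{ij} \right), \qquad q_{ij} = -b_{ij} v_i^2 - v_i v_j \left( g_{ij} \sin\theta_{ij} - b_{ij} \cos\theta_{ij} \right),$$

while the DC approximation assumes $v_i \approx 1$, $g_{ij} \approx 0$, and $\sin\theta_{ij} \approx \theta_{ij}$, leaving the linear $p_{ij} \approx -b_{ij}\,\theta_{ij}$. The relaxations to be compared (e.g., SOC- or QC-based ones) sit between these two extremes.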
Proficiency in combinatorial optimization (e.g., OR1) and programming (preferably in Python or Julia) is necessary. Prior knowledge of transmission systems is beneficial. If you are interested, please briefly describe your prior experience and attach your grade transcript.
In this thesis we look for optimization models and algorithms for rectilinear picture compression of monochrome digital images. One can cover the pixels of an image with rectangles or more complicated shapes. Doing this with a minimum number of suitable shapes turns the task into an optimization problem. We can build on preliminary work here. Potentially, one has to dynamically build/select the shapes in a column generation approach, which makes the problem algorithmically interesting as well. It goes without saying that this topic is a rich playground for implementations and (graphically appealing) experiments.
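A minimal exact model (our sketch; the preliminary work may differ): let $B$ be the set of black pixels and $\mathcal{R}$ the set of candidate rectangles containing only black pixels; then

$$\min \sum_{R \in \mathcal{R}} x_R \quad \text{s.t.} \quad \sum_{R \in \mathcal{R}:\, p \in R} x_R \ge 1 \;\; \forall p \in B, \qquad x_R \in \{0,1\}.$$

Since $\mathcal{R}$ can be huge, one would price rectangles out dynamically: the pricing problem searches for a rectangle whose contained pixels have maximum total dual value, which is where the column generation angle comes from.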
See the German description.
In the Job Shop Scheduling Problem, we are given jobs consisting of multiple operations that need to be executed on machines in a fixed order. The goal is to decide the order in which each machine processes the jobs such that some objective function, for example the total makespan, is minimized. These types of problems have practical relevance in many production scheduling applications. They are often modeled using time discretization, which inherently comes with a trade-off between good solution quality (fine discretization) and fast solving speeds (coarse discretization). Recently, a stream of research has investigated algorithms for Dynamic Discretization Discovery, i.e., algorithms that can dynamically adjust a discretization such that the continuous-time problem can be solved. The goal of this thesis is to develop such an algorithm for the Job Shop Scheduling Problem, which has not been done before and requires adaptations and modifications of previously suggested algorithms. The computational performance and solution quality of the developed approach should then be compared against classical approaches. Theoretical contributions are also very welcome.
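For reference, the standard time-indexed formulation over a grid $T$ (binary $x_{ot} = 1$ iff operation $o$ starts at time $t$ on its machine $m_o$, with processing time $p_o$) looks as follows; its size grows with $|T|$, which is exactly what Dynamic Discretization Discovery tries to tame by starting from a coarse grid and refining it only where needed:

$$
\begin{aligned}
\min\;& C_{\max} \\
\text{s.t.}\;& \textstyle\sum_{t \in T} x_{ot} = 1 && \forall o, \\
& \textstyle\sum_{o:\, m_o = m}\; \sum_{s = \max(0,\, t - p_o + 1)}^{t} x_{os} \;\le\; 1 && \forall m,\; t \in T, \\
& \textstyle\sum_{t} t\, x_{ot} \;\ge\; \sum_{t} t\, x_{o't} + p_{o'} && \forall o \text{ with job predecessor } o', \\
& C_{\max} \;\ge\; \textstyle\sum_{t} t\, x_{ot} + p_o && \forall o.
\end{aligned}
$$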
This thesis requires significant knowledge of combinatorial optimization (3+ master's-level courses like OR1, OR2/GLO, CG&BP, POM, Opti A/B, GNO, etc.) as well as high proficiency in programming (preferably C/C++ or Python). If you are interested, please briefly describe your prior experience as well as topics and methods that you are interested in and attach your grade transcript.
We develop the open-source solver GCG for structured integer programs (and if you want to embark on this topic, you will probably already have worked with it). GCG is written in C/C++ and is based on the SCIP framework, for which a suite of example projects is available that documents the use of the framework. In this thesis project, you will develop such example projects for GCG. These projects simultaneously serve as end-to-end (full-stack) tests of certain components of the solver. You have great expertise in software development (in particular in C/C++) and some understanding of end-to-end testing. If you want to apply this knowledge in a practical project (i.e., on GCG), this thesis is for you.
For many combinatorial problems, better algorithms can be developed if knowledge about the structure is utilized to decompose the problem into subproblems. Recent trends at the intersection of Machine Learning and Operations Research aim at further speeding up these algorithms by learning to make better heuristic decisions. Another, less researched and more experimental topic is using learned representations to replace entire subproblems.
We can determine the specific topic to explore based on your interest and skills as well as relevance to our research.
While the potential is abundant, these topics require both a very strong background in the theory and application of combinatorial optimization methods (especially decomposition methods) and at least intermediate experience with the implementation and engineering of Machine Learning models. Proficiency in Python or Julia is required; C/C++ is helpful, as the methods might be interesting to implement in the open-source solvers SCIP or GCG. If you are interested, please briefly describe your prior experience as well as topics and methods that you are interested in and attach your grade transcript.
See German description.
Dantzig-Wolfe reformulation and Benders decomposition are standard techniques for strengthening the relaxation of a MIP. We develop the open-source solver GCG for structured integer programs, which automatically employs these techniques. As part of its functioning, GCG has to "detect" model structure in the MIP coefficient matrix that allows a reformulation.
MIP presolve comprises techniques that are applied to a MIP before it is actually solved, in order to simplify or strengthen the model, eliminate redundant information, strengthen inequalities, restrict the domains of variables, etc. Often, a MIP becomes solvable in acceptable time only because presolve was applied. Yet, decomposable model structure can be "destroyed" or "masked" by presolve, which hinders GCG's detection of such structure. Sometimes, GCG detects "better" model structure on the presolved model.
In this thesis project, common presolve techniques should be evaluated theoretically and experimentally with regard to their impact on model structure and its detectability.
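A small constructed example of the phenomenon (our illustration): take two independent blocks and one linking row,

$$x_1 + x_2 \le 1, \qquad x_3 + x_4 \le 1, \qquad x_2 - x_3 = 0.$$

A detector can assign the first two rows to two blocks and treat the equation as a master constraint. If presolve aggregates $x_2 := x_3$ and removes the equation, the remaining rows $x_1 + x_3 \le 1$ and $x_3 + x_4 \le 1$ share the variable $x_3$: the two-block structure with linking rows only has been "masked," and a detector must now resort to a linking variable instead.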
When designing logistics networks in practice, often only a limited number of changes to an existing network are allowed, since any changes require effort and may disrupt well-attuned operations. A challenge in evaluating and understanding even minor network changes is that the flows of many commodities change. We therefore want to investigate how we can limit the changes resulting from an optimization of an existing network to improve the acceptance of the results by practitioners. The goals of this thesis include (a sketch of a change-budget constraint follows the list):
- Developing and implementing mixed-integer programming models to improve network designs under limited change budgets, both with respect to the network design and the commodity flows. For example, the number of commodities that can be rerouted may be limited.
- Reporting on similar approaches found in the literature.
- Comparing the approaches with respect to their computational performance, achievable cost reductions, and understandability of the resulting changes. The last part could include expert interviews with practitioners to get a qualitative understanding of which changes are easier to understand and implement for them.
- Theoretical insights are always welcome and can even be the main contribution if you are interested.
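To make the notion of a change budget concrete (in our own, illustrative notation): with incumbent design $\bar y$, binary design decisions $y_a$, and binary indicators $r_k$ for rerouted commodities, one can impose

$$\sum_{a:\, \bar y_a = 0} y_a \;+\; \sum_{a:\, \bar y_a = 1} (1 - y_a) \;\le\; \Delta, \qquad \sum_{k \in K} r_k \;\le\; \Gamma,$$

so that at most $\Delta$ design decisions are flipped and at most $\Gamma$ commodities leave their current paths.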
Knowledge of combinatorial optimization (the more the better, at a minimum QM/OR1 or similar) and programming (preferably in C/C++ or Python) are necessary. If you are interested, please briefly describe your prior experience as well as topics and methods that you are interested in and attach your grade transcript.
Selection of Completed Theses
The hub location problem is the mathematical optimization problem of determining optimal locations for hubs, with the objective of minimizing overall costs by routing traffic between customers through the chosen hubs. Hub location problems are in general NP-hard. A possible heuristic for obtaining solutions to large-scale instances is to contract the network of such an instance, drastically reducing the computation time needed to solve the resulting small-scale instance. The solution of the contracted network can then be translated back to the original network, yielding a feasible solution of the large-scale instance. In this thesis, we explore several contraction and rewrite methods and compare the quality of their solutions.
The solution of NP-complete problems, such as checking the satisfiability of SAT formulas (SAT problems), can be accelerated with the help of heuristics. Moreover, a common approach when solving SAT problems is to split the main problem into subproblems (divide and conquer). Here, conflict analysis in those subproblems that contain a conflict is an essential component of current SAT solvers: the solver derives information in the form of new conflict clauses. These clauses are used to record known conflicts and to take them into account in subsequent computations. Furthermore, SAT problems are a special case of mixed-integer programs (MIPs), and the branch-and-bound (B&B) algorithm, which also follows a divide-and-conquer principle, is an essential component of current MIP solvers. However, conflicts that occur there are ignored. In the paper "Conflict analysis in mixed integer programming," an approach was developed in which the subproblem containing a conflict is also examined: a method was presented that transfers the conflict analysis techniques of SAT solvers to MIP solving. On the MIP instances used for testing, this generalization of conflict analysis from SAT solving to MIP solving accelerated the solving of infeasible problems; for feasible problems, the search tree was reduced in size, but the total running time increased. Furthermore, several strategies exist for splitting a given optimization problem in B&B in order to create a search tree. Both the computational effort (i.e., the total running time) and the size of the resulting tree are decisive for choosing a particular strategy. Accordingly, one aspect of integrating ML into B&B is to replace, by fast approximation, those parts of the strategies that are computationally more expensive but can create a search tree with fewer nodes and thus lead to faster running times than other strategies. The focus of this thesis is to build on the idea of conflict analysis in MIP solving, to investigate how a method for applying machine learning (ML) in B&B offers perspectives for applying ML to conflict analysis in MIPs, and to present an approach for adopting this method so that conflict analysis for MIPs can be performed with the help of ML.
Laser Powder Bed Fusion is an additive manufacturing technique for 3D laser printers. The 3D object is divided into multiple 2D layers, and the 2D object on each layer is then completed through Laser Powder Bed Fusion. We look at a special variant with a movable print head and multiple lasers in the print head, which creates a new challenge: finding a good pathing to finish the 2D layer quickly. We create an integrated model of the path planning and then try to make the whole model more efficient by changing the objective function parameters and applying different methods like Dantzig-Wolfe decomposition. Regarding the optimization of the lasers' idle time, we created a model for objects with up to 13,720 scan vectors.
This master thesis integrates the component bound branching rule, originally proposed by Vanderbeck et al. and later reformulated by Desrosiers et al., into the branch-price-and-cut solver GCG. This rule, similarly to Vanderbeck's generic branching scheme, exclusively operates within the Dantzig-Wolfe reformulated problem, where branching decisions generally have no corresponding actions in the original formulation. The current GCG framework requires modifications for such branching rules, especially within the pricing loop, as seen in Vanderbeck's method implementation. These rules also fail to utilize enhancements like dual value stabilization. A significant contribution of this thesis is the enhancement of the GCG architecture to facilitate the seamless integration of new branching rules that operate solely on the reformulated problem. This allows these rules to benefit from current and future advancements in the branch-price-and-cut framework, including dual value stabilization, without necessitating alterations to the implementation of the branching rule itself. The thesis proposes an interface to manage constraints in the master problem that lack counterparts in the original formulation. These constraints require specific modifications to the pricing problems to ensure their validity in the master. The generic mastercut interface, tightly integrated into the GCG solver, spans the pricing loop, column generation, and dual value stabilization. Enhancements to the existing branching rule interface complement this integration, enabling effective utilization of the generic mastercuts. Finally, the component bound branching rule will be implemented using this new interface and evaluated on a set of benchmark instances. Its performance will be benchmarked against the existing Vanderbeck branching rule, offering a practical comparison of both approaches.
In the process of solving mixed-integer linear programs, state-of-the-art solvers construct a so-called conflict graph. The conflict graph stores information about pairs of assignments to binary variables that lead to infeasibility. Commonly, such structures are used to obtain clique cuts during separation. However, all inequalities valid for the stable set polytope of the conflict graph can be used to strengthen the linear relaxation of the original problem. In this thesis, we experimentally investigate the potential strength of the conflict graph. We do so by applying a Dantzig-Wolfe reformulation with convexification. Further, we also investigate the effect of obtaining the whole conflict graph by a probing procedure, as the conflict graph is usually only partially discovered. Here we notice that for some types of problems we can substantially improve the dual bound by considering the whole conflict graph. Additionally, we evaluate the strength of a binary relaxation that is obtained when each constraint is relaxed such that all non-binary variables are eliminated. This shows that in some cases we do need to consider non-binary variables to obtain all edges. We also apply convexification to separated cuts and see that this often does not lead to a substantial improvement. Finally, we discuss further ideas for experiments and for improving techniques based on the conflict graph.
In linear optimization, staircase structures can be found in the coefficient matrix of a mixed integer program by permuting its rows and columns. The structure consists of independent blocks along the diagonal, where two consecutive blocks may be connected by variables. Structures in the coefficient matrix of large-scale linear programs can be exploited to solve the program more efficiently by procedures like Dantzig-Wolfe decomposition. For this, the structures first have to be detected. However, there are different properties of a staircase structure to look for, such as the number of stairs, the number of linking variables, and the stair heights. Therefore, it is difficult to compare existing detection methods and results. This work gives an overview of different staircase structure variants that highlight different properties of the structure. Different methods for detecting these structures are considered, namely the Longest Shortest Path algorithm, the Clustering Heuristic, and a MIP formulation. The efficacy of these methods is demonstrated through computational experiments, and a score for evaluating the quality of staircase decompositions is proposed.
Due to continuously growing metropolitan regions, the requirements for ground-based transport systems are increasing. With increasing electrification, sustainable air mobility systems have the potential to relieve existing transport systems and to complement the existing transport networks. This potential of Urban Air Mobility (UAM) is based, among other aircraft, on electrically operated vertical take-off and landing (eVTOL) air taxis. In addition to the concepts, operational strategies have already been investigated in previous research. For this purpose, agent-based approaches provide a relevant basis for the simulation of realistic demand data. In this context, the pooling of passengers into aircraft with different passenger capacities, ranges, and loading times was insufficiently analyzed, so that a comparison of the market potentials could be distorted. To address this, two potential solutions were investigated: the transition to a new agent-based modeling (ABM) platform and the development of a post-processing tool for the existing demand data generated by a simulation in MATSim. The current state of the scientific literature on ABM approaches shows that the platforms OPUS and RePast4Py can be seen as alternatives to MATSim, but require a lot of implementation effort. Consequently, a linear optimization model is relevant as a post-processing tool and should be modeled in the context of current operations research (OR) approaches. For this purpose, a second literature review was conducted in order to compare similar past models and to learn OR approaches from them. Using the OR methods, the mathematical model was then created and fully described. The development of the mathematical model includes the following aspects: changing of aircraft by passengers, maintaining passenger capacities and ranges of an aircraft, tracking the battery charge state, and dynamic demand. Subsequently, the model was validated and some parameter studies were made. The results of the parameter studies show that passenger pooling is best possible with large passenger flows, but is limited in the context of UAM. However, possible pooling can be optimal and include the Aurora PAV and Joby S4 vehicles as a total fleet. Evaluations of the runtime show limitations of the application on overall UAM scenarios and motivate further research with regard to algorithmic optimization.
The thesis investigates many heuristics used to speed up the pricing in column generation by applying them in GCG with different implementations and parameter values on a representative test set. Partial pricing, multiple pricing, and heuristic solving of subproblems have been extensively tested, yielding some interesting takeaways on how and when (not) to use them. A newly implemented random subproblem selection technique for partial pricing gave a considerable speedup compared to the other selections implemented in GCG. Also, acceleration techniques like the column pool have been checked for their effectiveness, revealing that it is regularly not beneficial. Furthermore, a finding regarding cutting stock instances formulated as set covering is reported, speeding up multiple instances extraordinarily. Last but not least, it is reported which configurations worked particularly well for each problem type, and indications are given as to which heuristics are worth modifying.
Transporting goods plays a crucial role in global trade and e-commerce. This introduces the need for complex transportation networks, which must be efficient and cost-effective. Hub Location Problems model hub-and-spoke networks, with the most common objective being the minimization of the total costs of the network. The Service Network Design and Hub Location Problem is a variant of Hub Location Problems that models trucks and their routes on the hub-to-hub connections. However, this problem is challenging to solve with an exact solver for larger instance sizes in a reasonable timeframe. Therefore, we propose a General Variable Neighborhood Search heuristic with the goal of obtaining high-quality solutions for larger instances of the problem in a reasonable timeframe. Our heuristic is able to improve most initial solutions of the problem in a short timeframe, leading to near-optimal, sometimes optimal, solutions for smaller instances. For larger instances, we were able to obtain solutions beating those of the exact solver (given a set maximum duration) in a fraction of the time.
Solder Paste Printing (SPP) is an important process in surface mount technology (SMT) for electronics manufacturing, and there are two critical steps involved in the process: (i) exchanging the process tools, namely stencils, at regular intervals during the process, and (ii) blocking/replacing the stencils when they cannot be used anymore, both to ensure high-quality printed circuit boards (PCBs) after the process. However, the major challenges here are to perform these steps in such a way that the quality of prints is not affected while the stencils are utilized to the right extent, and the costs or risks of product scrap or tool exchanges are minimized. Hence, it is crucial to identify the feasible idle time after which the stencils can be exchanged during the SPP process, and to block and replace them to avoid risks during future production. In an attempt to reduce product scrap rates and internal defect costs (IDC) and increase global competitiveness in future PCB manufacturing, this thesis aims to explore and identify the best decision-making approaches for the solder paste printing process based on historical data, and to determine a feasible idle time for tool exchange and an end-of-life period of stencils through data analytics techniques.
In computational experiments, Bastubbe, Lübbecke, and Witt observed that for a variety of MIP instances the spectra of dual bounds resulting from all their respective Dantzig-Wolfe reformulations contain only a few distinct values. In contrast, we prove that the polyhedra from reformulations are asymptotically almost surely different in the case of the stable set problem on complete graphs. Nevertheless, our experiments illustrate that even in this case there are only a few distinct dual bounds. As a byproduct, we provide a complete classification of when two reformulations of this problem differ, extending previous theorems of Lübbecke and Witt that classify the weakest and strongest reformulations of stable set problems. We also present two general results on the strength of Dantzig-Wolfe reformulation, which generalize Geoffrion's necessary condition.
In this thesis, we look into possible effects of presolving algorithms on mixed-integer program decompositions, with a specific focus on column generation using the GCG solver. We systematically analyze the effects of a range of presolvers on the generation of problem decompositions and their selection through GCG's implemented scoring method, as well as on the runtime of the selected decomposition and its semantic structure. Based on this, we argue that certain presolvers are decomposition-preserving, meaning that a decomposition that was detectable before presolving can be re-detected after presolving. Among the presolvers examined, most show only minimal or negligible effects on decomposition detection, with some, such as the trivial and tworowbnd presolvers, demonstrating significant impacts on runtimes for specific instances. Notably, presolvers like convertinttobin and boundshift were found to preserve decompositions, where decompositions with similar structure and semantics were detectable before and after presolving. Further, our analysis of the presolvers domcol and sparsify showed a larger impact on the number of generated decompositions as well as on the possibility of aggregating blocks into similar pricing problems within the detected decompositions. This is especially notable for instances of the map labeling problem and the container loading problem. We conclude that the results of our experiments, even though interesting for certain problem categories and niches, are based on a limited dataset. As demonstrated in our analysis, the majority of instances remained unaffected in terms of decomposition detection, leaving an opportunity for further research into, e.g., larger and more diverse test sets including problems from MIPLIB.
In order to minimize generation costs in power grids, linear optimization can be used. Because of the occurrence of Braess’ paradox in power grids, the optimal transmission switching problem (OTS) optimizes over decision variables indicating whether a branch is switched on or off. Modeling the internal structure of substations is called optimal transmission switching with substation reconfiguration (OTS-SR). This thesis aims at evaluating several formulations and different algorithmic approaches for solving OTS-SR more efficiently regarding the solving time. Several alterations to the formulation of OTS-SR, two Dantzig-Wolfe reformulations and two Benders decompositions are proposed and evaluated using GCG, Coluna, SCIP and Gurobi. Results indicate that not only the introduction of transmission switching makes the OTS problem difficult to solve but also modeling the power flow according to Ohm’s law. Furthermore, it seems to be challenging to make use of the internal structure of the substations in order to solve the OTS-SR problem more efficiently.
When solving a mathematical optimization problem with the GCG solver, in general a model with unknown structure and semantics is given. GCG attempts to detect a structure in order to find a good decomposition for a Dantzig-Wolfe reformulation. If a problem is modeled using the modeling language Pyomo, more information about the model and its structure is available. In this thesis, several approaches to using this model information to improve the solving process with GCG are analyzed. For example, in Pyomo the different types of constraints and variables and their index sets are directly known, so GCG does not need to detect the constraint and variable classes. Another idea is to represent a generalized version of the model as a graph. This graph can be used to determine the type of the given problem. Depending on the problem type, a predefined decomposition can be applied. All of the tested methods improved the running time when solving some of the test instances. However, for all methods there are cases where they perform poorly and significantly increase the running time.
Branch-and-bound is a widely used algorithm to solve discrete optimization problems, such as integer linear programs. In this thesis, we will explore existing methods to visualize the solving trees that the algorithm employs for program optimization. We will define the balance of a branch-and-bound tree as an index indicating which children tend to make more progress in completing the solving process by making a greater impact on the optimality gap of the bounds. Subsequently, we will compare the balance of trees generated by different solvers and settings, examining their potential impact on the algorithm’s performance.
In this thesis we take a look at the variety of graph spanner problems in the current literature and some exact mixed integer programming approaches to solve them. We present five spanner types used in problem definitions, along with the path formulation and the multicommodity flow formulation we use to solve them. We run an experiment using the Erdős-Rényi random graph model as well as the Watts-Strogatz small-world model to analyze the behavior of different combinations of spanner types, formulations, and graph models. We find that pairwise and sourcewise spanners are slower to solve, while subsetwise spanners turn out to be the fastest of the spanner types. The random graph model provides faster running times than the small-world model, and the multicommodity flow formulation provides faster model building and subsequently faster overall running times than the path formulation. As part of the experiment, we generate a library of input instances for the spanner problems and graph models. Since randomness is involved in the generation of the instances, this library helps in the proper analysis and comparison of solution approaches and easily allows for further expansion.
Mixed integer programming is a powerful tool to model and solve many modern optimization problems like network design, frequency assignment, or chip verification. As solving mixed integer programs, or MIPs for short, is NP-hard, many modern solvers rely on primal heuristics to assist the solving process of the branch-and-cut algorithm. These heuristics have the ability to shorten the solving process by using the given information to find high-quality feasible solutions and are therefore very important when solving MIPs of higher complexity. While modern solvers, such as the open-source solver SCIP, already have a wide range of implemented heuristics, there is great interest in heuristics with new approaches due to increasing MIP complexity. This thesis describes the new heuristic "Feasibility Jump," introduced by Luteberget et al., and integrates it into SCIP. This heuristic tries to find a high-quality feasible solution by iteratively setting one variable to a value where the most constraints are satisfied. Evaluation of the impact Feasibility Jump has on the solving process of SCIP yields positive results regarding solving time and primal integral, while having no significant effect on the solving process of MIPs where it does not find a solution.
Knowledge about structure in an integer linear program, such as a block-diagonal structure enabling a Dantzig-Wolfe decomposition, can help to improve the solving process. While GCG, a solver based on SCIP, can automatically detect structures in integer linear programs, other commercial and non-commercial solvers require the user to enter the structure information manually to utilize the benefits of a Dantzig-Wolfe decomposition. Even though one of the main factors for a performant decomposition is finding a good structure within a problem, we are interested in the quality of the decomposition algorithms themselves. We compare and evaluate which solver performs best on certain problem instances, providing all of them with the same information about the problem itself and its structure.
Dantzig-Wolfe decomposition is a popular technique for solving (mixed integer) linear programs where a large subset of constraints can be divided into independent subprograms. In such a program's coefficient matrix, the rows and columns corresponding to the subprograms can be reordered to form a block-diagonal structure. A similar structure is the staircase, where neighboring blocks are linked by columns or rows. Staircases can appear in coefficient matrices of linear programs modeled over a time horizon where each period is linked to its predecessor and successor. If the periods are linked by columns, Dantzig-Wolfe decomposition can still be applied by utilizing Lagrangian decomposition. To apply it, the periods' constraints and the existence of a staircase structure either must be known beforehand or a staircase must be identified in the coefficient matrix. Since the constraints are likely in the wrong order to identify the structure by just looking at the matrix, algorithms must be used for the detection. In this thesis, an algorithm is presented that detects staircases by joining rows into blocks via agglomerative hierarchical clustering. It was tested on five temporal (mixed) integer programs and its results were compared to those of two detection algorithms that are already part of the solver Generic Column Generation. The proposed algorithm was the best staircase detector overall but still showed some flaws, the major one being its high computation time. Nevertheless, it demonstrates that block-diagonal detectors can be utilized for the detection of staircases. At the same time, one of the two existing staircase detectors demonstrated potential. For two of the temporal programs, staircase structures were used for solving instances from the literature. In one of them, they had a very negative impact on the solving times. In the other, the differences in visual quality of various staircases had no obvious impact.
This thesis presents a new web-based platform for the structured Integer Program library, strIPlib, a collection of 21,000 mixed-integer program (MIP) instances with an exploitable problem structure. Each instance is annotated with more than 150 metadata attributes that describe its structure, characteristics, and the algorithmic behavior of the Generic Column Generation (GCG) software while solving it. The metadata is beneficial for compiling test sets to evaluate the algorithmic components of solvers in isolation or together by selecting instances that closely match the objective. However, the current website that provides access to the instances and metadata has limitations in presentation and functionality, such as a lack of in-depth categorization of instances or missing download and filter functions. These limitations hinder researchers from finding instances that match their research question and accessing the raw linear program files. The new user interface displays instances hierarchically, reflecting problem types and contributors. Each of the 150 metadata attributes has been made filterable with graphs showing their distribution and correlation. After filtering, a test set containing instance and structure files can be compiled and downloaded. The integration of GCG reduces the entry barrier and allows researchers to detect and visualize problem structures on the website with one click. Due to the maintainable design, the platform is prepared to integrate other existing projects, such as graphics for visual evaluation of GCG runtime data or a tool for reducing the number of instances after filtering while maintaining the overall diversity.
Cluster analysis of statistical data sets attempts to identify similarity structures in the data in order to combine individual statistical units into groups in a meaningful way. Data sets can be very large and contain features that are either redundant or irrelevant for grouping. The selection of the relevant features is a fundamental problem of cluster analysis. In the paper [Benati et al., 2018], two mixed integer models are presented that formulate the problem of selecting the relevant features. Test results in the paper, obtained by solving these models with CPLEX, show that both formulations yield promising results with well-separated clusters, but reach their limits for problems where clusters overlap or features are not unique. In this bachelor thesis, the two mixed integer programs were solved on the original data sets using the framework GCG [Gamrath and Lübbecke, 2010], which follows a branch-price-and-cut approach, and the results were compared with those from the original paper. Results show that although the underlying structure of the problem is promising, solving the instances with GCG is slower by orders of magnitude.
Power transmission grids face increasing stress due to changes in power production and consumption patterns. The shift towards renewable, usually less steady energy sources and the deployment of "smart" devices contribute to this process. Currently, not all degrees of freedom in power grid operation are exhausted; thus, the operation is sub-optimal concerning congestion and costs. While the power dispatch is optimized, this is not the case for topology measures, mainly due to computational burdens. This thesis analyzes topology measure optimization integrated into an optimal power flow problem, the so-called Security Constrained DC Optimal Transmission Switching (SC-DC-OTS) problem. An integrated problem formulation of a Power Transfer Distribution Factor (PTDF) based SC-DC-OTS is provided together with its implementation. Further, an offline algorithm, the so-called Power Set Poisoning Depth-First Search (PSPDFS) algorithm, is proposed to find topology measure candidates efficiently, thereby reducing the solution space of the SC-DC-OTS model. In parallel, the implementation explores improvements to the ergonomics of modeling tasks. Four simulation studies are conducted to evaluate the functioning and performance of the model implementation, as well as the potential of the PSPDFS algorithm. Considering topology measures and security constraints leads to a vast number of side conditions in the model, and thus considering all potential topology measures is not manageable. The simulations show that in some cases, PSPDFS can considerably reduce the number of topology measures.
Today, machine learning methods based on statistical learning theory are enjoying growing interest in both research and applications. Simultaneously, operations research, in particular, linear programming, experiences a renewed surge in popularity, providing solutions that are guaranteed to be optimal for a wide range of problems. As part of the efforts made to integrate those fields for mutual benefit, recent studies have attempted to apply machine learning to solutions and non-solutions of linear programs. The goal here is to obtain a linear programming instance from such data that describes the problem as accurately as possible. In practice, data, i.e., solutions and non-solutions, can often be collected independently of and much more easily than modeling the problem. However, since modeling a problem as a linear program needs domain as well as methodological knowledge, skilled experts are hard to find and thus also expensive. As a step in the direction of automatic modeling, learning linear programs could therefore be of great interest for everyone applying linear programming. Therefore, in this thesis, an exploratory study of a learning algorithm is conducted with respect to the following questions: Can reasonably good results be obtained by the algorithm on synthetic linear programs obtained from different sampling methods? Can the results obtained be transferred to real-world known instances? Of what quality are the obtained solution instances? And what properties must therefore be met by the examples used? The core of this work is an implementation of the algorithm to verify and investigate these and other aspects through various experiments, and compare it to benchmark algorithms.
In this thesis, we take a look at tackling the challenge of solving complex combinatorial optimisation problems using a combination of machine learning and combinatorial optimisation techniques. As modern-day problems become increasingly intricate, it is essential to develop efficient and effective solutions to address them. We focus on the task of predicting shortest paths in the context of the Warcraft Shortest Paths dataset, a scenario where traditional machine learning approaches may not be sufficient for accurate predictions. To begin with, we look at an overview of the fundamental concepts of machine learning and combinatorial optimisation and investigate various architectural approaches, including machine learning-based architectures, two-stage approaches utilising both machine learning and combinatorial optimisation separately, and combined pipelines. To evaluate the effectiveness of these approaches, we apply them to the Warcraft Shortest Paths dataset and analyse their performance using relevant metrics such as loss and gap. The main contribution to this topic includes a comprehensive comparison of these three approaches applied to the same dataset, offering insights into their advantages, disadvantages, and potential improvements. This analysis also includes a brief discussion regarding the importance of the metrics used for evaluating different architectures, seen through the lens of the optimality gap and losses.
Facility layout design, the efficient arrangement of functional units inside a facility, is a crucial part of factory planning. Despite extensive research on the topic, mathematical optimization approaches are still underutilized in practice, partly due to differences between the facility layout problem and facility layout design. This study addresses both aspects, beginning with the development of an efficient mixed integer programming (MIP) model. The effect of different modeling choices is examined, and a new linear area approximation via non-equidistant support points is introduced. The improved formulation allows operators to set a maximum area deviation and distributes the deviation equally. Much of the problem’s computational complexity comes from the non-overlapping constraints. Two different sets of constraints are presented and compared. Additionally, the importance of a two-stage approach is established, and two solution approaches are proposed. A collection of common test instances is introduced based on which a set of computational tests are performed. The tests show that the MIP approach can efficiently solve small to medium-sized instances with up to 14 functional units. Test instances with 20 to 60 functional units were solved more efficiently by the heuristic. In addition to theoretical test instances, this research includes a real-world planning scenario from the industry. Both solution approaches were successfully adapted to solve the industry test instance. However, the heuristic tends to generate mostly infeasible layouts. This raises the question of whether the heuristic can provide any benefit for solving the facility layout design in practice. Despite this, the overall approach made it possible to improve both the mathematical optimality of the solution and also the applicability to the real-world scenario. Nevertheless, the newly generated layouts exhibit less structural transparency than the original layout, and further fine-tuning and acceptance efforts may be required to ensure the adoption of the solution.
Binarized Neural Networks (BNNs) are feed-forward neural networks with fully connected layers that purely use binary weights. The evaluation is highly efficient, which makes it attractive to use them in low-power settings since their performance is close to the performance of linear neural networks. However, robust networks are important, especially in settings that are safety-critical. So-called attacks on neural networks are small perturbation vectors that are added to the input that lead to different (pre-defined) results. Existing methods to generate attacks mostly use gradient-based methods, but the binary, non-differentiable nature of the BNNs renders gradient-based methods impossible. One more recent approach already makes use of mixed integer linear programming (MILP) models. In this body of work, we reproduce the results of existing approaches that make use of MILP models. We come up with new formulations and we show that they are stronger performance- and results-wise. Furthermore, we show that a reformulation of these models into a column generation approach does not seem to be practical and demonstrate an MILP model to alter the network to be robust against the previously generated attacks.
Many challenges in the industry and economy, e.g. production planning, vehicle routing, and territorial partitioning, can be modeled and solved with Mixed Integer Programming (MIP). MIP is a powerful modeling technique for mathematical optimization and feasibility problems. Solving an MIP problem is generally NP-hard. MIP solvers like Solving Constraint Integer Programs (SCIP) and Gurobi are based on the branch and bound algorithm. MIP solvers use a technique called primal heuristics to find feasible solutions, which can lead to the early pruning of the branch and bound tree. This approach can accelerate the solution process and, in some cases, even enables the discovery of an optimal solution. Examples of primal heuristics are rounding heuristics, diving heuristics, and large neighborhood search. A variant of the branch and bound algorithm is the branch and price algorithm, which exploits a reformulation of the original MIP via Dantzig-Wolfe Decomposition (DWD) to solve the optimization problem. Until recently, implementations of the branch and price algorithm used to be specific to a particular problem and its structure. Current state-of-the-art research and development enables a generic way to reformulate a given MIP and the corresponding structure into the DWD and solve the problem with branch and price automatically. An example of such a solver is Generic Column Generation (GCG), which is based on SCIP. It automatically detects, reformulates and solves a given MIP with branch and price. Primal heuristics, when applied in branch and price algorithms, may not fully exploit the reformulation’s structure in their generic form, or they might be customized for specific problems. Recent research explores the adaptation of generic heuristics to the branch and price algorithm in column generation-based MIP solvers. The thesis will investigate the effectiveness of using primal heuristics in mixed integer programming solvers based on the branch and price algorithm. This involves testing different primal heuristics to determine which ones result in the most efficient and accurate solving process, as well as analyzing the computational performance of the solvers when employing these heuristics. The goal of this research is to provide insight into the potential benefits and drawbacks of using generic primal heuristics in MIP solvers and to ultimately improve their effectiveness in solving complex optimization problems.
Dantzig-Wolfe decomposition is an algorithm that considers an integer program and convexifies a subset of constraints. Solving the resulting master problem leads to potentially stronger dual bounds than the respective linear programming relaxation. As the subset can be chosen arbitrarily, this includes the trivial cases of convexifying no and all constraints, resulting in the weakest and strongest reformulation, respectively. The strength of such reformulations is expressed through the associated dual bound and can impact the performance of corresponding algorithms. We review available measures of the strength of DW reformulations from the literature. Next, we discuss computational results of experiments on the strength presented, among others, by Bastubbe, Lübbecke, and Witt (2018). They observed that some classes of constraints, as defined by MIPLIB, have little to no effect on the dual bounds of DW decomposition when they are reformulated without other classes of constraints. This situation is called 'level 0' and means that only one particular constraint class is convexified. These observations are proven for the set-partitioning, set-packing, set-covering, cardinality, and invariant knapsack classes. We show that they never improve the dual bound when they are chosen to be convexified without other types of constraints. Next, the classical edge formulation for the stable set problem is analyzed regarding its strength. We characterize weakest and strongest possible Dantzig-Wolfe reformulations and present a detailed example that illustrates the results.
This work explores if scheduled service network design programs can effectively support the Deutsche Post DHL Group (DP-DHL) in the tactical planning of their service networks. The DP-DHL operates service networks using trucks to transport packages between regional terminals. For cost-efficient transportation, packages are consolidated at hubs, which requires close coordination of the operated services. The underlying combinatorial optimization problem is known as the scheduled service network design (SSND) problem. SSND programs can decide which services to operate and how to transport commodities based on these services to minimize the total operational costs. In the case of DP-DHL, a variety of real-world constraints and operational options need to be considered during the program development. The proposed SSND program is solved by a price-and-branch heuristic, where services and paths for commodity flows are generated based on time-space graphs in column generation schemes. The program simplifies the DP-DHL problem to apply SSND programs known from literature to the DP-DHL case. Within the simplified framework, the currently operated service network under consideration could be optimized in terms of costs. The linear programming relaxation solution values show significant optimization potential, which could be partially translated into mixed-integer solutions. It can be challenging to fully model the real-world complexity and to design large and robust service networks with SSND programs. However, the improvements achieved indicate that SSND programs could be used as part of advanced solution procedures that can overcome modeling and runtime challenges.
The benefits of Dantzig-Wolfe reformulation (DWR) for solving mixed-integer programs (MIPs) are well documented. For DWR to perform well, however, a certain structure must be present in the constraint matrix of the MIP. In most of today's solvers, the user must provide such structures to the solver. There are several proposals in the literature for automating structure detection, one of which rests on a very desirable premise: it requires no user input, i.e., no knowledge of the problem structure. The algorithm proposed in the paper "Structure detection in mixed-integer programs" by Khaniyev et al. detects a bordered block diagonal (BBD) structure. It proposes a metric for the quality of such BBD structures and implements a greedy approach based on community detection. We implemented this algorithm in the GCG solver, allowing us to retrace the most important design decisions and to test its performance.
Extracting special structures from coefficient matrices is a known practice to solve linear programs more efficiently. One of these extraction problems is the detection of maximum embedded reflected network matrices (DMERN). That is, finding a maximal submatrix of the coefficient matrix such that, after multiplying a subset of its rows by −1, the submatrix contains in each column at most one −1 entry and at most one +1 entry, while all other entries are zero. The goal of this bachelor's thesis is to give an introduction to the DMERN problem and to provide a collection and explanation of solution approaches that can be found in the literature.
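The reflected-network property itself is easy to state in code; the following small check is our own illustration, not a solution method for DMERN (which asks for a maximum such submatrix and row-sign choice):

import numpy as np

def is_reflected_network(submatrix: np.ndarray, negate_rows: np.ndarray) -> bool:
    """True iff, after flipping the signs of the chosen rows, every column
    contains at most one +1 and at most one -1 entry."""
    signs = np.where(negate_rows, -1, 1)[:, None]   # -1 for rows we flip
    scaled = signs * submatrix
    return bool(np.all((scaled == 1).sum(axis=0) <= 1)
                and np.all((scaled == -1).sum(axis=0) <= 1))

A = np.array([[1,  1,  0],
              [1,  0, -1],
              [0, -1, -1]])
print(is_reflected_network(A, np.array([False, True, False])))   # True: flip row 2
print(is_reflected_network(A, np.array([False, False, False])))  # False: column 1 has two +1s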
This thesis implements, documents and analyzes two different image-based approaches to detect structural similarity among mixed integer programs. The first approach detects structural similarity using the constraint coefficient matrix: the matrix is permuted using decomposition techniques of the Generic Column Generation solver (GCG for short) and is visualized as an image. Several instances of strIPlib are used to create these images, which are then used to train a convolutional autoencoder. The trained convolutional autoencoder can compute feature vectors that represent latent structural features of a mixed integer program; these feature vectors are then used to measure similarity among mixed integer programs. This approach is extended by adding the left-hand and right-hand side vectors and the objective coefficient vector to the constraint coefficient matrix, thus creating more insightful images. These images are easily created using the visualization tool for decompositions developed in PyGCGOpt. This procedure constitutes our second approach to detect structural similarity among mixed integer programs. Finally, the analysis shows that both approaches are equally well suited to detect structural similarity among mixed integer programs.
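The kind of convolutional autoencoder used for such images can be sketched as follows (architecture, image size, and latent dimension are illustrative assumptions, not the thesis' configuration):

import torch
import torch.nn as nn

# Encoder compresses a 1-channel 64x64 "matrix image" into a flat feature
# vector; the decoder reconstructs the image so the network can be trained
# with a plain reconstruction loss.
class ConvAutoencoder(nn.Module):
    def __init__(self, latent_dim: int = 32):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(1, 8, 3, stride=2, padding=1),    # 64 -> 32
            nn.ReLU(),
            nn.Conv2d(8, 16, 3, stride=2, padding=1),   # 32 -> 16
            nn.ReLU(),
            nn.Flatten(),
            nn.Linear(16 * 16 * 16, latent_dim),
        )
        self.decoder = nn.Sequential(
            nn.Linear(latent_dim, 16 * 16 * 16),
            nn.Unflatten(1, (16, 16, 16)),
            nn.ConvTranspose2d(16, 8, 3, stride=2, padding=1, output_padding=1),
            nn.ReLU(),
            nn.ConvTranspose2d(8, 1, 3, stride=2, padding=1, output_padding=1),
            nn.Sigmoid(),
        )

    def forward(self, x):
        z = self.encoder(x)        # z is the feature vector used for similarity
        return self.decoder(z), z

model = ConvAutoencoder()
images = torch.rand(4, 1, 64, 64)  # stand-in for rendered coefficient matrices
recon, features = model(images)
loss = nn.functional.mse_loss(recon, images)
loss.backward()

Similarity between two programs can then be measured, e.g., by the cosine similarity of their feature vectors.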
State estimation is essential for the stable and efficient operation of electric power systems and plays a vital role in power system monitoring and control. The increasing shift towards renewable energy sources (RES) has introduced bi-directional power flow in the electric power system, one direction from the transmission system and the other from the renewable energy resources, which can violate the network operating constraints and further lead to network failure. Therefore, state estimation in distribution networks has become very critical. Distribution System State Estimation (DSSE) algorithms compute the state variables of an electrical network, the complex voltages (magnitude and phase angle), using input measurements of bus active and reactive power injections, branch power flows, and bus voltage magnitudes taken from the system. The main challenge for implementing state estimation in distribution networks is the limited number of measurements. The system is generally unobservable (partially observable); one way to overcome this lack of real-time measurements is the employment of so-called pseudo-measurements to ensure the observability of the system and a reliable estimation. The validity of the state estimation therefore depends on the accuracy of the generated pseudo-measurements. The focal point of this thesis is to devise an alternative approach to generating and modeling pseudo-measurements for distribution system state estimation. In the proposed approach, artificial neural networks (ANNs) are used to generate pseudo-measurements, where a few real measurements are used in conjunction with typical load and generation profiles. The error between the load and generation profiles (the target output of the ANN) and the ANN output is modeled through a Gaussian Mixture Model (GMM). The state estimates of three different networks are computed using the pseudo-measurements generated with the proposed methodology and compared with the actual state estimation values to validate the quality of the state estimation results based on the predicted pseudo-measurements. We also demonstrate the proposed approach of pseudo-measurement modeling on a network with topological changes.
Energy grids are the backbone of every developed country. Currently, the grid topology is considered to be fixed. Topological changes are mostly executed in case of overloads or violations of safety margins. Thus, primary measures to control the power grid are market-based ones, like actively steering power plants. Recent research also identified the potential of optimizing the power flow including topological measures, like transmission element switching. The mathematical complexity of this formulation prohibits its practical usage. Currently, results are either obtained too slowly or violate critical security requirements using heavily simplified models. This work presents implementations of the traditional problem formulation and their restrictions. Alternative solution methods like nearest neighbor and reinforcement learning are implemented and compared against traditional optimization methods.
In this work, we explore combinatorial attacks on (binarized) neural networks. The "attacks" are small changes to the input data, such that the network misclassifies the input. To construct such combinatorial attacks, mixed-integer programs are used instead of the gradient of the network, which is usually employed. In the first part of the thesis, we reproduce the IProp attack proposed by Gupta et al. in "Combinatorial Attacks on Binarized Neural Networks"; binarized neural networks are artificial neural networks whose weights are restricted to 0 and 1. We compare our results with those reported in the paper and analyze challenges in the implementation of the proposed method. In the second part, we extend the attacks to continuous neural networks and compare LP-based attacks with gradient-based attacks. We answer the question in which scenarios the LP-based approaches offer better performance and evaluate the strength of our formulation. Furthermore, we analyze whether successful "attacks" can be transferred to other network architectures and whether LP-based attacks generalize better than gradient-based attacks.
In the course of planning railway transportation systems, multiple problems have to be solved. Typically, those problems are solved sequentially in different stages, but since the planning stages are highly dependent on each other, it is beneficial to follow a more integrated approach. Inspired by the problem statement of the informatiCup 2022 "Abfahrt!", the integrated problem of creating a feasible train timetable with an associated scheduled passenger routing that minimizes the overall delay of all passengers is investigated. This means that we direct trains and passengers individually through a public transportation network (PTN) while respecting all capacity constraints at all times. This problem turns out to be strongly NP-hard. We propose different solution approaches based on integer linear programming and implement them to analyze their performance on examples. As a result, the approach based on a space-time graph representing actions on the original PTN outperforms the other presented concepts.
This thesis investigates the behavior of Dantzig-Wolfe decompositions with regard to constraint classification. Instead of considering all possible constraint subsets as potential decompositions, classification allows considering significantly fewer decompositions; thus, analyses of larger instances become possible. The thesis examines whether one can derive a rule for the strength of the dual bounds based on constraint classification.
Collection and Dispute Management, a part of the broader Order to Cash process, is responsible for collecting credit back from the customer and improving the cash flow. Bayer's collections process suffers from a severe overdue problem, owing to delayed payment by customers. The current reactive nature of the collection process poses a significant challenge for reducing the average delinquent days; hence, a proactive collection strategy with a specific collection action for each customer is the need of the hour. This thesis focuses on understanding the business problem and providing an analytical solution for the collection team of one of the countries. The solution employs machine learning via two modules: invoice prediction and days-to-pay prediction. Finally, the Predictive Worklist module is the decision model that ensembles the results from the other two modules and the behavioral characteristics of the customer to create a recommended action for each customer. A detailed analysis of the problem and of the performance of those modules is given. The proposed model serves as a basis for extending the project to other countries.
The accurate determination of the measurement uncertainty is important in the production environment to avoid economic losses due to erroneous decisions caused by measurement errors. A prerequisite for determining the measurement uncertainty is a description of the measurement process with the help of a model. Artificial neural networks (ANNs) have the potential to create a model that validly describes the measurement process. However, there are no guidelines for the selection of the hyperparameters and topology of the ANNs used in this context. In this master's thesis, a method for optimized hyperparameter and topology selection for model building in measurement uncertainty determination using ANNs was developed. The optimization of the hyperparameters and topology of an ANN was first formally described as a mathematical optimization problem. To solve the problem, different algorithms (including classical algorithms and genetic algorithms) were identified and adapted to the problem of modeling in the determination of measurement uncertainty. Finally, the methods were evaluated using several measurement data sets.
In this Master Thesis, we investigate Mixed Integer Programs (MIPs) for optimally solving the problem of covering rectilinear polygons with a minimum number of axis-parallel rectangles. Since larger instances involve millions of possible rectangles, inducing long solving times, it might be necessary to choose a MIP formulation that can be solved with column generation. This allows us to start with a basic feasible solution and dynamically add new rectangles that improve the current solution. In general, solutions to this NP-hard problem find applications in the fabrication of DNA chip arrays, in VLSI design, and in image compression. In particular, aiming at image compression, other shapes like ellipses will be tested to improve the compression rate. Additionally, we investigate an approach that allows the model to select the shapes on its own to achieve even higher compression rates. Furthermore, the MIPs can be extended to allow for lossy image compression. This thesis will elaborate and implement the introduced approaches to test them for their computational feasibility.
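For reference, the core covering model has the usual set-covering shape (a standard formulation, our notation): with \(P\) the set of unit cells of the polygon and \(\mathcal{R}\) the set of admissible rectangles,
\[ \min \sum_{R \in \mathcal{R}} x_R \quad \text{s.t.} \quad \sum_{R \in \mathcal{R} :\, p \in R} x_R \ge 1 \;\; \forall p \in P, \qquad x_R \in \{0,1\} \;\; \forall R \in \mathcal{R}. \]
Column generation avoids enumerating all of \(\mathcal{R}\) by pricing out new rectangles with negative reduced cost.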
The nesting problem belongs to the class of cutting and packing problems. A two-dimensional cutting problem is called a nesting problem when the two-dimensional objects that need to be cut are non-rectangular. The objective is usually to minimize waste by utilizing the available material in the most efficient manner. Nesting problems are often encountered in manufacturing industries such as the wood, steel, and textile industries. The geometric properties of irregularly shaped objects make solving nesting problems in practice very challenging. In this thesis, we cover the existing approaches to solving nesting problems as well as the geometric tools utilized by these approaches. Additionally, we explore a methodology combining two different strategies, namely a heuristic algorithm and a mixed-integer programming model.
Reduced cost variable fixing (RCVF) is a classic technique for reducing the problem size when solving mixed-integer programs. It allows fixing variables whose reduced costs exceed the absolute gap. Reformulating problems with Dantzig-Wolfe decomposition and solving them with branch-price-and-cut (BP&C) algorithms often leads to tight dual bounds and small absolute gaps, but applying RCVF within these algorithms is not straightforward. Fixing variables of the reformulation is not very promising, while fixing variables of the original problem is not immediately possible, as no reduced costs are known for them. This thesis explores various ways to compute reduced costs for the original variables and thereby apply RCVF in BP&C, both from a theoretical and a computational perspective.
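For orientation, the underlying fixing argument is the standard one (textbook reasoning, our notation): let \(z_{\mathrm{LP}}\) be the dual bound of the LP relaxation, \(z^*\) the incumbent value of a minimization problem, and \(\bar c_j \ge 0\) the reduced cost of a nonbasic integer variable at its lower bound \(\ell_j\). Any feasible solution with \(x_j \ge \ell_j + 1\) has objective value at least \(z_{\mathrm{LP}} + \bar c_j\), hence
\[ \bar c_j > z^* - z_{\mathrm{LP}} \;\Longrightarrow\; x_j = \ell_j \text{ in every improving solution,} \]
so \(x_j\) can be fixed. The difficulty addressed in the thesis is that in BP&C no such \(\bar c_j\) is directly available for the original variables.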
In this thesis we study various relaxations of integer linear programs, with particular attention to their application to the stable set problem. We analyze Dantzig-Wolfe reformulations, the corner relaxation and its variations, the Sherali-Adams relaxation, and the Lovász-Schrijver relaxation with respect to the following questions: How tight is the relaxation? Can the convex hull be determined? How high is the computational effort? How might these factors depend on the structure of the graph? Can the problem be reduced to a subproblem? Can different methods be combined? Can the methods be applied to other problems as well?
Texts written in natural language are an unstructured data source that is hard for machines to understand, and the amount of text on the world wide web grows every minute. Automated text analysis is crucial for dealing with this huge amount of unstructured data. Natural Language Processing (NLP) is the part of artificial intelligence that makes natural language texts comprehensible to machines; NLP techniques offer automated means for preprocessing, text classification, feature extraction, and topic modelling. Social networks like Facebook or Twitter and applications like Amazon or eBay have transformed numerous aspects of our lives. The impact of these platforms is no longer limited to entertainment or the personal presentation of individuals; indeed, they have formed a new sphere of business, with some emerging companies focusing primarily on these platforms and other, more traditional ones expanding their marketing efforts there. This thesis aims to efficiently categorize, filter, and detect text in customer-generated content on social networks and in application reviews. A key segment of this effort is sentiment analysis: the goal is to create different models for the sentiment of posts, comments, reviews, and other forms of reactions users generate in relation to a company or a product, and on that basis to examine computational performance and results.
To answer the question of whether an investment in an existing production system pays off, both configurations must be compared by means of key performance indicators. To this end, the production structure is abstracted as machines that are linked via buffers. The buffers absorb machine failures so that further work steps do not come to a standstill due to a lack of material or a lack of output capacity. A buffer can, for example, be a conveyor belt on which intermediate products can accumulate. Furthermore, the model is to capture convergent and divergent production flows as well, i.e., machines can be connected to more than two machines and can thus describe, for example, a final assembly. To evaluate different configurations, one can use the throughput as an indicator, which describes the production volume per unit of time. The efficiency, i.e., the throughput compared to the smallest production rate among the machines, is also of interest. To measure these values in reality, however, the production structure would have to be rebuilt, which entails considerable costs. Hence, one must resort to a simulation or a numerical method. This thesis explains algorithms from the literature for evaluating convergent and divergent production structures. The input is a network of machines, where the machines are described by key figures for production rate, failure times and repair duration, and the buffers are defined by their capacity. Furthermore, the focus lies on the application of these algorithms in practice, in particular in comparison to simulation; to this end, one algorithm is implemented as a prototype. At the beginning of the thesis, the approaches of the various algorithms from the literature are explained. Subsequently, the way the algorithms proceed is explained, shedding light on the mathematical background. The final section deals with the quality of the implemented algorithm's evaluation and with its running time.
Recent times have seen the rise of a research current that aims to solve classical Combinatorial Optimization (CO) problems with the help of Machine Learning (ML), and specifically, in this work, Deep Reinforcement Learning (RL) methods. One of the problems where these two exciting fields meet is the question of how to guide the branching process within a branch-and-bound algorithm, where sequential decisions must be made in order to reach an optimal solution. To this end, the goal of the current work is to gain insight into the state-of-the-art methodologies dealing with this problem and, through an implementation and an experimental study, to provide a comprehensive view of the impact of employing RL to aid the solving of CO problems.
Climate protection measures in many sectors are currently of great societal and political interest in order to curb the anthropogenic greenhouse effect. The transport sector plays a decisive role here, as it is among the world's largest emitters of greenhouse gases. As part of the mobility transition, the aim is to largely replace the hitherto predominant combustion engines with hybrid or fully electric vehicles in the coming years. This goes hand in hand with the expansion of the necessary charging infrastructure and decisions regarding the location and size of charging stations. In this master's thesis, the problem is modeled mathematically with methods of discrete optimization, based on the Temporal Facility Location Problem. This is a temporal extension of the classical facility location problem in which each customer is assigned to a facility only for a given time interval. The new restriction is that the capacities of the facilities must not be exceeded at any point in time. In a detailed theoretical analysis, various variants of the basic deterministic Temporal Facility Location Problem are first examined. In addition to a complexity analysis for each variant, approximability is investigated, and various mathematical formulations are derived and compared. The problems are then examined in their online variants, and results concerning competitiveness are presented. Finally, realistic characteristics of the application domain are integrated into the problem definition through various modifications, and another detailed theoretical analysis is carried out.
The use of machine learning methods such as artificial neural networks or clustering algorithms is becoming more and more popular in the optimization community. Connecting to this trend, we propose a method that, instead of tuning mixed-integer programming solvers' behavior, facilitates testing (modified) solver behavior. Utilizing dynamic clusterings and solving mixed-integer programs, we select and visualize diverse subsets of the test data available in the structured integer programming library strIPlib that are just as representative but much smaller. The outcome comprises not only a strIPlib collection and a 'general-purpose' benchmark test set, but also a method to generate completely customized, diverse experiment test sets, tailored to the experiment that is to be executed with them.
Mixed-integer programming (MIP) is a common technique to model optimization problems. Such models can be solved with the decomposition approach of Dantzig-Wolfe reformulation (DWR) and the branch-cut-and-price (BC&P) algorithm. Many state-of-the-art software packages for solving MIPs provide scripting interfaces to interactively model and optimize these problems. We introduce PyGCGOpt, a Python interface for the generic BC&P solver GCG. The interface allows modeling MIPs, constructing and visualizing decompositions, and implementing solvers for the pricing problems that arise from DWR. We provide example applications of our interface and show that it interacts with the solver without a significant decline in performance.
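A hedged usage sketch of such an interface (the toy model is ours; method names follow the PySCIPOpt conventions that PyGCGOpt extends, so treat the details as illustrative rather than authoritative):

from pygcgopt import Model

m = Model("binpacking-toy")
items, capacity, n_bins = [4, 3, 3, 2], 6, 3
x = {(i, b): m.addVar(vtype="B") for i in range(len(items)) for b in range(n_bins)}
y = {b: m.addVar(vtype="B") for b in range(n_bins)}
for i in range(len(items)):                      # every item goes into exactly one bin
    m.addCons(sum(x[i, b] for b in range(n_bins)) == 1)
for b in range(n_bins):                          # one knapsack per bin: block structure
    m.addCons(sum(items[i] * x[i, b] for i in range(len(items))) <= capacity * y[b])
m.setObjective(sum(y[b] for b in range(n_bins)), "minimize")
m.optimize()                                     # GCG detects and reformulates itself
print("bins used:", m.getObjVal())

The per-bin knapsack constraints form the block structure that GCG's detection can pick up and reformulate via DWR.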
The classical job-shop scheduling problem consists of finding an optimal assignment of pending tasks to existing machines. The optimality of a solution can be defined in different ways, but in most cases it aims at the fastest and/or most cost-effective processing. This thesis extends the classical model by a dynamic component, following a problem setting borrowed from practice: newly arriving tasks are to be integrated into an already existing solution. The resulting situation is modeled separately and considered with respect to its own optimality criterion.
Dantzig-Wolfe decomposition is a well-known method for reformulating mixed integer programs that exhibit a certain structure, thereby obtaining better bounds on the optimal objective value. Dantzig-Wolfe decomposition is therefore employed in branch-and-bound, where the running time depends on the quality of the computed bounds. JuMP is a modeling language for mathematical optimization problems embedded in the programming language Julia; Coluna is a framework that extends JuMP and enables the application of a Dantzig-Wolfe decomposition. In Coluna, however, the user must describe the problem structure that makes a Dantzig-Wolfe decomposition applicable. We describe how this task can be taken off the user's hands and present our implementation of an automatic Dantzig-Wolfe decomposition for Coluna. The automation of the Dantzig-Wolfe decomposition proceeds in two steps. First, we explain how a set of candidate structures for a given mixed integer program can be identified via the index sets of its constraints. In the second step, we present various scores from the literature that aim to identify, among a set of candidates, the structure that is potentially best suited for a Dantzig-Wolfe decomposition.
The selection of the optimal decomposition for Dantzig-Wolfe reformulations is a difficult task for a generic MIP solver. Determining the structure of the program can provide valuable information for this process; therefore, detection algorithms are used to extract the characteristics of the MIP. In this thesis, a new approach is introduced that classifies the MIP as a whole, in contrast to the commonly used constraint and variable classification. A graph is used to summarize and analyse the extracted features of the MIP. This graph is generalised by estimating the index sets that were used for the creation of the MIP. Afterwards, the comparison with other graphs, which represent different types of MIPs, can result in a categorization of the examined MIP. The implementation of this approach showed promising success when tested with multiple different types of MIPs. For well-structured MIPs, the algorithm was able to reliably detect the type of the MIP and recommend decompositions accordingly. Problems remain in the handling of small irregularities in the MIPs and of structure changed by the presolving process. Therefore, further improvements to the robustness of the algorithm could enhance the classification quality.
Visual Speech Recognition, also called lipreading, is the task of interpreting speech and predicting text by only analyzing the movements of a speaker's mouth. Lipreading is a difficult task for both humans and computers. The field of Machine Learning and its promising sub-field Deep Learning have proven to be successful in tackling complex problems. Recent advancements have enabled lipreading systems to use deep learning models that are trained end-to-end. This thesis describes the development of such a deep-learning-based lipreading system, which as an end goal will aid temporarily speech-impaired patients in the intensive care units of hospitals in communicating more efficiently. It approaches the problem by breaking it down into two stages that are trained separately. The first stage predicts audio features from video frames, and the second stage takes the predicted audio features and predicts the spoken text. The available audio data can thus be utilized for training the system. The lipreading system developed using this approach was able to achieve results close to the popular baseline model LipNet. Further, the collective behavior of the two stages of the model was evaluated. The presented insights serve as valuable input for future research in the broader project to which this thesis belongs.
Owing to the energy transition, it is inevitable that the European electricity market will change significantly over the next years. To predict and react to these changes, the European power grid is simulated by means of linear programming. This thesis was written in cooperation with Consentec GmbH, where such simulations are carried out. The demands on the accuracy of the simulation have grown in recent years. Specifically, power flows are no longer to be modeled solely at the level of individual countries or zones (NTC), but at the level of transmission lines (flow-based). This gives rise to a very large number of constraints, the vast majority of which, however, do not change the feasible region. Various ideas for dealing with these constraints are discussed. Above all, an approach is presented that considerably speeds up filtering out the relevant constraints.
In the field of mixed-integer programming, selecting the right variables to branch on can have a huge impact on the size of the branch-and-bound (B&B) tree, and thus on the performance of the whole algorithm. Full strong branching, i.e. selecting variables by fully evaluating the child nodes that would be created for each variable, is good at building smaller trees, but is usually not used on its own in B&B, as the computational effort often outweighs the reduced tree size. Instead, it can be combined with other heuristics, resulting in some of the most successful heuristics for general problems, like hybrid pseudocost/strong branching or reliability branching. As evaluating individual nodes in branch-and-price (B&P) generally takes longer than in B&B due to column generation, both the cost of using strong branching and the benefits of having a small tree are emphasized. This potentially changes the relative performance of existing selection heuristics, and gives opportunity for new ones. One such heuristic is hierarchical strong branching, which combines other heuristics with strong branching with and without column generation in a hierarchical fashion. We extend hierarchical strong branching, and among other things combine it with hybrid pseudocost/strong branching and reliability branching. Then, we evaluate and compare the performance of various strong branching-based and several other candidate selection heuristics, both for original variable branching and Ryan-Foster branching.
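A common ingredient shared by the heuristics named above is the strong branching score that compares candidate variables; a widespread choice is the product rule, sketched here (our illustration with the usual \(\varepsilon\)-guard, not the thesis' exact scoring):

# Product-rule strong branching score: for a candidate variable, combine the
# dual bound gains of the two tentative children; larger scores are better.
EPSILON = 1e-6

def product_score(gain_down: float, gain_up: float) -> float:
    """gain_down/gain_up: bound improvements of the down- and up-branch."""
    return max(gain_down, EPSILON) * max(gain_up, EPSILON)

candidates = {"x1": (0.8, 0.1), "x2": (0.4, 0.5), "x3": (0.0, 2.0)}
best = max(candidates, key=lambda v: product_score(*candidates[v]))
print(best)  # x2: balanced gains beat one-sided ones under the product rule

In branch-and-price, computing these gains exactly requires column generation in the tentative children, which is precisely why hierarchical and reliability-based variants are attractive.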
Modeling production as a mixed integer linear program (MILP) allows exploiting periods of low electricity prices to save variable costs. Complicating aspects such as perishable materials significantly increase the solving time of such optimization problems. This in turn often leads to untapped potential in solution quality, which is why more efficient solution processes are needed. For recurring production planning, the same MILP is solved repeatedly, with only the input data changing; yet, experience about patterns in the solution process is usually not used. In this thesis, a production planning problem with perishable intermediates is considered. It is modeled using lazy constraints, i.e., complicating inequalities are separated and added on the fly during the solution process. It is then shown that a deep neural network can predict violated lazy constraints with high precision, leveraging experience from previously solved instances. The new model outperforms both the fastest conventional model and the plain lazy method, with up to 34% less solution time.
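The lazy-constraint mechanism described here follows a standard callback pattern; a minimal gurobipy sketch (a generic toy illustration; the thesis' production model and its separation routine are not reproduced):

import gurobipy as gp
from gurobipy import GRB

m = gp.Model("lazy-toy")
x = m.addVars(3, vtype=GRB.BINARY, name="x")
m.setObjective(x.sum(), GRB.MAXIMIZE)

def separate(model, where):
    # Called at every candidate incumbent; the complicating inequality is
    # added only if the candidate solution actually violates it.
    if where == GRB.Callback.MIPSOL:
        vals = model.cbGetSolution(model._x)
        if sum(vals.values()) > 2 + 1e-6:        # stand-in "complicating" cut
            model.cbLazy(gp.quicksum(model._x.values()) <= 2)

m._x = x
m.Params.LazyConstraints = 1   # required so Gurobi treats the model correctly
m.optimize(separate)
print(m.ObjVal)                # 2.0: the lazily added cut is respected

The learning idea of the thesis amounts to predicting, before solving, which of these inequalities will be separated anyway, and adding them up front.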
Laser Powder Bed Fusion (LPBF) is an additive manufacturing method that can create 3D objects by melting metal powder layer by layer with lasers. During this process, several aspects can be subject to optimization. First of all, it is of interest to find the optimal number and configuration of lasers for a given component. Additionally, an optimal path of the process head over the substrate area is to be determined and, lastly, the workload must be distributed to the scanners inside the process head. These problems are modelled as Mixed-Integer Programs and optimized using the Gurobi solver.
The master thesis deals with a classical optimization problem from the financial world: portfolio optimization for banks. It considers the tension between banks aiming to maximize their interest income while being obliged to comply with certain regulatory requirements. On the one hand, the focus of this work was on a functional design of the optimization model as well as on the technical realization of the additional pre-processing that was developed for the optimization. On the other hand, the work includes setting up and implementing a mathematical model based on selected regulatory metrics. A linear optimization model was set up that includes the metrics NII (Net Interest Income), EVE (Economic Value of Equity), and NSFR (Net Stable Funding Ratio) as well as the capital ratios and the balance sheet. The work was technically supported by a provider of software products in the financial sector, and the optimization was based on result data from their standard software solution for bank reporting.
This thesis deals with the theoretical and practical analysis of mixed integer programs for orthogonal packing problems. The goal of the work is to study the packing problem known in the literature as the Multiple Bin Size Bin Packing Problem and to give solution algorithms. In particular, the condition that items must be placed in the packing without overlapping causes problems in several dimensions, making it difficult or impossible to extend techniques from the one-dimensional case. For this reason, different modelling techniques for satisfying this condition are first presented on the Orthogonal Packing and Orthogonal Knapsack subproblems and compared both theoretically and practically. It is shown that compact models based on relations provide the best practical results, although other modelling techniques such as discretisation have better theoretical properties. The techniques that are successful in practice are then extended in various ways to the multiple bin size bin packing problem, and the possibilities in this respect are again compared in practice. The resulting comparison shows that the most compact models give the best practical results.
The Nord-Ostsee-Kanal, also known as the Kiel Canal, is the most heavily trafficked waterway in the world. The canal is navigable in both directions but single-lane in many places, which makes route planning a complex task that has so far been carried out by hand. In my thesis, I present a working global approach that, compared to the existing solutions, saves on average several percentage points of total waiting time for the shipping companies. The effects of different formulations are discussed, and a problem-specific preprocessing as well as strengthenings of the formulation are presented. A new approach attempts to generate solutions that work well in practice and, at the cost of five additional minutes of waiting time per ship, can halve the complexity of the routes. Finally, I present an extended branch-and-price algorithm for the Dantzig-Wolfe reformulation, which in tests successfully beats the MIP solution on some instances.
The ability to predict future demand accurately is of essential importance in supply chain management. This includes the prediction of sales in retail during promotion periods, where the number of sales differs greatly compared to non-promotion periods. The goal of this master thesis is to increase the accuracy of stock keeping unit (SKU) sales prediction models during promotion periods in a retail setting. Different promotional factors, together with interaction effects between different articles on promotion and consequences of the COVID-19 pandemic, are analysed to achieve this goal. This introduces challenges in the form of high dimensionality and sparsity, which are met with a multiple linear regression model. Besides an accurate prediction, which is achieved by incorporating these features, this model allows drawing conclusions about causal relations between the influencing factors and the number of sales. The model is applied to data provided by a Dutch supermarket to evaluate its performance.
Location planning over a time horizon is a common task and therefore needs efficient solving methods. Experiments with temporal extensions of related problems indicate that branch-and-price algorithms might also be suitable for solving the temporal facility location problem (TFLP). This is supported by the reproduction of the results of two papers on related temporal problems, namely the temporal bin packing problem and the temporal knapsack problem. The knowledge obtained from these results is then used to create several TFLP instances with different parameter scenarios. The reproduction as well as the solving of the TFLP instances is performed with GCG and SCIP as branch-and-price and general-purpose solvers, respectively. In the experiments conducted in this thesis, the instance parameter with the biggest influence on the solving times is the opening cost, but having uncapacitated or capacitated facilities is another important factor. Comparing the results of GCG and SCIP on these scenarios leads to the conclusion that in most cases the general-purpose solver SCIP is the faster alternative. The branch-and-price solver GCG is useful for large instances with high opening costs, and for some instances with uncapacitated facilities.
Machine learning methods, and deep learning in particular, have seen many use cases in almost every domain over the last couple of years. Among those is the domain of discrete optimization, where machine learning has been applied to heuristically improve some algorithms. Since similar models in discrete optimization can also be handled or processed similarly, we utilize a Graph Convolutional Network (GCN) to predict which problem class a Mixed Integer Program (MIP) belongs to, such that suitable algorithms can be applied to a model. To be exact, the natural variable-constraint bipartite graph representation of MIPs is used to train a GCN for classification. The trained GCN is additionally able to represent any MIP by a fixed-size feature vector, which can be used to calculate how similar different MIPs are to each other. The process of training the GCN is documented, and our final model is evaluated by comparing its performance to its competition. In addition, some experiments are conducted to better understand which features and which parts of the network are important for good classification results.
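One bipartite message-passing step of the kind a GCN performs on the variable-constraint graph can be sketched as follows (sizes, features, and the aggregation rule are illustrative assumptions, not the thesis' architecture):

import torch
import torch.nn as nn

# Constraint nodes aggregate the features of incident variable nodes through
# the (degree-normalized) incidence matrix of the bipartite graph.
class BipartiteConv(nn.Module):
    def __init__(self, var_dim: int, cons_dim: int, out_dim: int):
        super().__init__()
        self.lin_var = nn.Linear(var_dim, out_dim)
        self.lin_cons = nn.Linear(cons_dim, out_dim)

    def forward(self, h_var, h_cons, incidence):
        # incidence[i, j] = 1 iff variable j appears in constraint i
        deg = incidence.sum(dim=1, keepdim=True).clamp(min=1.0)
        msg = incidence @ self.lin_var(h_var) / deg   # mean over incident vars
        return torch.relu(self.lin_cons(h_cons) + msg)

n_cons, n_var = 5, 8
h_var = torch.rand(n_var, 4)     # e.g. objective coefficient, bounds, type
h_cons = torch.rand(n_cons, 3)   # e.g. sense, right-hand side
A = (torch.rand(n_cons, n_var) > 0.5).float()
layer = BipartiteConv(var_dim=4, cons_dim=3, out_dim=16)
h_cons_new = layer(h_var, h_cons, A)
graph_embedding = h_cons_new.mean(dim=0)  # fixed-size vector for the whole MIP

Pooling the node embeddings into one vector, as in the last line, is what yields the fixed-size MIP representation used for similarity comparisons.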
In this bachelor thesis, the optimisation of the airplane boarding process is investigated. Given are a set of cabin layouts and a set of passengers with different movement speeds and amounts of carry-on luggage. To improve the overall boarding time, each passenger has to be assigned a seat and each piece of carry-on luggage a luggage space. Passenger satisfaction places further requirements on the boarding process. The assignment problem is formulated as a mixed integer program (MIP) and differentiates itself from previous works by directly optimising the overall boarding time, in combination with different numbers of carry-on items per passenger as well as limitations of the carry-on luggage space. To investigate the assignment problem, the formulated mathematical model as well as the developed heuristics are implemented. Experiments with different configurations show a strong correlation between the overall boarding time and the amount of carry-on luggage to be stowed in the airplane cabin. The best solutions are achieved using the heuristic \(H_{wtw}^{v2}\), giving excellent boarding times in addition to a high amount of stowed carry-on luggage, reaching close to maximum values.
Emission-free mobility is becoming increasingly important, and electric vehicles are indispensable for it; this raises new questions and challenges. The iMove project deals, among other things, with the "balanced charging problem (BCP)": a platform is being developed that is to assign the charging processes of electric vehicles optimally to the charging stations of the electrical grid. Optimality is twofold for this problem: on the one hand, the wishes of the electric vehicle drivers are to be fulfilled; on the other hand, the load on the electrical grid should remain as balanced as possible. A MIP was developed for this purpose. This master's thesis deals with the problem that electric vehicle drivers may ignore the platform's assignment to a charging station and not follow it. While robust approaches do not seem ideally suited to this kind of uncertainty, it turns out that online approaches offer a good alternative for dealing with deviating drivers. The best quality is achieved by online approaches supported by the offline solutions. The online BCP makes it possible to find a feasible solution despite deviating drivers; however, it becomes apparent that the quality of the online BCP still leaves room for improvement.
The currently ongoing energy transition increases the share of renewable energies in the electricity mix. In Germany, this is covered in particular by wind and solar energy, which are considered volatile. Since a stable power grid must guarantee a balance between the electricity produced and the electricity demanded at all times, this development presents new challenges for electricity suppliers and grid operators. One approach to counteract the fluctuation in the energy supply network is demand-side management (DSM). DSM refers to the management of demand for network-based services by customers in industry, trade and private households. Electricity suppliers benefit from DSM by increasing the reliability of supply, while electricity consumers can reduce their energy costs benefiting from financial incentives. This Master's thesis investigates, with regard to demand-side management, how optimized production planning can reduce the energy costs of a general energy-intensive production process. For this purpose, a mixed-integer linear model is developed and set up. In particular, two new possibilities for cost reduction will be modeled: on the one hand, the exploitation of volatile electricity prices by load redistribution, on the other hand, the provision of balancing power. In production planning, the various framework conditions of the production process and uncertainties in some model parameters, such as the demand for balancing power, must be taken into account. Therefore, methods from the field of robust optimization are applied as a solution for the optimization problem.
With an increasing number of man-made objects in space, the area around Earth is getting increasingly polluted. Due to the chance of collisions, orbiting debris puts future missions at risk. The Kessler syndrome projects a cascade of debris collisions that results in an exponential increase in the number of debris pieces. In order to avoid a catastrophic scenario, space agencies such as the European Space Agency propose missions for the active removal of space debris. Depending on the removal method, one mission aims to remove multiple debris objects. Thus, during mission design, a combinatorial problem needs to be solved that decides the selection of debris objects and the order of space rendezvous. The problem setup shares similarities with a variant of the traveling salesman problem with city selection, where the goal is to maximize the profit of the visited cities within a limited traveling budget. Since the debris orbits are exposed to perturbations, it is assumed that the costs of traveling between the objects change over time. This thesis explores various mixed-integer programming formulations for both constant costs and costs that are dynamic over time. For the static case, we are able to find optimal routes even for bigger debris clouds of more than 2000 objects. The proposed formulations for the dynamic case are able to solve large instances when the time parameters are selected carefully. Moreover, we have implemented a genetic algorithm to further explore the solution space.
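The city-selection analogy can be made concrete with an orienteering-style model (schematic, our notation; tour-connectivity constraints such as subtour elimination are omitted):
\[ \max \sum_{i \in V} p_i\, y_i \quad \text{s.t.} \quad \sum_{(i,j) \in A} c_{ij}\, x_{ij} \le B, \qquad x \text{ encodes a tour over } \{\, i : y_i = 1 \,\}, \]
where \(p_i\) is the profit of a rendezvous with debris object \(i\), \(c_{ij}\) the transfer cost between objects \(i\) and \(j\) (time-dependent in the dynamic case), and \(B\) the mission budget.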
In this thesis, we investigate possible optimisations of the aeroplane boarding process. To accomplish this, we define and mathematically model the process, formulate it as a minimisation problem, and propose a solution procedure. We show that the problem is computationally hard in a theoretical sense and investigate the quality of heuristic approaches as well as some of its online properties.
The boarding process is of particular interest for airlines seeking to improve profitability as well as customer satisfaction. Since boarding policy changes pose a lower implementation barrier than buying new planes or changing existing infrastructure, advances in aeroplane boarding research have the potential to benefit customers and airlines almost immediately.
This thesis aims to evaluate the boundaries of neural network training using mixed-integer optimization. Neural networks are a powerful machine learning technique that is able to classify data non-linearly and is applied to a lot of different tasks. However, traditional approaches to neural network training can get stuck in local minima, while algorithms exist that solve a mixed-integer optimization problem optimally. The goal of this thesis is therefore to evaluate neural network training in the context of mixed-integer programming. We start by showing which kinds of activation functions can be expressed by a mixed-integer program (MIP). After that, variants of neural network training using mixed-integer programming are presented, and it is shown which activation functions can be incorporated into these variants. Finally, we use a blackbox solver to apply the resulting models to the parity function and evaluate the performance of our approaches.
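The standard building block for expressing activation functions in a MIP is the big-M encoding of the ReLU function \(y = \max(0, x)\) (a textbook formulation, not necessarily the exact one used in the thesis): given bounds \(-M \le x \le M\) and a binary variable \(z\),
\[ y \ge 0, \qquad y \ge x, \qquad y \le x + M(1 - z), \qquad y \le M z, \qquad z \in \{0, 1\}. \]
Setting \(z = 1\) enforces the active branch \(y = x\) (and thus \(x \ge 0\)), while \(z = 0\) enforces the inactive branch \(y = 0\) (and thus \(x \le 0\)).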
We introduce an optimization problem on graphs called the Connected Vertex Clustering Problem (CVCP). The input consists of a finite graph, arbitrary linear constraints that restrict the set of feasible clusters, and a linear objective function. The desired solution is a vertex clustering that optimizes the objective under the condition that each cluster induces a connected subgraph and satisfies the custom constraints. Besides partitional clustering, i.e., node partitioning, the solution may also be restricted to packings or coverings of nodes. We show that this highly configurable problem is NP-hard and propose a branch-and-price method as a solution approach. The suggested method is implemented as a framework that is capable of solving arbitrary CVCP instances. The framework can easily be extended with new features due to its plug-in architecture, which allows exploiting the characteristics of specific variants of the CVCP in order to enhance the efficiency of the solution process. We evaluate the developed framework on a districting problem for the German federal elections and on the Odd Cycle Packing Problem.
Dantzig-Wolfe (DW) reformulation is a well-known approach to produce strong dual bounds for Mixed-Integer Programs (MIPs). If a MIP has a special structure that can be exploited well by a DW reformulation, the solver can be much faster on the reformulated model than on the original; otherwise, the solver may fail completely. Providing a strong DW reformulation typically requires enhanced knowledge about the underlying model structure of the MIP, which makes an automatic reformulation without such knowledge difficult. Despite this difficulty, several approaches have been proposed that implement such an automatic DW reformulation process in a state-of-the-art MIP solver, most notably the Generic Column Generation (GCG) framework [1]. In such frameworks, the detection of a decomposition that exploits the underlying model structure is one of the most important aspects. GCG, for example, uses different detectors that employ heuristics and clustering algorithms to generate a set of possible decompositions. Recently, supervised learning algorithms have been applied by Kruber et al. [2] to decide whether a reformulation of the model should be solved (and, if several have been detected, which one) or whether the original model should better be solved by a standard solver. These first promising results showed that machine learning could be a useful support in this decision process. Besides this single question, there might be other decisions in the entire process that could also benefit from the use of supervised machine learning.
The bachelor thesis deals with solving multi-commodity flow problems. These kinds of problems occur in different everyday situations, for example the transport of goods in distribution networks or the movement of messages in communication networks. In addition to the often considerable size of these flow problems, there can be further complicating constraints on nodes, edges, or commodities. In this work, an alternative solution approach for minimum-cost multi-commodity flow problems is developed, based for the most part on the computation of k-shortest paths. Starting from optimally solved single-source single-sink single-commodity flow subproblems, whose accumulated solutions are usually an infeasible solution of the multi-commodity flow problem, the developed approach tries to obtain a feasible solution by rerouting the flow on edges that are infeasible in the original problem. The developed approach is evaluated in a feasibility study based on a real use case, a time-expanded distribution network of an automobile manufacturer with limitations on edges and commodities.
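The k-shortest-paths building block can be sketched with networkx (the graph and weights are toy stand-ins, not the manufacturer's network):

from itertools import islice
import networkx as nx

G = nx.DiGraph()
G.add_weighted_edges_from([("s", "a", 1), ("a", "t", 1),
                           ("s", "b", 2), ("b", "t", 2), ("a", "b", 1)])

def k_shortest_paths(graph, source, target, k):
    # shortest_simple_paths yields loopless paths in order of increasing cost
    return list(islice(nx.shortest_simple_paths(graph, source, target,
                                                weight="weight"), k))

for path in k_shortest_paths(G, "s", "t", 3):
    print(path)  # alternatives to reroute flow onto when an edge is overloaded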
In the automotive industry, long delivery periods and just-in-time production lead to an inflexible production plan for several weeks. At the same time, the exact prediction of incoming customer orders is nearly impossible due to mass customization. However, mass customization is a core concept of the automotive industry. In combination with time-consumption requirements, these conditions create a challenging problem in practice. This thesis presents an algorithm that uses mixed integer programming to reduce the average waiting period for an ordered car. Based on an existing production schedule, the algorithm is able to determine an optimal rescheduling with respect to restrictions in warehousing and customer preferences, guaranteeing the earliest possible delivery date.
Further extensions to this model can include a differentiation by age and/or gender to obtain a more diverse/balanced group set-up. The number of previously attended dinners might also be taken into consideration.
The student will develop an integer program that solves the consecutive dinner problem, and implement it. Real-life instances will be provided to test the IP. The student must therefore know how to develop a mathematical program as well as how to implement IPs in C/C++ or Java.
In this Master Thesis we consider the problem of covering rectilinear polygons by the minimum number of axis-parallel rectangles. This problem finds applications in the fabrication of DNA chip arrays [Hannenhalli et al. 2002] and in VLSI design, where the rectilinear polygon is a chip that has to be covered by a huge number of rectangular transistors. Other applications are data compression and in particular image compression, where large rectangular areas with the same color can be compressed into one pixel.
This work evaluates whether the optimization problems resulting from modelling the optimal synthesis, design and operation of a decentralized energy supply system have an embedded structure that can be exploited by decomposition methods in solution algorithms. The objective is to determine whether the accuracy and/or the problem size, in terms of the number of time periods and the number of units considered, may be increased, as these are limited when the branch-and-bound method combined with the simplex method is used to solve the problems. A model of the problem is formulated as a mixed-integer linear program as proposed by Yokoyama et al. (2002) and Voll (2013). The model is analyzed and two embedded structures suitable for decomposition are identified.
The first structure emphasizes the independent operation and design of every component. The second structure emphasizes the design and operation of all components and focuses on the independence of every period of time considered. The model is reformulated using the Dantzig-Wolfe decomposition principle for both proposed embedded structures. A numerical study is conducted where the synthesis, design and operation of a fictional energy supply system is optimized by both the branch-and-bound method combined with the simplex method and by the branch-and-price method. A set of instances is created for different degrees of complexity in terms of the number of units and the number of periods of time considered.
The results show that the dual bounds obtained by solving the root-node LP relaxation can be improved in comparison to the conventional solution approach if the reformulation emphasizing independent components is utilized. The results provide no evidence of improvements on the considered test set for the reformulation emphasizing design and operation. When solving to optimality, the computing times required for the considered instances of a test set are reduced, in comparison to the non-commercial solver SCIP, by utilizing the branch-and-price method and the reformulation emphasizing components, provided identical components are present in the energy supply system.
The Steiner tree packing problem is a long-studied problem in combinatorial optimization. In contrast to many other problems, where enormous progress has been made in practical problem solving, the Steiner tree packing problem remains very difficult. Most heuristic schemes are ineffective, and even finding feasible solutions is already NP-hard. What makes this problem special is that, in order to reach an overall optimal solution, non-optimal solutions to the underlying NP-hard Steiner tree problems must be used. Any non-global approach to the Steiner tree packing problem is likely to fail. Integer programming is currently the best approach for computing optimal solutions. The goal of this master thesis is to give a survey of models relating to the Steiner tree packing problem from the literature. In addition, a closer look at a model for the switchbox routing problem in VLSI design will be given.
When reformulating a given mixed integer program by the use of classical Dantzig-Wolfe decomposition, a subset of the constraints is partially convexified, which corresponds to implicitly adding all valid inequalities for the associated integer hull. Since these inequalities are not known, a solution of the original linear programming (LP) relaxation which is obtained by transferring an optimal basic solution of the reformulated LP relaxation is in general not basic. Hence, cutting planes which are separated using a basis like Gomory mixed integer cuts are usually not directly applicable when separating such a solution.
Nevertheless, we can use some crossover method in order to obtain a basic solution that is near the considered non-basic solution, and generate cutting planes for the basic solution using its basis information. These cutting planes might then also cut off the solution we originally wanted to separate. So far, this approach was only considered extensively by Range, who proposed the procedure described above, including a particular crossover method. We present a modified crossover method and extend this procedure by considering additional valid inequalities strengthening the original LP relaxation. Furthermore, we provide the first full implementation of such a separator and test it on instances of several problem classes.
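For orientation, the basis-dependent cuts in question have the following flavor (textbook Gomory fractional cut for the pure-integer case; the Gomory mixed integer cut is a refinement of it): given a simplex tableau row \(x_B + \sum_{j \in N} \bar a_j x_j = \bar a_0\) with all variables integer,
\[ \sum_{j \in N} \bigl(\bar a_j - \lfloor \bar a_j \rfloor\bigr)\, x_j \;\ge\; \bar a_0 - \lfloor \bar a_0 \rfloor \]
is valid and is violated by the current basic solution (where all nonbasic \(x_j = 0\)) whenever \(\bar a_0\) is fractional. This is why a basis is required, and why crossover to a nearby basic solution makes such separation possible in the first place.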