Prerequisite(s): Approval by a faculty member who agrees to supervise the work. Independent work involving experiments, computer programming, analytical investigation, or engineering design.
A required course for undergraduate students majoring in OR:EMS. Focus on the management and consequences of technology-based innovation. Explores how new industries are created, how existing industries can be transformed by new technologies, the linkages between technological development and the creation of wealth, and the management challenges of pursuing strategic innovation.
An introduction to combinatorial optimization, network flows, and discrete algorithms. Shortest path problems and maximum flow problems. Matching problems: bipartite matching and cardinality matching in nonbipartite graphs. Introduction to discrete algorithms and complexity theory: NP-completeness and approximation algorithms.
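As an illustrative sketch of one canonical algorithm from this syllabus, the following is Dijkstra's method for single-source shortest paths with nonnegative edge weights (the graph and weights below are hypothetical examples, not course material):

```python
import heapq

def dijkstra(graph, source):
    """Single-source shortest paths on a graph with nonnegative edge weights.

    graph: dict mapping node -> list of (neighbor, weight) pairs.
    Returns a dict of shortest distances from source to each reachable node.
    """
    dist = {source: 0}
    heap = [(0, source)]
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist.get(u, float("inf")):
            continue  # stale heap entry; a shorter path was already found
        for v, w in graph.get(u, []):
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                heapq.heappush(heap, (nd, v))
    return dist

# Hypothetical directed graph: the path a -> b -> c (cost 3) beats a -> c (cost 4).
g = {"a": [("b", 1), ("c", 4)], "b": [("c", 2)], "c": []}
print(dijkstra(g, "a"))  # {'a': 0, 'b': 1, 'c': 3}
```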
Convex sets and functions, and operations preserving convexity. Convex optimization problems. Convex duality. Applications of convex optimization problems ranging from signal processing and information theory to revenue management. Convex optimization in Banach spaces. Algorithms for solving constrained convex optimization problems.
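A minimal sketch of one algorithm of the kind covered, projected gradient descent for a constrained convex problem (the objective, interval, and step size below are illustrative assumptions):

```python
def projected_gradient(f_grad, project, x0, step=0.1, iters=1000):
    """Minimize a differentiable convex function over a convex set:
    alternate a gradient step with projection back onto the feasible set."""
    x = x0
    for _ in range(iters):
        x = project(x - step * f_grad(x))
    return x

# Illustrative problem: minimize f(x) = (x - 3)^2 subject to x in [0, 1].
grad = lambda x: 2 * (x - 3)
proj = lambda x: max(0.0, min(1.0, x))  # Euclidean projection onto [0, 1]
x_star = projected_gradient(grad, proj, x0=0.0)
print(x_star)  # 1.0, the feasible point closest to the unconstrained minimum x = 3
```

Because the unconstrained minimizer lies outside the feasible set, the iterates converge to the boundary point of the interval, as convex duality and optimality conditions predict.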
Continuation of IEOR E6711, covering further topics in stochastic modeling in the context of queueing, reliability, manufacturing, insurance risk, financial engineering, and other engineering applications. Topics from among generalized semi-Markov processes; processes with a non-discrete state space; point processes; stochastic comparisons; martingales; introduction to stochastic calculus.
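As a small worked example in the queueing direction (parameters are illustrative assumptions), the stationary distribution of the M/M/1 queue follows from detailed balance, and its mean agrees with the closed form rho/(1 - rho):

```python
def mm1_stationary(lam, mu, n_max=200):
    """Truncated stationary distribution of the M/M/1 queue, requiring
    rho = lam / mu < 1. Detailed balance gives pi_n = (1 - rho) * rho**n."""
    rho = lam / mu
    return [(1 - rho) * rho**n for n in range(n_max + 1)]

pi = mm1_stationary(lam=0.5, mu=1.0)
mean_in_system = sum(n * p for n, p in enumerate(pi))
print(round(mean_in_system, 4))  # ≈ rho/(1 - rho) = 1.0
```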
Most existing reinforcement learning (RL) research is set in the framework of discrete-time Markov decision processes (MDPs). Many real-world applications, however, such as high-frequency trading and autonomous driving, call for RL in continuous time with possibly continuous state and action spaces. Moreover, when cast in continuous time and spaces, RL heuristics can be given a theoretical and interpretable foundation, thanks to the availability of technical tools such as stochastic analysis, stochastic control, and differential equations.
This PhD reading course centers on reinforcement learning in continuous time and spaces, with applications especially to financial engineering. Students will take turns presenting research papers, either important papers from the literature or their own, on topics including but not limited to exploration via randomization, entropy regularization, Boltzmann exploration, policy evaluation, policy gradient, q-learning, Langevin diffusions and their application to nonconvex optimization, and mean-variance portfolio selection. The objective is to stimulate interest in this emerging, largely unexplored area, to motivate new problems, and to inspire innovative approaches to research problems.
The course is intended mainly for PhD students in IEOR, computer science, mathematics, statistics, and the business school who have taken courses in stochastic analysis and are familiar with optimization and differential equations. Exceptional MS students with similar training may also take the course. Grading is based on in-class performance, including presentations and participation.
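To fix ideas, the Boltzmann exploration mentioned among the topics can be sketched in its familiar discrete-action form: actions are sampled with probability proportional to exp(Q(a) / temperature). This is only a toy, discrete-time analogue of the continuous-time versions studied in the course; the values and temperature below are illustrative assumptions:

```python
import math

def boltzmann_policy(q_values, temperature):
    """Softmax (Boltzmann) exploration distribution over a finite action set.
    Higher temperature -> closer to uniform (more exploration);
    lower temperature -> concentrates on the greedy action."""
    scaled = [q / temperature for q in q_values]
    m = max(scaled)  # subtract the max to keep the exponentials stable
    weights = [math.exp(s - m) for s in scaled]
    total = sum(weights)
    return [w / total for w in weights]

# Two equally valued actions share the dominant probability mass.
probs = boltzmann_policy([1.0, 2.0, 2.0], temperature=0.5)
print([round(p, 4) for p in probs])
```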
Operations Strategy
Supply chain management; design of a supply chain network; inventories and stock systems; commonly used inventory models; supply contracts; the value of information and information sharing; risk pooling; design for postponement; managing product variety; information technology and supply chain management; international and environmental issues.
Before registering, the student must submit an outline of the proposed work for approval by the supervisor and the chair of the Department. Advanced study in a specialized field under the supervision of a member of the department staff. May be repeated for credit.