    • 3cv+2: Modelo de Calidad para la Construcción de Vivienda en México

      Solís Flores, Juan P. (Instituto Tecnológico y de Estudios Superiores de Monterrey, 2008-01-05)
    • A causal multiagent system approach for automating processes in intelligent organizations

      Hector G. Ceballos (Instituto Tecnológico y de Estudios Superiores de Monterrey, 2010-12-01)
      The current competitive environment has motivated Knowledge Management (KM) theorists to propose the notion of Organizational Intelligence (OI) for enabling a rapid response of large organizations to changing conditions. KM practitioners consider that OI resides in both the processes and the members of the organization, and recommend implementing learning mechanisms and empowering participants with knowledge and decision making to improve organizational competitiveness. In that sense, theoretical definitions and practical approaches (e.g., Electronic Institutions and Autonomic Computing), as well as commercial platforms (e.g., Whitestein Technologies), have been provided that implement OI to a certain extent. Some of these approaches have already taken advantage of tools and formalisms developed in Artificial Intelligence (e.g., Knowledge Representation, Data Mining, and Intelligent Agents). In this research, I propose the use of Aristotelian Causality for modeling organizations, as well as their members, as intelligent entities through the Causal Artificial Intelligence Design (CAID) theory, and present the Causal Multi-Agent System (CMAS) framework for automating organizational processes. Bayesian Causal Networks are extended to Semantic Causal Networks (SCN) to provide an explicit representation of the goals, participants, resources and knowledge involved in these processes. The CAID principles and the SCN formalism are used to provide a probabilistic extension of the goal-driven Belief-Desire-Intention agent architecture, called the Causal Agent. Lastly, the capabilities of this framework are demonstrated through the specification and automation of an information auditing process.
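      As an illustration only, the deliberation idea behind such a goal-driven agent can be reduced to expected-utility goal selection; the sketch below is a minimal Python rendering under that reading, and every name, probability and utility in it is a hypothetical stand-in, not the CAID/CMAS formalism.

```python
# Minimal sketch: a "causal agent" commits to the intention whose expected
# contribution to its goals is highest. All names and numbers here are
# illustrative assumptions, not the dissertation's API.

def expected_utility(action, goals, p_goal_given_action):
    """Sum of utility * P(goal achieved | action) over all goals."""
    return sum(u * p_goal_given_action[(g, action)] for g, u in goals.items())

def choose_intention(actions, goals, p_goal_given_action):
    """BDI-style deliberation: commit to the highest expected-utility action."""
    return max(actions, key=lambda a: expected_utility(a, goals, p_goal_given_action))

goals = {"audit_complete": 1.0, "budget_kept": 0.4}          # desire -> utility
actions = ["assign_auditor", "automate_collection"]
p = {("audit_complete", "assign_auditor"): 0.7,
     ("budget_kept", "assign_auditor"): 0.5,
     ("audit_complete", "automate_collection"): 0.9,
     ("budget_kept", "automate_collection"): 0.8}

print(choose_intention(actions, goals, p))                   # -> automate_collection
```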
    • A new supervised learning algorithm inspired on chemical organic compounds

      Hiram Eredín Ponce Espinosa (Instituto Tecnológico y de Estudios Superiores de Monterrey, 2013)
    • Adquisición de Comportamientos Grupales en un Dominio de Agentes de Fútbol Utilizando un Mecanismo de Toma de Decisiones Distribuido y Aprendizaje por Refuerzo

      Junco Rey, María de los A. (Instituto Tecnológico y de Estudios Superiores de Monterrey, 2006-01-03)
      In a multi-agent system, coordinating the activities of the different participants is difficult to achieve, especially if the agents interact in a dynamic environment. Traditionally, decision making is performed in a centralized way, which carries certain limitations: information from different points of view is not considered; the decision is made by a single agent that may have a limited view of the problem; and if the decision-making agent fails, the whole system fails. This work presents the hypothesis that, to improve the utility achieved through the interaction of the participants in a multi-agent domain, the decision-making process must be distributed among the agents, so that the different points of view and the local information each of them holds are taken into account and they coordinate more effectively. Moreover, the negotiation mechanism associated with this decision-making process must consider both the global utility of the system and the particular utilities of each participating agent. Under this distributed framework, however, the decision-making process runs into the conflicts inherent to this kind of environment, which must be resolved through negotiation mechanisms. To achieve coordinated, cooperative behavior it is not enough to provide the agents with information about their teammates; some measure of rationality is needed. Game theory provides a theoretical framework for analyzing the interactions among several agents and supplies the required measure of rationality. Thus, in this work the agents use a Nash solution model to evaluate the utilities obtained from their interactions, since it makes it possible to evaluate different joint solutions among the agents and to identify one or more solutions that maximize the system's utility. On the other hand, in a multi-agent domain it is necessary to consider an interaction process that facilitates the coordination of the agents' actions and performs adequately under the changes that arise in such a dynamic universe. The agents in this work therefore have the ability to learn from their interactions, identifying joint behaviors that lead them to fulfill their global goal. The decision-making process supports the learning process by giving the agents information that lets them evaluate joint behaviors with little chance of success and exclude them from the valid behaviors to be executed. The learning algorithm implemented in this work is a distributed implementation of the reinforcement learning algorithm known as Q-learning, which we call Distributed Q-learning. This distributed algorithm lets the agents learn utilities of joint actions according to the different roles performed by the agents involved in the same play, and not only of their individual actions. By learning the best joint actions, the exchange of local information is reduced, since the agents become able to identify the relation between a given situation and the best action to choose, which leads them to behave as a coordinated team.
      Our negotiation model has been tested in the soccer-agent environment, and an extrapolation has been made for its application to distributed decision-making problems in business environments, showing that it can be used in domains that involve several players and where decision making is not centralized in any one of them but has to be executed in a distributed fashion, seeking to benefit both the system as a whole and each of the players. The main contributions of this work are: a rational decision-making mechanism that evaluates the agents' joint actions, allowing them to behave in a coordinated way while seeking to maximize the system's utility; the design and implementation of a distributed reinforcement learning algorithm that lets the participating agents learn joint actions, supported by the decision-making process so that only the joint behaviors with the highest chance of success are considered, reducing the space of joint behaviors to evaluate by eliminating those with negative utility for the system; and the theoretical extrapolation of our model, tested in a soccer-agent domain, to a business domain for distributed decision making.
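      A minimal sketch of the distributed Q-learning idea described above, in Python: each agent keeps Q-values over joint actions and prunes joint behaviors whose learned utility turns negative. The constants, state names and pruning rule are illustrative assumptions, not the thesis's exact algorithm.

```python
import random
from collections import defaultdict

ALPHA, GAMMA = 0.1, 0.9   # assumed learning rate and discount factor

class TeamAgent:
    """Learns utilities of *joint* actions (one entry per role combination)."""
    def __init__(self, joint_actions):
        self.q = defaultdict(float)        # (state, joint_action) -> value
        self.valid = set(joint_actions)    # the decision process prunes this set

    def choose(self, state, epsilon=0.1):
        if random.random() < epsilon:
            return random.choice(sorted(self.valid))
        return max(self.valid, key=lambda ja: self.q[(state, ja)])

    def update(self, state, joint_action, reward, next_state):
        best_next = max(self.q[(next_state, ja)] for ja in self.valid)
        td = reward + GAMMA * best_next - self.q[(state, joint_action)]
        self.q[(state, joint_action)] += ALPHA * td
        # drop joint behaviors with negative utility for the system
        if self.q[(state, joint_action)] < 0 and len(self.valid) > 1:
            self.valid.discard(joint_action)

agent = TeamAgent({("pass", "run"), ("shoot", "screen")})
ja = agent.choose("midfield")
agent.update("midfield", ja, reward=1.0, next_state="attack")
```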
    • AI-based robust multi-regime controller

      Ibarra Moyers, Luis Miguel
    • Un Algoritmo Genético Multimodal y su Aplicación al Problema de Ruteo de Vuelos con Múltiples Paradas

      Uresti Charre, Eduardo (Instituto Tecnológico y de Estudios Superiores de Monterrey, 2003-01-05)
      In genetic algorithms the population plays a double role. On one hand, it represents what may be delivered as the algorithm's result; on the other, it is the raw material for exploring the search space. Since the population is what the algorithm delivers at termination, it must be replaced carefully with new solutions that improve on what has already been found without destroying good solutions already discovered. If the problem consists of finding the single best solution in the search space, it is easy to protect the best element across generations. When the solution to the problem consists of finding a multiple, diverse set of points, the population must be modified with adequate care, and this care can substantially compromise the capacity to explore the search space. Increasing the population size in order to maintain an adequate level of diversity substantially increases the resources required to evolve the population and, in the context of a problem where the number of available evaluations is limited or costly, can also compromise the level of exploration. This work proposes and analyzes a genetic algorithm for multimodal optimization problems in which the role of the population is separated. This division is carried out by managing two populations. The first acts as a memory population that represents the answer set to the search problem; it is conceived as a hall of fame that stores the best individuals found during the search. The notion of "best" results from a combination of the individual's fitness, a descriptive measure of the crowding in the niche it occupies, and an indicator of how its evaluation compares with that of its niche companions. The second population is the means of exploration of the solution space. Alongside these two populations, a management mechanism is developed. This mechanism is in charge of replacing elements of the memory population with still better elements found during the search; its other important function is to form the populations for each new exploration. The developed algorithm is compared experimentally with two of the genetic algorithms that have performed best on multimodal optimization problems, showing advantages over them. The algorithm is applied to the multi-stop flight routing problem, a discrete optimization problem of high complexity and practical relevance. In this problem the evaluation of individuals has a high computational cost, so the computational overhead caused by managing the memory population is small when compared with the evaluation costs.
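      As a sketch of the two-population scheme just described, the toy Python below evolves an exploration population while a bounded "hall of fame" memory keeps the best individual per niche; the fitness function, niching rule and operators are illustrative stand-ins for the mechanisms the thesis develops.

```python
import random

def niche_of(x, radius=0.5):
    """Assumed niching rule: individuals within `radius` share a niche."""
    return round(x / radius)

def evolve(fitness, bounds=(-5, 5), memory_size=10, pop_size=30, gens=100):
    lo, hi = bounds
    memory = {}                              # niche -> best individual found
    for _ in range(gens):
        # the manager seeds each exploration run partly from memory
        seeds = list(memory.values())
        pop = seeds + [random.uniform(lo, hi) for _ in range(pop_size - len(seeds))]
        # one mutation-only "exploration" generation, for brevity
        pop += [min(hi, max(lo, x + random.gauss(0, 0.3))) for x in pop]
        # replacement: a memory slot changes only for a better niche member
        for x in pop:
            n = niche_of(x)
            if n not in memory or fitness(x) > fitness(memory[n]):
                memory[n] = x
        # bound the hall of fame by dropping its weakest niches
        while len(memory) > memory_size:
            memory.pop(min(memory, key=lambda n: fitness(memory[n])))
    return sorted(memory.values())

peaks = lambda x: 1 - (x % 1 - 0.5) ** 2     # toy multimodal landscape
print(evolve(peaks))                         # several distinct near-optima
```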
    • An evolutionary framework for producing hyper-heuristics for solving the 2D irregular bin packing problem

      López Camacho, Eunice (Instituto Tecnológico y de Estudios Superiores de Monterrey, 2012-05-01)
      This document presents a doctoral dissertation submitted as a requirement for the Ph.D. degree in Information Technologies and Communications at Instituto Tecnológico y de Estudios Superiores de Monterrey (ITESM), Campus Monterrey, with a major in Intelligent Systems, in the field of hyper-heuristic search for the bin packing problem. The dissertation works with an evolutionary framework that produces hyper-heuristics for solving several types of bin packing problems, introducing relevant improvements to the solution model. The Bin Packing Problem is a particular case of the Cutting and Packing Problem, where a set of pieces is placed into identical objects and the objective is to minimize the number of objects needed. Given the NP-hard nature of this optimization problem, many heuristic approaches have been proposed. In this work a solution model is proposed, based on a genetic algorithm, in which a hyper-heuristic is built as a rule or high-level heuristic that combines several low-level heuristics when building a solution from scratch. The hyper-heuristic therefore takes advantage of the main strengths of the low-level heuristics to solve particular kinds of problem instances. A hyper-heuristic is a list of several representative states of the problem, each one labelled with a low-level heuristic. A problem instance to be solved by a hyper-heuristic is first summarized by a numerical vector that carries some of its main features. This vector is then compared with the hyper-heuristic's representative states and the corresponding heuristic is chosen to be applied. After one or several pieces are placed, the problem state is updated. This process continues until the problem instance is completely solved. The main inputs of the evolutionary framework for producing a hyper-heuristic are: (1) a vector-based way of representing the problem state; (2) a set of low-level heuristics; and (3) a set of training problem instances. The research presented in this document improves each of these elements. First, a data-mining-based methodology was developed to select the best set of features to represent the state of the problem instances. This six-step methodology includes the application of the k-means clustering technique and a multinomial logistic regression model to find a subset of features that best predict heuristic performance. The methodology does not require intensive knowledge of the problem domain. Promising results were found when comparing hyper-heuristics produced with an intuitive representation against those produced with a representation built with this methodology. Besides, some other solution approaches developed for other combinatorial optimization problems also require representing instances with a limited number of features, so the proposed methodology can be exported to other search-space approaches. Second, the Djang and Finch selection heuristic was properly adapted from the one-dimensional to the two-dimensional Bin Packing Problem. This adaptation includes a time-saving routine based on avoiding repetitive computations. With the pieces in decreasing order, this heuristic starts filling an object until it is at least one-third full. Then it tries combinations of one, two or three pieces that completely fill the object. If this is not possible, a small waste is allowed, increased as necessary until no remaining pieces fit. Then a new object is opened.
      Several experiments were conducted on the initial fraction of the object that is filled before trying combinations of pieces. We found that filling the object up to one-fourth, one-third or one-half produces effective heuristics that behave differently on different kinds of instances. Third, the level of generality handled by the evolutionary framework was increased in terms of the kinds of instances solved. The solution model can be trained with one- and two-dimensional regular and irregular instances; irregular instances include convex and concave polygons. Once the hyper-heuristic is evolved, it is able to solve any instance of any of these types with good results and without further parameter tuning. The framework was tested with a large dataset of 1417 instances. One-dimensional instances were drawn from the literature, and an algorithm was designed for randomly producing two-dimensional instances with concave pieces, for which geometric functions had to be implemented to deal with concavities. Twenty hyper-heuristics were generated and tested. Broadly speaking, the hyper-heuristics were able to learn a combination of single heuristics that produces, for each testing case, at least the same result as the best single heuristic. Finally, an analysis was performed to find which feature values of the Bin Packing Problem are more likely to lead to good performance of heuristics and hyper-heuristics. With the Principal Component Analysis technique, a two-dimensional map was built on which the 1417 instances were plotted: the more similar two instances are according to nine selected features, the closer they are plotted on the map. It is possible to find combinations of feature values that characterize each section of the map. By overlaying the performance of heuristics and hyper-heuristics on the map, we can draw conclusions about the main relations between features and performance. Understanding the structure of the Bin Packing Problem will help in the design of new solution approaches.
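      A minimal sketch of how such a hyper-heuristic is applied, following the description above: the current problem state is summarized as a feature vector and matched to the nearest labelled representative state. The two features and the heuristic names below are hypothetical.

```python
import math

def nearest_rule(features, hyper_heuristic):
    """hyper_heuristic: list of (representative feature vector, heuristic name)."""
    return min(hyper_heuristic, key=lambda rule: math.dist(rule[0], features))[1]

# Hypothetical features: fraction of pieces remaining, mean relative piece area.
hh = [((0.9, 0.60), "first_fit_decreasing"),
      ((0.3, 0.20), "djang_finch_adapted"),
      ((0.1, 0.50), "best_fit")]

state = (0.25, 0.18)            # summary of the partially solved instance
print(nearest_rule(state, hh))  # -> djang_finch_adapted; apply, update state, repeat
```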
    • Analysis of the Dynamics of Neurosecretory Vesicles by Optical Tweezers and Image Processing

      Alvarez Elizondo, Martha B. (Instituto Tecnológico y de Estudios Superiores de Monterrey, 2008-01-12)
      We present an analysis of the dynamics of chromaffin vesicles during exocytosis. Optical tweezers combined with fluorescence for noninvasive microviscometry in cells, together with confocal imaging for trajectory analysis, were used for this study. Using optical tweezers and fluorescence, we applied the oscillation method proposed by Fischer et al. to measure intracellular viscoelastic properties in cells. A sinusoidally moving optical trap was used to drive optically trapped intracellular vesicles in chromaffin cells in order to obtain local information about the viscoelastic liquid surrounding the vesicles. To validate this technique, measurements were first performed on water, glycerol and PEO, and then in cells.
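      For reference, a minimal worked form of the oscillation method, assuming an overdamped bead of radius r in a trap of stiffness k within a purely viscous medium (the viscoelastic case studied here generalizes these relations):

```latex
% Overdamped bead driven by a sinusoidally moving trap, purely viscous medium:
\gamma\,\dot{x}(t) = k\left[x_{\mathrm{trap}}(t) - x(t)\right],
\qquad x_{\mathrm{trap}}(t) = A_0 \sin(\omega t),
\qquad \gamma = 6\pi\eta r .
% Steady state: the bead follows with reduced amplitude and a phase lag:
x(t) = \frac{A_0}{\sqrt{1 + (\omega\gamma/k)^2}}\,\sin(\omega t - \varphi),
\qquad \tan\varphi = \frac{\omega\gamma}{k},
% so the measured phase lag at known k and omega gives the local viscosity:
\eta = \frac{k \tan\varphi}{6\pi r\,\omega}.
```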
    • Analysis, Architecture, and Fusion Methods for Vehicle Automation

      Albores Borja, Carlos (Instituto Tecnológico y de Estudios Superiores de Monterrey, 2007-05-01)
      Autonomous Vehicles (AVs) are automated vehicles that carry out a specific task without direct human intervention. AVs are mobile robotic applications that have generated great interest in recent years due to their capacity to perform repetitive tasks in remote or harmful environments with extreme operating conditions. The applications and tasks of these devices vary from the transportation of material to the exploration of planetary surfaces. Building an AV consists of selecting a vehicle originally designed for human driving and installing the necessary components and systems to carry out the required tasks autonomously. A methodology for converting a commercial vehicle into an AV with the proposed architecture is also introduced. To organize and control all the elements and functions of an AV, the design of a physical and logical structure for these elements, known as an architecture, is indispensable. The architecture also specifies how the elements are coordinated and how they interact with each other. This research introduces a control architecture for autonomous vehicles. This architecture abstracts the functions of the vehicle, with an emphasis on the vehicle's kinematic model. The architecture is modular and is structured mainly in a hierarchical way, with some modules of reactive behavior. One of the main elements of this architecture, and of AV research in general, is the vehicle's position estimation function: better state estimates result in better and more reliable performance. This research presents a method to obtain an expression for the uncertainty in the odometry position estimate of a mobile vehicle using a covariance matrix whose form is derived from the kinematic model, which we then particularize for a non-holonomic Ackermann-drive autonomous vehicle. However, obtaining an expression for the cross-covariance terms between the previous position of the robot and its current increment of position is not straightforward, so a formulation for these terms is developed. Finally, special care must be taken when data from multiple sensors are fused, since it is easy to overestimate the state's precision using fusion techniques that do not consider correlation between the sensors' measurements. For this reason, techniques that consider correlation, such as probabilistic approaches or the covariance intersection algorithm, are considered in the hierarchical data fusion scheme introduced in this thesis. To validate the aforementioned elements, a utility carrier designed for the open-space mining industry was automated, and trajectory-following experiments were performed and analyzed. The vehicle was able to follow a desired path within an error of less than 1 meter using all the available sensor data.
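      A minimal sketch of the covariance-propagation step for a planar vehicle with pose (x, y, θ), assuming the standard odometry motion model; the Ackermann-specific terms and the cross-covariance formulation derived in the thesis are not reproduced here.

```python
import numpy as np

def propagate(pose, P, d, dtheta, Q):
    """One odometry step: P -> F P F^T + G Q G^T for a planar vehicle."""
    x, y, th = pose
    F = np.array([[1, 0, -d * np.sin(th)],      # Jacobian w.r.t. the pose
                  [0, 1,  d * np.cos(th)],
                  [0, 0,  1]])
    G = np.array([[np.cos(th), 0],              # Jacobian w.r.t. the inputs
                  [np.sin(th), 0],
                  [0.0,        1]])
    new_pose = (x + d * np.cos(th), y + d * np.sin(th), th + dtheta)
    return new_pose, F @ P @ F.T + G @ Q @ G.T

pose, P = (0.0, 0.0, 0.0), np.diag([1e-4, 1e-4, 1e-5])
Q = np.diag([1e-4, 1e-6])       # assumed noise on distance/heading increments
for _ in range(100):            # uncertainty grows along the path
    pose, P = propagate(pose, P, d=0.1, dtheta=0.01, Q=Q)
print(np.sqrt(np.diag(P)))      # 1-sigma bounds on x, y, theta
```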
    • Analysis, Recovery and Potential New Uses of Pegylated Proteins

      González Valdéz, José G. (Instituto Tecnológico y de Estudios Superiores de Monterrey, 2012-01-05)
      Among all biotechnological products, and because of their wide range of applications, proteins are probably, from a technological and commercial point of view, the most important molecules available to humans. These natural polymers are involved in essentially every process within the cell. With their discovery and the technological advances in science, the use of proteins has become common in many fields, including medicine, agriculture and engineering, by exploiting their original biological functions for new purposes. However, in most cases where proteins are removed from their natural environment and further processed, problems may appear, such as the decrease or loss of biological activity due to structural changes caused by factors like temperature and pH. Their delicate folding behavior, tightly related to their precisely defined primary sequence, makes them susceptible to enzymatic degradation and affects their solubility in organic solvents, limiting their use in applications where their possible toxicity and undesired immune response might become an issue [1]. This situation results in the need to find ways to preserve, assure or even increase their functionality for the final desired application. Among the tools designed to achieve this are techniques such as protein engineering (where certain amino acids or sequences of the original protein structure are changed, added and/or deleted) and different chemical modifications, such as protein crosslinking, chemical introduction of small moieties, atom replacement, cofactor introduction and modification with monofunctional polymers [2].
    • Análisis de técnicas de mitigación de desvanecimientos en radioenlaces en el marco de las comunicaciones móviles

      Jaime Humberto Pech Carmona (Instituto Tecnológico y de Estudios Superiores de Monterrey, 2012)
    • Análisis metabolómico diferencial en fruto de chile habanero (Capsicum chinense Jacq.) durante maduración y en respuesta a condiciones edáficas subóptimas

      Rafael Urrea López (Instituto Tecnológico y de Estudios Superiores de Monterrey, 2014-05-01)
      The habanero pepper (Capsicum chinense Jacq.) is a high-value fruit, appreciated for its organoleptic flavor properties and high pungency. However, its production is limited by the scarcity of highly productive varieties that perform well under biotic and abiotic stresses. The objective of this study was to characterize the effects of prolonged exposure to suboptimal soil conditions on the metabolomic status of the habanero pepper fruit during ripening, using and implementing targeted and untargeted metabolomics techniques, as well as to characterize the general physiological response of the plant and its yield. To this end, habanero pepper plants were subjected to low-P, low-N and two salinity treatments (4 and 7 dS·m⁻¹) in hydroponic culture with fertigation based on Hoagland nutrient solution, applied from the onset of flowering. Targeted techniques were used to evaluate the plant's photosynthetic response, biomass partitioning, and quality-related metabolites in ripe fruits (capsaicinoids, ascorbate, carotenoids, phenolics and sugars). The profile of metabolome changes during ripening and under the treatments was evaluated by HPLC-ESI-TOF in the pericarp of fruits at three ripening stages; data processing was performed with MZmine and the data were analyzed with a linear mixed-effects model (LMEM, p ≤ 0.001) programmed in R; tentative identification was assigned by comparing fragmentation patterns obtained by MS/MS.
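      The analysis above was programmed in R; purely as an illustration, an analogous linear mixed-effects fit in Python/statsmodels might look as follows. The toy data and column names are hypothetical.

```python
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical toy data: log-intensity of one metabolite feature.
data = pd.DataFrame({
    "log_intensity": [5.1, 5.4, 6.0, 4.8, 5.2, 5.9, 5.0, 5.5, 6.2, 4.9, 5.3, 6.1],
    "treatment": ["control"] * 6 + ["low_N"] * 6,
    "stage": ["green", "turning", "ripe"] * 4,
    "plant": ["p1"] * 3 + ["p2"] * 3 + ["p3"] * 3 + ["p4"] * 3,
})
# Fixed effects for treatment and ripening stage; random intercept per plant.
model = smf.mixedlm("log_intensity ~ treatment + stage", data, groups=data["plant"])
print(model.fit().summary())
```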
    • Análisis y diseño de una topología de inversor multinivel basada en prismas poligonales para aplicación en energías alternas

      Aldo Elihu Flores González (Instituto Tecnológico y de Estudios Superiores de Monterrey, 2016)
    • Aplicación Genérica de Sistemas de Dos Fases Acuosas para Procesos de Recuperación Primaria de Compuestos Biológicos

      Benavides Lozano, Jorge A. (Instituto Tecnológico y de Estudios Superiores de Monterrey, 2006-12-01)
    • Application of ultrasonic micro injection molding for manufacturing of UHMWPE microparts

      Zúñiga, Alex Elías; Hernández Ávila, Marcelo; Sánchez Sánchez, Xavier; Martínez Romero, Oscar; Siller Carrillo, Héctor Rafael; Palacios, Luis Manuel (2017-12-05)
      Ultrasonic micro injection molding was confirmed to be an efficient processing technique for the fabrication of a well-filled miniaturized dog-bone shaped specimen of ultra-high molecular weight polyethylene (UHMWPE). The influence of parameters such as mold temperature, plunger velocity profile, vibrational amplitude and shape of the raw material is analyzed using techniques such as Design of Experiments and a methodological proposal. The influence of four process parameters on the filling phase of the reduced-size cavity was then analyzed. It was established that it is possible to fabricate well-defined specimens when the highest ultrasonic amplitude is applied intermittently at specific intervals during the ultrasonic process to small compacted, irregularly shaped UHMWPE samples and the mold temperature is set to 100 °C. GPC results showed a decrease in the molecular weight, which was greatest when 100% of the ultrasonic amplitude was applied. The degree of crystallinity of the processed sample increased because of the reduction in molecular weight. TGA showed that the thermal stability of UHMWPE fabricated by ultrasonic processing was not significantly influenced by the decrease in molecular weight. FTIR spectra indicated oxidative degradation in three different regions of the processed UHMWPE specimen. Additionally, the band identified at the wavenumber 910 cm⁻¹ indicated a chain-scission phenomenon that the polymer experienced during ultrasonic processing.
    • Aprendizaje de Clasificadores Bayesianos Estáticos y Dinámicos

      Martínez Arroya, Miriam (Instituto Tecnológico y de Estudios Superiores de Monterrey, 2007-01-06)
      Although the naive Bayesian classifier has been widely used because it is an efficient classification model, easy to learn and highly accurate in many domains, it has two main disadvantages: classification accuracy decreases when the attributes are not independent, and it cannot deal with continuous attributes. There are also other considerations that affect the learning process, such as working with incomplete or missing information, handling large amounts of data and/or variables, and selecting attributes representative of the problem, among others. A naive Bayesian classifier can represent static domains or dynamic domains; considering the latter further complicates the learning process. The objective, then, is to provide a learning method for naive Bayesian classifiers that prevents classification accuracy from decreasing when the attributes are not independent, deals with non-parameterized continuous attributes, considers the selection of relevant attributes, and handles hidden information, while guaranteeing a good structure and preserving the simplicity of the model or reducing its complexity. We propose two new methods: learning of static Bayesian classifiers (ACBE) and learning of dynamic Bayesian classifiers (ACBD). The ACBE method includes four stages: initialization, discretization, structural improvement and classification. The discretization and structural-improvement stages are repeated until classification accuracy can no longer be improved. Discretization is based on the MDL principle, obtaining for each attribute the number of intervals that minimizes the MDL. To deal with dependent and irrelevant attributes, we apply a method that eliminates and/or joins attributes, based on conditional mutual information measures and evaluating classification accuracy after each operation. The ACBD method includes five stages: initialization, discretization, determination of the hidden class node, structural improvement and dynamic classification. The discretization method is the same as in ACBE; the structural-improvement stage is similar, varying only in the evaluation of the resulting structures, since they are considered tree structures and are evaluated through a quality measure based on the MDL principle. The determination of the best number of states for the hidden class node is based on the EM algorithm, and the resulting structures are evaluated according to the quality measure. Finally, for dynamic classification the transition network is built using a general dynamic Bayesian network technique. The methods were tested in applications with real data, obtaining very good results. The static method was applied to skin recognition (with 98% accuracy) and cervical cancer detection (94% accuracy); the dynamic model was applied to the recognition of seven gestures, using one model per gesture, each represented by a dynamic Bayesian classifier, which obtained 98% average accuracy on the test data.
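      A minimal sketch in the spirit of the ACBE loop described above: discretize, then eliminate attributes while cross-validated accuracy does not drop. The MDL discretization and the conditional-mutual-information tests themselves are replaced here by off-the-shelf stand-ins.

```python
import numpy as np
from sklearn.datasets import load_iris
from sklearn.model_selection import cross_val_score
from sklearn.naive_bayes import CategoricalNB
from sklearn.preprocessing import KBinsDiscretizer

X, y = load_iris(return_X_y=True)
# Stand-in discretization (the thesis chooses interval counts by MDL).
Xd = KBinsDiscretizer(n_bins=5, encode="ordinal",
                      strategy="uniform").fit_transform(X).astype(int)

def accuracy(cols):
    clf = CategoricalNB(min_categories=5)
    return cross_val_score(clf, Xd[:, cols], y, cv=5).mean()

cols = list(range(Xd.shape[1]))
best, improved = accuracy(cols), True
while improved and len(cols) > 1:
    improved = False
    for c in list(cols):                 # try eliminating each attribute
        trial = [k for k in cols if k != c]
        if accuracy(trial) >= best:      # keep the simpler structure
            cols, best, improved = trial, accuracy(trial), True
            break
print(cols, round(best, 3))
```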
    • An Autonomic Hybrid Multiagent Service Architecture to Reduce IT Troubleshooting

      Fernández Carrasco, Luis M. (Instituto Tecnológico y de Estudios Superiores de Monterrey, 2009-01-05)
      This dissertation is submitted to the Graduate Programs in Mechatronics and Information Technologies of the School of Engineering in partial fulfillment of the requirements for the degree of Doctor of Philosophy in Information Technologies and Communications, with a major in Intelligent Systems. This document describes a novel architectural design that provides an operating-system-like service with autonomic computing features. The objective of this new service is to reduce troubleshooting in general by promoting a self-managing paradigm. The use of computing systems is nowadays taken for granted. One just needs to look around to see that there is a computation process going on in almost every direction. Moreover, such processes are no longer restricted to personal computers but extend to devices such as cellular phones, PDAs, laptops, etc. In other words, computing systems are now ubiquitous. Furthermore, these devices are not isolated units of processing but are interconnected and can send and receive information at any time, anywhere. The Internet adds another layer of complexity to this already labyrinthine setting. The demands that people and current business models place on computing systems range from running a simple application, where the hardware was not built specifically for that application, to a cooperative network where all constituents use a variety of systems and commands. As can be seen, managing all these networked devices as a whole in a robust and transparent manner demands a lot of resources and time. Nevertheless, it is something that has to be done. IBM observed the problem that managing sets of heterogeneous devices that need to work, cooperate and communicate with each other represented. They perceived this problem as the main obstacle to further progress in the IT industry, i.e., complexity was threatening the development of better IT solutions. Consequently, in 2001, IBM launched its autonomic computing initiative, whose main objective is to have self-managing systems; more specifically, systems that are self-configuring, self-optimizing, self-healing and self-protecting, thus leaving the really important tasks to human involvement and delegating administrative tasks to the system itself, similar to what the human autonomic nervous system does. The IBM initiative has caught the attention of a number of institutions, both from the IT industry and from academia, as everyone sees that managing current and future IT environments will demand a new paradigm. The research project that this dissertation presents tackles one aspect of the problem described above. The objective is to reduce the troubleshooting in general that a typical user faces when using a personal computer. Consequently, the solution proposed is to design a new system architecture that provides an operating-system-like service exhibiting all four characteristics that autonomic computing looks for, i.e., self-configuration, self-optimization, self-protection and self-healing. This approach is supported by the fact that operating systems are the ones that, at a low level, handle most computing resources, and that they are present in all devices, which ensures the applicability of the proposed solution and its impact. Moreover, current approaches to autonomic computing systems are usually built on top of non-autonomic ones. That approach may not be right, as what one wants is to have a fully autonomic system.
      Furthermore, having an operating-system-like service that is indeed autonomic allows the development of other autonomic systems that could run on top of it. What is more, current autonomic systems initiatives do not fully integrate all four characteristics, whereas this research does. The model is a combination of a multiagent design, selection methods found in nature-based algorithms, and learning techniques. The main idea is to model each component that a typical operating system manages as an agent, incorporate performance-criterion evaluators to select the best candidate to perform a task (self-optimization), provide a flexible yet robust communication protocol among agents that allows the execution of any job (self-configuration), and implement specialized agents that supervise and learn from threats and normal program executions in order to keep the system running (self-protection and self-healing). The system was evaluated using a multiagent simulation and, although there might be some objections to this testing approach, a lot of effort was put into having the simulated environment perform similarly to real-life systems. Consequently, a programming language named HAL was created, which allowed the simulation of applications, both benign and harmful, running in a multiagent environment asking for resources. Thus, the simulated prototype is very close to a computer that runs applications, allowing a proper evaluation of the proposed design. Something worth pointing out is the fact that autonomic computing in general is very task-dependent and that there are very few approaches to an autonomic operating system service provider (e.g., Unity). This fact did not allow a precise one-to-one comparison with other closely related works, as people are applying the autonomic computing paradigm to a variety of systems where what changes is the way such a feature is achieved. Nevertheless, this research has provided some contributions to the field, namely: a novel architecture for future OS design that is autonomic, a low-level service that exhibits all four autonomic features, a framework for the simulation of autonomic systems, a programming language that can be used to implement testing and evaluation of autonomic systems, and a way to measure and evaluate self-* properties in computing systems. The following chapters of this document present the work that was conducted in order to achieve what has been described above, including the first approach to this problem, excitable media. Excitable media provided the guidelines and set the path toward the results that this research found, results that reaffirm the idea, which steered this research project from the beginning, that a multiagent autonomic operating system service is a good way to reduce troubleshooting by providing a self-managing environment.
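      A minimal sketch of the self-optimization step described above, in which resource agents are scored by a performance criterion and the best candidate receives the task; the agent names and the scoring rule are hypothetical.

```python
class ResourceAgent:
    """A hypothetical agent wrapping one resource an OS would manage."""
    def __init__(self, name, throughput, load):
        self.name, self.throughput, self.load = name, throughput, load

    def score(self, task_cost):
        # performance criterion: fast, lightly loaded agents win tasks
        return self.throughput / (1.0 + self.load + task_cost)

def dispatch(task_cost, agents):
    winner = max(agents, key=lambda a: a.score(task_cost))
    winner.load += task_cost          # self-optimizing placement of work
    return winner.name

agents = [ResourceAgent("disk", 80.0, 0.2), ResourceAgent("net", 120.0, 0.9)]
print([dispatch(5.0, agents) for _ in range(3)])   # load balances over time
```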
    • Calibración de un Modelo Estocástico del Comportamiento de los Precios del Petróleo

      Dávila Pérez, Javier (Instituto Tecnológico y de Estudios Superiores de Monterrey, 2007-01-11)
      A three-factor model is proposed to calibrate the term structure of crude oil futures prices. The model parameters are estimated using the Kalman filter in a setting with incomplete data panels. The three-factor model is applied to a data set of oil futures prices that includes all oil futures contracts traded on the New York Mercantile Exchange (NYMEX) over the last ten years. It is shown, finally, that the proposed model adequately fits the observed data as well as the empirical volatility term structure, which is relevant for risk management purposes. An application of the proposed model serves to estimate the price of the Mexican crude oil export blend (MME) and the volatility associated with its returns. The proposed methodology can be summarized as taking, on one hand, the futures price of the light sweet crude oil underlying the futures contracts traded on NYMEX, plus a spread estimated deterministically, as a proxy for the MME price, and, on the other, estimating the volatility of MME price returns as a means to improve estimation precision. Daily price data are used for futures contracts traded between January 2, 1998 and February 14, 2007, comprising a total of 51,334 observations. The data are aggregated, but each available contract is used separately. The number of contracts considered for the estimation starts at 22, with a maximum maturity of 3 years, and reaches, by the end of the sample period, 68 different contracts with monthly maturities ranging from 1 month to almost 6 years. It is concluded that the estimation results provide a good fit of the price structure of short-term futures contracts, though one less robust in predicting the prices of longer-term futures contracts. The proposed three-factor model outperforms the one- and two-factor models. It is also shown that with this model the volatility term structure decays as maturity increases and converges to a positive constant, which results from having included a non-stationary process with mean reversion.
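      A minimal sketch of one Kalman-filter step with the incomplete-panel handling mentioned above: contract prices missing on a given date are simply dropped from the measurement update. The matrices are generic placeholders, not the three-factor specification.

```python
import numpy as np

def kalman_step(x, P, y, A, C, Q, R):
    """Linear-Gaussian predict/update; NaN entries of y are unobserved."""
    x_pred, P_pred = A @ x, A @ P @ A.T + Q          # predict
    obs = ~np.isnan(y)                               # contracts observed today
    if obs.any():
        Co, yo, Ro = C[obs], y[obs], R[np.ix_(obs, obs)]
        S = Co @ P_pred @ Co.T + Ro
        K = P_pred @ Co.T @ np.linalg.inv(S)         # Kalman gain
        x_pred = x_pred + K @ (yo - Co @ x_pred)
        P_pred = P_pred - K @ Co @ P_pred
    return x_pred, P_pred

n, m = 3, 5                        # 3 latent factors, 5 futures maturities
x, P = np.zeros(n), np.eye(n)
A, C = 0.95 * np.eye(n), np.random.rand(m, n)        # placeholder dynamics
Q, R = 0.01 * np.eye(n), 0.02 * np.eye(m)
y = np.array([4.1, np.nan, 3.9, np.nan, 3.7])        # incomplete panel, one date
x, P = kalman_step(x, P, y, A, C, Q, R)
print(x)
```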
    • Capacitated fixed cost facility location problem with transportation choices

      Olivares Benítez, Elías (Instituto Tecnológico y de Estudios Superiores de Monterrey, 2007-05-01)
      In this work a Supply Chain Design problem is addressed. The problem is based on a two-echelon distribution system for a single product, in which plants ship the product to distribution centers and these dispatch the product to the customers. One of the decisions in the problem is where to locate the distribution centers among a set of potential sites. There are optional arcs between each pair of facilities in each echelon that represent different transportation channels defined by cost and time parameters. These transportation channels can be seen as transportation modes (rail, truck, airplane, etc.), transportation services from the same company (regular, express, overnight, etc.) or services offered by different companies. Unlike similar models for the same type of problem, the transportation channel selection introduces a tradeoff between cost and time, since a faster delivery service is usually more expensive. The general problem, named the "Capacitated Fixed Cost Facility Location Problem with Transportation Choices" (CFCLP-TC), has two objectives: to minimize the cost and to minimize the transportation time from the plants to the customers. The cost criterion is an aggregated function of fixed and variable costs. The time function represents the maximum time that it may take to transport the product along any path from the plants to the customers. The mathematical model decides distribution center location, transportation channel selection, and transportation flows. The aim in solving the problem is to present the set of non-dominated alternatives to the decision maker. It is therefore treated as a bi-objective mixed-integer program that minimizes the time and cost objectives simultaneously. To solve the CFCLP-TC, several versions of an algorithm implementing the ε-constraint method were developed. These versions were compared among themselves and the best was selected to obtain true efficient sets and bound sets for the instances tested. A limit on the size of solvable instances was identified according to the available computational resources; this limit lies at instance sizes with 255 binary variables and 940 constraints. Several instances below this limit were solved with the ε-constraint based algorithm and their true efficient sets were obtained. For larger instances, a modification of the ε-constraint based algorithm was made to obtain their upper bound sets. Four lower bounding schemes, based on linear relaxations of the mixed-integer program, were also studied. Relaxing the set of variables corresponding to distribution center location resulted in the best lower bound sets. Because of the computational complexity of the CFCLP-TC, a metaheuristic algorithm was developed to solve it. This algorithm uses elements from state-of-the-art metaheuristics for single and multiobjective optimization. The parameters of the algorithm were fixed after some tuning tests. The metaheuristic algorithm was finally tested on instances of small and large sizes. The approximate efficient sets obtained were compared with the true efficient sets for small instances, and with the upper bound sets for large instances. The results indicate an excellent performance of the metaheuristic algorithm, particularly for large instances.
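      A minimal sketch of the ε-constraint idea on a toy set of (cost, time) alternatives: minimize cost subject to time ≤ ε, tightening ε to trace the efficient set. The real method solves the bi-objective mixed-integer program at each ε, which is not reproduced here.

```python
# (cost, time) pairs standing in for feasible network designs (hypothetical).
designs = [(100, 9), (120, 7), (150, 5), (160, 5), (220, 3), (300, 3)]

def epsilon_constraint(solutions):
    """Trace non-dominated (cost, time) points by sweeping the time bound."""
    frontier, eps = [], max(t for _, t in solutions)
    while True:
        feasible = [s for s in solutions if s[1] <= eps]
        if not feasible:
            return frontier
        best = min(feasible)          # min cost; ties broken by min time
        frontier.append(best)
        eps = best[1] - 1             # tighten the time bound and re-solve

print(epsilon_constraint(designs))    # -> [(100, 9), (120, 7), (150, 5), (220, 3)]
```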
    • Caracterización de un prototipo experimental para el depósito físico de vapores y síntesis de películas delgadas de AlN

      Jorge Alberto Acosta Flores (Instituto Tecnológico y de Estudios Superiores de Monterrey, 2007)