• Hybrid Self-Learning Fuzzy PD + I Control of Unknown SISO Linear and Nonlinear Systems

      Santana Blanco, Jesús (Instituto Tecnológico y de Estudios Superiores de Monterrey, 2005-12-01)
      A human being is capable of learning how to control many complex systems without knowing the mathematical models behind them, so there must be some way to imitate that behavior with a machine. This dissertation proposes a novel hybrid self-learning controller capable of learning how to control unknown linear and nonlinear processes, incorporating characteristics of the behavior humans exhibit while learning to control an unknown process. The controller is composed of a fuzzy PD controller plus a conventional I controller, and its gains are tuned by a human-like learning algorithm developed from characteristics observed in actual human operators as they learned to control an unknown process while reaching specified goals for steady-state error (SSE), settling time (Ts), and percentage overshoot (PO). The systems tested were: first- and second-order linear systems, the nonlinear pendulum, and the nonlinear equations of the approximate pendulum, Van der Pol, Rayleigh, and damped Mathieu systems. Analysis and simulation results are presented for all of these systems. More detailed results are provided for a nonlinear pendulum, as a representative of nonlinear systems, and for a second-order linear temperature control system, as a representative of linear systems. The temperature system is used as a benchmark against other controllers in the literature [10] that use the same system, showing that the proposed controller is simpler and achieves superior results. A robustness analysis also demonstrates that the proposed controller maintains acceptable performance under perturbations, noise, and parameter variations.
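      As a rough illustration of the hybrid structure described above (not the dissertation's actual rule base or learning algorithm), the following Python sketch combines a two-input fuzzy PD stage with a conventional integral term; the membership functions, rule table, and gains are placeholder assumptions.

        def tri(x, a, b, c):
            """Triangular membership function on [a, c] peaking at b."""
            return max(min((x - a) / (b - a + 1e-12), (c - x) / (c - b + 1e-12)), 0.0)

        class FuzzyPDplusI:
            def __init__(self, kp=1.0, kd=0.2, ki=0.05):
                self.kp, self.kd, self.ki = kp, kd, ki   # tunable gains (placeholders)
                self.integral = 0.0
                # Singleton rule outputs over (error, d_error) labels N/Z/P -- illustrative only.
                self.rules = {('N', 'N'): -1.0, ('N', 'Z'): -0.5, ('N', 'P'): 0.0,
                              ('Z', 'N'): -0.5, ('Z', 'Z'):  0.0, ('Z', 'P'): 0.5,
                              ('P', 'N'):  0.0, ('P', 'Z'):  0.5, ('P', 'P'): 1.0}

            def _memberships(self, x):
                return {'N': tri(x, -2, -1, 0), 'Z': tri(x, -1, 0, 1), 'P': tri(x, 0, 1, 2)}

            def step(self, error, d_error, dt):
                me = self._memberships(self.kp * error)
                mde = self._memberships(self.kd * d_error)
                num = sum(me[a] * mde[b] * u for (a, b), u in self.rules.items())
                den = sum(me[a] * mde[b] for (a, b), _ in self.rules.items()) + 1e-12
                u_pd = num / den                       # weighted-average defuzzification
                self.integral += self.ki * error * dt  # conventional I term removes SSE
                return u_pd + self.integral

      A self-learning layer, as the abstract describes, would adjust kp, kd, and ki online until the SSE, Ts, and PO goals are met.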
    • The Impact of Statistical Word Alignment Quality and Structure in Phrase Based Statistical Machine Translation

      Guzmán Herrera, Francisco J. (Instituto Tecnológico y de Estudios Superiores de Monterrey, 2011-01-12)
      Statistical Word Alignments represent lexical word-to-word translations between source and target language sentences. They are considered the starting point for many state-of-the-art Statistical Machine Translation (SMT) systems. In phrase-based systems, word alignments are loosely linked to the translation model. Despite the improvements achieved in word alignment quality, there has been only modest improvement in end-to-end translation. Until recently, little or no attention was paid to the structural characteristics of word alignments (e.g. unaligned words) and their impact on later stages of the phrase-based SMT pipeline. A better understanding of the relationship between word alignment and the processes that follow it helps identify the variables across the pipeline that most influence translation performance and that can be controlled by modifying the alignment's characteristics. In this dissertation, we perform an in-depth study of the impact of word alignments at different stages of the phrase-based statistical machine translation pipeline, namely word alignment, phrase extraction, phrase scoring, and decoding. Moreover, we establish a multivariate prediction model for different variables of word alignments, phrase tables, and translation hypotheses. Based on those models, we identify the most important alignment variables and propose two alternatives to provide more control over alignment structure and thus improve SMT. Our results show that incorporating alignment structure into decoding, via alignment gap features, yields significant improvements, especially in situations where translation data is limited. During the development of this dissertation we discovered how different characteristics of the alignment impact Machine Translation. We observed that while good-quality alignments yield good phrase pairs, the consolidation of a translation model depends on the alignment structure, not its quality. Human alignments are denser than their computer-generated counterparts, which tend to be sparser and precision-oriented. Trying to emulate human-like alignment structure resulted in poorer systems, because the resulting translation models tend to be more compact and lack translation options. On the other hand, more translation options, even noisier ones, help improve translation quality. This is because translation does not rely only on the translation model; other factors help filter out noisy and bad translations (e.g. the language model). Lastly, when we provide the decoder with features that help it make "more informed decisions" we observe a clear improvement in translation quality. This was especially true for discriminative alignments, which inherently leave more unaligned words. The effect is most evident in low-resource settings, where larger translation lexicons provide more translation options. Using simple features to help the decoder discriminate among translation hypotheses showed clear and consistent improvements.
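      As a hedged illustration of what an alignment-gap decoder feature might compute (the dissertation's exact feature definition may differ), the sketch below counts unaligned words inside a candidate phrase pair:

        def gap_features(src_span, tgt_span, alignment):
            """Count unaligned (gap) positions inside a candidate phrase pair.

            src_span/tgt_span: (start, end) inclusive word indices.
            alignment: set of (src_idx, tgt_idx) links.
            Returns (src_gaps, tgt_gaps), usable as decoder feature values.
            """
            aligned_src = {i for i, _ in alignment}
            aligned_tgt = {j for _, j in alignment}
            src_gaps = sum(1 for i in range(src_span[0], src_span[1] + 1) if i not in aligned_src)
            tgt_gaps = sum(1 for j in range(tgt_span[0], tgt_span[1] + 1) if j not in aligned_tgt)
            return src_gaps, tgt_gaps

        # Example: one unaligned word on each side of the phrase pair.
        links = {(0, 0), (2, 1)}
        print(gap_features((0, 2), (0, 2), links))  # -> (1, 1)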
    • Implementation of a Two-Photon Michelson Interferometer for Quantum-Optical Coherence Tomography

      López Mago, Dorilián (Instituto Tecnológico y de Estudios Superiores de Monterrey, 2012-05-01)
      Time-domain Optical Coherence Tomography (OCT) is an imaging technique that provides information about the internal structure of a sample. It makes use of classical light in conjunction with conventional interferometers. A quantum version of OCT, called Quantum-Optical Coherence Tomography (QOCT), has been developed in recent years. QOCT uses entangled photon pairs in conjunction with two-photon interferometers. QOCT improves depth resolution and offers more information about the optical properties of the sample. However, the current implementation of QOCT is not competitive with its classical counterpart because of the low efficiency of the sources and detectors required for its implementation. We analyzed the feasibility of QOCT using a Michelson interferometer that can be adapted to the state of the art in entangled photon sources and detectors. Despite its simplicity, no previous implementations of QOCT have used this interferometer. This thesis develops the theory of the two-photon Michelson interferometer applied to QOCT. It describes the elements that characterize the coincidence interferogram and supports the theory with experimental measurements. We found that as long as the spectral bandwidth of the entangled photons is smaller than their central frequency, the Michelson interferometer can be successfully used for QOCT. In addition, we found that the degree of entanglement between the photons can be calculated from the coincidence interferogram. The two-photon Michelson interferometer provides another route to QOCT with the advantages of simplicity, performance, and adaptability. The resolution of the interferometer can be improved using ultrabroadband sources of entangled photons, e.g. photonic fibers. In addition, photon-number-resolving detectors could be studied as a way to remove the coincidence detection currently used for detecting entangled photon pairs.
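      As a back-of-the-envelope companion to the resolution discussion, this sketch evaluates the standard Gaussian-spectrum axial-resolution formula for classical OCT and the commonly cited factor-of-two improvement of QOCT; the source parameters are illustrative assumptions, not measurements from this thesis.

        import math

        def oct_axial_resolution(center_wavelength_m, bandwidth_m):
            """Classical OCT axial resolution for a Gaussian spectrum (standard formula)."""
            return (2 * math.log(2) / math.pi) * center_wavelength_m**2 / bandwidth_m

        lam0, dlam = 812e-9, 20e-9          # example source: 812 nm center, 20 nm FWHM
        dz_classical = oct_axial_resolution(lam0, dlam)
        dz_qoct = dz_classical / 2          # QOCT's dispersion-insensitive factor-of-2 gain
        print(f"classical OCT: {dz_classical*1e6:.1f} um, QOCT: {dz_qoct*1e6:.1f} um")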
    • Implementing an object-oriented method of information systems for CIM to the Mexican industry

      Prieto Magnus, Julián (Instituto Tecnológico y de Estudios Superiores de Monterrey, 1997)
    • On the Task-Driven Generation of Preventive Sensing Plans for Execution of Robotic Assemblies

      Conant Pablos, Santiago E. (Instituto Tecnológico y de Estudios Superiores de Monterrey, 2004-01-12)
      It is well known that success in robotic assembly depends on the correct execution of the sequence of assembly steps established in a plan. In turn, the correct execution of these steps depends on conformance to a series of preconditions and postconditions on the states of the assembly elements, and on the consistent, repeatable, and precise actions of the assembler (for instance, a robotic arm). Unfortunately, the ubiquitous and inherent real-life uncertainty and variation in the work-cell, in the assembly robot calibration, and in the robot actions can produce errors and deviations during the execution of the plan. This dissertation investigates several issues related to the use of geometric information about the models of the component objects of assemblies, and about the process of contact formation among such objects, for tackling the automatic planning of sensing strategies. The studies and experiments conducted during this research have led to novel methods for enabling robots to detect critical errors and deviations from a nominal assembly plan during its execution. Errors are detected before they cause assembly operations to fail, while the objects that would cause a problem are still being manipulated; having control over these objects, commanded adjustment actions are expected to correct the errors. First, a new approach is proposed for determining which assembly tasks require vision and force feedback data to verify their preconditions and the preconditions of future tasks that would be affected by imprecise execution of those tasks. For this, a method is proposed for systematically assigning force-compliance skills for monitoring and controlling the execution of tasks that involve contacts between the object manipulated by the robot arm and the objects that form its direct environmental configuration. A strategy is also developed to deduce the visual sensing requirements for the manipulated object of the current task and the objects that form its environment configuration. This strategy includes a geometric reasoning mechanism that propagates alignment constraints in the form of a dependency graph. This graph encodes the complete set of critical alignment constraints and thus expresses the vision and force sensing requirements for the analyzed assembly plan. Recognizing the importance of a correct environment configuration for the success of a task involving multiple objects, the propagation of critical dependencies makes it possible to anticipate potential problems that could irremediably affect the successful execution of subsequent assembly operations. This propagation scheme is the heart of this dissertation because it provides the basis for the remaining contributions. The approach was extensively tested, executing correctly in all test cases. Next, knowing which tasks require preventive sensing operations, a sensor planning approach is proposed to determine an ordering of potential viewpoints for positioning the camera that implements the feedback operations. The approach does not consider kinematic constraints in the active camera mechanism. The viewpoints are ordered by a measure computed from the intersection of two regions describing the tolerance of tasks to error and the expected uncertainty of an object localization tool. A method is posed to analytically deduce the inequalities that implicitly describe a region of tolerated error. In addition, an algorithm implementing an empirical method to determine the form and orientation of six-dimensional ellipsoids is proposed to model and quantify the uncertainty of the localization tool. It was experimentally shown that this goodness measure is an adequate criterion for ordering the viewpoints, because it agrees with the success ratio of real-life task execution after the visual information is used to adjust the configuration of the manipulated objects. Furthermore, an active vision mechanism is developed and tested to perform visual verification tasks; it allows the camera to move around the assembly scene to collect visual information, and it was used during the experimentation phase. Finally, a method is proposed to construct a complete visual strategy for an assembly plan. This method decides the specific sequence of viewpoints to be used for localizing the objects specified by the visual sensing analyzer. It transforms the problem of deciding a sequence of camera motions into a multi-objective optimization problem solved in two phases: a local phase that reduces the original set of potential viewpoints to small sets with the best predicted success probabilities among the kinematically feasible viewpoints for the active camera, and a global phase that decides a single viewpoint for each object in a task and then stitches them together to form the visual sensing strategy for the assembly plan.
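      The constraint-propagation idea can be pictured as a breadth-first walk over the dependency graph, as in this simplified sketch; the task names, edge semantics, and criticality seeds are illustrative stand-ins for the dissertation's geometric reasoning.

        from collections import defaultdict, deque

        def propagate_sensing_needs(depends_on, initially_critical):
            """Mark every task whose alignment constraints depend on a critical one.

            depends_on: dict mapping a task to the earlier tasks whose alignment it relies on.
            initially_critical: tasks known to need vision/force verification.
            Returns the set of tasks for which preventive sensing should be planned.
            """
            dependents = defaultdict(set)             # invert the dependency edges
            for task, preds in depends_on.items():
                for p in preds:
                    dependents[p].add(task)
            need = set(initially_critical)
            queue = deque(initially_critical)
            while queue:                              # BFS: criticality flows downstream
                for nxt in dependents[queue.popleft()]:
                    if nxt not in need:
                        need.add(nxt)
                        queue.append(nxt)
            return need

        deps = {"insert_peg": {"place_base"}, "mount_cover": {"insert_peg"}}
        print(propagate_sensing_needs(deps, {"place_base"}))
        # -> {'place_base', 'insert_peg', 'mount_cover'}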
    • Innovative Optimal Design Methods

      Moreno Grandas, Diana P. (Instituto Tecnológico y de Estudios Superiores de Monterrey, 2013-12-01)
    • An Integrated Data Model and Web Protocol for Arbitrarily Structured Information

      Álvarez Cavazos, Francisco (Instituto Tecnológico y de Estudios Superiores de Monterrey, 2007-01-12)
      Within the Web's data ecosystem dwell applications that consume and produce information with varying degrees of structure, ranging from highly structured business data to the semistructured or unstructured data found in documents containing significant amounts of text. Current database technology was not designed for the Web and, consequently, database communication protocols, query models, and even data models are inadequate for the demands of "data everywhere." Thus, a technique to uniformly store, search, transport, and update the full variety of information within Web or intranet environments has yet to be designed. The Web context requires the data management community to address: (a) data modeling and basic querying that supports multiple data models to accommodate many types of data sources, (b) powerful search mechanisms that accept keyword queries and select relevant structured sources that may answer them, and (c) the ability to combine answers from structured and unstructured data in a principled way. Consequently, this dissertation constructively designs a technique to store, search, transport, and update unstructured and structured information in Web or intranet-based environments: the Relational-text (RELTEX) protocol. Central to the design of the protocol is an integrated model for structured and unstructured data and its associated declarative language interface, namely the RELTEX model and calculus. The RELTEX model is constructively defined, starting from the relational and information retrieval models and their associated retrieval strategies. The model's data items are tuples with structured "columns" and unstructured "fields" that further allow idiosyncratic schema in the form of "extension fields", which are tuple-specific name/value pairs. This flexibility allows the representation of totally unstructured information, totally structured information, and mixtures of the two, such as tables whose tuples have a varying number of fields over time. The RELTEX calculus extends tuple relational calculus to consider text fields, similarity matches, match ranking, and sort order. Then, building on the formally defined RELTEX data model and calculus, and starting from the architecture of the Web, the RELTEX protocol is defined as a resource-centric protocol to describe and manipulate the data and schema of unstructured and structured data sources. An equivalence mapping between RELTEX and the relational and information retrieval models is provided; the mapping suggests a wide range of applicability for RELTEX, demonstrating the model's value. The RELTEX protocol is distinguished from other techniques for data access and storage on the Web in that (a) it supports structured and unstructured data manipulation and retrieval, (b) it offers operations to describe and manipulate both common and idiosyncratic schema of data items, and (c) it directly federates data items to the Web over a compound key, thus demonstrating novelty and value. The RELTEX protocol, model, and calculus are proven feasible by means of a proof-of-concept implementation. Starting from a motivating scenario, the prototype is used to provide representative examples of data and schema operations.
      Having demonstrated that the RELTEX protocol and model contribute to the data modeling and basic querying challenge posed by the Web, we expect this dissertation to benefit researchers and practitioners alike with a novel, valuable, effective, and feasible technique to store, search, transport, and update unstructured and structured information in the Web environment.
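      A minimal sketch of the data-item shape the model describes: structured columns, unstructured text fields, and tuple-specific extension fields. The class and the naive keyword-match helper are illustrative assumptions, not the formal RELTEX calculus.

        from dataclasses import dataclass, field

        @dataclass
        class ReltexItem:
            columns: dict = field(default_factory=dict)     # schema-shared structured values
            texts: dict = field(default_factory=dict)       # unstructured text fields
            extensions: dict = field(default_factory=dict)  # tuple-specific name/value pairs

            def matches(self, keyword: str) -> bool:
                """Naive similarity match over the unstructured fields."""
                return any(keyword.lower() in t.lower() for t in self.texts.values())

        item = ReltexItem(
            columns={"sku": 4711, "price": 19.90},
            texts={"description": "Stainless steel travel mug, 450 ml"},
            extensions={"engraving": "Happy Birthday"},   # idiosyncratic, tuple-only schema
        )
        print(item.matches("travel mug"))  # -> True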
    • Intelligent Monitoring and Supervisory Control System in Peripheral Milling Process in High Speed Machining

      Vallejo Guevara, Antonio Jr. (Instituto Tecnológico y de Estudios Superiores de Monterrey, 2009-01-11)
      This research addresses a real problem in High Speed Machining (HSM), specifically in the peripheral milling process. Machining processes have grown more complex with HSM because of the high dimensional precision, high surface quality, and minimum cost demanded of the products. The general scope of this research is to design and implement an intelligent monitoring and supervisory control system for the peripheral milling process in HSM. The main objectives are: 1) implement a general model to predict surface roughness considering several aluminium alloys, cutting parameters, geometries, and cutting tools; 2) design and implement a monitoring and diagnosis system for the cutting-tool wear condition during machining; and 3) design and develop an intelligent process planning system, which includes a merit variable to compute the optimal cutting parameters and a decision-making module to recommend actions according to the cutting-tool wear condition. The design and implementation of the system entailed research, exhaustive experiments, and several papers to validate the proposed ideas and algorithms. The main contributions can be summarized as follows. A complete data acquisition system was implemented in a Kondia HS-1000 machining center, with several sensors installed to relate the surface roughness (Ra) and flank wear of the cutting tool to the process state variables; the Mel Frequency Cepstrum Coefficients (MFCC) computed from the process signals were used to model Ra with ANN models. Regarding Ra modelling, the most important factors affecting Ra were deduced by applying a screening factorial design, and Response Surface Methodology (RSM) was applied with excellent results; the models were computed for new, half-new, half-worn, and worn cutting-tool conditions, and multi-sensor data fusion was used to build ANN models, also with excellent results. New ideas based on Hidden Markov Models (HMM) and the MFCC were developed for monitoring and diagnosing the cutting-tool wear condition in peripheral milling under HSM; the system recognizes four wear conditions on-line: new, half-new, half-worn, and worn. Finally, the intelligent monitoring and process planning system (IMPPS) is the main module of the supervisory control system: Genetic Algorithms with the RSM models compute the optimal cutting parameters in pre-process operating mode with excellent results, and a Markov Decision Process in the optimization stage recommends optimal actions for minimizing the operation cost during the production of specific workpieces.
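      A hedged sketch of the MFCC-plus-HMM diagnosis idea: one Gaussian HMM per wear condition, classification by maximum log-likelihood. It assumes the third-party packages librosa (MFCC extraction) and hmmlearn (Gaussian HMMs); signal sources, model sizes, and labels are placeholders.

        import numpy as np
        import librosa
        from hmmlearn.hmm import GaussianHMM

        def mfcc_frames(signal, sr):
            """MFCC feature matrix for one process signal, shape (n_frames, 13)."""
            return librosa.feature.mfcc(y=signal, sr=sr, n_mfcc=13).T

        def train_wear_models(training_signals, sr):
            """training_signals: {'new': [signal, ...], 'half-new': [...], ...}"""
            models = {}
            for label, signals in training_signals.items():
                feats = [mfcc_frames(s, sr) for s in signals]
                X, lengths = np.vstack(feats), [len(f) for f in feats]
                m = GaussianHMM(n_components=3, covariance_type="diag", n_iter=50)
                m.fit(X, lengths)          # one HMM learns one wear condition
                models[label] = m
            return models

        def diagnose(models, signal, sr):
            """Return the wear condition whose HMM best explains the new frames."""
            X = mfcc_frames(signal, sr)
            return max(models, key=lambda label: models[label].score(X))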
    • Intelligent Wheelchair

      Monnard Reguin, David Gregory
      The proposed project is to create a wheelchair with four major features: the first is controlling the chair by moving the eyes, the second is the ability to reproduce prerecorded voice messages, the third is controlling the chair with voice commands, and the last is an avoidance system based on data collected by ultrasonic sensors.
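      A minimal sketch of what the ultrasonic avoidance logic could look like; the sensor layout, thresholds, and command names are assumptions, not the project's actual firmware.

        def avoidance_command(distances_m, stop_m=0.35, slow_m=0.80):
            """Map ultrasonic readings to a drive command (illustrative thresholds).

            distances_m: dict with 'left', 'center', 'right' ranges in meters.
            """
            if min(distances_m.values()) < stop_m:
                return "STOP"                      # anything too close: halt immediately
            if distances_m["center"] < slow_m:     # obstacle ahead: steer to the freer side
                return "VEER_LEFT" if distances_m["left"] > distances_m["right"] else "VEER_RIGHT"
            return "FORWARD"

        print(avoidance_command({"left": 1.2, "center": 0.6, "right": 0.4}))  # -> VEER_LEFT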
    • Large Scale Topic Modeling Using Search Queries: An Information-Theoretic Approach

      Ramírez Rangel, Eduardo H. (Instituto Tecnológico y de Estudios Superiores de Monterrey, 2010-01-12)
      Creating topic models of text collections is an important step towards more adaptive information access and retrieval applications. Such models encode knowledge of the topics discussed in a collection, the documents that belong to each topic, and the semantic similarity of a given pair of topics. Among other things, they can be used to focus or disambiguate search queries and to construct visualizations for navigating the collection. So far, the dominant paradigm for topic modeling has been the Probabilistic Topic Modeling approach, in which topics are represented as probability distributions over terms and documents are assumed to be generated from a mixture of random topics. Although such models are theoretically sound, their high computational complexity makes them difficult to use on very large collections. In this work we propose an alternative topic modeling paradigm based on a simpler representation of topics as freely overlapping clusters of semantically similar documents, which is able to take advantage of highly scalable clustering algorithms. We then propose the Query-based Topic Modeling framework (QTM), an information-theoretic method that assumes the existence of a "golden" set of queries that can capture most of the semantic information of the collection and produce models with maximum semantic coherence. The QTM method uses information-theoretic heuristics to find a set of "topical queries" which are then co-clustered along with the documents of the collection and transformed to produce overlapping document clusters. The QTM framework was designed with scalability in mind and can be executed in parallel on commodity-class machines using the Map-Reduce approach. To compare the QTM results with models generated by other methods, we developed metrics that formalize the notion of semantic coherence using probabilistic concepts and the familiar notions of recall and precision. In contrast to traditional clustering metrics, the proposed metrics are generalized to validate overlapping and potentially incomplete clustering solutions using multi-labeled corpora. We use them to experimentally validate our query-based approach, showing that models produced using selected queries outperform those produced using the collection vocabulary. We also explore the heuristics and settings that determine the performance of QTM and show that the proposed method can produce models of comparable, or even superior, quality to those produced with state-of-the-art probabilistic methods.
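      As a toy illustration of scoring queries with an information-theoretic heuristic (not QTM's actual heuristic), the sketch below ranks candidate queries by the binary entropy of the document split they induce: queries that match almost nothing or almost everything carry little information.

        import math

        def binary_entropy(p):
            if p <= 0.0 or p >= 1.0:
                return 0.0
            return -p * math.log2(p) - (1 - p) * math.log2(1 - p)

        def score_queries(candidate_queries, doc_texts):
            """Rank queries by how informatively they split the collection."""
            scores = {}
            for q in candidate_queries:
                p = sum(q in d for d in doc_texts) / len(doc_texts)
                scores[q] = binary_entropy(p)   # peaks for balanced splits
            return sorted(scores, key=scores.get, reverse=True)

        docs = ["neural topic models", "topic models for search",
                "query logs", "search queries"]
        print(score_queries(["topic", "search", "the"], docs))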
    • Mathematical models of some evolutionary systems under the influence of stochastic factors

      Rodríguez Said, Roberto D. (Instituto Tecnológico y de Estudios Superiores de Monterrey, 2007-01-12)
      As is known, the problem of availability of information is normally addressed using a buffer. In most cases, the effectiveness or reliability of the system must be calculated in order to optimize the amount of stored information given the customers' random requests and the amount of incoming information from the supply line. In this thesis, we consider the case of a single buffer connected to any number of customers with bursty demands. We model the variation of the level of stored information in the buffer as an evolution in a random medium. We assume that the customers can be modeled as semi-Markov stochastic processes, and we use the phase merging algorithm to reduce the evolution process in a semi-Markov medium to an approximate evolution in a Markov medium. We then obtain a general solution for the stationary probability density of the buffer level and general results for the stationary efficiency of the system.
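      A Monte Carlo sketch of the modeled system, assuming two-state (on/off) Markov customers and constant inflow; the rates and the crude efficiency estimate are placeholders, whereas the thesis derives the stationary quantities analytically.

        import random

        def simulate_buffer(steps=100_000, inflow=1.0, capacity=50.0,
                            n_customers=3, p_on=0.1, p_off=0.3, demand=0.6):
            """Buffer fed at a constant rate and drained by bursty on/off customers;
            estimates the fraction of time demand is fully served."""
            level, served = capacity / 2, 0
            states = [False] * n_customers
            for _ in range(steps):
                states = [(random.random() < p_on) if not s else (random.random() > p_off)
                          for s in states]                  # Markov on/off switching
                request = demand * sum(states)
                available = min(request, level + inflow)
                served += available >= request
                level = min(max(level + inflow - available, 0.0), capacity)
            return served / steps

        print(f"estimated efficiency: {simulate_buffer():.3f}")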
    • Methodology Based on the State Transition Concept for Simple Constitutive Modeling of Smart Materials

      Varela Jiménez, Manuel I. (Instituto Tecnológico y de Estudios Superiores de Monterrey, 2011-01-12)
      Smart materials have the capability to sense and respond to environmental stimuli in a predictable and useful manner. Their existence has transformed the paradigm that materials can be used only for structural purposes into the concept that they can also be the basis for actuators or sensors, generating new possibilities for device design. However, the development of these materials also creates the need for new theories and concepts to understand their behavior. This dissertation focuses on the development of a general constitutive model for describing the response of several smart materials by considering that a microstructural change is stimulated in them, such as a state transition that follows a sigmoidal behavior and can be modeled by a proposed expression describing a transition induced by an external factor. The expression is flexible and can take several kinds of external variable as the main parameter inducing the transformation. A methodology is proposed and evaluated for positing a state transition in a smart material and modeling its response to a stimulus through a common mathematical expression that relates the effect of microstructural changes to some variable associated with the material. In this way, the following were studied: 1) the effect of the strain/temperature-induced twinned martensite - detwinned martensite - austenite phase transformation on the stress of a Nickel-Titanium shape memory alloy; 2) the effect of the temperature-induced glassy - active state transition on the stress of shape memory polymers; 3) the effect of the magnetic-field-induced arrangement of iron particles on the shear yield stress of a magnetorheological fluid; and 4) the effect of the electric-field-induced arrangement of ions on the bending of a thin film of electroactive polymer. A constitutive model is proposed for each material, with promising results given the good fit with experimental data and the comparison with other models, although it has limitations: it is unidimensional, considers only one-way behavior of the materials, has been fitted for specific geometries or chemical compositions, and still needs to be generalized. Nevertheless, it can be considered an initial approach toward a general model for smart materials regardless of their atomic structure, chemical bonds, or physical domain, which could be applied to the design of materials and the simulation of their behavior through numerical methods.
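      A minimal numerical sketch of the sigmoidal state-transition idea: a logistic transition fraction driven by an external variable, combined with a rule-of-mixtures property estimate. The logistic form and all parameter values are illustrative assumptions, not the dissertation's fitted expression.

        import numpy as np

        def transition_fraction(x, x0, k):
            """Sigmoidal fraction of transformed phase driven by an external variable x
            (temperature, field, stress...); x0 is the midpoint, k the sharpness."""
            return 1.0 / (1.0 + np.exp(-k * (x - x0)))

        def mixture_property(x, prop_low, prop_high, x0, k):
            """Effective property as a rule of mixtures between the two states."""
            xi = transition_fraction(x, x0, k)
            return (1.0 - xi) * prop_low + xi * prop_high

        # Example: modulus of a shape-memory polymer across its glass transition.
        T = np.linspace(300, 360, 7)                          # K
        E = mixture_property(T, 2.0e9, 5.0e6, 330.0, 0.3)     # Pa, placeholder values
        print(np.round(E / 1e6, 1))                           # MPa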
    • Mixed Oligopoly: analysis of oligopoly models considering social welfare

      Cordero Franco, Alvaro E. (Instituto Tecnológico y de Estudios Superiores de Monterrey, 2009-01-12)
    • Modelación de Interacciones Múltiples en Sistemas de Información Cooperantes

      Camargo Santacruz, Francisco J. (Instituto Tecnológico y de Estudios Superiores de Monterrey, 2001-01-12)
      The dynamic nature of cooperative agent environments makes it difficult to model persistent interactions between agents, highlighting the problems of ambiguity and control in the representation of the interaction mechanism. The difficulty increases when more than two agents are involved in the interaction simultaneously. This problem is one of the most important challenges in research on Cooperative Multiagent Systems (CMAS). Cooperative Information Systems (CIS) can be seen as CMAS; they integrate different types of information systems so that they work cooperatively toward a common goal. These systems are by nature dynamic, and hence one of their main problems is how to model and control multiple simultaneous interactions between agents in a way that is simple for the software engineer. The problem is aggravated because the methods software engineers use to model CIS interactions are not very expressive, even more so when interactions among more than two agents occur simultaneously. This complicates the modeling of these systems and hinders communication between modelers and developers, resulting in high development costs. The main contribution of this thesis is a model based on Coloured Petri Nets (CPN) for expressively modeling simultaneous multiple interactions in Cooperative Information Systems. The model helps represent the dynamics of the system and reduces the difficulty associated with modeling it. The model mainly integrates: a) the basic action cycle known as the "Loop", to represent the system's interactions and model conversations in organizations; b) Coloured Petri Nets, for specifying the interactions represented in the loop and for simulating the system; and c) the communicative acts of the Foundation for Intelligent Physical Agents (FIPA), included in the specification of the agent communication language. The model offers advantages in representing and reasoning about the interaction mechanisms modeled in CIS. To validate the proposed model, two applications are presented, in the domains of Electronic Business (e-business) and Contact Centers respectively, both dynamic environments that require adequate tools to represent and control multiple interactions expressively.
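      A toy sketch of the Coloured-Petri-Net flavor of the model: places hold colored (typed) tokens, and a transition fires only when its guard over the token colors holds, mirroring one step of the conversation loop. The place names, token structure, and guard are illustrative assumptions.

        # Places hold colored tokens (typed messages); this mirrors one "loop"
        # step (request -> promise) of a conversation between agents.
        places = {
            "requests": [("agent_A", "agent_B", "book_room")],   # (from, to, action) tokens
            "promises": [],
        }

        def fire_accept(places, responder):
            """Transition: responder consumes a request addressed to it, emits a promise."""
            for token in list(places["requests"]):
                sender, receiver, action = token
                if receiver == responder:                 # guard on the token's color
                    places["requests"].remove(token)
                    places["promises"].append((receiver, sender, action))
                    return True
            return False                                  # transition not enabled

        fire_accept(places, "agent_B")
        print(places)   # the request token moved, re-colored, to the 'promises' place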
    • Modelación Multiescala del Comportamiento Mecánico de Polímeros Reforzados con Nanotubos de Carbón de Pared Sencilla

      Rosales Torres, Conrado (Instituto Tecnológico y de Estudios Superiores de Monterrey, 2010-01-03)
      In this research work, special emphasis was placed on consolidating a theory that relates the behavior of polymeric materials reinforced with carbon nanotubes from the continuum down to the atomistic level. As expected, theories that apply to the continuum of a material generally cannot perform equally well at nanoscopic scales. It is therefore necessary to turn to the energetic aspects associated with the various physical phenomena that characterize the behavior of these materials. In this sense, a multiscale model was developed that consolidates several physical principles which, together with mechanistic models, characterize the behavior of these materials under various loading states. In particular, the model was calibrated via simple tension tests on materials such as polyethylene (PE), polycarbonate (PC), and acrylonitrile-butadiene-styrene (ABS) reinforced with single-walled carbon nanotubes. Some of the inherent advantages of the model are associated with the small number of parameters it requires to theoretically predict the behavior of the composite in simple tension. Furthermore, with the Mori-Tanaka mean-field theory from micromechanics, it is possible to determine the equivalent stiffness tensor of the composite, considering that the single-walled carbon nanotubes (SWCNTs) may be aligned or randomly oriented. The simplicity of the model, together with its good prediction of experimental results, lays the groundwork for its use in the design and development of structural components for various engineering applications.
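      For reference, a standard statement of the Mori-Tanaka mean-field estimate for the effective stiffness of a two-phase composite (the notation, and the treatment of orientation averaging, may differ from the thesis):

        % Mori-Tanaka effective stiffness: matrix m, inclusions f with volume fraction c_f.
        % A^dil is the dilute (Eshelby) strain-concentration tensor with Eshelby tensor S;
        % angle brackets denote an orientation average over the nanotube distribution
        % (aligned or random).
        \mathbf{C}^{\mathrm{MT}} = \mathbf{C}_m
          + c_f \left\langle (\mathbf{C}_f - \mathbf{C}_m)\,\mathbf{A}^{\mathrm{dil}} \right\rangle
            \left[ (1 - c_f)\,\mathbf{I} + c_f \left\langle \mathbf{A}^{\mathrm{dil}} \right\rangle \right]^{-1},
        \qquad
        \mathbf{A}^{\mathrm{dil}} = \left[ \mathbf{I} + \mathbf{S}\,\mathbf{C}_m^{-1}\,(\mathbf{C}_f - \mathbf{C}_m) \right]^{-1}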
    • Modelación y Análisis de la Respuesta Dinámica del Sistema de Control de Nivel Domo para Calderas Industriales VU 60

      Meléndez Nieto, Juan F. (Instituto Tecnológico y de Estudios Superiores de Monterrey, 1999-05-01)
      The modeling and simulation of a steam generator are based on the description of the physical phenomena with the help of linear or nonlinear differential equations. The physical laws used for modeling are the first and second laws of thermodynamics (evaporator model) and the chemical reactions associated with combustion (simplified combustion model). The control strategy used in the simulator is three-element control for the feedwater flow model, where the drum water level, the feedwater flow into the drum, and the boiler evaporation flow are the three elements of the controller. For combustion control, a simplification of the boiler master control is used, in which a pressure signal is fed back, the percentage of excess oxygen is considered constant, and the air flow is assumed to always be sufficient to burn the fuel. The first step in this work was to scale a model of a 235 MW boiler-turbine system located in Texas [11], modeled with 7 nonlinear differential equations and 27 algebraic equations, down to a model of 3 differential equations and a few algebraic equations. Once this system was modeled, its steady-state operation was simulated at loads of 75, 107, 131, 171, 203, and 235 MW. To test the simulator in transient state, a ramp of 5 MW per minute from 235 to 75 MW was applied and the results were compared graphically against the transient. After these tests on the electricity-generation boiler, the physical parameters of the boiler and its pressure and steam-demand operating range were modified to adapt the model to an industrial boiler with ten times less steam-production capacity. The study was based on the VU-60 industrial boiler manufactured by Cerrey, who provided technical information for two boilers: one operating at a Cervecería Modelo plant in Mexico, and another under construction in Port Dickson, Malaysia. To validate the model, a data acquisition system was installed on the operating boiler, its performance was recorded during several steady states, and load increases and rejections were attempted, though these were limited by the process the system serves. Nevertheless, valuable information about the steam generator was obtained, with which the model was validated both in steady state and in transients for ramps of 3% per minute. Subsequently, the model was fed a new data set to simulate the operation of the Port Dickson plant. A linear superheater model was added, introducing one more algebraic equation into the original model. The resulting model was subjected to the same field tests as the VU-60 II at Cervecería Modelo, plus ramps from 30% to 100% load at 20% per minute and vice versa. In addition, a robustness analysis of the controller was performed based on a factorial design of experiments in which four parameters of the three-element controller were varied and used as input variables: the proportional and integral gains of the level controller, and the proportional and integral gains of the feedwater-flow controller.
      The selected output, or performance index, is the absolute value of the difference between the highest level reached during the transient and the reference level.
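      A compact sketch of the three-element strategy described above: a drum-level PI loop sets a feedwater-flow setpoint, corrected by steam-flow feedforward, and an inner PI loop drives the valve. All gains and signals are placeholders, not the tuned values from the study.

        class PI:
            def __init__(self, kp, ki):
                self.kp, self.ki, self.acc = kp, ki, 0.0
            def step(self, err, dt):
                self.acc += self.ki * err * dt
                return self.kp * err + self.acc

        class ThreeElementLevelControl:
            """Drum-level PI sets a feedwater-flow setpoint (with steam-flow
            feedforward); an inner PI loop drives the feedwater valve."""
            def __init__(self):
                self.level_pi = PI(kp=2.0, ki=0.1)   # placeholder gains
                self.flow_pi = PI(kp=1.0, ki=0.5)
            def step(self, level_sp, level, steam_flow, feedwater_flow, dt):
                flow_sp = steam_flow + self.level_pi.step(level_sp - level, dt)
                return self.flow_pi.step(flow_sp - feedwater_flow, dt)  # valve command

        ctrl = ThreeElementLevelControl()
        print(ctrl.step(level_sp=0.0, level=-0.05, steam_flow=40.0,
                        feedwater_flow=38.0, dt=1.0))

      The performance index mentioned above, |max transient level - reference|, would then be evaluated over a simulated load ramp for each gain combination of the factorial design.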
    • Modelo de Comportamiento Afectivo para Sistemas Tutores Inteligentes

      Hernández Pérez, María Y. (Instituto Tecnológico y de Estudios Superiores de Monterrey, 2008-12-01)
      Emotions have been recognized as a fundamental part of motivation, and motivation as an indispensable component of learning. This document proposes a model of affective behavior for intelligent tutoring systems. The model combines the student's affective and pedagogical state to establish the tutorial actions. In the context of this work, affective behavior has two main functions: 1) inferring the student's affective state, and 2) establishing the optimal tutorial action considering the student's affective and pedagogical state. In this way, the intelligent tutoring system provides students with a tutorial action suited to their affective and pedagogical state. Our proposal for inferring the affective state is based on the OCC cognitive model of emotions. According to that model, goals are fundamental to establishing the affective state. In this work, goals are inferred on the basis of the five-factor model of personality. The following parameters are used to determine the student's affective state: 1) the student's personality, 2) the student's pedagogical state, 3) the goals, and 4) the tutorial situation. The affective student model is represented by a dynamic Bayesian network. Bayesian networks are used because the process of affective student modeling involves uncertainty, and Bayesian networks are a robust mechanism for dealing with uncertainty; they also allow modeling the dynamic nature of the student's affective state. Once the affective student model has been established, the tutor has to respond according to the affective state. To model the tutor's decisions, decision theory is used, considering a balance between learning and affective state. The affective tutor model is represented as a dynamic decision network. The Bayesian network implicit in the decision network predicts the influence of the tutorial actions on the student's affective and pedagogical state given the current affective and pedagogical state. This prediction is used to establish the utility of each tutorial action for the student's affect and learning, selecting the pedagogical and affective actions with the highest utility. The utility of the tutorial actions is obtained from the tutor's preferences, which in turn are based on the experience of a group of teachers. Two studies with teachers were carried out to validate our assumptions and to establish the affective tutor model. The model was evaluated in two test domains: an educational game for learning number factorization and an intelligent tutoring system for mobile robotics. The evaluation results are encouraging and show that the affective behavior model works for students whose profile suits the intelligent tutoring system. The model is illustrated with these two test domains; however, it is generic and can be applied in any learning environment.
      The main contributions of this research are: 1) a general architecture for an affective intelligent tutoring system; 2) a structure for the affective action as part of the tutorial action, where the tutorial action is composed of an affective action and a pedagogical action; 3) a generic affective behavior model that can be integrated into any intelligent tutoring system; and 4) affective knowledge based on teachers' experience, which can be used to design further studies and gain deeper insight into how teachers help students learn.
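      A toy sketch of the decision-theoretic selection step: each tutorial action's expected utility trades off predicted learning against predicted affect. The outcome probabilities, utilities, and weights are invented for illustration; the thesis derives them from a dynamic decision network and teacher studies.

        def best_tutorial_action(actions, p_outcome, utility, w_learning=0.6, w_affect=0.4):
            """Pick the tutorial action with maximum expected utility.

            p_outcome[action] -> list of (probability, learning_gain, affect_gain).
            """
            def expected_utility(a):
                return sum(p * (w_learning * utility(l) + w_affect * utility(f))
                           for p, l, f in p_outcome[a])
            return max(actions, key=expected_utility)

        # Toy model: a hint vs. a challenge for a frustrated student.
        p = {
            "give_hint": [(0.8, 0.3, 0.6), (0.2, 0.1, 0.2)],
            "challenge": [(0.5, 0.7, 0.1), (0.5, 0.0, -0.4)],
        }
        print(best_tutorial_action(["give_hint", "challenge"], p, utility=lambda x: x))
        # -> give_hint (higher expected utility for this affective state)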
    • A Multiagent Approach to Outbound Intrusion Detection

      Mandujano Vergara, Salvador (Instituto Tecnológico y de Estudios Superiores de Monterrey, 2004-01-12)
      A Multiagent Approach to Outbound Intrusion Detection. Ph.D. dissertation by Salvador Mandujano Vergara, Instituto Tecnológico y de Estudios Superiores de Monterrey. Advisor: Prof. Arturo Galván. December 2004. This is a dissertation on the topic of intrusion detection. It supports the philosophy of system vigilance by exploring the concept of outbound intrusion detection, which is concerned with the identification and collection of evidence that helps prove local resources are being used to compromise external systems. We discuss the motivation behind the approach, explain the need for splitting the scope of intrusion detection into sub-problems, and present trends in computer security that reveal basic design considerations for modern information security tools. We propose a multiagent architecture for outbound intrusion detection supported by an ontology. Groups of agents collectively monitor outbound network traffic and local activity in order to identify references to neighboring systems that may indicate a compromise attempt. We organize agents into sub-environments called agent cells that are connected to each other in a non-hierarchical fashion. Different classes of agents and cells compose the system, which performs attack modeling with multiple concurrent agents. Detection cells implement independent misuse-based intrusion strategies whose output is systematically fed to correlation cells capable of more accurate diagnosis. We present an attack-source-centric ontology that extends previous work in the area. It enables message interpretation and enhanced agent communication within the architecture, simplifying system maintenance and facilitating the integration of new components. We describe the implementation of the proposed architecture in the FROID prototype as a proof of concept. This is a misuse-based intrusion detection system built with agent and semantic-web open-source technology whose particular focus is the identification of automated remote attack tools. It performs signature generation, matching, and correlation, and supports a signature deployment mechanism over the Internet. We introduce a similarity matching method that improves the performance of existing algorithms by leveraging entropy and frequency properties of the input, thereby reducing search time. We link detection with incident response by achieving low false-alarm rates that allow us to study local and external reaction methods to outbound intrusion events. We also present a component of the architecture that traces interactive sessions as a way of identifying the root location of a security event. We describe the experimental design and report the results obtained with the prototype, which show the feasibility of the approach as an alternative way of containing the impact of security incidents through the integration of a mesh of monitoring agents.
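      A small sketch of the entropy-based idea behind the similarity-matching speedup: use cheap byte-entropy properties to prefilter signature candidates before any expensive matching. The filter, tolerance, and sample signatures are illustrative assumptions, not FROID's actual algorithm.

        import math
        from collections import Counter

        def shannon_entropy(data: bytes) -> float:
            """Byte-level Shannon entropy in bits per byte."""
            counts = Counter(data)
            n = len(data)
            return -sum(c / n * math.log2(c / n) for c in counts.values())

        def candidate_signatures(payload: bytes, signatures, tol=0.5):
            """Cheap prefilter: only signatures whose entropy is close to the
            payload's are handed to the expensive matcher, shrinking the search."""
            h = shannon_entropy(payload)
            return [s for s in signatures if abs(shannon_entropy(s) - h) <= tol]

        sigs = [b"GET /cgi-bin/phf?", b"\x90\x90\x90\x90\x90\x90", b"uname -a; id"]
        print(candidate_signatures(b"id; uname -a", sigs))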