    • Effects of Sorghum Digestibility, Endosperm Texture and Type, Phenolic Profile on Postharvest Resistance to Maize Weevil and Fuel Ethanol Production

      Chuck Hernández, Cristina E. (Instituto Tecnológico y de Estudios Superiores de Monterrey, 2012-01-12)
    • Effects of sound on growth, viability, protein production yield and gene expression in Escherichia coli.

      Acuña González, Edgar
      The effect of sound on biological systems has been explored previously, mainly in relation to its use to increase agricultural production. However, the potential of this phenomenon has not been fully exploited because existing studies have focused on only one or two sound elements when characterizing their biological effects, so the effects of other sound wave elements have been overlooked. In the present work, the effects of frequency, amplitude, duration, intermittence and pulse, individually and in combination, were characterized in Escherichia coli through measurements of biomass, viability and recombinant protein production yield. The frequency and duration treatments increased biomass concentration by 19% and 44%, respectively, at 24 h; however, high variability was observed in both treatments. The amplitude treatment had a significant effect on viability, doubling the duration of the exponential phase. The intermittence treatment increased recombinant protein yield 1.5-fold without a significant contribution from the other sound elements. Based on this observation, the effect that intermittence could have on the upregulation of genes involved in recombinant protein production was investigated. The RNA of three candidate genes (BarA, CheA and CpxR) was quantified in the presence of an intermittent sound. All genes were upregulated (1.38-, 2.66- and 1.33-fold, respectively); however, only the upregulation related to chemotaxis (CheA) was statistically significant. Finally, an omnidirectional sound source was adapted to small-volume commercial bioreactors to characterize the distribution of sound within the container. It was determined that implementing sound induction in a commercial bioreactor is feasible, although limited to specific frequencies close to 500 and 1000 Hz. The integral nature of this characterization provides a deeper understanding of bacterial systems and also offers a way to explore its application for industrial purposes.
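      A note on the reported fold changes: the abstract gives 1.38-, 2.66- and 1.33-fold upregulation for BarA, CheA and CpxR but does not state how relative expression was computed. A minimal Python sketch, assuming the common 2^-ΔΔCt relative-quantification method and entirely invented Ct values, would be:

        # Hypothetical sketch: relative expression by the 2^-ΔΔCt method (an assumption;
        # the abstract does not state how the reported fold changes were computed).
        def fold_change(ct_gene_treated, ct_ref_treated, ct_gene_control, ct_ref_control):
            """Relative expression of a gene under the sound treatment vs. control."""
            delta_ct_treated = ct_gene_treated - ct_ref_treated   # normalize to a reference gene
            delta_ct_control = ct_gene_control - ct_ref_control
            delta_delta_ct = delta_ct_treated - delta_ct_control
            return 2.0 ** (-delta_delta_ct)

        # Invented Ct values, chosen only to illustrate the arithmetic:
        print(fold_change(22.3, 16.0, 23.7, 16.0))   # ~2.6-fold upregulation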
    • Elaboración de sistemas multicapas a partir del diseño, construcción y caracterización de un reactor de deposición física de vapor (PVD)

      Rojo Valerio, Alejandro (Instituto Tecnológico y de Estudios Superiores de Monterrey, 2007-11-01)
      This thesis presents the development of a CrN/AlN multilayer system on H13 steel produced by physical vapor deposition, with the goal of improving the tribological properties of this hot-work tool steel. H13 steel is mainly used for die-casting, forging and extrusion dies, where the principal tribological problems arise from thermal fatigue, erosion and corrosion. To produce these multilayer coatings, it was necessary to design, build and characterize a reactor for physical vapor deposition (PVD), using unbalanced magnetrons to generate the multilayer system. The first part of the thesis therefore covers the design and construction of the PVD reactor, divided into three main systems: 1) the gas extraction and supply system, 2) the discharge and coating generation system, and 3) the plasma analysis system. The main components of these systems are explained in detail in the thesis, a distinctive design feature of this PVD reactor being the nitrogen supply close to the targets. The operation of the reactor was then characterized through plasma analysis with a Langmuir probe and by optical emission spectrometry (OES), which revealed a correlation between the plasma analysis, the PVD parameters and the operation of the reactor through its systems. To verify the correct functioning of the PVD reactor and the production of the CrN/AlN multilayer system on H13, the coatings were characterized by scanning electron microscopy with energy-dispersive spectroscopy (SEM-EDS), high-resolution SEM and X-ray diffraction (XRD). The results of these analyses were related to the operating parameters of the PVD reactor, and an interesting feature was found when nitrogen is supplied close to the targets to assist the formation of the desired nitrides. The final part of the work reviews the objectives pursued within the thesis with respect to its different sections: the CrN/AlN multilayer coatings on H13 were successfully produced and improved the targeted tribological properties, and the PVD reactor designed and built within this thesis produced the intended multilayer system. The reactor is also versatile, allowing different power sources to be used and single layers, multilayer systems and multicomponent coating systems to be produced with the same equipment. As a general conclusion, the analyses of the plasma characterization and of the products presented in this thesis made it possible to correlate the coating process, the structure of the layers and the physical and mechanical properties that can be achieved in them.
That is, relationships were found between the process, the products and the properties of those products.
    • Emotional Domotics: acquisition of an equation for the correlation of emotional states and environmental variables through the facial expressions analysis of the user

      Navarro Tuch, Sergio Alberto
      Emotional domotics, a concept developed by our research team, seeks to integrate the subject or user of an inhabitable space as the central element for the modulation and control of the environmental variables in a home automation implementation. This research works with the influence of the environment on the subject's emotional and physiological state, presenting an approach to analyzing the subject when light hue, temperature, and humidity are varied. The first experimental results led to the finding of the emotional response time dynamics. These dynamics were important for the further design and implementation of control loops in home automation systems for emotion modulation. Throughout this document, the details and progress of the research in emotional domotics, aimed at developing a control algorithm for living spaces based on the user's emotional state, are illustrated and detailed. The project is centered on domotics (home automation) systems, that is, sets of elements installed, interconnected and controlled by a computer system. The document first introduces the core of the investigation, gives a general preview, and describes the experiments conducted with light hue variation. After the first experiments, which led to the emotional response time dynamics, further research was carried out to acquire and communicate with the control system and to process and recover the physiological variables. The final sections present an experiment in which temperature, humidity and light intensity were varied together under a more complete testing methodology, leading to the final correlation equations for each of the five basic emotions selected. These equations may allow us to propose an initial plant model for a control system to be developed by future researchers.
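      The abstract does not reproduce the correlation equations themselves. As an illustration of the kind of fit involved, the following Python sketch estimates one hypothetical linear equation relating an emotion score to temperature, humidity and light intensity; all variable names and numbers are invented for the example:

        # Hypothetical sketch: fitting one linear "correlation equation" of the kind the
        # thesis describes. The actual model form and coefficients are not given here.
        import numpy as np

        # toy observations: temperature [°C], relative humidity [%], light intensity [lux]
        X = np.array([[22.0, 40.0, 300.0],
                      [26.0, 55.0, 500.0],
                      [30.0, 60.0, 800.0],
                      [24.0, 45.0, 400.0]])
        happiness = np.array([0.62, 0.55, 0.40, 0.58])   # score from facial-expression analysis

        A = np.column_stack([X, np.ones(len(X))])        # add an intercept term
        coef, *_ = np.linalg.lstsq(A, happiness, rcond=None)
        print("happiness ≈ %.3f*T + %.3f*H + %.4f*L + %.2f" % tuple(coef))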
    • Enabling Intelligent Organizations: An Electronic Institutions Approach for Building Agent Oriented Information Systems

      Robles Pompa, Armando (Instituto Tecnológico y de Estudios Superiores de Monterrey, 2008-01-07)
      In this thesis, we describe a framework to build large information systems that support the operation of enterprises. We base our framework on the application of agent technologies and the concept of Electronic Institutions for the design and development of Institutional Agent Oriented Information Systems. This framework is based on an "institutional perspective" that considers an organization as a group of people who use Information Technologies (IT) in order to better achieve some shared objectives. For controlling the interaction between human activities and IT resources, we decided: i) to use the concept of "agent" to represent, in the computational world, human participation and the availability of IT resources such as business processes and databases; ii) to use the concept of workflows to control the interaction between agents; and iii) to use the concept of Electronic Institutions to capture the way an organization works and to implement workflows in the intended institutional perspective. Using electronic institution theory, we model the behavior of the real organization through its context and procedural rules. The electronic institution produces an automated version of this model that is the input to the computational world. The computational-world interpretation of this model implements an Intelligent Organization, specifying in what order and subject to what conditions the intervening agents should interact in the specified context. We have built and deployed the framework, consisting of organizational middleware and domain agents, and we demonstrated the viability of our approach by using our ideas, concepts and framework in a world-class information system for the management and operation of hotels.
    • Engineering mammalian-specific post-translational modifications in plant-derived proteins: phosphorylation and Mucin-type O-glycosylation as a challenge.

      Ramírez-Alanis, Israel A.
      Expression of economically relevant recombinant proteins in alternative expression platforms, especially plant expression platforms, has gained significant interest in recent years due to the possibility of reducing production costs or improving product quality. Among the different advantages that plants can offer for the production of recombinant proteins, the capability to perform post-translational modifications such as protein glycosylation and phosphorylation is a crucial one, since these modifications affect the functionality, stability and activity of pharmaceutical proteins. In this dissertation, the pharmaceutical glycoprotein human Granulocyte-Colony Stimulating Factor is transiently expressed in N. benthamiana as several protein versions targeted to different compartments (apoplast, cytoplasm and as protein bodies), offering an alternative to consider for the production of this protein. Furthermore, the glycoprotein was subjected to its native GalNAc O-glycosylation by co-expressing the pharmaceutical together with the enzymes responsible for such glycosylation. In the case of phosphoproteins, the bovine β- and κ-caseins and their specific kinase, bovine Fam20C, were also expressed for the first time in N. benthamiana plants to assess the feasibility of controlling their phosphorylation pattern, which could be considered for the generation of soybean transgenic lines enriched with such nutraceutical and nutritional phosphoproteins.
    • Estrategias de movimiento para la localización y construcción de mapas con múltiples robots móviles en interiores

      Muñoz Gómez, Lourdes (Instituto Tecnológico y de Estudios Superiores de Monterrey, 2007-01-12)
    • Estrategias para la Purificación de Proteínas Pegiladas Utilizando Cromatografía de Interacción Hidrofóbica: Ribonucleasa A como Modelo de Estudio

      Mayolo Deloisa, Karla P. (Instituto Tecnológico y de Estudios Superiores de Monterrey, 2012-01-05)
      PEGylation is the covalent attachment of one or more poly(ethylene glycol) (PEG) chains to a protein. This technique has been used to improve the physicochemical properties of several proteins used as therapeutic drugs. During the PEGylation reaction, different bioconjugates are formed, varying in the number of PEG chains added and the attachment site. The purification of PEGylated proteins consists of removing all species that are not part of the product of interest, which involves two main challenges: 1) the separation of the PEGylated proteins from the rest of the reaction products, and 2) the sub-fractionation of the PEGylated proteins according to the degree of PEGylation and the positional isomers. Chromatographic methods have frequently been used to resolve PEGylation reaction mixtures; size exclusion chromatography (SEC) and ion exchange chromatography (IEC) are the most widely used. Comparatively little work has been done to explore other techniques such as hydrophobic interaction chromatography (HIC). On the other hand, a detailed review of the literature shows that non-chromatographic techniques (ultrafiltration, aqueous two-phase systems, electrophoresis, etc.) are used mainly for the characterization of PEGylation reaction products. Within this context, this work presents a study of the separation of the products of the PEGylation reaction of RNase A using different conditions and types of HIC resins. It is shown that a slightly hydrophobic support, such as CH Sepharose 4B coated with Tris, can be used as an alternative for the separation of the PEGylated proteins from the native protein. Additionally, the PEGylation reaction products were separated using three resins with different degrees of hydrophobicity: butyl, octyl and phenyl Sepharose. The effects of resin type, salt type and concentration (ammonium sulfate and NaCl) and gradient length on the separation process were evaluated. Purity and yield were calculated using the plate model. Under all the conditions analyzed, the native protein is completely separated from the PEGylated species. The best conditions for the purification of monoPEGylated RNase A are obtained with the butyl Sepharose resin, 1 M ammonium sulfate and an elution gradient of 35 CVs, with which it is possible to obtain a yield of 85% and a purity of 97%. This process represents a viable alternative for the separation of PEGylated proteins.
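      For reference, the reported 85% yield and 97% purity are ordinary recovery and purity fractions for the pooled monoPEGylated peak; the thesis obtains them through a plate model of the HIC column, which is not reproduced here. A minimal Python sketch with illustrative numbers chosen only to match the reported figures:

        # Hypothetical sketch: yield and purity of the monoPEGylated fraction from pooled
        # chromatography fractions (illustrative amounts; the plate-model column simulation
        # used in the thesis is not reproduced here).
        def yield_and_purity(mono_in_pool, mono_loaded, total_protein_in_pool):
            recovery = mono_in_pool / mono_loaded          # fraction of the target recovered
            purity = mono_in_pool / total_protein_in_pool  # fraction of the pool that is target
            return recovery, purity

        r, p = yield_and_purity(mono_in_pool=0.85, mono_loaded=1.0, total_protein_in_pool=0.88)
        print(f"yield = {r:.0%}, purity = {p:.0%}")   # ~85% and ~97%, as reported for butyl Sepharose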
    • Evaluation of Hydrogel Materials for Insulin Delivery in Closed Loop Treatment of Diabetes Mellitus

      Sánchez Chávez, Irma Y. (Instituto Tecnológico y de Estudios Superiores de Monterrey, 2008-01-01)
      The recovery of diminished or lost regulatory functions of physiological systems drives important research efforts in biomaterials and in modeling and control engineering. Special interest is paid to diabetes mellitus because of its epidemic dimensions. Hydrogels provide the multifunctionality of smart materials and applicability to medical regulatory systems, which is evaluated in this dissertation. The polymeric matrix of a hydrogel experiences reversible changes in volume in response to the pH of the environment, which depends on the presence of key metabolites in a physiological medium. The hydrogel swells due to internal repulsive electrostatic forces, opening the matrix and releasing a preloaded drug; the contracted state of the hydrogel hinders the diffusion of the drug out of the polymer. In this work, poly(methacrylic acid-graft-ethylene glycol), P(MAA-g-EG), hydrogel membranes that incorporate glucose oxidase are used for insulin delivery. These glucose-sensitive membranes are characterized and modeled for the closed-loop treatment of type I diabetes mellitus. A physiological compartmental model is extended to represent the treatment system of a diabetic patient. Physical parameters of the P(MAA-g-EG) hydrogel material are obtained from experimental characterization and used as a basis to describe anionic and cationic hydrogels. The performance of the system closed by a hydrogel-based device is explored and compared to the dynamic behavior of a conventional scheme with an explicit controller element. A control algorithm for optimal insulin delivery in a type I diabetic patient is presented based on linear quadratic control theory. The glucose-insulin dynamics are first represented by a linear model whose state variables are the glucose and insulin concentrations in the blood. These variables allow the formulation of an appropriate cost function for diabetes treatment in terms of the deviation from the normal glucose level and the dosage of exogenous insulin. The optimal control law is computed from this cost function under both servocontrol and regulatory approaches. Superior robustness of the regulatory control design is shown under random variations of the parameters of the linear physiological model. Further evaluation of the regulatory controller is carried out with a high-order nonlinear human glucose-insulin model. The control system performance can be improved by adjusting the weighting factors of the optimization problem according to the patient's needs. The optimal controller produces a versatile insulin release profile in response to variations in blood glucose concentration. Simulations demonstrate limitations in the range of swelling and contraction of hydrogels in a physiological environment due to factors such as the continuous presence of glucose in the blood, the buffer characteristics of physiological fluids and the Donnan equilibrium effect. Results show that insulin loading efficiency is critical for the long-term service of a hydrogel-based device, while delivery by a diffusion mechanism is convenient since it allows a basal insulin supply. The evaluation of hydrogel macrosystems prompts consideration of the detected pros and cons in hydrogel microsystems, as well as in composite systems that may combine different materials and structures.
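      The abstract describes a linear quadratic controller designed on a two-state linear glucose-insulin model. A minimal Python sketch of that kind of LQR design is given below; the model matrices and weights are placeholders, not the values identified in the dissertation:

        # Hypothetical sketch of an LQR design on a two-state linear glucose-insulin model
        # (glucose and insulin deviations from basal, exogenous insulin as input).
        import numpy as np
        from scipy.linalg import solve_continuous_are

        A = np.array([[-0.02, -0.5],    # d(glucose)/dt: glucose decays and is cleared by insulin
                      [ 0.00, -0.1]])   # d(insulin)/dt: first-order insulin decay
        B = np.array([[0.0],
                      [1.0]])           # insulin infusion enters the insulin state
        Q = np.diag([10.0, 0.1])        # penalize glucose deviation more than insulin deviation
        R = np.array([[1.0]])           # penalize exogenous insulin dosage

        P = solve_continuous_are(A, B, Q, R)
        K = np.linalg.inv(R) @ B.T @ P  # optimal state-feedback gain: u = -K x
        print("LQR gain:", K)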
    • Experimental Investigation of Textile Composites Strength Subject to Biaxial Tensile Loads

      Arellano Escárpita, David A. (Instituto Tecnológico y de Estudios Superiores de Monterrey, 2011-01-05)
      Engineering textile composites are built of a polymeric resin matrix reinforced by a woven fabric, commonly glass, Kevlar or carbon fibres. The woven architecture confers multidirectional reinforcement, while the undulating nature of the fibres also provides a certain degree of out-of-plane reinforcement and good impact absorption; furthermore, fibre entanglement provides cohesion to the fabric and makes mould placement an easy task, which is advantageous for reducing production times. However, the complexity of the textile composite microstructure, compared to that of unidirectional composites, makes mechanical characterization and design a challenging task, which often relies on well-known failure criteria such as maximum stress, maximum strain and Tsai-Wu quadratic interaction to predict final failure. Despite their ample use, none of the aforementioned criteria has been developed specifically for textile composites, which has led to the use of high safety factors in critical structural applications to overcome the associated uncertainties. In view of the lack of consensus on accurate strength prediction, more experimental data, better testing methods and properly designed specimens are needed to generate reliable biaxial strength models. These arguments provide the motivation for this thesis, which presents the development of an improved cruciform specimen suitable for biaxial tensile strength characterization. A glass-epoxy plain weave bidirectional textile composite is selected as the study case, being representative of materials used in many industrial applications. The developed cruciform specimen is capable of generating a very homogeneous biaxial strain field in a wide gauge zone while minimizing stress concentrations elsewhere, thus preventing premature failure outside the biaxially loaded area. Seeking to avoid in-situ effects and other multilayer-related uncertainties, the specimen is designed to have a single-layer gauge zone. This is achieved by a novel manufacturing process, also developed in this research, which avoids most drawbacks found in typical procedures such as milling. Once the suitability of the specimen was demonstrated, an original biaxial testing machine was designed, built, instrumented and calibrated to apply biaxial loads; the apparatus included a high-definition video recorder to capture images for digital image correlation strain measurement. An experimental test program was then conducted to generate a biaxial tensile failure envelope in the strain space. Based on the experimental results, a phenomenological failure criterion was developed from physical textile parameters such as the number of layers and unit cell dimensions. The failure envelope predicted by this criterion shows very good agreement with the experimental data.
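      For context, the Tsai-Wu quadratic interaction criterion mentioned above (one of the general-purpose criteria the thesis argues is insufficient for textiles) can be evaluated as in the following Python sketch; the strengths used are illustrative, not measurements from the thesis:

        # Sketch of the classical Tsai-Wu plane-stress check (illustrative strengths only).
        def tsai_wu_index(s1, s2, t12, Xt, Xc, Yt, Yc, S):
            F1  = 1.0/Xt - 1.0/Xc
            F2  = 1.0/Yt - 1.0/Yc
            F11 = 1.0/(Xt*Xc)
            F22 = 1.0/(Yt*Yc)
            F66 = 1.0/S**2
            F12 = -0.5*(F11*F22)**0.5          # common default for the interaction term
            return F1*s1 + F2*s2 + F11*s1**2 + F22*s2**2 + F66*t12**2 + 2*F12*s1*s2

        # failure is predicted when the index reaches 1 (stresses and strengths in MPa)
        print(tsai_wu_index(s1=300, s2=150, t12=30, Xt=400, Xc=350, Yt=380, Yc=340, S=80))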
    • Experimental techniques for optical micromanipulation

      López Mariscal, Carlos (Instituto Tecnológico y de Estudios Superiores de Monterrey, 2006-11-01)
      A set of experiments aimed at observing specific aspects of optical trapping and micromanipulation of particles is described. Extensive use of novel optical wavefields is made, while the potential applications of each experiment collected in this work are emphasized.
    • Exploring Hyper-Heuristic Approaches for Solving Constraint Satisfaction Problems

      Ortiz Bayliss, José C. (Instituto Tecnológico y de Estudios Superiores de Monterrey, 2011-01-12)
      This dissertation is submitted to the Graduate Programs in Engineering and Information Technologies in partial fulfillment of the requirements for the degree of Doctor of Philosophy in Information Technologies and Communications, with a major in Intelligent Systems. This document describes and analyses the variable and value ordering problem in Constraint Satisfaction Problems (CSP), and proposes novel techniques to generate hyper-heuristics for this problem. Hyper-heuristics are methodologies that selectively apply low-level heuristics according to the features of the instance at hand, combining the strengths of these heuristics to achieve more general solution methods. The objective of this dissertation is to contribute to the knowledge about variable and value ordering heuristics and to describe new techniques for producing hyper-heuristics that perform consistently well on a wide range of instances when compared against those ordering heuristics. The CSP is a fundamental problem in Artificial Intelligence. It has many immediate practical applications, including vision, language comprehension, scene labelling, knowledge representation, scheduling and diagnosis. The CSP is, in general, computationally intractable. Stated as a classic search problem, every CSP can be solved by going through a search tree associated with the instance, in which every variable represents a node. Every time a variable is instantiated, the constraints must be checked to verify that none of them is violated. When an assignment conflicts with one or more constraints, the instantiation must be undone and another value must be considered for that variable. If no other values are available, the value of a previously instantiated variable must be changed. Thus, when solving a CSP, the order in which the variables and their values are tried affects the complexity of the search. Solving a CSP therefore requires an efficient strategy to select the next variable, that is, a way to assign priorities and decide which variable will be instantiated before the others, and then to decide which value to use for it. Because an exhaustive search is impossible due to the exponential growth with the number of variables, the search is usually guided by heuristics. Many heuristics exist to decide the order in which the variables and their values should be tried, but they have proved to be very specific and work well only on instances with specific features. In this dissertation we are interested in developing a solution model that is able to generate methods showing good performance on different CSP instances. Because hyper-heuristics are able to adapt to different problems or instances by dynamically choosing between low-level heuristics during the search, they seem suitable for achieving this objective. These hyper-heuristics should be able to give good-quality results for a wide set of different CSP instances; in other words, we are interested in developing a more flexible solution model than those presented in previous works. These hyper-heuristics can be produced through many strategies. We want to explore some of them and analyse their performance, in order to decide which ones are more suitable than others according to some properties of the instances and the current needs in time and quality. In this dissertation, three approaches were used to generate the hyper-heuristics.
The first approach uses a decision matrix hyper-heuristic, which contains information about the low-level heuristic to apply given certain features of the instances being solved. This approach is limited in scope because it was designed to work with a small set of features, but it provided good results. Later, we studied an evolutionary approach to generate hyper-heuristics. This model is more general than the decision matrix approach, and experiments suggest that it produces the highest-quality hyper-heuristics among the three models described in this document. Nevertheless, the evolutionary approach requires a significant number of additional operations with respect to the other models to produce one hyper-heuristic. Finally, a neural network approach is introduced. The running time of this model showed that the approach is effective in producing good-quality hyper-heuristics in a reasonable time. The general idea of this investigation is to provide a better understanding of the variable and value ordering heuristics for CSP and to provide various models for hyper-heuristic generation for variable and value ordering within CSP. All the models described in this investigation generate hyper-heuristics that can be applied to a wider range of instances than the simple heuristics and still achieve acceptable results with respect to time and quality.
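      As an illustration of the first (decision-matrix) approach, the following Python sketch maps coarse instance features to a low-level variable-ordering heuristic; the feature bands, the matrix contents and the heuristic set are invented for the example and are not the ones used in the dissertation:

        # Hypothetical sketch of a decision-matrix hyper-heuristic: instance features
        # (here, constraint density and tightness bands) select a low-level heuristic.
        MIN_REMAINING_VALUES = "MRV"     # choose the variable with fewest consistent values
        MAX_DEGREE = "DEG"               # choose the variable involved in most constraints

        DECISION_MATRIX = {              # (density band, tightness band) -> heuristic
            ("low",  "low"):  MAX_DEGREE,
            ("low",  "high"): MIN_REMAINING_VALUES,
            ("high", "low"):  MIN_REMAINING_VALUES,
            ("high", "high"): MAX_DEGREE,
        }

        def choose_heuristic(density, tightness):
            band = lambda x: "high" if x >= 0.5 else "low"
            return DECISION_MATRIX[(band(density), band(tightness))]

        print(choose_heuristic(density=0.3, tightness=0.7))   # -> "MRV" for this toy matrix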
    • Flow Stress Model for Titanium Alloy Ti-6Al-4V in Machining Operations

      Martínez López, Alejandro (Instituto Tecnológico y de Estudios Superiores de Monterrey, 2009-01-05)
      Machining of titanium alloys is widely used in high-value-added industries such as aerospace and medical devices. In this research, an extensive literature review was conducted on experimental and simulation investigations of Ti-6Al-4V machining. Using the findings of the review and applying a novel experimental technique (slot-milling test), an approach to determine the flow stress behavior for the Finite Element Modeling (FEM) of titanium machining was developed and implemented. The proposed model is evaluated using experimental data from the literature and from slot-milling tests conducted during this research. The proposed flow stress model for Ti-6Al-4V shows good prediction capabilities with regard to chip morphology and cutting forces. The typical serrated chip found in titanium machining is reproduced in this research through FEM simulation without the need for a damage criterion; this phenomenon is reproduced through the adiabatic softening captured by the developed constitutive model. The proposed flow stress model is based on a Johnson-Cook formulation, modified to use only four calibration parameters. Based on these results, FEM simulation is an effective tool for modeling titanium (Ti-6Al-4V) machining and minimizing the use of costly experimentation. The applicability of the multi-scale modeling approach is also shown in this research. Dynamic stability of machining operations and FEM simulations are linked through a non-linear cutting force model. This research shows how FEM simulation of titanium alloy machining can be applied to generate the parameters of the non-linear cutting force model.
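      The modified four-parameter flow stress model itself is not reproduced in the abstract. For orientation, the standard Johnson-Cook law on which it is based can be sketched as follows; the Ti-6Al-4V constants are illustrative values of the same order as those reported in the literature, not the calibrated parameters from this research:

        # Sketch of the standard Johnson-Cook flow stress law (the thesis's modified
        # four-parameter form is not reproduced here).
        import math

        def johnson_cook(strain, strain_rate, T, A, B, n, C, m,
                         eps_dot_ref=1.0, T_room=25.0, T_melt=1660.0):
            """Flow stress [MPa] = (A + B*eps^n) * (1 + C*ln(rate/ref)) * (1 - T*^m)."""
            T_star = (T - T_room) / (T_melt - T_room)          # homologous temperature
            return (A + B * strain**n) \
                 * (1.0 + C * math.log(strain_rate / eps_dot_ref)) \
                 * (1.0 - T_star**m)

        # Illustrative constants (not the values calibrated in this thesis):
        print(johnson_cook(strain=0.3, strain_rate=1e4, T=600.0,
                           A=860.0, B=680.0, n=0.45, C=0.035, m=1.0))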
    • Fostering Design Team Performance: A University Design Collaborative Environment

      González Mendívil, Eduardo (Instituto Tecnológico y de Estudios Superiores de Monterrey, 2008-05-01)
      The learning process that comes from learning-by-doing activities promotes new knowledge-transfer vehicles that improve design team performance. However, there is still limited understanding of "how" knowledge is acquired and how it varies when using collaborative and conversational technologies to improve product development performance. The main contribution of this research is to establish a set of indicators that can be used as guides to help identify effective knowledge practices that can be useful for design teams whose performance relies upon effective new product development activities. These indicators are obtained by evaluating and comparing documents stored in a Product Data Management (PDM) system for differing levels of semantic significance, applying Latent Semantic Analysis (LSA). This provides a linkage between knowledge acquisition and the development of capabilities for knowledge mobilization, and a better understanding of "how" design teams improve their performance. The present research is contextualized in an academic environment within project design courses at ITESM University.
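      As an illustration of the Latent Semantic Analysis step, the following Python sketch compares a few toy stand-ins for PDM documents by cosine similarity in a reduced semantic space; the corpus, the number of components and the library choice (scikit-learn) are assumptions for the example only:

        # Hypothetical sketch of the LSA step: TF-IDF, truncated SVD, cosine similarity.
        from sklearn.feature_extraction.text import TfidfVectorizer
        from sklearn.decomposition import TruncatedSVD
        from sklearn.metrics.pairwise import cosine_similarity

        docs = [
            "concept selection matrix for the gearbox housing",
            "gearbox housing tolerance stack-up and fit analysis",
            "meeting notes on sponsor requirements and schedule",
        ]

        tfidf = TfidfVectorizer(stop_words="english").fit_transform(docs)
        lsa = TruncatedSVD(n_components=2, random_state=0).fit_transform(tfidf)
        print(cosine_similarity(lsa))   # pairwise semantic similarity between documents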
    • Framework for consistent generation of linked data: the case of the user's academic profile on the web

      Alvarado Uribe, Joanna
      Decision management is relevant for high-value decisions that involve multiple types of input data. Since the Web allows users to keep in touch with other users and, likewise, to share their data (such as features, interests, and preferences) with applications and devices in order to customize a provided service, the online data related to these users can be collected as input for a decision-making process. However, these data are usually provided only to the application or device used at a given time, causing three major issues: data are isolated once they are provided to a specific entity, data are scattered across the network, and data are found in different formats (structured, semi-structured, and unstructured). Therefore, with the aim of supporting decision makers in making better decisions in a certain scenario, this work addresses the automatic unification, alignment, and integration of the relevant user data into a centralized and standardized structure that allows, at the same time, modeling the user's profile on the Web in a consistent and updated manner and generating linked data from the integrated information. This is where Decision Support Systems, the Semantic Web, and context-enriched services become the cornerstones of the computational approach proposed as a solution to these issues. Firstly, given the generality of fields that can constitute a user profile, this research emphasizes the definition of a scope that allows the proposed approach to be validated. Secondly, the proposal, development, and evaluation of computational solutions that deal consistently with data modeling, integration, generation, and updating are highlighted in this research. A study focused on the academic area is therefore proposed in order to support researchers and data managers at the institutional level in processes and activities concerning this area, specifically at Tecnologico de Monterrey. To achieve this goal, this research proposes the design of an interdisciplinary, justified, and interoperable meta-schema (called Academic SUP) that models the user's academic profile on the Web, as well as the development of a computational framework (named AkCeL) that integrates, generates, and updates data in this meta-schema consistently. In addition, in order to support researchers in their decision-making processes, the development of a recommendation algorithm (called C-HyRA) that provides a list of research areas of interest to researchers, together with the adoption of a visualization platform related to the academic area to present the information generated by AkCeL, are put forward in this proposal. As a result, unified, consistent, reliable, and updated information on the researcher's academic profile is provided on the Web, in both text and graphics, through the VIVO platform, to be consumed primarily by researchers and educational institutions to support their collaboration/publication networks and research statistics.
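      As an illustration of the linked-data generation step, the sketch below emits a few RDF triples for a researcher profile using rdflib; the namespace, properties and values are placeholders and do not reproduce the Academic SUP meta-schema or the AkCeL pipeline:

        # Hypothetical sketch: emitting RDF triples for a researcher profile with rdflib.
        from rdflib import Graph, Literal, Namespace, URIRef
        from rdflib.namespace import FOAF, RDF

        EX = Namespace("http://example.org/researcher/")   # placeholder namespace

        g = Graph()
        person = URIRef(EX["0001"])
        g.add((person, RDF.type, FOAF.Person))
        g.add((person, FOAF.name, Literal("Jane Doe")))                 # invented example values
        g.add((person, EX.researchArea, Literal("Semantic Web")))

        print(g.serialize(format="turtle"))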
    • Hybrid Self-Learning Fuzzy PD + I Control of Unknown SISO Linear and Nonlinear Systems

      Santana Blanco, Jesús (Instituto Tecnológico y de Estudios Superiores de Monterrey, 2005-12-01)
      A human being is capable of learning how to control many complex systems without knowing the mathematical model behind them, so there must exist some way to imitate that behavior with a machine. In this dissertation, a novel hybrid self-learning controller is proposed that is capable of learning how to control unknown linear and nonlinear processes, incorporating behavioral characteristics that humans show when learning to control an unknown process. The controller consists of a fuzzy PD controller plus a conventional I controller, and its gains are tuned using a human-like learning algorithm developed from characteristics observed in actual human operators while they learned to control an unknown process, meeting specified goals of steady-state error (SSE), settling time (Ts), and percentage overshoot (PO). The systems tested were: first- and second-order linear systems, the nonlinear pendulum, and the nonlinear equations of the approximate pendulum, Van der Pol, Rayleigh, and damped Mathieu. Analysis and simulation results are presented for all the mentioned systems. More detailed results are provided for a nonlinear pendulum as a representative of nonlinear systems and for a second-order linear temperature control system as a representative of linear systems. This temperature system is used as a benchmark against other controllers reported in the literature [10] that use the same temperature control system, showing that the proposed controller is simpler and yields superior results. Also, a robustness analysis is presented that demonstrates that the proposed controller maintains acceptable performance even under perturbations, noise, and parameter variations.
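      A minimal Python sketch of the hybrid structure described above, a small Sugeno-style fuzzy PD surface combined with a conventional integral term, is shown below; the membership functions, rule base, scaling and gains are toy choices, not the self-tuned values obtained in the dissertation:

        # Hypothetical sketch: tiny fuzzy PD surface plus a conventional integral term.
        def tri(x, a, b, c):
            """Triangular membership function."""
            if x <= a or x >= c:
                return 0.0
            return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

        def fuzzy_pd(e, de):
            """Fuzzy PD surface on normalized error and error derivative in [-1, 1]."""
            sets = {"N": (-2.0, -1.0, 0.0), "Z": (-1.0, 0.0, 1.0), "P": (0.0, 1.0, 2.0)}
            out = {"N": -1.0, "Z": 0.0, "P": 1.0}
            num = den = 0.0
            for le in sets:
                for ld in sets:
                    w = tri(e, *sets[le]) * tri(de, *sets[ld])   # rule firing strength
                    num += w * 0.5 * (out[le] + out[ld])         # rule consequent (average)
                    den += w
            return num / den if den else 0.0

        def hybrid_step(error, d_error, integral, dt, kp=1.0, ki=0.05, scale=50.0):
            clamp = lambda x: max(-1.0, min(1.0, x / scale))     # normalize before fuzzification
            integral += error * dt
            u = kp * fuzzy_pd(clamp(error), clamp(d_error)) + ki * integral
            return u, integral

        u, i = hybrid_step(error=30.0, d_error=-10.0, integral=0.0, dt=1.0)
        print(u, i)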
    • The Impact of Statistical Word Alignment Quality and Structure in Phrase Based Statistical Machine Translation

      Guzmán Herrera, Francisco J. (Instituto Tecnológico y de Estudios Superiores de Monterrey, 2011-01-12)
      Statistical word alignments represent lexical word-to-word translations between source and target language sentences. They are considered the starting point for many state-of-the-art Statistical Machine Translation (SMT) systems. In phrase-based systems, word alignments are loosely linked to the translation model. Despite the improvements achieved in word alignment quality, there has been only a modest improvement in end-to-end translation. Until recently, little or no attention was paid to the structural characteristics of word alignments (e.g. unaligned words) and their impact on further stages of the phrase-based SMT pipeline. A better understanding of the relationship between word alignment and the ensuing processes will help to identify the variables across the pipeline that most influence translation performance and that can be controlled by modifying the characteristics of the word alignment. In this dissertation, we perform an in-depth study of the impact of word alignments at different stages of the phrase-based statistical machine translation pipeline, namely word alignment, phrase extraction, phrase scoring and decoding. Moreover, we establish a multivariate prediction model for different variables of word alignments, phrase tables and translation hypotheses. Based on those models, we identify the most important alignment variables and propose two alternatives to provide more control over alignment structure and thus improve SMT. Our results show that bringing alignment structure into decoding, via alignment gap features, yields significant improvements, especially in situations where translation data is limited. During the development of this dissertation we discovered how different characteristics of the alignment impact machine translation. We observed that while good-quality alignments yield good phrase pairs, the consolidation of a translation model depends on the alignment structure, not its quality. Human alignments are denser than their computer-generated counterparts, which tend to be sparser and precision-oriented. Trying to emulate a human-like alignment structure resulted in poorer systems, because the resulting translation models tend to be more compact and lack translation options. On the other hand, more translation options, even if they are noisier, help to improve the quality of the translation. This is because translation does not rely only on the translation model, but also on other factors that help to discriminate against noisy translations (e.g. the language model). Lastly, when we provide the decoder with features that help it make "more informed decisions", we observe a clear improvement in translation quality. This was especially true for the discriminative alignments, which inherently leave more unaligned words. The result is more evident in low-resource settings, where larger translation lexicons represent more translation options. Using simple features to help the decoder discriminate among translation hypotheses clearly showed consistent improvements.
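      As an illustration of an alignment-gap-style feature, the Python sketch below counts unaligned source and target words inside a phrase pair; the exact feature definitions used in the dissertation are not reproduced here:

        # Hypothetical sketch: count unaligned ("gap") words inside an extracted phrase pair.
        def gap_counts(alignment, src_span, tgt_span):
            """alignment: set of (src_idx, tgt_idx) links; spans are (start, end) inclusive."""
            src_aligned = {i for i, _ in alignment}
            tgt_aligned = {j for _, j in alignment}
            src_gaps = sum(1 for i in range(src_span[0], src_span[1] + 1) if i not in src_aligned)
            tgt_gaps = sum(1 for j in range(tgt_span[0], tgt_span[1] + 1) if j not in tgt_aligned)
            return src_gaps, tgt_gaps

        links = {(0, 0), (1, 2), (3, 3)}          # e.g. links produced by a discriminative aligner
        print(gap_counts(links, src_span=(0, 3), tgt_span=(0, 3)))   # -> (1, 1)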
    • Implementation of a two-photon Michelson interferometer for quantum-optical coherence tomography

      López Mago, Dorilián (Instituto Tecnológico y de Estudios Superiores de Monterrey, 2012-05-01)
      Time-domain Optical Coherence Tomography (OCT) is an imaging technique that provides information about the internal structure of a sample. It makes use of classical light in conjunction with conventional interferometers. A quantum version of OCT, called Quantum-Optical Coherence Tomography (QOCT), has been developed in recent years. QOCT uses entangled photon pairs in conjunction with two-photon interferometers; it improves depth resolution and offers more information about the optical properties of the sample. However, the current implementation of QOCT is not competitive with its classical counterpart because of the low efficiency of the sources and detectors required for its implementation. We analyzed the feasibility of QOCT using a Michelson interferometer that can be adapted to the state of the art in entangled photon sources and detectors. Despite its simplicity, no implementation of QOCT has so far been carried out with this interferometer. This thesis develops the theory of the two-photon Michelson interferometer applied to QOCT. It describes the elements that characterize the coincidence interferogram and supports the theory with experimental measurements. We found that as long as the spectral bandwidth of the entangled photons is smaller than their central frequency, the Michelson interferometer can be successfully used for QOCT. In addition, we found that the degree of entanglement between the photons can be calculated from the coincidence interferogram. The two-photon Michelson interferometer provides another possibility for QOCT, with the advantages of simplicity, performance and adaptability. The resolution of the interferometer can be improved using ultrabroadband sources of entangled photons, e.g. photonic fibers. In addition, the implementation of photon-number-resolving detectors can be studied in order to replace the coincidence detection currently used for detecting entangled photon pairs.
    • Implementing an object-oriented method of information systems for CIM to the Mexican industry

      Prieto Magnus, Julián (Instituto Tecnológico y de Estudios Superiores de Monterrey, 1997)
    • On the Task-Driven Generation of Preventive Sensing Plans for Execution of Robotic Assemblies

      Conant Pablos, Santiago E. (Instituto Tecnológico y de Estudios Superiores de Monterrey, 2004-01-12)
      It is well known that success in robotic assembly depends on the correct execution of the sequence of assembly steps established in a plan. In turn, the correct execution of these steps depends on conformance to a series of preconditions and postconditions on the states of the assembly elements and on the consistent, repeatable, and precise actions of the assembler (for instance, a robotic arm). Unfortunately, the ubiquitous and inherent real-life uncertainty and variation in the work cell, in the assembly robot calibration, and in the robot actions can produce errors and deviations during the execution of the plan. This dissertation investigates several issues related to the use of geometric information about the models of the component objects of assemblies and the process of contact formation among such objects for tackling the automatic planning of sensing strategies. The studies and experiments conducted during this research have led to the development of novel methods for enabling robots to detect critical errors and deviations from a nominal assembly plan during its execution. The errors are detected before they cause failure of assembly operations, when the objects that will cause a problem are manipulated; having control over these objects, commanded adjustment actions are expected to correct the errors. First, a new approach is proposed for determining which assembly tasks require vision and force feedback data to verify their preconditions and the preconditions of future tasks that would be affected by a lack of precision in the execution of those tasks. For this, a method is proposed to systematically assign force compliance skills for monitoring and controlling the execution of tasks that involve contacts between the object manipulated by the robot arm and the objects that form its direct environmental configuration. Also, a strategy is developed to deduce visual sensing requirements for the manipulated object of the current task and the objects that form its environment configuration. This strategy includes a geometric reasoning mechanism that propagates alignment constraints in the form of a dependency graph. This graph encodes the complete set of critical alignment constraints and then expresses the vision- and force-sensing requirements for the analyzed assembly plan. Recognizing the importance of a correct environment configuration for succeeding in the execution of a task that involves multiple objects, the propagation of critical dependencies allows the anticipation of potential problems that could irremediably affect the successful execution of subsequent assembly operations. This propagation scheme represents the heart of this dissertation because it provides the basis for the rest of the contributions. The approach was extensively tested, demonstrating its correct execution in all the test cases. Next, knowing which tasks require preventive sensing operations, a sensor planning approach is proposed to determine an ordering of potential viewpoints for positioning the camera that will be used to implement the feedback operations. The approach does not consider kinematic constraints in the active camera mechanism. The viewpoints are ordered according to a measure computed from the intersection of two regions describing the tolerance of tasks to error and the expected uncertainty of an object localization tool.
A method is proposed to analytically deduce the inequalities that implicitly describe a region of tolerated error. Also, an algorithm that implements an empirical method to determine the form and orientation of six-dimensional ellipsoids is proposed to model and quantify the uncertainty of the localization tool. It was experimentally shown that the goodness measure is an adequate criterion for ordering the viewpoints, because it agrees with the resulting success ratio of real-life task execution after the visual information is used to adjust the configuration of the manipulated objects. Furthermore, an active vision mechanism is also developed and tested to perform visual verification tasks. This mechanism allows the camera to move around the assembly scene to collect visual information; the active camera was also used during the experimentation phase. Finally, a method is proposed to construct a complete visual strategy for an assembly plan. This method decides the specific sequence of viewpoints to be used for localizing the objects specified by the visual sensing analyzer. The method transforms the problem of deciding a sequence of camera motions into a multi-objective optimization problem that is solved in two phases: a local phase that reduces the original set of potential viewpoints to small sets of viewpoints with the best predicted success probability among the kinematically feasible viewpoints for the active camera; and a global phase that decides a single viewpoint for each object in a task and then stitches them together to form the visual sensing strategy for the assembly plan.
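      As an illustration of the viewpoint-ordering idea, the sketch below scores candidate camera viewpoints by how well an expected localization uncertainty fits inside a task's error tolerance and sorts them best-first; a simple per-axis scalar model stands in for the six-dimensional ellipsoids and tolerance regions of the dissertation:

        # Hypothetical sketch: rank candidate viewpoints by a tolerance-vs-uncertainty score.
        def goodness(uncertainty, tolerance):
            """Average, over axes, of how well the expected error fits within tolerance."""
            ratios = [min(t / u, 1.0) for u, t in zip(uncertainty, tolerance)]
            return sum(ratios) / len(ratios)

        tolerance = (2.0, 2.0, 1.0)                          # tolerated error per axis [mm]
        viewpoints = {                                       # viewpoint -> expected uncertainty
            "vp_front": (1.5, 1.0, 0.8),
            "vp_top":   (0.5, 0.6, 2.5),
            "vp_side":  (3.0, 1.0, 0.9),
        }

        ranked = sorted(viewpoints, key=lambda v: goodness(viewpoints[v], tolerance), reverse=True)
        print(ranked)   # best-first ordering of viewpoints for the active camera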