• Engineering mammalian-specific post-translational modifications in plant-derived proteins: phosphorylation and Mucin-type O-glycosylation as a challenge.

      Ramírez-Alanis, Israel A.
      Expression of economically relevant plant-derived recombinant proteins in alternative expression platforms, especially plant expression platforms, has gained significant interest in recent years, due to the possibility to reduce production costs, or because of product quality of production. Among the different qualities that plants can offer for the production of recombinant proteins, capability to perform post-translational modifications like protein glycosylation and phosphorylation are some of the crucial ones since it has an impact on pharmaceuticals functionality and/or stability or protein activity, respectively. In this dissertation, the pharmaceutical glycoprotein human Granulocyte-Colony Stimulating Factor is transiently expressed in N. benthamiana, as several protein versions targeted to different compartments (apoplast, cytoplasm and as protein bodies), offering an alternative for the consideration of production of this protein. Furthermore, the glycoprotein was subjected to the native GalNAc-O-glycosylation, by co-expressing the pharmaceutical, together with the enzymes responsible for such glycosylation. In the case of phosphoproteins, the bovine β- and κ-caseins and their specific kinase the bovine Fam20C were also expressed for the first time in N. benthamiana plants, to assess the feasibility of controlling their phosphorylation pattern, which could be considered for the generation of soybean transgenic lines, enriched with such nutraceutical and nutrimental phosphoproteins.
    • Movement strategies for localization and map building with multiple mobile robots in indoor environments

      Muñoz Gómez, Lourdes (Instituto Tecnológico y de Estudios Superiores de Monterrey, 2007-01-12)
    • Strategies for the Purification of PEGylated Proteins Using Hydrophobic Interaction Chromatography: Ribonuclease A as a Study Model

      Mayolo Deloisa, Karla P. (Instituto Tecnológico y de Estudios Superiores de Monterrey, 2012-01-05)
      PEGylation is the covalent attachment of one or more poly(ethylene glycol) (PEG) chains to a protein. This technique has been used to improve the physicochemical properties of several proteins used as therapeutic drugs. During the PEGylation reaction, different bioconjugates are formed, varying in the number of attached PEG chains and the attachment site. The purification of PEGylated proteins consists of removing all the species that are not part of the product of interest, which involves two main challenges: 1) separating the PEGylated proteins from the rest of the reaction products, and 2) sub-fractionating the PEGylated proteins according to the degree of PEGylation and the positional isomers. Chromatographic methods have frequently been used to resolve PEGylation reaction mixtures; size-exclusion chromatography (SEC) and ion-exchange chromatography (IEC) are the most widely used. Comparatively little work has been done to explore other techniques such as hydrophobic interaction chromatography (HIC). On the other hand, a detailed review of the literature shows that non-chromatographic techniques (ultrafiltration, aqueous two-phase systems, electrophoresis, etc.) are used essentially for the characterization of PEGylation reaction products. In this context, this work presents a study of the separation of the products of the PEGylation reaction of RNase A using different conditions and types of HIC resins. It is shown that a mildly hydrophobic support, such as CH Sepharose 4B coated with Tris, can be used as an alternative for separating the PEGylated proteins from the native protein. Additionally, the PEGylation reaction products were separated using three resins with different degrees of hydrophobicity: butyl, octyl and phenyl Sepharose. The effects of the resin type, the type and concentration of salt (ammonium sulfate and NaCl), and the gradient length on the separation process were evaluated. Purity and yield were calculated using the plate model. Under all the conditions analyzed, the native protein is completely separated from the PEGylated species. The best conditions for the purification of monoPEGylated RNase A are obtained with the butyl Sepharose resin, 1 M ammonium sulfate, and a 35-CV elution gradient, with which a yield of 85% and a purity of 97% can be obtained. This process represents a viable alternative for the separation of PEGylated proteins.
    • Evaluation of Hydrogel Materials for Insulin Delivery in Closed Loop Treatment of Diabetes Mellitus

      Sánchez Chávez, Irma Y. (Instituto Tecnológico y de Estudios Superiores de Monterrey, 2008-01-01)
      The recovery of diminished or lost regulatory functions of physiological systems drives important research efforts in biomaterials and in modeling and control engineering. Special interest is paid to diabetes mellitus because of its epidemic dimensions. Hydrogels provide the multifunctionality of smart materials and applicability to medical regulatory systems, both of which are evaluated in this dissertation. The polymeric matrix of a hydrogel experiences reversible changes in volume in response to the pH of the environment, which depends on the presence of key metabolites in a physiological medium. The hydrogel swells due to internal repulsive electrostatic forces that open the matrix and release a preloaded drug. The contracted state of the hydrogel hinders the diffusion of the drug out of the polymer. In this work, poly(methacrylic acid-graft-ethylene glycol), P(MAA-g-EG), hydrogel membranes that incorporate glucose oxidase are used for insulin delivery. These glucose-sensitive membranes are characterized and modeled for the closed-loop treatment of type I diabetes mellitus. A physiological compartmental model is extended to represent the treatment system of a diabetic patient. Physical parameters of the P(MAA-g-EG) hydrogel material are obtained from experimental characterization and used as a basis to describe anionic and cationic hydrogels. The performance of the system closed by a hydrogel-based device is explored and compared to the dynamic behavior of a conventional scheme with an explicit controller element. A control algorithm for optimal insulin delivery in a type I diabetic patient is presented based on linear quadratic control theory. The glucose-insulin dynamics is first represented by a linear model whose state variables are the glucose and insulin concentrations in the blood. These variables allow the formulation of an appropriate cost function for diabetes treatment in terms of the deviation from the normal glucose level and the dosage of exogenous insulin. The optimal control law is computed from this cost function under both servocontrol and regulatory approaches. Superior robustness of the regulatory control design is shown under random variations of the parameters of the linear physiological model. Further evaluation of the regulatory controller is carried out with a high-order nonlinear human glucose-insulin model. The control system performance can be improved by adjusting the weighting factors of the optimization problem according to the patient's needs. The optimal controller produces a versatile insulin release profile in response to variations of blood glucose concentration. Simulations demonstrate limitations in the range of swelling and contraction of hydrogels in a physiological environment due to factors such as the continuous presence of glucose in blood, the buffer characteristics of physiological fluids, and the Donnan equilibrium effect. Results show that insulin loading efficiency is critical for the long-term service of a hydrogel-based device, while delivery by a diffusion mechanism is convenient since it allows a basal insulin supply. The evaluation of hydrogel macrosystems prompts consideration of the identified pros and cons in hydrogel microsystems, as well as in composite systems that may combine different materials and structures.
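      The linear quadratic design mentioned above can be illustrated with a short sketch. The two-state model, its parameter values, and the weighting matrices below are hypothetical placeholders rather than the dissertation's identified patient model; the sketch only shows how a cost that penalizes glucose deviation and insulin dosage yields an optimal feedback gain.

```python
import numpy as np
from scipy.linalg import solve_continuous_are

# Illustrative two-state linear model: x = [glucose deviation, insulin deviation],
# dx/dt = A x + B u, with u the exogenous insulin infusion (hypothetical parameters).
A = np.array([[-0.02, -0.50],
              [ 0.00, -0.10]])
B = np.array([[0.00],
              [0.05]])

# Quadratic cost J = integral(x' Q x + u' R u) dt: Q penalizes deviation from the
# normal glucose level, R penalizes the exogenous insulin dosage.
Q = np.diag([1.0, 0.01])
R = np.array([[0.1]])

# Solve the continuous-time algebraic Riccati equation and form the optimal gain.
P = solve_continuous_are(A, B, Q, R)
K = np.linalg.solve(R, B.T @ P)          # optimal state feedback u = -K x

# Simple Euler simulation of the regulatory closed loop from a hyperglycemic state.
x, dt = np.array([3.0, 0.0]), 0.1        # start 3 units above the basal glucose level
for _ in range(600):                     # 600 steps of 0.1 time units
    u = -(K @ x)[0]
    x = x + dt * (A @ x + B.flatten() * u)
print("final glucose deviation:", round(float(x[0]), 3))
```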
    • Experimental Investigation of Textile Composites Strength Subject to Biaxial Tensile Loads

      Arellano Escárpita, David A. (Instituto Tecnológico y de Estudios Superiores de Monterrey, 2011-01-05)
      Engineering textile composites consist of a polymeric resin matrix reinforced by a woven fabric, commonly of glass, Kevlar or carbon fibres. The woven architecture confers multidirectional reinforcement, while the undulating nature of the fibres also provides a certain degree of out-of-plane reinforcement and good impact absorption; furthermore, fibre entanglement provides cohesion to the fabric and makes mould placement an easy task, which is advantageous for reducing production times. However, the complexity of the microstructure of textile composites, compared to that of unidirectional composites, makes their mechanical characterization and design a challenging task, which often relies on well-known failure criteria such as maximum stress, maximum strain and the Tsai-Wu quadratic interaction to predict final failure. Despite their wide use, none of the aforementioned criteria was developed specifically for textile composites, which has led to the use of high safety factors in critical structural applications to overcome the associated uncertainties. In view of the lack of consensus on accurate strength prediction, more experimental data, better testing methods and properly designed specimens are needed to generate reliable biaxial strength models. These arguments motivate this thesis, which presents the development of an improved cruciform specimen suitable for biaxial tensile strength characterization. A glass-epoxy plain-weave bidirectional textile composite is selected as the case study, being representative of materials used in many industrial applications. The developed cruciform specimen is capable of generating a very homogeneous biaxial strain field in a wide gauge zone while minimizing stress concentrations elsewhere, thus preventing premature failure outside the biaxially loaded area. To avoid in-situ effects and other multilayer-related uncertainties, the specimen is designed to have a single-layer gauge zone. This is achieved by a novel manufacturing process, also developed in this research, which avoids most drawbacks found in typical procedures such as milling. Once the suitability of the specimen was demonstrated, an original biaxial testing machine was designed, built, instrumented and calibrated to apply biaxial loads; the apparatus included a high-definition video recorder to capture images for digital image correlation strain measurement. An experimental test program was then conducted to generate a biaxial tensile failure envelope in strain space. Based on the experimental results, a phenomenological failure criterion incorporating physical textile parameters such as the number of layers and the unit cell dimensions was developed. The failure envelope predicted by this criterion achieves very good agreement with the experimental data.
    • Experimental techniques for optical micromanipulation

      López Mariscal, Carlos (Instituto Tecnológico y de Estudios Superiores de Monterrey, 2006-11-01)
      A set of experiments aimed at observing specific aspects of optical trapping and micromanipulation of particles is described. Extensive use of novel optical wavefields is made, while the potential applications of each experiment collected in this work are emphasized.
    • Exploring Hyper-Heuristic Approaches for Solving Constraint Satisfaction Problems

      Ortiz Bayliss, José C. (Instituto Tecnológico y de Estudios Superiores de Monterrey, 2011-01-12)
      This dissertation is submitted to the Graduate Programs in Engineering and Information Technologies in partial fulfillment of the requirements for the degree of Doctor of Philosophy in Information Technologies and Communications, with a major in Intelligent Systems. This document describes and analyses the variable and value ordering problem in Constraint Satisfaction Problems (CSP), and proposes novel techniques to generate hyper-heuristics for this problem. Hyper-heuristics are methodologies that selectively apply low-level heuristics according to the features of the instance at hand, combining the strengths of these heuristics to achieve more general solution methods. The objective of this dissertation is to contribute to the knowledge about variable and value ordering heuristics and to describe new techniques for producing hyper-heuristics that remain consistently competent on a wide range of instances when compared against those ordering heuristics. The CSP is a fundamental problem in Artificial Intelligence. It has many immediate practical applications, which include vision, language comprehension, scene labelling, knowledge representation, scheduling and diagnosis. The CSP is, in general, computationally intractable. Stated as a classic search problem, every CSP can be solved by traversing a search tree associated with the instance. In this way, every variable represents a node in the tree. Every time a variable is instantiated, the constraints must be checked to verify that none of them is violated. When an assignment is in conflict with one or more constraints, the instantiation must be undone and another value must be considered for that variable. If there are no other values available, the value of a previously instantiated variable must be changed. Thus, when solving a CSP, the order in which the variables and their values are considered affects the complexity of the search. Based on this, solving a CSP requires an efficient strategy to select the next variable; in other words, a way to assign priorities and decide which variable will be instantiated before the others and, then, which value to use for it. Because an exhaustive search is impractical due to the exponential growth with the number of variables, the search is usually guided by heuristics. Many heuristics exist to decide the order in which the variables and their values should be tried, but they have proved to be very specific and work well only on instances with particular features. In this dissertation we are interested in developing a solution model which is able to generate methods that show good performance on different CSP instances. Because hyper-heuristics are able to adapt to different problems or instances by dynamically choosing between low-level heuristics during the search, they seem suitable for achieving the objective described. These hyper-heuristics should be able to give good-quality results for a wide set of different CSP instances. In other words, we are interested in developing a more flexible solution model than those presented in previous works. These hyper-heuristics can be produced through many strategies. We want to explore some of them and analyse their performance, in order to decide which ones are more suitable than others according to properties of the instances and the current needs in time and quality. In this dissertation, three approaches were used to generate the hyper-heuristics.
The first approach uses a decision matrix hyper-heuristic, which contains information about the low-level heuristic to apply given certain features of the instance being solved. This approach is limited in scope because it was designed to work with a small set of features, but it provided good results. Later, we studied an evolutionary approach to generate hyper-heuristics. This model is more general than the decision matrix approach, and experiments suggest that it produces the highest-quality hyper-heuristics among the three models described in this document. Nevertheless, the evolutionary approach requires a significant number of additional operations, with respect to the other models, to produce one hyper-heuristic. Finally, a neural network approach is introduced. The running time of this model showed that the approach is effective at producing good-quality hyper-heuristics in a reasonable time. The general idea of this investigation is to provide a better understanding of the variable and value ordering heuristics for CSP and to provide various models for hyper-heuristic generation for variable and value ordering within CSP. All the models described in this investigation generate hyper-heuristics that can be applied to a wider range of instances than the simple heuristics and still achieve acceptable results with respect to time and quality.
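      As a rough illustration of the decision-matrix approach, the sketch below selects a low-level variable-ordering heuristic from instance features. The feature definitions, bins, rule table, and the csp interface (domain, degree, avg_tightness) are invented for illustration and are not the dissertation's actual matrix.

```python
# Sketch of a decision-matrix hyper-heuristic for variable ordering in a CSP solver.
# Feature names, bins, and heuristic choices are illustrative, not the dissertation's.

def features(csp, unassigned):
    """Map the current (sub)instance to coarse feature bins."""
    density = len(csp.constraints) / max(1, len(unassigned))   # constraints per variable
    tightness = csp.avg_tightness()                            # fraction of forbidden tuples
    return ("high" if density > 2.0 else "low",
            "high" if tightness > 0.5 else "low")

# Decision matrix: (density, tightness) -> low-level heuristic name.
DECISION_MATRIX = {
    ("low",  "low"):  "MRV",          # minimum remaining values
    ("low",  "high"): "DEG",          # maximum degree
    ("high", "low"):  "MRV",
    ("high", "high"): "DOM/DEG",      # domain size over degree
}

HEURISTICS = {
    "MRV":     lambda csp, vs: min(vs, key=lambda v: len(csp.domain(v))),
    "DEG":     lambda csp, vs: max(vs, key=csp.degree),
    "DOM/DEG": lambda csp, vs: min(vs, key=lambda v: len(csp.domain(v)) / max(1, csp.degree(v))),
}

def next_variable(csp, unassigned):
    """Hyper-heuristic step: pick the low-level heuristic, then apply it."""
    choice = DECISION_MATRIX[features(csp, unassigned)]
    return HEURISTICS[choice](csp, unassigned)
```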
    • Flow Stress Model for Titanium Alloy Ti-6Al-4V in Machining Operations

      Martínez López, Alejandro (Instituto Tecnológico y de Estudios Superiores de Monterrey, 2009-01-05)
      Machining of titanium alloys is widely used in high-value added industries such as aerospace and medical devices. In this research, an extensive literature review was conducted on experimental and simulation investigations of Ti-6Al-4V machining. Using the findings of the review and applying a novel experimental technique (slot-milling test), an approach to determine the flow stress behavior for the Finite Element Modeling (FEM) of titanium machining was developed and implemented. An evaluation of the proposed model is carried out using experimental data from the literature and from slot-milling tests conducted during this research. The proposed flow stress model for Ti-6Al-4V shows good prediction capabilities with regard to chip morphology and cutting forces. The typical serrated chip found in titanium machining is reproduced in this research through FEM simulation and without the need for a damage criterion; this phenomenon can be reproduced through the adiabatic softening captured by the developed constitutive model. The proposed flow stress model is based on a Johnson-Cook formulation and modified to use only four calibration parameters. Based on these results, FEM simulation is shown to be an effective tool for modeling titanium (Ti-6Al-4V) machining and for minimizing the use of costly experimentation. The applicability of a multi-scale modeling approach is also shown in this research. Dynamic stability of machining operations and FEM simulations are linked through a non-linear cutting force model. This research shows how FEM simulation of titanium alloys can be applied to generate the parameters of the non-linear cutting force model.
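      The abstract does not give the modified four-parameter form; for reference, the standard Johnson-Cook flow stress on which it is based can be written as

$$\sigma = \left(A + B\,\varepsilon^{n}\right)\left(1 + C\,\ln\frac{\dot{\varepsilon}}{\dot{\varepsilon}_{0}}\right)\left[1 - \left(\frac{T - T_{\mathrm{room}}}{T_{\mathrm{melt}} - T_{\mathrm{room}}}\right)^{m}\right],$$

      where A, B and n describe strain hardening, C the strain-rate sensitivity, and m the thermal softening that drives the adiabatic-softening behavior noted above.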
    • Fostering Design Team Performance: A University Design Collaborative Environment

      González Mendívil, Eduardo (Instituto Tecnológico y de Estudios Superiores de Monterrey, 2008-05-01)
      The learning process that comes from learning-by-doing activities promotes new knowledge-transfer vehicles that improve design team performance. However, there is still limited understanding of "how" knowledge is acquired and how it varies when collaborative and conversational technologies are used to improve product development performance. The main contribution of this research is to establish a set of indicators that can be used as guides to help identify effective knowledge practices useful for design teams whose performance relies upon effective new product development activities. These indicators are obtained by evaluating and comparing documents stored in a Product Data Management (PDM) system for differing levels of semantic significance, applying Latent Semantic Analysis (LSA). This provides a linkage between knowledge acquisition and the development of capabilities for knowledge mobilization, and a better understanding of "how" design teams improve their performance. The present research is contextualized in an academic environment within project design courses at ITESM University.
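      A minimal sketch of the LSA comparison described above is given below, using scikit-learn: documents are mapped to a latent semantic space and compared by cosine similarity. The example documents and number of components are placeholders, not the study's PDM corpus or settings.

```python
# Latent Semantic Analysis over design documents (e.g. exported from a PDM system):
# TF-IDF -> truncated SVD -> cosine similarity in the latent semantic space.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.decomposition import TruncatedSVD
from sklearn.metrics.pairwise import cosine_similarity

docs = [
    "concept selection report for the gearbox housing",        # placeholder documents
    "design review minutes, gearbox housing tolerances",
    "test plan for battery pack thermal management",
]

tfidf = TfidfVectorizer(stop_words="english").fit_transform(docs)
lsa = TruncatedSVD(n_components=2, random_state=0).fit_transform(tfidf)

# Pairwise semantic similarity between documents; high values suggest shared knowledge.
print(cosine_similarity(lsa).round(2))
```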
    • Framework for consistent generation of linked data: the case of the user's academic profile on the web

      Alvarado Uribe, Joanna
      Decision management is relevant for high-value decisions that involve multiple types of input data. Since the Web allows users to keep in touch with other users and, likewise, to share their data (such as features, interests, and preferences) with applications and devices to customize a provided service, the online data related to these users can be collected as input for a decision-making process. However, these data are usually provided only to the application or device used at a given time, causing three major issues: data are isolated when provided to a specific entity, data are scattered across the network, and data are found in different formats (structured, semi-structured, and unstructured). Therefore, with the aim of supporting decision makers in making better decisions in a certain scenario, this work addresses the proposal to automatically unify, align, and integrate the user data concerning this scope into a centralized and standardized structure that allows, at the same time, modeling the user's profile on the Web in a consistent and updated manner and generating linked data from the integrated information. This is where Decision Support Systems, the Semantic Web, and context-enriched services become the cornerstones of the computational approach proposed as a solution to these issues. Firstly, given the generality of fields that can constitute a user profile, this research emphasizes the definition of a scope that allows validating the proposed approach. Secondly, the research highlights the proposal, development, and evaluation of the computational solutions that deal with data modeling, integration, generation, and updating in a consistent manner. Therefore, a study focused on the academic area is proposed in order to support researchers and data managers at the institutional level in processes and activities concerning this area, specifically at Tecnologico de Monterrey. To achieve this goal, this research proposes the design of an interdisciplinary, justified, and interoperable meta-schema (called Academic SUP) that models the user's academic profile on the Web, as well as the development of a computational framework (named AkCeL) that integrates, generates, and updates data in such a meta-schema consistently. In addition, in order to support researchers in their decision-making processes, this proposal puts forward the development of a recommendation algorithm (called C-HyRA) that provides a list of research areas of interest to researchers, as well as the adoption of a visualization platform related to the academic area to present the information generated by AkCeL. As a result, unified, consistent, reliable, and updated information on the researcher's academic profile is provided on the Web, in both text and graphics, through the VIVO platform, to be consumed primarily by researchers and educational institutions to support their collaboration/publication networks and research statistics.
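      As a rough sketch of generating linked data for an academic profile, the snippet below emits RDF triples with rdflib. The namespace and property names are illustrative stand-ins and do not reproduce the actual Academic SUP meta-schema or the AkCeL pipeline.

```python
# Generate RDF triples for a researcher's academic profile (illustrative schema only).
from rdflib import Graph, Literal, Namespace, URIRef
from rdflib.namespace import FOAF, RDF

SUP = Namespace("http://example.org/academic-sup#")   # hypothetical namespace

g = Graph()
researcher = URIRef("http://example.org/researcher/jdoe")

g.add((researcher, RDF.type, FOAF.Person))
g.add((researcher, FOAF.name, Literal("Jane Doe")))
g.add((researcher, SUP.researchArea, Literal("Semantic Web")))
g.add((researcher, SUP.affiliation, Literal("Tecnologico de Monterrey")))

# Serialize as Turtle, ready to be loaded into a triple store or a platform such as VIVO.
print(g.serialize(format="turtle"))
```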
    • Hybrid Self-Learning Fuzzy PD + I Control of Unknown SISO Linear and Nonlinear Systems

      Santana Blanco, Jesús (Instituto Tecnológico y de Estudios Superiores de Monterrey, 2005-12-01)
      A human being is capable of learning how to control many complex systems without knowing the mathematical model behind such systems, so there must exist some way to imitate that behavior with a machine. In this dissertation a novel hybrid self-learning controller is proposed that is capable of learning how to control unknown linear and nonlinear processes, incorporating the behavior characteristics a person exhibits while learning to control an unknown process. The controller is composed of a Fuzzy PD controller plus a conventional I controller, and its corresponding gains are tuned using a human-like learning algorithm developed from characteristics observed in actual human operators while they were learning how to control an unknown process to reach specified goals of steady-state error (SSE), settling time (Ts), and percentage overshoot (PO). The systems tested were: first- and second-order linear systems, the nonlinear pendulum, and the nonlinear equations of the approximate pendulum, Van der Pol, Rayleigh, and damped Mathieu. Analysis and simulation results are presented for all the mentioned systems. More detailed results are provided for a nonlinear pendulum as a representative of nonlinear systems and for a second-order linear temperature control system as a representative of linear systems. This temperature system is used as a comparative benchmark with other controllers from the literature [10] that use the same temperature control system, showing that the proposed controller is simpler and gives superior results. Also, a robustness analysis is presented that demonstrates that the proposed controller maintains acceptable performance even under perturbation, noise, and parameter variations.
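      A minimal sketch of the hybrid structure is shown below: a coarse Sugeno-style fuzzy PD term combined with a conventional integral term, closed around a simple first-order plant. The membership functions, rule table, gains, and plant are illustrative and are not the dissertation's tuned, self-learning controller.

```python
# Hybrid fuzzy PD + conventional I controller (illustrative sketch).
# The fuzzy PD part maps (error, error derivative) through coarse triangular
# memberships and a small rule table; the I part is a standard accumulator.

def tri(x, a, b, c):
    """Triangular membership function with support [a, c] and peak at b."""
    return max(0.0, min((x - a) / (b - a + 1e-9), (c - x) / (c - b + 1e-9)))

LABELS = {"N": (-2, -1, 0), "Z": (-1, 0, 1), "P": (0, 1, 2)}
RULES = {("N", "N"): -1.0, ("N", "Z"): -0.6, ("N", "P"): 0.0,
         ("Z", "N"): -0.4, ("Z", "Z"):  0.0, ("Z", "P"): 0.4,
         ("P", "N"):  0.0, ("P", "Z"):  0.6, ("P", "P"): 1.0}

def fuzzy_pd(e, de):
    """Weighted-average (Sugeno-style) fuzzy PD action over a 3x3 rule table."""
    e, de = max(-2.0, min(2.0, e)), max(-2.0, min(2.0, de))   # keep inputs on the universe
    num = den = 0.0
    for le, (a1, b1, c1) in LABELS.items():
        for ld, (a2, b2, c2) in LABELS.items():
            w = tri(e, a1, b1, c1) * tri(de, a2, b2, c2)
            num += w * RULES[(le, ld)]
            den += w
    return num / den if den else 0.0

# Closed loop on a first-order plant dx/dt = -x + u (hypothetical process).
x, integ, dt, setpoint, Ki = 0.0, 0.0, 0.05, 1.0, 0.3
prev_e = setpoint - x
for _ in range(400):
    e = setpoint - x
    de = (e - prev_e) / dt
    integ += e * dt
    u = fuzzy_pd(e, de) + Ki * integ      # fuzzy PD action + conventional I action
    x += dt * (-x + u)
    prev_e = e
print("output after 400 steps:", round(x, 3))
```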
    • The Impact of Statistical Word Alignment Quality and Structure in Phrase Based Statistical Machine Translation

      Guzmán Herrera, Francisco J. (Instituto Tecnológico y de Estudios Superiores de Monterrey, 2011-01-12)
      Statistical word alignments represent lexical word-to-word translations between source and target language sentences. They are considered the starting point for many state-of-the-art Statistical Machine Translation (SMT) systems. In phrase-based systems, word alignments are loosely linked to the translation model. Despite the improvements achieved in word alignment quality, there has been only a modest improvement in end-to-end translation. Until recently, little or no attention was paid to the structural characteristics of word alignments (e.g. unaligned words) and their impact on further stages of the phrase-based SMT pipeline. A better understanding of the relationship between word alignment and the ensuing processes will help to identify the variables across the pipeline that most influence translation performance and that can be controlled by modifying the word alignment's characteristics. In this dissertation, we perform an in-depth study of the impact of word alignments at different stages of the phrase-based statistical machine translation pipeline, namely word alignment, phrase extraction, phrase scoring and decoding. Moreover, we establish a multivariate prediction model for different variables of word alignments, phrase tables and translation hypotheses. Based on those models, we identify the most important alignment variables and propose two alternatives to provide more control over alignment structure and thus improve SMT. Our results show that using alignment structure in decoding, via alignment gap features, yields significant improvements, especially in situations where translation data is limited. During the development of this dissertation we discovered how different characteristics of the alignment impact machine translation. We observed that while good-quality alignments yield good phrase pairs, the consolidation of a translation model depends on the alignment structure, not its quality. Human alignments are denser than their computer-generated counterparts, which tend to be sparser and precision-oriented. Trying to emulate human-like alignment structure resulted in poorer systems, because the resulting translation models tend to be more compact and lack translation options. On the other hand, more translation options, even if they are noisier, help to improve the quality of the translation. This is because translation does not rely only on the translation model; other factors (e.g. the language model) help to discriminate good translations from noise. Lastly, when we provide the decoder with features that help it make more informed decisions, we observe a clear improvement in translation quality. This was especially true for the discriminative alignments, which inherently leave more unaligned words. The result is more evident in low-resource settings, where larger translation lexicons represent more translation options. Using simple features to help the decoder discriminate translation hypotheses clearly showed consistent improvements.
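      The kind of alignment-structure information mentioned above (unaligned words, link density) can be extracted with a few lines of code. The feature definitions and alignment format below are illustrative, not the dissertation's exact gap features.

```python
# Compute simple alignment-structure features for a sentence pair:
# counts of unaligned source/target words and the density of alignment links.
def alignment_gap_features(src_len, tgt_len, links):
    """links: set of (source_index, target_index) pairs, 0-based."""
    aligned_src = {i for i, _ in links}
    aligned_tgt = {j for _, j in links}
    return {
        "unaligned_src": src_len - len(aligned_src),   # source-side gaps
        "unaligned_tgt": tgt_len - len(aligned_tgt),   # target-side gaps
        "link_density": len(links) / max(1, src_len + tgt_len),
    }

# Toy example: "la casa azul" -> "the blue house" with "azul"/"blue" left unaligned,
# as a sparse, precision-oriented aligner might do.
links = {(0, 0), (1, 2)}
print(alignment_gap_features(3, 3, links))   # one gap on each side
```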
    • Implementation of a two-photon Michelson interferometer for quantum-optical coherence tomography

      López Mago, Dorilián (Instituto Tecnológico y de Estudios Superiores de Monterrey, 2012-05-01)
      Time-domain Optical Coherence Tomography (OCT) is an imaging technique that provides information about the internal structure of a sample. It makes use of classical light in conjunction with conventional interferometers. A quantum version of OCT, called Quantum-Optical Coherence Tomography (QOCT), has been developed in recent years. QOCT uses entangled photon pairs in conjunction with two-photon interferometers. QOCT improves depth resolution and offers more information about the optical properties of the sample. However, the current implementation of QOCT is not competitive with its classical counterpart because of the low efficiency of the sources and detectors required for its implementation. We analyzed the feasibility of QOCT using a Michelson interferometer that can be adapted to the state of the art in entangled photon sources and detectors. Despite its simplicity, no previous implementations of QOCT have been done with this interferometer. This thesis develops the theory of the two-photon Michelson interferometer applied to QOCT. It describes the elements that characterize the coincidence interferogram and supports the theory with experimental measurements. We found that as long as the spectral bandwidth of the entangled photons is smaller than their central frequency, the Michelson interferometer can be successfully used for QOCT. In addition, we found that the degree of entanglement between the photons can be calculated from the coincidence interferogram. The two-photon Michelson interferometer provides another possibility for QOCT with the advantages of simplicity, performance and adaptability. The resolution of the interferometer can be improved using ultrabroadband sources of entangled photons, e.g. photonic fibers. In addition, we can study the implementation of photon-number-resolving detectors in order to remove the coincidence detection that is used for detecting entangled photon pairs.
    • Implementing an object-oriented method of information systems for CIM to the Mexican industry

      Prieto Magnus, Julián (Instituto Tecnológico y de Estudios Superiores de Monterrey, 1997)
    • On the Task-Driven Generation of Preventive Sensing Plans for Execution of Robotic Assemblies

      Conant Pablos, Santiago E. (Instituto Tecnológico y de Estudios Superiores de Monterrey, 2004-01-12)
      It is well known that success in robotic assembly depends on the correct execution of the sequence of assembly steps established in a plan. In turn, the correct execution of these steps depends on conformance to a series of preconditions and postconditions on the states of the assembly elements and on the consistent, repeatable, and precise actions of the assembler (for instance, a robotic arm). Unfortunately, the ubiquitous and inherent real-life uncertainty and variation in the work-cell, in the assembly robot calibration, and in the robot actions can produce errors and deviations during the execution of the plan. This dissertation investigates several issues related to the use of geometric information about the models of the component objects of assemblies and the process of contact formation among such objects to tackle the automatic planning of sensing strategies. The studies and experiments conducted during this research have led to the development of novel methods that enable robots to detect critical errors and deviations from a nominal assembly plan during its execution. The errors are detected before they cause failure of assembly operations, when the objects that will cause a problem are manipulated. Having control over these objects, commanded adjustment actions are expected to correct the errors. First, a new approach is proposed for determining which assembly tasks require vision and force feedback data to verify their preconditions and the preconditions of future tasks that would be affected by a lack of precision in the execution of those tasks. For this, a method is proposed for systematically assigning force compliance skills for monitoring and controlling the execution of tasks that involve contacts between the object manipulated by the robot arm and the objects that form its immediate environmental configuration. Also, a strategy is developed to deduce visual sensing requirements for the manipulated object of the current task and the objects that form its environment configuration. This strategy includes a geometric reasoning mechanism that propagates alignment constraints in the form of a dependency graph. This graph encodes the complete set of critical alignment constraints and then expresses the vision and force sensing requirements for the analyzed assembly plan. Recognizing the importance of a correct environment configuration for succeeding in the execution of a task that involves multiple objects, the propagation of critical dependencies allows anticipating potential problems that could irremediably affect the successful execution of subsequent assembly operations. This propagation scheme represents the heart of this dissertation because it provides the basis for the rest of the contributions. The approach was extensively tested, demonstrating its correct execution in all the test cases. Next, knowing which tasks require preventive sensing operations, a sensor planning approach is proposed to determine an ordering of potential viewpoints in which to position the camera that will be used to implement the feedback operations. The approach does not consider kinematic constraints in the active camera mechanism. The viewpoints are ordered according to a measure computed from the intersection of two regions describing the tolerance of tasks to error and the expected uncertainty of an object localization tool.
A method is proposed to analytically deduce the inequalities that implicitly describe a region of tolerated error. Also, an algorithm that implements an empirical method to determine the form and orientation of six-dimensional ellipsoids is proposed to model and quantify the uncertainty of the localization tool. It was experimentally shown that the goodness measure is an adequate criterion for ordering the viewpoints because it agrees with the resulting success ratio of real-life task execution after the visual information is used to adjust the configuration of the manipulated objects. Furthermore, an active vision mechanism is also developed and tested to perform visual verification tasks. This mechanism allows the camera to move around the assembly scene to collect visual information. The active camera was also used during the experimentation phase. Finally, a method is proposed to construct a complete visual strategy for an assembly plan. This method decides the specific sequence of viewpoints to be used for localizing the objects specified by the visual sensing analyzer. The method transforms the problem of deciding a sequence of camera motions into a multi-objective optimization problem that is solved in two phases: a local phase that reduces the original set of potential viewpoints to small sets of viewpoints with the best predicted success probability values among the kinematically feasible viewpoints for the active camera; and a global phase that decides a single viewpoint for each object in a task and then stitches them together to form the visual sensing strategy for the assembly plan.
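      The viewpoint-ordering measure can be illustrated with a small sketch: each candidate viewpoint carries a localization-uncertainty covariance, and viewpoints are ranked by the estimated probability that the localization error stays inside the task's tolerance region. The three-dimensional error, covariances, and tolerance bounds below are hypothetical simplifications of the six-dimensional analysis described above.

```python
# Rank candidate camera viewpoints by the probability that the localization error
# (modelled as a zero-mean Gaussian whose covariance depends on the viewpoint)
# falls inside the task's tolerance region. All numbers are illustrative.
import numpy as np

rng = np.random.default_rng(0)
tolerance = np.array([2.0, 2.0, 1.0])          # error bounds tolerated by the task

viewpoints = {                                  # viewpoint -> localization covariance
    "front":   np.diag([0.5, 0.5, 0.30]),
    "oblique": np.diag([1.5, 0.4, 0.10]),
    "top":     np.diag([0.3, 2.5, 0.05]),
}

def success_probability(cov, n=20000):
    """Monte Carlo estimate of P(error inside the tolerance box)."""
    samples = rng.multivariate_normal(np.zeros(3), cov, size=n)
    return np.mean(np.all(np.abs(samples) <= tolerance, axis=1))

ranking = sorted(viewpoints, key=lambda v: success_probability(viewpoints[v]), reverse=True)
print("viewpoint order:", ranking)
```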
    • Innovative Optimal Design Methods

      Moreno Grandas, Diana P. (Instituto Tecnológico y de Estudios Superiores de Monterrey, 2013-12-01)
    • An Integrated Data Model and Web Protocol for Arbitrarily Structured Information

      Álvarez Cavazos, Francisco (Instituto Tecnológico y de Estudios Superiores de Monterrey, 2007-01-12)
      Within the Web's data ecosystem dwell applications that consume and produce information with varying degrees of structure, ranging from highly structured business data to the semistructured or unstructured data found in documents that contain a significant amount of text. Current database technology was not designed for the Web and, consequently, database communication protocols, query models, and even data models are inadequate for the demands of "data everywhere." Thus, a technique to uniformly store, search, transport and update the whole variety of information within Web or intranet environments has yet to be designed. The Web context requires the data management community to address: (a) data modeling and basic querying that support multiple data models to accommodate many types of data sources, (b) powerful search mechanisms that accept keyword queries and select relevant structured sources that may answer them, and (c) the ability to combine answers from structured and unstructured data in a principled way. In consequence, this dissertation constructively designs a technique to store, search, transport and update unstructured and structured information for Web- or intranet-based environments: the Relational-text (RELTEX) protocol. Central to the design of the protocol is an integrated model for structured and unstructured data and its associated declarative language interface, namely, the RELTEX model and calculus. The RELTEX model is constructively defined, departing from the relational and information retrieval models and their associated retrieval strategies. The model's data items are tuples with structured "columns" and unstructured "fields" that further allow idiosyncratic schema in the form of "extension fields", which are tuple-specific name/value pairs. This flexibility allows representation of totally unstructured information, totally structured information, and mixtures of structured and unstructured data, such as tables where tuples have a varying number of fields over time. The RELTEX calculus extends tuple relational calculus to consider text fields, similarity matches, match ranking, and sort order. Then, building on top of the formally defined RELTEX data model and calculus and departing from the architecture of the Web, the RELTEX protocol is defined as a resource-centric protocol to describe and manipulate data and schema of unstructured and structured data sources. An equivalence mapping between RELTEX and the relational and information retrieval models is provided. The mapping suggests a wide range of applicability for RELTEX, thus proving the model's value. On the other hand, the RELTEX protocol is distinguished from other techniques for data access and storage on the Web since (a) it supports structured and unstructured data manipulation and retrieval, (b) it offers operations to describe and manipulate both common and idiosyncratic schema of data items and (c) it directly federates data items to the Web over a compound key, thus demonstrating novelty and value. The RELTEX protocol, model and calculus are proven feasible by means of a proof-of-concept implementation. Departing from a motivating scenario, the prototype is used to provide representative examples of data and schema operations.
Having demonstrated that the RELTEX protocol and model contribute towards the data modeling and basic querying challenge imposed by the Web, we expect that this dissertation benefits researchers and practitioners alike with a novel, valuable, effective and feasible technique to store, search, transport and update unstructured and structured information in the Web environment.
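      A rough sketch of the kind of data item the RELTEX model describes is given below: structured columns, unstructured text fields, and tuple-specific extension fields, with a toy keyword match over the text. The field names and the DataItem class are invented for illustration; they are not the RELTEX calculus or protocol.

```python
# A RELTEX-style data item: structured columns, unstructured text fields, and
# idiosyncratic per-tuple extension fields (name/value pairs). Names are illustrative.
from dataclasses import dataclass, field

@dataclass
class DataItem:
    columns: dict            # structured, schema-backed values
    text_fields: dict        # unstructured text, searchable by keyword
    extensions: dict = field(default_factory=dict)   # tuple-specific name/value pairs

item = DataItem(
    columns={"order_id": 1042, "total": 99.5},
    text_fields={"notes": "customer asked for expedited shipping to Monterrey"},
    extensions={"gift_wrap": "yes"},                 # present only on some tuples
)

def keyword_match(item, term):
    """Toy keyword search over the unstructured fields of one item."""
    return any(term.lower() in text.lower() for text in item.text_fields.values())

print(keyword_match(item, "shipping"))   # True
```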
    • Intelligent Monitoring and Supervisory Control System in Peripheral Milling Process in High Speed Machining

      Vallejo Guevara, Antonio Jr. (Instituto Tecnológico y de Estudios Superiores de Monterrey, 2009-01-11)
      This research addresses a real problem in High Speed Machining (HSM), specifically in the peripheral milling process. Machining processes have increased in complexity with HSM because of the high dimensional precision, high surface quality, and minimum cost demanded of the products. The general scope of this research is to design and implement an intelligent monitoring and supervisory control system for the peripheral milling process in HSM. The main objectives are: (1) implement a general model to predict surface roughness considering several aluminium alloys, cutting parameters, geometries, and cutting tools; (2) design and implement a monitoring and diagnosis system for the cutting tool wear condition during the machining process; and (3) design and develop an intelligent process planning system, which includes a merit variable to compute the optimal cutting parameters and a decision-making module to recommend actions according to the cutting tool wear condition. The design and implementation of the system involved extensive research, exhaustive experiments, and several papers validating the proposed ideas and algorithms. The main contributions can be summarized as follows. A complete data acquisition system was implemented in a Kondia HS-1000 machining center; several sensors were installed to relate the surface roughness (Ra) and the flank wear of the cutting tool to the process state variables, and the Mel Frequency Cepstrum Coefficients (MFCC) computed from the process signals were used to model Ra with ANN models. Regarding the Ra modelling, the most important factors affecting Ra were deduced by applying a screening factorial design; Response Surface Methodology (RSM) was also applied with excellent results, with models computed for new, half-new, half-worn, and worn cutting tool conditions, and multi-sensor data fusion was used to build ANN models with excellent results. New ideas based on Hidden Markov Models (HMM) and the MFCC were developed for monitoring and diagnosing the cutting tool wear condition in peripheral milling in HSM; the system was implemented to recognize four cutting tool wear conditions on-line: new, half-new, half-worn, and worn. Finally, the design and implementation of the intelligent monitoring and process planning system (IMPPS) represented the main module of the intelligent monitoring and supervisory control system; in this module, Genetic Algorithms with the RSM models were used to compute the optimal cutting parameters in pre-process operating mode with excellent results, and a Markov Decision Process was implemented in the optimization stage to recommend optimal actions for minimizing the operation cost during the production of specific workpieces.
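      The MFCC-plus-HMM diagnosis idea can be sketched as below, assuming librosa and hmmlearn are available: per-condition Gaussian HMMs are trained on MFCC sequences and a new signal is labelled by the model with the highest log-likelihood. The synthetic signals, sampling rate, and model sizes are placeholders; the dissertation's system used real process signals and four wear classes.

```python
# Sketch: MFCC features from a process signal classified with per-class Gaussian HMMs,
# in the spirit of the tool-wear monitoring described above. Signals here are synthetic.
import numpy as np
import librosa
from hmmlearn import hmm

def mfcc_features(signal, sr=10000):
    """Frame the signal and return MFCC vectors (frames x coefficients)."""
    return librosa.feature.mfcc(y=signal.astype(float), sr=sr, n_mfcc=13).T

# Train one HMM per wear condition on (synthetic) labelled signals.
rng = np.random.default_rng(1)
train = {"new": rng.normal(0, 1.0, 40000), "worn": rng.normal(0, 3.0, 40000)}
models = {}
for label, signal in train.items():
    m = hmm.GaussianHMM(n_components=3, covariance_type="diag", n_iter=20)
    m.fit(mfcc_features(signal))
    models[label] = m

# Diagnose a new signal by the highest log-likelihood model.
test = mfcc_features(rng.normal(0, 2.8, 40000))
print(max(models, key=lambda k: models[k].score(test)))
```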
    • Intelligent wheelchair

      Gregory Monnard Reguin, David
      The proposed project creates a wheelchair that includes four major features: the first is the ability to control the chair by moving the eyes, the second is the possibility of reproducing prerecorded voice messages, the third is the ability to control the chair with voice commands, and the last is an avoidance system based on the data collected by ultrasonic sensors.
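      A minimal sketch of the avoidance feature, assuming front/left/right ultrasonic range readings and a (linear, angular) drive command coming from the eye- or voice-control interface; all thresholds and the interface itself are hypothetical.

```python
# Sketch of the ultrasonic avoidance behaviour: read front/left/right ranges and
# override the user's drive command when an obstacle is too close. All values are
# illustrative; the real chair would read actual sensors and motor drivers.
STOP_CM = 30       # hypothetical emergency-stop distance
SLOW_CM = 60       # hypothetical slow-down distance

def avoidance(command, ranges_cm):
    """command: (linear, angular) from eye/voice control; ranges_cm: dict of readings."""
    linear, angular = command
    front = ranges_cm["front"]
    if front < STOP_CM:
        return (0.0, 0.0)                          # emergency stop
    if front < SLOW_CM:
        # steer toward the more open side at reduced speed
        turn = 0.5 if ranges_cm["left"] > ranges_cm["right"] else -0.5
        return (linear * 0.3, turn)
    return (linear, angular)                       # path is clear, pass command through

print(avoidance((1.0, 0.0), {"front": 45, "left": 120, "right": 80}))
```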