• Flow Stress Model for Titanium Alloy Ti-6Al-4V in Machining Operations

      Martínez López, Alejandro (Instituto Tecnológico y de Estudios Superiores de Monterrey, 2009-01-05)
      Machining of titanium alloys is widely used in high-value-added industries such as aerospace and medical devices. In this research, an extensive literature review was conducted on experimental and simulation investigations of Ti-6Al-4V machining. Using the findings of the review and applying a novel experimental technique (slot-milling test), an approach to determine the flow stress behavior for Finite Element Modeling (FEM) of titanium machining was developed and implemented. The proposed model is evaluated using experimental data from the literature and from slot-milling tests conducted during this research. The proposed flow stress model for Ti-6Al-4V shows good prediction capabilities with regard to chip morphology and cutting forces. The serrated chip typical of titanium machining is reproduced in this research through FEM simulation without the need for a damage criterion; the phenomenon is instead reproduced through adiabatic softening captured by the developed constitutive model. The proposed flow stress model is based on a Johnson-Cook formulation, modified to use only four calibration parameters. Based on these results, FEM simulation is an effective tool for modeling titanium (Ti-6Al-4V) machining and minimizing the use of costly experimentation. The applicability of a multi-scale modeling approach is also shown: dynamic stability of machining operations and FEM simulations are linked through a non-linear cutting force model, and this research shows how FEM simulation of titanium alloys can be applied to generate the parameters of that model.
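The Johnson-Cook formulation the abstract refers to has a well-known standard five-parameter form, which can be sketched as follows; the constants below are illustrative handbook-style values for Ti-6Al-4V, not the dissertation's calibrated four-parameter set:

```python
import math

def johnson_cook_stress(strain, strain_rate, T,
                        A=880e6, B=680e6, n=0.36, C=0.035, m=0.8,
                        eps0=1.0, T_room=293.0, T_melt=1878.0):
    """Standard five-parameter Johnson-Cook flow stress (Pa):
    sigma = (A + B*eps^n) * (1 + C*ln(eps_dot/eps0)) * (1 - T*^m),
    with homologous temperature T* = (T - T_room)/(T_melt - T_room).
    Parameter values here are illustrative, not the dissertation's
    calibrated constants."""
    hardening = A + B * strain ** n
    rate = 1.0 + C * math.log(max(strain_rate / eps0, 1e-12))
    T_star = (T - T_room) / (T_melt - T_room)
    softening = 1.0 - max(T_star, 0.0) ** m   # thermal (adiabatic) softening term
    return hardening * rate * softening
```

The thermal-softening factor is the term that lets such a model reproduce serrated chips without a damage criterion: as local temperature rises, flow stress drops.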
    • Fostering Design Team Performance: A University Design Collaborative Environment

      González Mendívil, Eduardo (Instituto Tecnológico y de Estudios Superiores de Monterrey, 2008-05-01)
      The learning process that comes from learning-by-doing activities promotes new knowledge-transfer vehicles that improve design team performance. However, there is still limited understanding of how knowledge is acquired and how it varies when collaborative and conversational technologies are used to improve product development performance. The main contribution of this research is to establish a set of indicators that can be used as guides to help identify effective knowledge practices useful for design teams whose performance relies upon effective new product development activities. These indicators are obtained by evaluating and comparing documents stored in a Product Data Management (PDM) system for differing levels of semantic significance, applying Latent Semantic Analysis (LSA). This provides a linkage between knowledge acquisition and the development of capabilities for knowledge mobilization, and a better understanding of "how" design teams improve their performance. The present research is contextualized in an academic environment within project design courses at ITESM University.
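The LSA step described above can be sketched with a toy term-document matrix; the matrix, vocabulary, and dimension count are invented for illustration:

```python
import numpy as np

# Toy term-document matrix: rows = terms, columns = documents (e.g. PDM files).
# In LSA, a truncated SVD projects documents into a low-rank "semantic" space
# where cosine similarity captures latent topical overlap.
X = np.array([
    [2, 1, 0, 0],   # "design"
    [1, 2, 0, 0],   # "prototype"
    [0, 0, 3, 1],   # "budget"
    [0, 0, 1, 2],   # "schedule"
], dtype=float)

U, s, Vt = np.linalg.svd(X, full_matrices=False)
k = 2                                 # number of latent dimensions kept
docs = (np.diag(s[:k]) @ Vt[:k]).T    # document coordinates in semantic space

def cos(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

# Documents 0 and 1 share design vocabulary; documents 0 and 2 do not.
print(cos(docs[0], docs[1]) > cos(docs[0], docs[2]))
```

Comparing document similarities in the reduced space, rather than on raw term counts, is what allows semantically related documents with different wording to be grouped.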
    • Framework for consistent generation of linked data: the case of the user's academic profile on the web

      Alvarado Uribe, Joanna
      Decision management is relevant for high-value decisions that involve multiple types of input data. Since the Web allows users to keep in touch with other users and, likewise, to share their data (such as features, interests, and preferences) with applications and devices to customize a provided service, the online data related to these users can be collected as input for a decision-making process. However, these data are usually provided only to the application or device used at a given time, causing three major issues: data are isolated once provided to a specific entity, data are scattered across the network, and data are found in different formats (structured, semi-structured, and unstructured). Therefore, with the aim of supporting decision makers in a given scenario, this work proposes to automatically unify, align, and integrate the user data concerning this scope into a centralized and standardized structure that allows modeling the user's profile on the Web in a consistent and updated manner, as well as generating linked data from the integrated information. This is where Decision Support Systems, the Semantic Web, and context-enriched services become the cornerstones of the computational approach proposed as a solution to these issues. Firstly, given the generality of fields that can constitute a user profile, this research emphasizes the definition of a scope that allows validating the proposed approach. Secondly, it highlights the proposal, development, and evaluation of computational solutions for consistent data modeling, integration, generation, and updating. Accordingly, a study focused on the academic area is proposed in order to support researchers and data managers at the institutional level in processes and activities concerning this area, specifically at Tecnologico de Monterrey.
To achieve this goal, this research proposes the design of an interdisciplinary, justified, and interoperable meta-schema (called Academic SUP) for modeling the user's academic profile on the Web, as well as the development of a computational framework (named AkCeL) that consistently integrates, generates, and updates data in such a meta-schema. In addition, to support researchers in their decision-making processes, this proposal puts forward the development of a recommendation algorithm (called C-HyRA) that provides a list of research areas of interest to researchers, along with the adoption of a visualization platform for the academic area to present the information generated by AkCeL. As a result, unified, consistent, reliable, and updated information on the researcher's academic profile is provided on the Web, in both text and graphics, through the VIVO platform, to be consumed primarily by researchers and educational institutions to support their collaboration/publication networks and research statistics.
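The unification of profile data into linked data, as performed by a framework like AkCeL, can be sketched as a mapping from heterogeneous records to subject-predicate-object triples; the vocabulary and record below are illustrative, not the actual Academic SUP meta-schema:

```python
# Hypothetical sketch: heterogeneous profile records normalized into
# subject-predicate-object triples under a shared vocabulary.
profile = {
    "name": "A. Researcher",
    "affiliation": "Tecnologico de Monterrey",
    "interests": ["semantic web", "decision support"],
}

VOCAB = {"name": "foaf:name",
         "affiliation": "schema:affiliation",
         "interests": "foaf:topic_interest"}

def to_triples(subject_uri, record, vocab):
    """Flatten a record into (subject, predicate, object) triples."""
    triples = []
    for key, value in record.items():
        values = value if isinstance(value, list) else [value]
        for v in values:
            triples.append((subject_uri, vocab[key], v))
    return triples

triples = to_triples("ex:researcher/42", profile, VOCAB)
for t in triples:
    print(t)
```

Once every source is expressed in this uniform triple shape, merging profiles scattered across applications reduces to aligning subjects and predicates.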
    • Hybrid Self-Learning Fuzzy PD + I Control of Unknown SISO Linear and Nonlinear Systems

      Santana Blanco, Jesús (Instituto Tecnológico y de Estudios Superiores de Monterrey, 2005-12-01)
      A human being is capable of learning how to control many complex systems without knowing the mathematical model behind such systems, so there must exist some way to imitate that behavior with a machine. In this dissertation, a novel hybrid self-learning controller is proposed that is capable of learning how to control unknown linear and nonlinear processes, incorporating the behavior a human shows when learning to control an unknown process. The controller comprises a Fuzzy PD controller plus a conventional I controller, and its gains are tuned using a human-like learning algorithm developed from characteristics observed in actual human operators while they learned to control an unknown process to specified goals of steady-state error (SSE), settling time (Ts), and percent overshoot (PO). The systems tested were: first- and second-order linear systems, the nonlinear pendulum, and the nonlinear equations of the approximate pendulum, Van der Pol, Rayleigh, and Damped Mathieu. Analysis and simulation results are presented for all the mentioned systems. More detailed results are provided for a nonlinear pendulum as a representative of nonlinear systems and for a second-order linear temperature control system as a representative of linear systems. This temperature system is used as a benchmark against other controllers in the literature [10] that use this temperature control system, showing that the proposed controller is simpler and has superior results. Also, a robustness analysis demonstrates that the proposed controller keeps acceptable performance even under perturbation, noise, and parameter variations.
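A minimal sketch of the Fuzzy PD + I idea: a crudely error-scheduled proportional gain stands in for the fuzzy PD surface, plus a conventional integral term, driving a first-order plant. All gains and the plant are illustrative assumptions, not the dissertation's self-tuned controller:

```python
def fuzzy_pd_gain(error, small=0.2, kp_small=2.0, kp_large=6.0):
    """Crude stand-in for a fuzzy PD surface: blend two proportional
    gains by |error| (the real controller uses a fuzzy rule base)."""
    w = min(abs(error) / small, 1.0)
    return kp_small + w * (kp_large - kp_small)

def simulate(setpoint=1.0, kd=0.5, ki=1.5, dt=0.01, steps=2000):
    """Fuzzy-scheduled PD plus a conventional I term controlling a
    first-order plant dy/dt = -y + u (Euler integration)."""
    y, integ, prev_err = 0.0, 0.0, setpoint
    for _ in range(steps):
        err = setpoint - y
        deriv = (err - prev_err) / dt
        integ += err * dt
        u = fuzzy_pd_gain(err) * err + kd * deriv + ki * integ
        y += dt * (-y + u)
        prev_err = err
    return y

print(round(simulate(), 3))
```

The integral term is what removes the steady-state error that a fuzzy PD controller alone would leave on this plant.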
    • The Impact of Statistical Word Alignment Quality and Structure in Phrase Based Statistical Machine Translation

      Guzmán Herrera, Francisco J. (Instituto Tecnológico y de Estudios Superiores de Monterrey, 2011-01-12)
      Statistical word alignments represent lexical word-to-word translations between source and target language sentences. They are considered the starting point for many state-of-the-art Statistical Machine Translation (SMT) systems. In phrase-based systems, word alignments are loosely linked to the translation model. Despite the improvements reached in word alignment quality, there has been only a modest improvement in end-to-end translation. Until recently, little or no attention was paid to the structural characteristics of word alignments (e.g. unaligned words) and their impact on further stages of the phrase-based SMT pipeline. A better understanding of the relationship between word alignment and the ensuing processes helps to identify the variables across the pipeline that most influence translation performance and can be controlled by modifying the word alignment's characteristics. In this dissertation, we perform an in-depth study of the impact of word alignments at different stages of the phrase-based statistical machine translation pipeline, namely word alignment, phrase extraction, phrase scoring, and decoding. Moreover, we establish a multivariate prediction model for different variables of word alignments, phrase tables, and translation hypotheses. Based on those models, we identify the most important alignment variables and propose two alternatives to provide more control over alignment structure and thus improve SMT. Our results show that bringing alignment structure into decoding, via alignment gap features, yields significant improvements, especially in situations where translation data is limited. During the development of this dissertation we discovered how different characteristics of the alignment impact machine translation. We observed that while good-quality alignments yield good phrase pairs, the consolidation of a translation model depends on the alignment structure, not its quality.
Human alignments are denser than their computer-generated counterparts, which tend to be sparser and precision-oriented. Trying to emulate human-like alignment structure resulted in poorer systems, because the resulting translation models tend to be more compact and lack translation options. On the other hand, more translation options, even if noisier, help to improve the quality of the translation. This is because translation does not rely only on the translation model, but also on other factors that help to discriminate the noise from bad translations (e.g. the language model). Lastly, when we provide the decoder with features that help it make "more informed decisions" we observe a clear improvement in translation quality. This was especially true for the discriminative alignments, which inherently leave more unaligned words. The result is more evident in low-resource settings, where larger translation lexicons represent more translation options. Using simple features to help the decoder discriminate translation hypotheses showed clear, consistent improvements.
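The phrase-extraction stage of the pipeline, where alignment structure has its first effect, can be sketched with the standard consistency criterion (simplified here, without the unaligned-word boundary expansion that the full heuristic performs):

```python
def extract_phrases(src_len, tgt_len, alignment, max_len=3):
    """Extract phrase pairs consistent with a word alignment: a source
    span and its linked target span form a pair only if no alignment
    link crosses the span boundary (standard phrase-based SMT rule)."""
    pairs = set()
    for i1 in range(src_len):
        for i2 in range(i1, min(i1 + max_len, src_len)):
            # target positions linked to source span [i1, i2]
            tgt = [j for (i, j) in alignment if i1 <= i <= i2]
            if not tgt:
                continue
            j1, j2 = min(tgt), max(tgt)
            # consistency: every link into [j1, j2] must come from [i1, i2]
            if all(i1 <= i <= i2 for (i, j) in alignment if j1 <= j <= j2):
                pairs.add(((i1, i2), (j1, j2)))
    return pairs

# "la casa azul" -> "the blue house"; links: la-the, casa-house, azul-blue
align = {(0, 0), (1, 2), (2, 1)}
pairs = extract_phrases(3, 3, align)
print(sorted(pairs))
```

Note how the reordering link (casa-house, azul-blue) blocks the pair ("la casa", "the blue"): this is exactly the mechanism through which alignment structure, including unaligned words, shapes the resulting phrase table.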
    • Implementation of a two-photon Michelson interferometer for quantum-optical coherence tomography

      López Mago, Dorilián (Instituto Tecnológico y de Estudios Superiores de Monterrey, 2012-05-01)
      Time-domain Optical Coherence Tomography (OCT) is an imaging technique that provides information about the internal structure of a sample. It makes use of classical light in conjunction with conventional interferometers. A quantum version of OCT, called Quantum-Optical Coherence Tomography (QOCT), has been developed in recent years. QOCT uses entangled photon pairs in conjunction with two-photon interferometers. QOCT improves depth resolution and offers more information about the optical properties of the sample. However, the current implementation of QOCT is not competitive with its classical counterpart because of the low efficiency of the sources and detectors required for its implementation. We analyzed the feasibility of QOCT using a Michelson interferometer that can be adapted to the state of the art in entangled photon sources and detectors. Despite its simplicity, no previous implementations of QOCT have been done with this interferometer. This thesis develops the theory of the two-photon Michelson interferometer applied to QOCT. It describes the elements that characterize the coincidence interferogram and supports the theory with experimental measurements. We found that as long as the spectral bandwidth of the entangled photons is smaller than their central frequency, the Michelson interferometer can be successfully used for QOCT. In addition, we found that the degree of entanglement between the photons can be calculated from the coincidence interferogram. The two-photon Michelson interferometer provides another possibility for QOCT with the advantages of simplicity, performance, and adaptability. The resolution of the interferometer can be improved using ultrabroadband sources of entangled photons, e.g. photonic fibers. In addition, we can study the implementation of photon-number-resolving detectors in order to remove the coincidence detection that is used for detecting entangled photon pairs.
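The shape of a coincidence interferogram can be illustrated with a simple Gaussian-envelope dip model; the functional form and parameters below are a generic sketch of two-photon interference, not the interferogram derived in the thesis:

```python
import math

def coincidences(tau, tau_c=1.0, V=1.0):
    """Illustrative Gaussian model of a two-photon interference dip:
    the normalized coincidence rate drops by visibility V when the
    path delay tau is within the photons' correlation time tau_c.
    (Sketch only; the thesis derives the exact interferogram for its
    two-photon Michelson setup.)"""
    return 0.5 * (1.0 - V * math.exp(-(tau / tau_c) ** 2))

# Deep dip at zero delay, flat background far from it.
print(coincidences(0.0), coincidences(5.0))
```

In such models the dip width tracks the photons' correlation time and the visibility V tracks the degree of entanglement, which is the kind of information the abstract says can be read off the coincidence interferogram.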
    • Implementing an object-oriented method of information systems for CIM to the Mexican industry

      Prieto Magnus, Julián (Instituto Tecnológico y de Estudios Superiores de Monterrey, 1997)
    • On the Task-Driven Generation of Preventive Sensing Plans for Execution of Robotic Assemblies

      Conant Pablos, Santiago E. (Instituto Tecnológico y de Estudios Superiores de Monterrey, 2004-01-12)
      It is well known that success during robotic assemblies depends on the correct execution of the sequence of assembly steps established in a plan. In turn, the correct execution of these steps depends on conformance to a series of preconditions and postconditions on the states of the assembly elements and on the consistent, repeatable, and precise actions of the assembler (for instance, a robotic arm). Unfortunately, the ubiquitous and inherent real-life uncertainty and variation in the work-cell, in the assembly robot calibration, and in the robot actions can produce errors and deviations during the execution of the plan. This dissertation investigates several issues related to the use of geometric information about the models of component objects of assemblies and the process of contact formation among such objects for tackling the automatic planning of sensing strategies. The studies and experiments conducted during this research have led to the development of novel methods for enabling robots to detect critical errors and deviations from a nominal assembly plan during its execution. The errors are detected before they cause failure of assembly operations, when the objects that will cause a problem are manipulated. Having control over these objects, commanded adjustment actions are expected to correct the errors. First, a new approach is proposed for determining which assembly tasks require vision and force feedback data to verify their preconditions and the preconditions of future tasks that would be affected by lack of precision in the execution of those tasks. For this, a method is proposed for systematically assigning force compliance skills for monitoring and controlling the execution of tasks that involve contacts between the object manipulated by the robot arm and the objects that make up its direct environmental configuration.
Also, a strategy is developed to deduce visual sensing requirements for the manipulated object of the current task and the objects that make up its environment configuration. This strategy includes a geometric reasoning mechanism that propagates alignment constraints in the form of a dependency graph. This graph encodes the complete set of critical alignment constraints and then expresses the vision and force sensing requirements for the analyzed assembly plan. Recognizing the importance of a correct environment configuration for succeeding in the execution of a task that involves multiple objects, the propagation of critical dependencies makes it possible to anticipate potential problems that could irremediably affect the successful execution of subsequent assembly operations. This propagation scheme represents the heart of this dissertation work because it provides the basis for the rest of the contributions. The approach was extensively tested, demonstrating its correct execution in all the test cases. Next, knowing which tasks require preventive sensing operations, a sensor planning approach is proposed to determine an ordering of potential viewpoints at which to position the camera that will be used to implement the feedback operations. The approach does not consider kinematic constraints in the active camera mechanism. The viewpoints are ordered by a measure computed from the intersection of two regions describing the tolerance of tasks to error and the expected uncertainty of an object localization tool. A method is posed to analytically deduce the inequalities that implicitly describe a region of tolerated error. Also, an algorithm that implements an empirical method to determine the form and orientation of six-dimensional ellipsoids is proposed to model and quantify the uncertainty of the localization tool.
It was experimentally shown that the goodness measure is an adequate criterion for ordering the viewpoints because it agrees with the resulting success ratio of real-life task execution after using the visual information to adjust the configuration of the manipulated objects. Furthermore, an active vision mechanism is also developed and tested to perform visual verification tasks. This mechanism allows the camera to move around the assembly scene to collect visual information. The active camera was also used during the experimentation phase. Finally, a method is proposed to construct a complete visual strategy for an assembly plan. This method decides the specific sequence of viewpoints to be used for localizing the objects that were specified by the visual sensing analyzer. The method transforms the problem of deciding a sequence of camera motions into a multi-objective optimization problem that is solved in two phases: a local phase that reduces the original set of potential viewpoints to small sets of kinematically feasible viewpoints with the best predicted success probability values for the active camera; and a global phase that decides a single viewpoint for each object in a task and then stitches them together to form the visual sensing strategy for the assembly plan.
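The viewpoint-ordering idea, a goodness measure from the intersection of the error-tolerance region and the localization-uncertainty region, can be sketched in one dimension (the dissertation uses six-dimensional ellipsoids); the intervals below are made up for illustration:

```python
def goodness(tolerance, uncertainty):
    """Toy 1-D version of the viewpoint goodness measure: the fraction
    of the localization-uncertainty interval that falls inside the
    task's error-tolerance interval. Larger is better: the viewpoint's
    expected localization error is more likely to be tolerated."""
    lo = max(tolerance[0], uncertainty[0])
    hi = min(tolerance[1], uncertainty[1])
    overlap = max(0.0, hi - lo)
    return overlap / (uncertainty[1] - uncertainty[0])

tol = (-1.0, 1.0)                       # task tolerates errors in [-1, 1]
viewpoints = {"v1": (-0.5, 0.5),        # tight, centered uncertainty
              "v2": (-2.0, 2.0),        # wide uncertainty
              "v3": (0.8, 2.8)}         # biased uncertainty
ranked = sorted(viewpoints,
                key=lambda v: goodness(tol, viewpoints[v]), reverse=True)
print(ranked)
```

The same idea in six dimensions (volume of an ellipsoid intersection rather than interval overlap) gives an ordering that can be compared against real-life task success ratios, as the abstract describes.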
    • Innovative Optimal Design Methods

      Moreno Grandas, Diana P. (Instituto Tecnológico y de Estudios Superiores de Monterrey, 2013-12-01)
    • An Integrated Data Model and Web Protocol for Arbitrarily Structured Information

      Álvarez Cavazos, Francisco (Instituto Tecnológico y de Estudios Superiores de Monterrey, 2007-01-12)
      Within the Web's data ecosystem dwell applications that consume and produce information with varying degrees of structuring, ranging from very structured business data to the semistructured or unstructured data found in documents that contain a significant amount of text. Current database technology was not designed for the Web and, consequently, database communication protocols, query models, and even data models are inadequate for the demands of "data everywhere." Thus, a technique to uniformly store, search, transport, and update all the variety of information within Web or intranet environments has yet to be designed. The Web context requires the data management community to address: (a) data modeling and basic querying that supports multiple data models to accommodate many types of data sources, (b) powerful search mechanisms that accept keyword queries and select relevant structured sources that may answer them, and (c) the ability to combine answers from structured and unstructured data in a principled way. In consequence, this dissertation constructively designs a technique to store, search, transport, and update unstructured and structured information for Web or intranet-based environments: the Relational-text (RELTEX) protocol. Central to the design of the protocol is an integrated model for structured and unstructured data and its associated declarative language interface, namely, the RELTEX model and calculus. The RELTEX model is constructively defined, departing from the relational and information retrieval models and their associated retrieval strategies. The model's data items are tuples with structured "columns" and unstructured "fields" that further allow idiosyncratic schema in the form of "extension fields", which are tuple-specific name/value pairs.
This flexibility allows representation of totally unstructured information, totally structured information, and mixtures of structured and unstructured data, such as tables where tuples have a varying number of fields over time. RELTEX calculus extends tuple relational calculus to consider text fields, similarity matches, match ranking, and sort order. Then, building on top of the formally defined RELTEX data model and calculus and departing from the architecture of the Web, the RELTEX protocol is defined as a resource-centric protocol to describe and manipulate data and schema of unstructured and structured data sources. An equivalence mapping between RELTEX and the relational and information retrieval models is provided. The mapping suggests a wide range of applicability for RELTEX, thus proving the model's value. On the other hand, the RELTEX protocol is distinguished from other techniques for data access and storage on the Web since (a) it supports structured and unstructured data manipulation and retrieval, (b) it offers operations to describe and manipulate both common and idiosyncratic schema of data items, and (c) it directly federates data items to the Web over a compound key; thus demonstrating novelty and value. The RELTEX protocol, model, and calculus are proven feasible by means of a proof-of-concept implementation. Departing from a motivating scenario, the prototype is used to provide representative examples of data and schema operations. Having demonstrated that the RELTEX protocol and model contribute towards the data modeling and basic querying challenge imposed by the Web, we expect that this dissertation benefits researchers and practitioners alike with a novel, valuable, effective, and feasible technique to store, search, transport, and update unstructured and structured information in the Web environment.
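A RELTEX-style data item, structured columns, an unstructured text field, and tuple-specific extension fields, can be sketched as follows; the record layout and query helper are invented for illustration and are not the protocol's actual interface:

```python
# Illustrative sketch of RELTEX-style data items: fixed structured
# columns, an unstructured text field, and idiosyncratic per-tuple
# "extension fields" (name/value pairs).
tuples = [
    {"cols": {"id": 1, "author": "Smith"},
     "text": "survey of web data management",
     "ext": {"reviewed_by": "Jones"}},       # tuple-specific schema
    {"cols": {"id": 2, "author": "Lee"},
     "text": "indexing structured sources",
     "ext": {}},
]

def query(tuples, keyword=None, **col_filters):
    """Uniformly filter by structured columns and keyword-match the
    unstructured text, illustrating mixed structured/text querying."""
    out = []
    for t in tuples:
        if any(t["cols"].get(k) != v for k, v in col_filters.items()):
            continue
        if keyword and keyword not in t["text"]:
            continue
        out.append(t)
    return out

print([t["cols"]["id"] for t in query(tuples, keyword="web")])
```

The point of the sketch is that one query interface serves both kinds of predicates, which is the integration the RELTEX model formalizes (with similarity matching and ranking rather than plain substring search).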
    • Intelligent Monitoring and Supervisory Control System in Peripheral Milling Process in High Speed Machining

      Vallejo Guevara, Antonio Jr. (Instituto Tecnológico y de Estudios Superiores de Monterrey, 2009-01-11)
      This research addresses a real problem in High Speed Machining (HSM), specifically in the peripheral milling process. Machining processes have increased in complexity with HSM because of the high dimensional precision, high surface quality, and minimum cost demanded of the products. The general scope of this research is to design and implement an intelligent monitoring and supervisory control system for the peripheral milling process in HSM. The main objectives are: (1) implement a general model to predict surface roughness considering several aluminium alloys, cutting parameters, geometries, and cutting tools; (2) design and implement a monitoring and diagnosis system for the cutting tool wear condition during the machining process; and (3) design and develop an intelligent process planning system, which includes a merit variable to compute the optimal cutting parameters and a decision-making module to recommend actions according to the cutting tool wear condition. The design and implementation of the system involved extensive research, exhaustive experiments, and several papers to validate the proposed ideas and algorithms. The main contributions can be summarized as follows. A complete data acquisition system was implemented in a Kondia HS-1000 machining center. Several sensors were installed to characterize the surface roughness (Ra) and flank wear of the cutting tool with the process state variables. The Mel Frequency Cepstrum Coefficients (MFCC) computed from the process signals were used for modelling Ra with ANN models. Regarding Ra modelling, the most important factors affecting Ra were deduced by applying screening factorial design. Response Surface Methodology (RSM) was also applied, with excellent results, for modeling Ra. The models were computed for new, half-new, half-worn, and worn cutting tool conditions.
Multi-sensor data fusion was used to build ANN models with excellent results. New ideas based on Hidden Markov Models (HMM) and the MFCC were developed for monitoring and diagnosing the cutting tool wear condition in peripheral milling in HSM. The system was implemented to recognize four cutting tool wear conditions on-line: new, half-new, half-worn, and worn. The design and implementation of the intelligent monitoring and process planning system (IMPPS) represented the main module of the intelligent monitoring and supervisory control system. In this module, Genetic Algorithms with the RSM models were used to compute the optimal cutting parameters in pre-process operating mode with excellent results. Another contribution was the implementation of a Markov Decision Process in the optimization process. This algorithm recommends optimal actions for minimizing the operation cost during the production of specific workpieces.
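The RSM-based Ra models and the search over cutting parameters can be sketched as follows; the second-order polynomial is the standard RSM form, but the coefficients and the grid search (standing in for the Genetic Algorithm) are illustrative assumptions, not the dissertation's fitted models:

```python
def ra_rsm(feed, speed, b=(1.2, 8.0, -0.002, 12.0, 1e-6, -0.01)):
    """Second-order response-surface form often used for surface
    roughness: Ra = b0 + b1*f + b2*v + b3*f^2 + b4*v^2 + b5*f*v.
    Coefficients are made up; the dissertation fits separate models
    per tool-wear state from experimental data."""
    b0, b1, b2, b3, b4, b5 = b
    return (b0 + b1 * feed + b2 * speed
            + b3 * feed ** 2 + b4 * speed ** 2 + b5 * feed * speed)

# Crude stand-in for the GA search over cutting parameters:
# evaluate the fitted surface over candidate (feed, speed) pairs
# and keep the one predicting the lowest roughness.
best = min(((f, v) for f in (0.05, 0.1, 0.15, 0.2)
                   for v in (300, 500, 700)),
           key=lambda p: ra_rsm(*p))
print(best, round(ra_rsm(*best), 3))
```

In the real system the objective is a merit variable combining roughness with cost, and the search runs over a continuous space, which is where a Genetic Algorithm earns its keep over a grid.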
    • Intelligent wheelchair

      Monnard Reguin, David Gregory
      The proposed project is to create a wheelchair that includes four major features: the first is being able to control the chair by moving the eyes, the second is the possibility of reproducing prerecorded voice messages, the third is being able to control the chair with voice commands, and the last is an avoidance system based on the data collected with ultrasonic sensors.
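The ultrasonic avoidance feature could follow a simple threshold rule like the sketch below; the sensor layout, thresholds, and command names are assumptions for illustration, not the project's actual design:

```python
def avoidance_command(distances_cm, threshold=50):
    """Toy obstacle-avoidance rule over three ultrasonic readings
    (front, left, right): go forward if the path is clear, otherwise
    turn toward the freer side, and stop if boxed in."""
    front, left, right = distances_cm
    if front > threshold:
        return "forward"
    if left > right and left > threshold:
        return "turn_left"
    if right > threshold:
        return "turn_right"
    return "stop"

print(avoidance_command((120, 80, 30)))
```

In a real chair this rule would run in a loop, arbitrating between the avoidance output and the eye- or voice-issued commands, with safety (stop) taking priority.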
    • Large Scale Topic Modeling Using Search Queries: An Information-Theoretic Approach

      Ramírez Rangel, Eduardo H. (Instituto Tecnológico y de Estudios Superiores de Monterrey, 2010-01-12)
      Creating topic models of text collections is an important step towards more adaptive information access and retrieval applications. Such models encode knowledge of the topics discussed in a collection, the documents that belong to each topic, and the semantic similarity of a given pair of topics. Among other things, they can be used to focus or disambiguate search queries and to construct visualizations for navigating the collection. So far, the dominant paradigm has been Probabilistic Topic Modeling, in which topics are represented as probability distributions over terms, and documents are assumed to be generated from a mixture of random topics. Although such models are theoretically sound, their high computational complexity makes them difficult to use on very large collections. In this work we propose an alternative topic modeling paradigm based on a simpler representation of topics as freely overlapping clusters of semantically similar documents, which is able to take advantage of highly scalable clustering algorithms. We then propose the Query-based Topic Modeling framework (QTM), an information-theoretic method that assumes the existence of a "golden" set of queries that can capture most of the semantic information of the collection and produce models with maximum semantic coherence. The QTM method uses information-theoretic heuristics to find a set of "topical queries" which are then co-clustered along with the documents of the collection and transformed to produce overlapping document clusters. The QTM framework was designed with scalability in mind and can be executed in parallel over commodity-class machines using the Map-Reduce approach. Then, in order to compare the QTM results with models generated by other methods, we have developed metrics that formalize the notion of semantic coherence using probabilistic concepts and the familiar notions of recall and precision.
In contrast to traditional clustering metrics, the proposed metrics have been generalized to validate overlapping and potentially incomplete clustering solutions using multi-labeled corpora. We use them to experimentally validate our query-based approach, showing that models produced using selected queries outperform those produced using the collection vocabulary. Also, we explore the heuristics and settings that determine the performance of QTM and show that the proposed method can produce models of comparable, or even superior, quality to those produced with state-of-the-art probabilistic methods.
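The idea of precision/recall-style validation for overlapping clusters against multi-labeled corpora can be sketched as a best-matching-label F1 average; this simplified metric is illustrative, not the exact formulation developed in the work:

```python
def overlap_f1(clusters, labels):
    """Score overlapping document clusters against multi-labeled
    ground truth: for each cluster, take the F1 of its best-matching
    label, then average. Clusters may overlap and need not cover all
    labels (a simplified version of the metrics idea in the text)."""
    def f1(cluster, label_docs):
        inter = len(cluster & label_docs)
        if inter == 0:
            return 0.0
        p = inter / len(cluster)        # precision of the cluster
        r = inter / len(label_docs)     # recall of the label
        return 2 * p * r / (p + r)
    return sum(max(f1(c, d) for d in labels.values())
               for c in clusters) / len(clusters)

clusters = [{1, 2, 3}, {3, 4}]                       # doc 3 is in both
labels = {"topicA": {1, 2, 3}, "topicB": {4, 5}}     # multi-label truth
print(round(overlap_f1(clusters, labels), 3))
```

Because each cluster is scored against its own best label, the metric does not penalize overlap per se, which is exactly what a hard-partition clustering metric would get wrong here.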
    • Mathematical models of some evolutionary systems under the influence of stochastic factors

      Rodríguez Said, Roberto D. (Instituto Tecnológico y de Estudios Superiores de Monterrey, 2007-01-12)
      As is known, the problem of availability of information is normally addressed using a buffer. It is usually required to calculate the effectiveness or reliability of the system in order to optimize the amount of stored information according to the customers' random requests and to the amount of incoming information from the supply line. In this thesis, we consider the case of a single buffer connected to any number of customers with bursty demands. We model the variation of the level of stored information in the buffer as an evolution in a random medium. We assume that the customers can be modeled as semi-Markov stochastic processes, and we use the phase merging algorithm to reduce the evolution process in a semi-Markov medium to an approximate evolution in a Markov medium. We then obtain a general solution for the stationary probability density of the buffer level and general results for the stationary efficiency of the system.
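The buffer system can be illustrated with a small Monte-Carlo simulation; the thesis treats the problem analytically via the phase merging algorithm, so the discrete dynamics and parameters below are only an illustrative stand-in:

```python
import random

def buffer_efficiency(capacity=10, inflow=2, demand_p=0.4, burst=4,
                      steps=100000, seed=1):
    """Monte-Carlo sketch of the buffer system: constant inflow from
    the supply line, customers issuing bursty random demands, and
    efficiency measured as the fraction of demanded units actually
    served from the buffer."""
    rng = random.Random(seed)
    level, served, demanded = 0, 0, 0
    for _ in range(steps):
        level = min(capacity, level + inflow)      # refill, capped
        if rng.random() < demand_p:                # a demand burst arrives
            d = rng.randint(1, burst)
            demanded += d
            take = min(d, level)                   # serve what we can
            served += take
            level -= take
    return served / demanded

print(round(buffer_efficiency(), 3))
```

Raising the inflow relative to mean demand raises the stationary efficiency toward 1, which is the kind of trade-off the analytical stationary results let one optimize without simulation.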
    • Methodology Based on the State Transition Concept for Simple Constitutive Modeling of Smart Materials

      Varela Jiménez, Manuel I. (Instituto Tecnológico y de Estudios Superiores de Monterrey, 2011-01-12)
      Smart materials have the capability to sense and respond to environmental stimuli in a predictable and useful manner. Their existence has transformed the paradigm that materials can be used only for structural purposes into the concept that they can also be the basis for actuators or sensors, generating new possibilities for device design. However, the development of these materials also creates the need to propose new theories and concepts for understanding their behavior. This dissertation focuses on the development of a general constitutive model for describing the response of several smart materials by considering that a microstructural change is stimulated in them, such as a state transition that follows a sigmoidal behavior and can be modeled by a proposed expression describing a transition induced by an external factor. This expression is flexible and able to take several kinds of external variables as the main parameter that induces the transformation. A methodology is proposed and evaluated for positing a state transition in smart materials and modeling their response to a stimulus through a common mathematical expression relating the effect of microstructural changes on some variable associated with the material. In this way, the following were studied: (1) the effect of the twinned martensite - detwinned martensite - austenite strain/temperature-induced phase transformation on the stress of a Nickel-Titanium shape memory alloy, (2) the effect of the glassy - active temperature-induced state transition on the stress of shape memory polymers, (3) the effect of the magnetic-field-induced arrangement of iron particles on the shear yield stress of a magnetorheological fluid, and (4) the effect of the electric-field-induced arrangement of ions on the bending of a thin film of electroactive polymer.
A constitutive model is proposed for each material, with promising results given the good fit to experimental data and the comparison with other models, although it has some limitations: it is one-dimensional, considers only one-way behavior of the materials, and has been fitted for specific geometries or chemical compositions, so it still needs to be generalized. However, it can be considered an initial approach toward a general model for smart materials regardless of their atomic structure, chemical bonds, or physical domain, which could be applied to the design of materials and the simulation of their behavior through numerical methods.
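The sigmoidal state-transition idea described in this abstract can be sketched numerically. The logistic form, the transition center `x0`, and the width `k` below are illustrative assumptions, not the dissertation's exact expression:

```python
import math

def transition_fraction(x, x0, k):
    """Fraction of material in the new state as a sigmoidal function
    of the driving variable x (e.g. temperature, stress, magnetic or
    electric field). x0 is the transition center and k its width;
    a logistic form is assumed here for illustration."""
    return 1.0 / (1.0 + math.exp(-(x - x0) / k))

def mixed_property(x, p_low, p_high, x0, k):
    """Rule-of-mixtures estimate of a material property (e.g. modulus
    or yield stress) as the state transition progresses."""
    f = transition_fraction(x, x0, k)
    return (1.0 - f) * p_low + f * p_high
```

At the transition center the material is half-transformed, so the mixed property lies midway between the two pure-state values, which is the qualitative behavior shared by the four material systems listed above.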
    • Mixed Oligopoly: Analysis of Oligopoly Models Considering Social Welfare

      Cordero Franco, Alvaro E. (Instituto Tecnológico y de Estudios Superiores de Monterrey, 2009-01-12)
    • Modeling Multiple Interactions in Cooperative Information Systems

      Camargo Santacruz, Francisco J. (Instituto Tecnológico y de Estudios Superiores de Monterrey, 2001-01-12)
      The dynamic nature of cooperative agent environments makes it difficult to model persistent interactions between agents, highlighting problems of ambiguity and control in the representation of the interaction mechanism. The modeling difficulty increases when more than two agents are involved in an interaction simultaneously. This problem is one of the most important challenges in research on Cooperative Multi-Agent Systems (CMAS). Cooperative Information Systems (CIS) can be viewed as CMAS; they integrate different types of information systems so that they work cooperatively toward a common goal. By their nature, these systems exhibit dynamic behavior, and therefore one of their main problems is how to model and control multiple simultaneous interactions between agents in a way that remains simple for the software engineer. The problem is aggravated because the methods software engineers use to model CIS interactions lack expressiveness, all the more so when interactions among more than two agents occur simultaneously. This complicates the modeling of these systems and hinders communication between modelers and developers, which in turn leads to high development costs. The main contribution of this thesis is a model based on Coloured Petri Nets (CPN) for expressively modeling multiple simultaneous interactions in Cooperative Information Systems. This model helps to represent the dynamics of the system and to reduce the difficulty associated with modeling them. 
The model mainly integrates: a) the basic action cycle called the "Loop", to represent the system's interactions and model conversations in organizations; b) Coloured Petri Nets, for specifying the interactions represented in the loop and for simulating the system; and c) the communicative acts of the Foundation for Intelligent Physical Agents (FIPA), included in its agent communication language specification. The model offers advantages in representing and reasoning about the interaction mechanisms modeled in CIS. To validate the proposed model, two applications are presented, in the domains of Electronic Business (e-business) and Contact Centers respectively, both dynamic environments that require adequate tools to represent and control multiple interactions expressively.
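The place/transition firing semantics that Coloured Petri Net tools mechanize can be illustrated with a minimal sketch. The class, the place names (`outbox_A`, `inbox_B`), and the single "send" step below are illustrative assumptions, not the thesis's actual CPN model:

```python
from collections import Counter

class PetriNet:
    """Minimal place/transition net with colored tokens (strings),
    illustrating the firing rule underlying Coloured Petri Nets:
    a transition is enabled when its input places hold the required
    tokens, and firing it moves those tokens to the output places."""
    def __init__(self):
        self.marking = Counter()   # (place, color) -> token count
        self.transitions = {}      # name -> (consumes, produces)

    def add_tokens(self, place, color, n=1):
        self.marking[(place, color)] += n

    def add_transition(self, name, consumes, produces):
        # consumes / produces: lists of (place, color) pairs
        self.transitions[name] = (consumes, produces)

    def enabled(self, name):
        consumes, _ = self.transitions[name]
        need = Counter(consumes)
        return all(self.marking[k] >= v for k, v in need.items())

    def fire(self, name):
        if not self.enabled(name):
            raise RuntimeError(f"transition {name!r} is not enabled")
        consumes, produces = self.transitions[name]
        for k in consumes:
            self.marking[k] -= 1
        for k in produces:
            self.marking[k] += 1

# One conversation step: agent A's "request" message moves to B's inbox.
net = PetriNet()
net.add_tokens("outbox_A", "request")
net.add_transition("send", [("outbox_A", "request")],
                           [("inbox_B", "request")])
net.fire("send")
```

Token colors are what let a single net carry several simultaneous conversations: each conversation's messages are tokens of a distinct color flowing through the same places and transitions.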
    • Multiscale Modeling of the Mechanical Behavior of Polymers Reinforced with Single-Walled Carbon Nanotubes

      Rosales Torres, Conrado (Instituto Tecnológico y de Estudios Superiores de Monterrey, 2010-01-03)
      This research work focused on consolidating a theory that relates the behavior of polymeric materials reinforced with carbon nanotubes from the continuum down to the atomistic level. As is to be expected, theories that are applicable to the continuum of a material generally cannot perform equally well at nanoscopic scales. It is therefore necessary to turn to the energetic aspects associated with the various physical phenomena that characterize the behavior of these materials. In this regard, a multiscale model was developed that consolidates several physical principles which, together with mechanistic models, have made it possible to characterize the behavior of these materials under various loading states. In particular, the model was calibrated via simple tension tests on materials such as polyethylene (PE), polycarbonate (PC), and acrylonitrile-butadiene-styrene (ABS) reinforced with single-walled carbon nanotubes. Some of the inherent advantages of the developed model are associated with the small number of parameters it requires to theoretically predict the behavior of the composite in simple tension. In addition, with the Mori-Tanaka mean-field theory from micromechanics it is possible to determine the equivalent stiffness tensor of the composite material, considering that the single-walled carbon nanotubes (SWCNTs) may be aligned or randomly oriented. The simplicity of the model, together with its good predictive agreement with experimental results, lays the foundation for its use in the design and development of structural components for diverse engineering applications.
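The Mori-Tanaka mean-field estimate mentioned above can be illustrated in its simplest closed form. The sketch below uses the standard result for isotropic spherical inclusions (bulk and shear moduli only); the dissertation's case of aligned or randomly oriented SWCNT fibers requires the full Eshelby tensor and is considerably more involved, and the numerical moduli in the usage note are arbitrary illustrative values:

```python
def mori_tanaka_spheres(K_m, G_m, K_i, G_i, f):
    """Mori-Tanaka effective bulk (K) and shear (G) moduli for an
    isotropic matrix (K_m, G_m) containing a volume fraction f of
    isotropic spherical inclusions (K_i, G_i).
    Illustrative only: fiber-like SWCNT reinforcement needs the
    orientation-dependent Eshelby tensor instead."""
    # Effective bulk modulus
    K_eff = K_m + f * (K_i - K_m) / (
        1.0 + (1.0 - f) * (K_i - K_m) / (K_m + 4.0 * G_m / 3.0))
    # Matrix shear "reference" modulus entering the shear estimate
    F_m = G_m * (9.0 * K_m + 8.0 * G_m) / (6.0 * (K_m + 2.0 * G_m))
    # Effective shear modulus
    G_eff = G_m + f * (G_i - G_m) / (
        1.0 + (1.0 - f) * (G_i - G_m) / (G_m + F_m))
    return K_eff, G_eff

# Example: a soft polymer matrix with a stiff filler (moduli in GPa,
# values chosen only for illustration).
K, G = mori_tanaka_spheres(K_m=4.0, G_m=1.0, K_i=271.0, G_i=170.0, f=0.05)
```

The estimate recovers the matrix at f = 0 and the inclusion at f = 1, and interpolates nonlinearly in between, which is the sanity check usually applied to any mean-field homogenization.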