    • Microstructural characterization of an AISI 4140 steel nitrided by microwave post-discharge, with application of molecular simulation techniques

      Medina Flores, Ariosto (Instituto Tecnológico y de Estudios Superiores de Monterrey, 2005-10-01)
      (110) γ'-Fe4N. EDS analyses showed a nitrogen concentration profile from the surface to the core of the material. A maximum microhardness of 1120 HV was obtained at the surface of the nitrided material, compared with the initial value of 330 HV for the as-received steel. The nitrided steel exhibits a microhardness profile that decreases toward the interior of the material. The electrochemical tests performed on the nitrided samples, anodic polarization and Tafel analysis, showed that the noblest corrosion potential (Ecorr) and the lowest corrosion rate were obtained for the sample nitrided for 5 minutes, clearly indicating that short-time microwave post-discharge nitriding is a very efficient process for the corrosion protection of AISI 4140 steel.
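      Tafel extrapolation, as used in the electrochemical tests above, estimates Ecorr and the corrosion current by intersecting the linear (Tafel) branches of the polarization curve. A minimal sketch on synthetic Butler-Volmer data; the constants and the data here are illustrative assumptions, not the thesis measurements:

```python
import numpy as np

def tafel_fit(E, i, window=(0.06, 0.18)):
    """Estimate Ecorr and i_corr by fitting the linear (Tafel) regions
    of log10|i| vs E and extrapolating them to their intersection."""
    Ecorr0 = E[np.argmin(np.abs(i))]       # rough Ecorr: where |i| is minimal
    eta = E - Ecorr0                       # overpotential
    logs = np.log10(np.abs(i) + 1e-30)     # guard against log of zero at Ecorr
    fits = []
    for branch in (eta > 0, eta < 0):      # anodic branch, then cathodic branch
        sel = branch & (np.abs(eta) > window[0]) & (np.abs(eta) < window[1])
        fits.append(np.polyfit(E[sel], logs[sel], 1))
    (sa, ba), (sc, bc) = fits
    Ecorr = (bc - ba) / (sa - sc)          # intersection of the two Tafel lines
    icorr = 10 ** (sa * Ecorr + ba)
    return Ecorr, icorr

# Synthetic polarization curve: Butler-Volmer form with Ecorr = 0 V,
# i_corr = 1e-6 A/cm^2 and 120 mV/decade Tafel slopes (assumed values).
E = np.linspace(-0.25, 0.25, 501)
i = 1e-6 * (10 ** (E / 0.12) - 10 ** (-E / 0.12))
Ecorr_est, icorr_est = tafel_fit(E, i)
```

      On this symmetric synthetic curve the estimated Ecorr falls at the true value and i_corr within a small factor of the assumed 1e-6; real polarization data would need care in choosing the fitting window.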
    • Carbon policy in the presence of a consumer-friendly firm

      Kalashnikov, Viacheslav; Kalashnykova, Nataliya; García Martínez, Arturo; Smith Cornejo, Neale Ricardo; Ángel Bello Acosta, Francisco Román; Güemes Castorena, David (Instituto Tecnológico y de Estudios Superiores de Monterrey, 2018-07)
      This dissertation studies the implications of the emergence of a consumer-friendly firm in a duopolistic polluting industry when the regulator can or cannot credibly commit to an environmental instrument, such as an emission tax or a tradable-permits policy. The welfare and environmental consequences are examined, and the conditions under which one of these instruments is superior to the other are investigated. The second chapter considers a Cournot duopoly model with a consumer-friendly firm and analyzes the interplay between the strategic choice of abatement technology and the timing of the government's commitment to the environmental tax policy. We show that the optimal emission tax under the committed policy regime is always higher than under the non-committed one, but both taxes can exceed marginal environmental damage when consumer-friendliness is high enough. We also show that the emergence of a consumer-friendly firm might yield better outcomes for both welfare and environmental quality without the commitment to the environmental policy. The third chapter considers the timing of environmental policies with a consumer-friendly firm having abatement technology and compares two market-based regulatory instruments: tradable permits and emission tax regulations. When the government can credibly commit to its policy, we show that the equilibrium outcomes under both policies are equivalent in terms of permit price and tax rate. Under the non-committed policy, however, the equivalence breaks down because firms have opposite incentives to induce the time-consistent policy to be adjusted ex post. In particular, compared to a pre-committed government, firms abate fewer emissions to induce higher emission quotas under the permits policy, while a consumer-friendly firm abates more emissions to reduce the tax rate under the tax policy. 
Finally, we show that the tax policy results in higher welfare and lower environmental damage unless the concern for consumer surplus is considerable.
    • Characterization of the skin secretions of Dryophytes arenicolor and identification of Arenin, a novel Kunitz-like polypeptide

      Hernández Pérez, Jesús
      Zootherapy is the treatment of human ailments with remedies made from animals and their products. Despite its prevalence in traditional medical practices worldwide, research on this phenomenon has often been neglected in comparison to medicinal-plant research. Amphibian skin secretions are enriched with complex cocktails of bioactive molecules such as proteins, peptides, biogenic amines, alkaloids, guanidine derivatives, steroids and other minor components spanning a wide spectrum of pharmacological actions exploited for centuries in folk medicine. This study presents evidence on the protein profile of the skin secretions of the canyon tree frog, Dryophytes arenicolor, an anuran from the Hylidae family previously described as an ingredient in Mexican Traditional Medicine practices. It also presents the reverse-phase liquid chromatography isolation, mass spectrometry characterization, identification at the mRNA level and 3D modelling of arenin, a novel 58-amino-acid Kunitz-like polypeptide from the skin secretions of D. arenicolor. To evaluate the bioactivity potential of arenin, cell viability assays were performed on HDFa, Caco-2 and MCF7 cells cultured with different concentrations of arenin. At 2 µg/mL of arenin, HDFa and Caco-2 cells showed viabilities of 52.1%±2.86 and 108.8%±4.86, respectively. A viability shift was observed at 4 µg/mL, with HDFa and Caco-2 cells showing viabilities of 100.74%±2.60 and 62.77%±1.69. This alternating viability was also observed at 8 and 16 µg/mL of arenin, suggesting a multi-target interaction in a hormetic-like fashion. This work demonstrates the lack of typical 12-50-amino-acid peptides in the skin secretions of D. arenicolor and proposes that arenin, one of its major constituents, plays a key role in its defense against predators. 
The hormetic response produced by arenin in cell proliferation assays requires further transcriptomic, metabolomic and proteomic research to unveil the mechanisms underlying the variable effect on cell viability observed at different concentrations of arenin.
    • Chromatographic Separation of Polymer-Protein Conjugates

      Cisneros Ruíz, Mayra (Instituto Tecnológico y de Estudios Superiores de Monterrey, 2006-01-12)
      The attachment of polyethylene glycol (PEG) molecules, called PEGylation, can improve the therapeutic properties of proteins. The PEGylation product depends on the conditions under which the conjugation reaction takes place. PEGylation reactions often result in a population of conjugate species differing in the number of attached PEG chains and their locations. As some portion of this population may be biologically inactive, the resolution of these protein mixtures represents a challenge for the separation step. Currently, the methods used to purify PEGylated proteins are dominated by size exclusion chromatography (SEC) and ion exchange chromatography (IEX). Research describing the use of conjugate hydrophobicity for separation is uncommon: hydrophobic interaction chromatography (HIC) and reversed-phase chromatography (RPC) have not been fully investigated as separation methods for the resolution of PEGylated proteins. This thesis focuses on the analysis of the chromatographic behavior of PEGylated proteins in RPC and on the potential use of a mild hydrophobic support combining HIC and aqueous two-phase extraction (ATPE) principles. Two proteins were selected as experimental models: ribonuclease A (RNase A) and apo-α-lactalbumin (apo-αLac). Both proteins were reacted with an activated PEG with a nominal molecular weight of 20 kDa, and the reaction mixtures were analyzed by SEC and mass spectrometry. Structural analysis showed that the attachment of PEG molecules did not modify the structure of the proteins. RPC under neutral pH conditions was used to resolve the populations of PEGylated conjugates. PEG-conjugates were separated with better resolution and in less time using RPC at neutral pH than using SEC. RPC also allowed the identification of a tri-PEGylated species produced during the reaction with RNase A, which was not identified when SEC was used. 
The results showed that it is possible to separate PEGylated species by RPC at neutral pH. Changes in the mobile-phase pH (2.0, 7.0 and unbuffered) showed that pH does not play a significant role in the chromatographic behavior of PEG-conjugates when the unmodified protein is not retained. However, when the unmodified protein is retained, the effect of pH on the PEGylated proteins is similar to that observed for the unmodified species. It was demonstrated that temperature affects the chromatographic separation of PEG-conjugates in a manner similar to the way it affects the separation of the neat polymer. A novel approach to separating PEGylated proteins from the unmodified form was addressed, using a mild hydrophobic support in which PEG is immobilized on Sepharose. Different retention behavior of the native protein versus the PEGylated species was achieved using gradient elution between 3 M ammonium sulfate in 25 mM potassium phosphate, pH 7.0, and 25 mM potassium phosphate, pH 7.0. Parameters such as pH, salt type and salt concentration had no significant influence on the chromatographic behavior of native, mono-PEGylated and di-PEGylated RNase A in this separation system. The proposed approach provides a simple and practical chromatographic method to separate unmodified proteins from their PEG conjugates.
    • Combining artificial intelligence and robust techniques with MRAC in fault tolerant control

      Vargas Martínez, Adriana (Instituto Tecnológico y de Estudios Superiores de Monterrey, 2011-12-01)
      This thesis presents different approaches to Fault Tolerant Control based on Model Reference Adaptive Control, Artificial Neural Networks, a PID controller optimized by a Genetic Algorithm, and Nonlinear, Robust and Linear Parameter Varying (LPV) control for Linear Time Invariant (LTI), LPV and nonlinear systems. All of the above techniques are integrated into different controller structures to prove their ability to accommodate a fault. Modern systems and their challenging operating conditions increase the possibility of system failures causing damage to equipment and/or their operators. In these environments, the use of automatic control (i.e. adaptive and robust control) and intelligent systems is fundamental to minimize the impact of faults. Therefore, Fault Tolerant Control (FTC) methods have been proposed to ensure continuous system operation even under fault conditions and to prevent more serious effects. Until now, most FTC methods have been based on classical control theory (Yu et al., 2005; Zhang et al., 2007; Fradkov et al., 2008; Yang et al., 2008). The use of Artificial Intelligence (AI) in FTC has emerged recently (Stengel, 1991; Bastani & Chen, 1998; Patton et al., 1999; Korbicz et al., 2004). Classical AI approaches such as Artificial Neural Networks (ANN), Fuzzy Logic (FL), ANN-FL and Genetic Algorithms (GA) may offer advantages (Schroder et al., 1998; Yu et al., 2005; Dong et al., 2006; Alves et al., 2009; Beainy et al., 2009; Kurihara, 2009; Li, 2009; Nieto et al., 2009; Panagi & Polycarpou, 2009) over the methods traditional in the control community, such as state observers, statistical analysis, parameter estimation, parity relations and residual generation. The reason is that AI approaches can reproduce the behavior of nonlinear dynamical systems with models extracted from data. 
Also, there are many learning processes that improve FTC performance. This is a very important issue in FTC applications on automated processes, where information is easily available, or in processes where accurate mathematical models are hard to obtain. In recent years, FTC and control schemes based on LPV systems have been developed. In Bosche et al. (2009), a Fault Tolerant Control structure for vehicle dynamics is developed employing an LPV model with actuator failures; the methodology is based on the resolution of Linear Matrix Inequalities (LMIs) using the DC-stability concept and a Parameter-Dependent Lyapunov Matrix (PDLM). In Montes de Oca et al. (2009), an Admissible Model Matching (AMM) FTC method based on an LPV fault representation was presented; in this approach the faults were considered as scheduling variables in the LPV fault representation, allowing on-line controller adaptation. In Rodrigues et al. (2007), an FTC methodology for polytopic LPV systems was presented; its most important contribution was the development of a Static Output Feedback (SOF) that maintains system performance through an adequate controller reconfiguration when a fault appears. On the other hand, advanced techniques from Robust Control, such as H∞, have also been applied to FTC with encouraging results. For example, in Dong et al. (2009), an active FTC scheme for a class of linear time-delay systems was presented, using an H∞ controller in a generalized internal model architecture in combination with an adaptive observer-based fault estimator. In Xiaodong et al. (2008), a dynamic output feedback FTC approach using an H∞ index for continuous actuator gain faults was proposed. And in Liang & Duan (2004), an H∞ FTC approach was used against sensor failures for uncertain descriptor systems (systems which capture the dynamical behavior of natural phenomena). 
To improve the capabilities of the FTC systems mentioned above, different types of controllers based on Adaptive Control, Artificial Neural Networks, and Robust, Nonlinear and LPV Control for LTI, LPV and nonlinear systems are proposed in this thesis. These controllers are first tested on an Industrial Heat Exchanger and then on a Coupled-Tank LPV System. Different types of faults are simulated in the implemented schemes. First, additive abrupt and gradual faults were introduced. In the abrupt case, the whole magnitude of the fault develops at a single instant and is simulated with a step function; gradual faults develop over a period of time and are implemented with a ramp function. Second, multiplicative faults were tested. All types of faults, additive and multiplicative, can be implemented in sensors (feedback), where the properties of the process are not affected but the sensor readings are wrong, or in actuators (process input), causing changes in the behavior of the process or its interruption. The controllers developed for the Industrial Heat Exchanger are a Model Reference Adaptive Controller (MRAC), an MRAC with a PID controller whose parameters were optimized using a GA (MRAC-PID), an MRAC with an ANN (MRAC-ANN), an MRAC with a PID and an ANN (MRAC-ANN-PID), an MRAC with a Sliding Mode Controller (MRAC-SMC) and, finally, an MRAC with H∞ control (MRAC-H∞). These MRAC controllers were designed using the MIT rule. The controller with the best response against the faults is the MRAC-ANN-PID: it was robust against the tested sensor faults, the actuator faults were imperceptible, and the error between the reference model and the process was almost 0%. For the Coupled-Tank LPV system, an MRAC (MRAC-4OP-LPV), an MRAC with an ANN (MRAC-ANN-4OP-LPV) and an MRAC with an H∞ controller (MRAC-H∞-4OP-LPV) were designed for 4 operating points of the LPV system. 
For the sensor faults, the controller with the best results was the MRAC-ANN-4OP-LPV, because it was tolerant to the tested sensor faults regardless of the operating point. This scheme performed best because it combines two types of controllers: a Model Reference Adaptive Controller (MRAC) and an Artificial Neural Network trained to follow the ideal (non-faulty) trajectory. For the actuator faults, the MRAC-H∞-4OP-LPV was the best scheme because it was tolerant to the applied faults and could also accommodate them faster than the MRAC-4OP-LPV scheme. In addition, for the Coupled-Tank system, an MRAC (MRAC-LPV) controller and an MRAC with an H∞ Gain Scheduling controller (MRAC-H∞GS-LPV) that work for all operating points of the LPV system were developed. Both controllers were tested on the LPV model of the plant and on the nonlinear model of the system. In general, for additive and multiplicative faults, the MRAC-H∞GS-LPV showed better results because it combines two types of LPV controllers, a Model Reference Adaptive Controller and an H∞ Gain Scheduling controller, both designed for an LPV system, giving them the ability to control any desired operating point within the operating range of the dependent variables. In addition, the manipulated variable was plotted, showing how the system compensates for the fault. The main contributions of this research are the development of the MRAC with an Artificial Neural Network and a PID controller optimized by a Genetic Algorithm (MRAC-ANN-PID) and the development of an MRAC with an H∞ Gain Scheduling controller that works for all operating points of an LPV system (MRAC-H∞GS-LPV). 
As mentioned above, the MRAC-ANN-PID controller proved robust against sensor faults, and actuator faults were imperceptible, with very low error between the reference model and the process. The PID parameters Kp, Ki and Kd were optimized to follow the desired (non-faulty) trajectory, and the ANN was also trained to follow the desired system trajectory regardless of the fault size. The MRAC-ANN-PID controller differs from controllers already reported in the literature, first because none of them has the MRAC-ANN-PID controller structure; second because most of them do not use Artificial Intelligence methods such as ANNs or GAs; and third because, in the literature, the ANN is used to represent or estimate the plant, not as a controller, which is the case in this research. For the MRAC-H∞GS-LPV controller, the main contribution was the development of a passive FTC structure able to deal with abrupt and gradual faults in actuators and sensors of nonlinear processes represented by LPV models. This controller can accommodate the tested faults at any operating point within the operating ranges. The MRAC and the H∞ Gain Scheduling controller were specially designed to switch from one operating point to another in less than a second. The MRAC controller was chosen for FTC because it guarantees asymptotic output tracking, has a direct physical interpretation and is easy to implement. The H∞ Gain Scheduling controller was chosen because it increases the robust performance and stability of the closed-loop system. In the existing literature, the H∞ technique has been combined with other control schemes, but to the best of our knowledge there are no reports of combining an MRAC with an H∞ Gain Scheduling controller.
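      The MIT rule mentioned above adapts a controller parameter along the negative gradient of the squared model-following error. A minimal sketch for a first-order plant with a single adaptive feedforward gain; the plant, reference model and gains are illustrative assumptions, not the heat-exchanger or coupled-tank models of the thesis:

```python
def mrac_mit(k=2.0, k0=1.0, gamma=0.5, dt=0.01, t_end=60.0):
    """Model Reference Adaptive Control of a first-order plant via the MIT rule.
      Plant:      dy/dt  = -y  + k  * u,  with control u = theta * r
      Reference:  dym/dt = -ym + k0 * r
      MIT rule:   dtheta/dt = -gamma * e * ym,  where e = y - ym
    theta should converge to k0 / k so the closed loop matches the model."""
    y = ym = theta = 0.0
    r = 1.0                                # constant reference command
    for _ in range(int(t_end / dt)):
        u = theta * r
        e = y - ym
        theta += dt * (-gamma * e * ym)    # parameter adaptation (MIT rule)
        y += dt * (-y + k * u)             # plant, forward-Euler step
        ym += dt * (-ym + k0 * r)          # reference model
    return theta, e

theta, e = mrac_mit()   # theta -> k0/k = 0.5 and e -> 0
```

      Linearizing around the equilibrium gives eigenvalues with negative real part for this choice of gamma, so the adaptation is stable; larger gamma speeds convergence but can destabilize the loop, which is a known limitation of the MIT rule.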
    • Comparison of Three Genetic Algorithms, a Counting Algorithm and a Greedy Algorithm on 10 Years of Return Data for 40 Issuers of the Mexican Stock Exchange

      Villegas Zermeño, J. Eddie C. (Instituto Tecnológico y de Estudios Superiores de Monterrey, 2005-02-01)
      There are many experimental ways to forecast the behavior of financial markets, from classical forecasting models such as econometric models, time series, causality relations and Box-Jenkins methodologies, to models that apply heuristics and stochastic volatility and use large amounts of information to attempt greater precision in forecasting the returns of a particular market. Despite this, practical applications still rely fundamentally on moving averages to forecast returns and make investment decisions. This dissertation proposes the use of soft computing and data mining techniques to obtain more precise models of stock returns in international financial markets. As a starting point, 10 years of data from the Mexican Stock Exchange (Bolsa Mexicana de Valores) are taken, sampling 40 issuers; from this sample, portfolios of 10 issuers are built and compared against one another to find the one that would have yielded the highest return over the time horizon under study. Different time frames (daily, weekly and monthly) are considered to determine which yields the highest returns. 
The practical matter of commissions is, however, ignored. Data mining is then applied with a counting algorithm that evaluates up to 2,123,389,622,640 combinations to find the investment portfolio yielding the highest return over the period. The result of this algorithm is compared, for the same time frames, with a genetic algorithm using three different fitness measures: global return, the normal distribution, and geometric Brownian motion. Finally, a greedy algorithm is used to find further portfolios to compare with those obtained previously. The results show that the portfolios formed with the genetic algorithm that uses the normal distribution as its fitness measure generate the highest annualized returns.
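      As a rough illustration of the genetic-algorithm approach described above, the following sketch evolves 10-issuer portfolios out of 40 using summed return as the fitness measure (the "global return" criterion). The return data are synthetic placeholders, not BMV prices, and the operators are simplified assumptions:

```python
import random

def ga_portfolio(returns, k=10, pop_size=40, gens=80, seed=7):
    """Toy genetic algorithm: pick a k-asset subset of the issuers that
    maximizes summed return. Individuals are k-subsets; selection keeps
    the best half, crossover samples from the union of two parents,
    and mutation swaps one asset at random."""
    rng = random.Random(seed)
    n = len(returns)
    fitness = lambda ind: sum(returns[i] for i in ind)
    population = [frozenset(rng.sample(range(n), k)) for _ in range(pop_size)]
    for _ in range(gens):
        population.sort(key=fitness, reverse=True)
        survivors = population[: pop_size // 2]        # elitist selection
        children = []
        while len(survivors) + len(children) < pop_size:
            a, b = rng.sample(survivors, 2)
            child = set(rng.sample(list(a | b), k))    # crossover
            if rng.random() < 0.3:                     # mutation
                child.discard(rng.choice(sorted(child)))
                while len(child) < k:
                    child.add(rng.randrange(n))
            children.append(frozenset(child))
        population = survivors + children
    return max(population, key=fitness)

# Synthetic cumulative returns for 40 hypothetical issuers.
rets = [0.01 * i for i in range(40)]
best = ga_portfolio(rets)   # concentrates on the high-return issuers
```

      The thesis's other fitness measures (normal distribution, geometric Brownian motion) would replace the `fitness` function; the subset encoding and operators stay the same.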
    • Complex Modulation Code for Low-Resolution Modulation Devices

      Ponce Díaz, Rodrigo (Instituto Tecnológico y de Estudios Superiores de Monterrey, 2006-01-12)
      Digital information technology is constantly being developed using electronic devices, and three-dimensional (3D) image processing likewise relies on electronic devices to record and display signals. Computer generated holograms (CGH) and integral imaging (II) use liquid-crystal spatial light modulators (SLMs). This doctoral dissertation studies and develops the application of a commercial twisted nematic liquid crystal display (TNLCD) in computer generated holography and integral imaging. The goal is to encode and reconstruct complex wavefronts with computer generated holograms, and 3D images with integral imaging systems. Light modulation curves are presented: amplitude and phase-mostly modulation. Holographic codes are designed and implemented experimentally with optimum reconstruction efficiency, maximum signal bandwidth, and high signal-to-noise ratio (SNR). The study of the TNLCD in II is presented as a review of the basic display techniques. A digital magnification of 3D images is proposed and implemented; 3D digitally magnified images have the same quality as optically magnified images, but the magnifying system is less complex. Recognition of partially occluded objects is solved using 3D II volumetric reconstruction, and the 3D recognition solution performs better than conventional 2D imaging systems. The importance of holography and 3D II is supported by applications such as optical tweezers with dynamic trapping-light configurations, invariant beams, and 3D medical imaging.
    • Configuration of an Autonomous Vehicle: a Systematic Framework

      Gutiérrez Jagüey, Joaquín (Instituto Tecnológico y de Estudios Superiores de Monterrey, 2004-01-12)
      Autonomous Vehicles have generated emerging interest in demanding applications due to their great potential to move toward a particular site and perform specialized work (such as goods transportation, exploration, and work-tool manipulation), especially in remote environments that can be dangerous, unsuitable, or inaccessible for humans. Despite this potential, including safety and productivity gains, their use is not widespread because of the complexity of defining the robotic vehicle configuration. Moreover, current design processes rely on the designer's experience, require long periods of time and entail high investment; practical approaches to configuration are therefore needed. This thesis focuses on the development of a method for the synthesis of an Autonomous Vehicle configuration based on the analysis of both the task and the environment. The method helps determine a suitable configuration that serves as a comprehensive foundation for building the robotic vehicle. The configuration is expressed as a hierarchical and modular structure of interlinked components that fulfill the requirements to perform a given task under the constraints of a certain environment. The components implement the functionality of a composition of motions that solves the task. This composition is selected through a preliminary analysis, in which a geometric description of the task and the environment is used to combine motions and find viable compositions (possible solutions). Each motion of the selected composition becomes a component or component set in the vehicle structure. To select the components, the method uses criteria from Robotics and engineering principles to provide the configuration with autonomy, as a function that correlates the perception, the control, the positioning, and the geometry of the robotic vehicle. 
The definition of the requirements, the geometric description, the combination of motions, the selection of the fundamental composition, and the gradual completion of the configuration with the proper components are derived through this systematic framework, exploiting the vast inventory of technologies and products developed over decades of engineering and robotics research. The implementation of this method is illustrated through a real request from the underground-mining domain. The major result of this research work is the formulation and validation of the proposed framework as a suitable approach to systematically determine an Autonomous Vehicle configuration.
    • Control charts for autocorrelated processes under parameter estimation

      Garza Venegas, Jorge Arturo
      Statistical Process Monitoring is a collection of statistically based methodologies and methods for monitoring the quality of manufactured products or services. Among these tools, control charts are powerful aids for detecting departures from in-control situations, as long as the assumptions made in their design are fulfilled; otherwise, their power might decrease. For instance, control chart performance has been shown to be negatively affected when parameters are estimated (in which case the Average Run Length, ARL, becomes a random variable) or when dealing with autocorrelated data. Given that, this research focuses on the effect of parameter estimation on the performance of the X-bar and the modified S^2 control charts for monitoring the mean and the variance, respectively, of autocorrelated processes. The average of the ARL and its standard deviation are considered as performance measures, as they take into account the sampling variability of the ARL. Furthermore, a bootstrapping methodology is applied to adjust the control limits so as to guarantee the conditional in-control performance with a certain probability, and the effect on the out-of-control ARL is also studied.
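      The bootstrap adjustment described above can be sketched as follows: resample the Phase-I data to mimic estimation error, then widen the limits until the estimated chart contains a long in-control run with the desired probability. This is a simplified illustration on AR(1) data with naive i.i.d. resampling; the thesis's exact procedure, chart and parameters may differ:

```python
import numpy as np

def ar1(n, phi=0.5, rng=None):
    """Simulate a stationary in-control AR(1) process
    x_t = phi * x_{t-1} + eps_t, with eps_t ~ N(0, 1)."""
    rng = rng or np.random.default_rng(0)
    x = np.empty(n)
    x[0] = rng.normal(scale=1.0 / np.sqrt(1.0 - phi ** 2))
    for t in range(1, n):
        x[t] = phi * x[t - 1] + rng.normal()
    return x

def adjusted_limit(phase1, widths=np.arange(2.0, 4.01, 0.1),
                   reps=100, run=200, p=0.9, rng=None):
    """Smallest width multiplier L such that, in at least a fraction p of
    bootstrap replicates of the Phase-I estimation, the estimated limits
    mean +/- L*sd contain a fresh in-control run of `run` points (i.e. the
    conditional in-control run length reaches `run` with probability ~p).
    Naive i.i.d. resampling is used for brevity; a block bootstrap would
    respect the autocorrelation better."""
    rng = rng or np.random.default_rng(1)
    n = len(phase1)
    for L in widths:
        hits = 0
        for _ in range(reps):
            boot = rng.choice(phase1, size=n, replace=True)  # re-estimate
            m, s = boot.mean(), boot.std(ddof=1)
            future = ar1(run, rng=rng)                       # in-control run
            if np.all(np.abs(future - m) <= L * s):
                hits += 1
        if hits / reps >= p:
            return float(L)
    return float(widths[-1])

L = adjusted_limit(ar1(500))   # comes out well above the nominal 3-sigma
```

      The guaranteed-performance requirement forces limits noticeably wider than the textbook 3-sigma value, which is the qualitative point of the adjustment.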
    • Controlled drug delivery strategies: Advances on the synthesis and molecular dynamics of dendrimers, and optimization of liposomes preparation by Dual Asymmetric Centrifugation

      Valencia Gallegos, Jesús Ángel; Sampogna-Mireles, Diana; Gutiérrez Uribe, Janet Alejandra; Elizondo Martínez, Perla; Hernández Hernández, José Ascención; Aguilar Jiménez, Oscar Alejandro (2017-12-01)
      In the early phases of drug development, New Chemical Entities (NCEs) are used as highly active drugs with limited availability and high cost. This study presents two alternative drug delivery systems, site-specific dendrimers and liposomes, with potential use as carriers of highly active drugs like pristimerin. Advances in the synthesis of site-specific dendrimers with bis-MPA as the branching precursor and three key building blocks are reported, including compounds with folic acid (FA) to provide selectivity toward the folate receptors of cancer cells, pristimerin as a cancer drug model, and fluorescein isothiocyanate (FITC) as a fluorescent dye for future evaluation as a cancer treatment. Five different dendrimer syntheses are reported, differing in their core (bis-MPA or ethylene glycol) and the folic acid linker (PEG 3350 or triethylene glycol, TEG). Different purification techniques, such as normal-phase and reverse-phase column chromatography, size exclusion chromatography, dialysis and ultrafiltration, were tested according to the chemical characteristics and solubility of the products. Characterization was carried out by FT-IR, MALDI-TOF MS and/or NMR. Theoretical evidence generated in this work supports that FA-PEG750 and FA-PEG3350 dendrimers can be applied to selectively carry drugs and interact with cancer cell receptors (FR-α). Another drug delivery carrier, and a viable alternative to dendrimers, is liposomes. Liposomes are the most mature drug carriers for passive and active targeting commercially available in oncology and other disease treatments. Dual Asymmetric Centrifugation (DAC) is a novel, fast, simple, and reproducible method for liposomal formulation screening: it facilitates liposome preparation and favors small diameters (<120 nm) at small scale, with high drug encapsulation efficiency (EE). 
DAC parameters (type and volume of buffer, bead size, centrifugation speed and time) were optimized to obtain liposomes of 70-80 nm with high encapsulation efficiency (71%). Dialysis also proved the best method for liposome purification, compared with Zeba-Spin and Micro-Spin size-exclusion columns, and can be applied to other drug delivery systems, such as dendrimers. Overall, site-specific dendrimers demand meticulous control of their structure during synthesis; their preparation is time-consuming and analytically complex. Nevertheless, theoretical evidence supports their potential to selectively carry drugs and interact with cancer cell receptors. Liposomes, meanwhile, are a faster, easier and more versatile alternative as drug delivery carriers of highly toxic drugs.
    • Concomitant layer growth during plasma nitriding of pure iron: mathematical modeling, simulation and experimental validation

      Jiménez Ceniceros, Antonio (Instituto Tecnológico y de Estudios Superiores de Monterrey, 2016)
    • Highly Loaded Multifunctional Dendrons and Dendrimers: The Trojan Concept

      Valencia Gallegos, Jesús Á. (Instituto Tecnológico y de Estudios Superiores de Monterrey, 2010-01-01)
      Population growth, together with population aging, imposes new demands on the availability of better drugs for the prevention, control and relief of the diseases that most affect people's quality of life, among them cancer and neurological and neurodegenerative disorders. This can be achieved not only with new molecules but also with better delivery of existing ones. Dendritic topology has shown great potential for controlled drug delivery, from the "dendritic box" concept for encapsulating actives in the interstitial spaces formed inside high-generation dendrimers, to the attachment of actives, labels, markers and property modifiers at their periphery. In the latter case, each specific functionalization has come at the expense of therapeutic loading capacity. Until recent years, however, the internal structure had not been exploited as an alternative for specific functionalization, and then only through the divergent synthesis strategy. Despite the obvious advantages of that way of building dendrimers, it is the exploitation of convergent synthesis that offers the greatest potential, given the recognized advantages of this strategy over the divergent one in dendrimer construction, mainly at low generations. The convergent synthesis of this type of dendrimer is presented for the first time, pointing out its advantages over the divergent strategy in the context of synthesizing molecules that use γ-aminobutyric acid (GABA), the main inhibitory neurotransmitter in the central nervous system, as the bioactive molecule. GABA is associated with important neurological disorders such as epilepsy. 
Para esto, y como elemento innovador, se ha considerado que cada generación del dendrímero está constituida por elementos con funciones diversas dependiendo de la necesidad; así, es posible tener unidades de ramificación, carga activa y espaciadores o elementos que sirvan como espaciadores y punto de ramificación simultáneamente, dentro de cada generación. El enfoque es emplear como elementos de construcción del dendrímero a las moléculas del fármaco y contar con profármacos dendríticos (dendrones y dendrímeros) altamente cargados y hacer disponible esta carga mediante hidrólisis química o enzimática del dendrímero. Esta aproximación obliga al dendrímero a ser biodegradable por efecto de la actividad biológica del huésped y limita el tipo de elementos de construcción y los tipos de enlaces químicos a emplearse en su unión, a aquellos más sensibles a hidrólisis. Adicionalmente a las ventajas que se conocen actualmente para la síntesis convergente, bajo la estrategía de este trabajo se aportan nuevos recursos, ya que es posible la introducción de mÚltiples cargas por generación y cargas diferentes por generación en ambos tipos de elementos, dendrones y dendrímeros. Por otro lado, empleando la técnica presentada en este trabajo, cada elemento de construcción puede ser considerado un profármaco de la carga considerada, susceptible de ser evaluados per se, ampliando el potencial de identificación de entidades efectivas en pruebas de actividad biológica. Finalmente, se valida el modelo propuesto de síntesis mediante la construcción convergente y caracterización de dendrones y dendrímeros empleando moléculas de GABA como parte integral de su estructura. Como elementos complementarios para la construcción de cada dendrón y dendrímero se emplea ácido 2,2-bis(hidroximetl) propanóico como unidad de ramificación y trimetilol propano como nÚcleo. Los enlaces empleados son éster y amida, susceptibles de hidrólisis. 
El dendrímero de segunda generación contiene 9 moléculas de GABA en su estructura.
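The reported loading of 9 GABA molecules at generation two is consistent with a simple combinatorial count, assuming a trifunctional trimethylolpropane core with AB2 (bis-MPA) branching and one GABA unit carried per branch per generation. That topology is an assumption made here for illustration, not a structure stated in the abstract:

```python
def drug_load(generations: int, core_arms: int = 3, branching: int = 2) -> int:
    """Total drug units through generation g for a hypothetical dendrimer
    with `core_arms` arms at the core, `branching`-fold branch points,
    and one drug molecule carried per branch in every generation.
    Closed form: core_arms * (branching**generations - 1) / (branching - 1)."""
    return core_arms * sum(branching ** g for g in range(generations))

# drug_load(2) == 3 * (1 + 2) == 9, matching the reported loading
# of the second-generation dendrimer under these assumptions.
```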
    • Desarrollo de aplicaciones virtuales de entrenamiento médico incorporando dispositivos hápticos

      Ricárdez Vázquez, Eusebio; EUSEBIO RICARDEZ VAZQUEZ;348934 (Instituto Tecnológico y de Estudios Superiores de Monterrey, 2016-09-02)
      Virtual learning environments are now widely used tools in academia and are present in practically every area of knowledge. Medicine is no exception; indeed, it is one of the areas where simulation environments offer major advantages, since they avoid training on cadavers or living beings. It has been shown that a virtual environment is more effective when it can convey tactile sensations that allow the user to interact with it more fully. This work develops the techniques needed to build a virtual medical-training environment that conveys tactile sensations to the user through haptic devices, for which the required surgical instruments were modeled. A method is proposed for simulating human skin with varying characteristics such as thickness, flexibility, and texture. A haptic rendering method for deformable surfaces is formulated and applied to this skin. A thread is developed with properties that allow it to attach to a deformable surface, collide with solid objects, and form simple knots. As the final product of this research, the integration and interaction of these elements is demonstrated in a virtual environment for suture training: SutureHap. User tests show that the environment's behavior is close to that of a real one and that it can be used to train future physicians.
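Haptic rendering for deformable surfaces is often built on a penalty-based (spring-damper) force model. The sketch below is a minimal single-contact illustration of that idea, not the thesis's actual algorithm; the gains k and b are hypothetical and would be tuned per simulated tissue:

```python
import numpy as np

def penalty_force(probe_pos, surface_point, normal, k=300.0, b=2.0, probe_vel=None):
    """Penalty-based haptic force for one contact point: a spring pushes
    the probe out along the surface normal; an optional damper removes
    energy to keep the device stable. k in N/m, b in N*s/m."""
    depth = float(np.dot(surface_point - probe_pos, normal))  # penetration depth
    if depth <= 0.0:                      # probe is above the surface: no force
        return np.zeros(3)
    force = k * depth * normal            # spring term along the normal
    if probe_vel is not None:
        force -= b * np.dot(probe_vel, normal) * normal  # damping term
    return force
```

In a full simulator this force is both sent to the haptic device and applied, with opposite sign, to the deformable mesh nodes near the contact.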
    • Design of an Ultra-Low Voltage Analog-Front-End for an Electroencephalography System

      Bautista Delgado, Alfredo F. (Instituto Tecnológico y de Estudios Superiores de Monterrey, 2009-06-01)
    • Design, fabrication and characterization of a high-speed, bimodal, CMOS-MEMS resonant scanner driven by temperature-gradient actuators

      Camacho León, Sergio; SERGIO CAMACHO LEON;213140 (Instituto Tecnológico y de Estudios Superiores de Monterrey, 2010-05-01)
    • Detection and Defense Mechanism against Security Attacks in Reconfigurable Networks: Network Coding Approach

      Villalpando Hernández, Rafaela (Instituto Tecnológico y de Estudios Superiores de Monterrey, 2008-01-12)
      This document presents doctoral research in Information Technology and Communications at ITESM on network coding applied to securing wireless reconfigurable networks, under physical impairments, against several attacks. Wireless reconfigurable networks are prone to routing security attacks such as sinkholes, selective forwarding, black holes, and wormholes. Conventional security algorithms are computationally expensive and vulnerable to compromise of a central unit. In-network processing may provide several advantages in implementing a distributed, collaborative securing method for reconfigurable networks subject to security attacks. In this work, we introduce a network coding method that detects security attacks related to the routing process. The proposed method works in a distributed fashion, performing linear network coding over the nodes composing a given route. It uses network coding not only to distribute content but also to provide data confidentiality, with cooperation serving as the detection mechanism. The method provides a robust response under varying network conditions, such as high node density, high interference, and route failure due to Rayleigh channel fading. We also introduce an analysis of the interference caused by the nearest neighbor to the route, as well as an outage probability analysis for link failures in routes caused by a Rayleigh fading channel; the detection capabilities of the network-coding-based detection and defense method are analyzed under both impairments. The security method is evaluated against different attack probabilities, network densities, and transmission ranges, and parameters such as packet overhead, successfully received packets, and the detection accuracy of the method are analyzed through simulations. 
Finally, we provide an algebraic representation of a network implementing the proposed security method and formulate sufficient conditions for the feasibility of routes running the method. An alternative route-selection criterion is also formulated on the basis of this algebraic representation.
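The core mechanism, relays forwarding linear combinations of packets whose coefficient vectors travel in the headers, can be illustrated with a toy example over GF(2), where combination is bitwise XOR. This is a simplified sketch for intuition, not the dissertation's actual scheme: the destination decodes by Gaussian elimination, and tampering by a node on the route corrupts a decoded packet, which a known pilot packet would expose:

```python
import random

def xor(a, b):
    return bytes(x ^ y for x, y in zip(a, b))

def encode(packets):
    """Each coded packet is the XOR of a random nonempty subset of the
    source packets; the subset (a GF(2) coefficient vector) is carried
    in the header alongside the payload."""
    n = len(packets)
    coded = []
    while len(coded) < n:
        coeffs = [random.randint(0, 1) for _ in range(n)]
        if not any(coeffs):
            continue
        payload = bytes(len(packets[0]))
        for c, p in zip(coeffs, packets):
            if c:
                payload = xor(payload, p)
        coded.append((coeffs, payload))
    return coded

def decode(coded, n):
    """Gaussian elimination over GF(2); returns the source packets, or
    None if the received combinations are linearly dependent."""
    rows = [(list(c), bytearray(p)) for c, p in coded]
    for col in range(n):
        pivot = next((r for r in range(col, len(rows)) if rows[r][0][col]), None)
        if pivot is None:
            return None
        rows[col], rows[pivot] = rows[pivot], rows[col]
        for r in range(len(rows)):
            if r != col and rows[r][0][col]:
                rows[r][0][:] = [a ^ b for a, b in zip(rows[r][0], rows[col][0])]
                rows[r][1][:] = xor(bytes(rows[r][1]), bytes(rows[col][1]))
    return [bytes(p) for _, p in rows[:n]]
```

Practical schemes work over larger fields such as GF(256) so that random coefficient vectors are full rank with high probability.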
    • Development of a hybrid ejector-compressor refrigeration system with improved efficiency

      Gutiérrez Ortiz, Alejandro; ALEJANDRO GUTIERREZ ORTIZ;347162 (Instituto Tecnológico y de Estudios Superiores de Monterrey, 2016-09-02)
      The present doctoral dissertation addresses the design of an ejector suitable for a thermally driven hybrid ejector-compressor cooling system; the research was aimed at improving the performance of the ejector in terms of both critical backpressure and entrainment ratio. An ejector efficiency analysis is presented to establish a theoretical limit for the maximum achievable entrainment ratio of an ejector undergoing a fully reversible process without entropy generation; the main sources of irreversibility within the ejector are subsequently discussed. The shock circle model is implemented as a means to predict the entrainment ratio for an ejector with a given set of nozzle and constant-area section diameters; experimental results from the literature are presented and used to validate the model. A Computational Fluid Dynamics design exploration is also presented, aimed at identifying the effects of the remaining key geometrical parameters not covered by the mathematical model. To generalize the findings of this study to ejectors of different scales, the results are presented using non-dimensional parameters. The study showed that, to guarantee critical-mode operation over the range of operating conditions suitable for a hybrid ejector-compressor cycle, only the constant-area section or the nozzle throat diameter needs to vary with the condenser backpressure. The optimum geometrical parameters found by the design exploration are used to propose an optimized ejector design; the Constant Rate of Momentum Change method is also implemented to generate a diffuser geometry that reduces one of the sources of entropy generation identified during the early stages of this research. The performance increase of the proposed ejector is measured against a baseline design; the results showed that the new ejector outperforms the baseline in terms of both entrainment ratio and critical condenser backpressure. 
The thesis concludes with a study quantifying the thermal efficiency increase of a hybrid ejector-compressor system employing the proposed ejector; lastly, a design for a test stand to experimentally verify the findings of this research is proposed.
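The two figures of merit discussed above can be made concrete. The entrainment ratio is the entrained-to-motive mass flow ratio, and the thermal COP of an ejector cycle scales with it; the enthalpy differences used below are hypothetical placeholder values, not results from the dissertation:

```python
def entrainment_ratio(m_secondary: float, m_primary: float) -> float:
    """omega = entrained (secondary) mass flow / motive (primary) mass flow."""
    return m_secondary / m_primary

def ejector_thermal_cop(omega: float, dh_evaporator: float, dh_generator: float) -> float:
    """Thermal COP of an ejector refrigeration cycle: cooling delivered by
    the entrained stream per unit of heat driving the primary stream,
    COP ~= omega * dh_evaporator / dh_generator (enthalpy differences in kJ/kg)."""
    return omega * dh_evaporator / dh_generator

# With omega = 0.3 and placeholder enthalpy differences of 160 kJ/kg
# (evaporator) and 200 kJ/kg (generator), COP = 0.3 * 160 / 200 = 0.24.
```

This is why the critical backpressure matters: below it the ejector operates in critical (choked) mode and omega stays constant; above it omega, and hence the COP, collapses.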
    • Development of SERS substrates for the characterization of cellular systems and the determination of molecules of interest

      Ornelas Soto, Nancy Edith; Aguilar Hernández, Iris Anahí; García García, Alejandra; Cárdenas Chávez, Diana Linda; Rodríguez Delgado, Melissa (2017-12-04)
      Raman spectroscopy is a powerful vibrational spectroscopy technique that provides useful information about the chemical composition of a sample. It is a label-free technique that can be successfully applied both to single-analyte detection and to the analysis of complex matrices. The main limitation of Raman spectroscopy is its inherently low scattering efficiency; Surface Enhanced Raman Spectroscopy (SERS) is employed to overcome it. SERS-active structures typically take the form of colloidal solutions or of solid substrates with metallic nanostructures on the surface. The work in this dissertation explores the development of SERS substrates for (a) the detection of a single molecule of interest and (b) the analysis of cellular systems. For the detection of molecules of interest, two studies were carried out. In the first, the synthesis conditions of colloidal silver nanoparticles that rendered the highest SERS enhancement were identified via principal component analysis (PCA); the selected silver nanoparticles were used for the ultrasensitive detection of phenolic compounds in solution. The second focused on solid substrates: gold nanoparticles were synthesized and immobilized on a carbon nanofiber matrix, and the enhancement capacity of the SERS substrate was evaluated with Rhodamine 110. The use of SERS for the analysis of biological systems was also explored. First, the effect of an oxidative agent (CdTe quantum dots) on the freshwater microalga H. pluvialis was studied with SERS via colloidal gold nanoparticles. Mammalian cell lines were also analyzed: colloidal concave gold nanocubes were synthesized and immobilized onto a solid substrate for SERS enhancement of HeLa cells, showing that solid SERS substrates are also suitable for cell analysis. 
Finally, radiation-resistant and radiation-sensitive murine leukemia sublines were characterized for the first time by normal Raman spectroscopy and SERS, with the aim of contributing to the development of predictive radiosensitivity assays. SERS substrates in colloidal and solid form were developed and successfully used for the label-free detection of analytes in solution and in complex biological samples, showing the versatility of SERS and contributing to this growing multidisciplinary field.
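The PCA step used in the first study to compare spectra from different synthesis conditions can be sketched with a plain SVD. This is a generic illustration on synthetic data, the actual spectra, preprocessing, and loadings interpretation belong to the dissertation:

```python
import numpy as np

def pca_scores(spectra: np.ndarray, n_components: int = 2):
    """Project each row (one spectrum) onto its top principal components.
    spectra: (n_samples, n_wavenumbers) matrix of intensities.
    Returns the PC scores and the explained-variance ratio per PC."""
    X = spectra - spectra.mean(axis=0)           # mean-center each wavenumber
    U, S, Vt = np.linalg.svd(X, full_matrices=False)
    scores = X @ Vt[:n_components].T             # coordinates in PC space
    explained = (S ** 2) / np.sum(S ** 2)        # variance ratio per PC
    return scores, explained[:n_components]
```

Spectra acquired under conditions that give similar enhancement cluster together in the score plot, which is what makes PCA useful for ranking synthesis conditions.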