Online 1. Leveraging azides in the synthesis of cyclobutenes and the conversion of arenes to pyridines [2023]
- Patel, Sajan, author.
- [Stanford, California] : [Stanford University], 2023
- Description
- Book — 1 online resource
- Summary
-
Four-membered rings are rapidly becoming sought-after scaffolds in pharmaceuticals due to their rigid structure and well-defined exit vectors. To access such scaffolds, we developed a method to enantioselectively form cyclobutenes from simple olefins and N-sulfonyl-1,2,3-triazoles. While these triazoles are known to act as diazo precursors via ring-chain tautomerization, we recognized them as vicinal dicarbene equivalents. Thus, alkynes are reacted in [3+2] cycloadditions with azides to form triazoles, which are then reacted with alkenes in a formal [2+2] cycloaddition. A host of enantioenriched cyclobutenes were synthesized, several of which were carried forward to assemble the carbon skeletons of natural products. The ability to selectively delete, insert, or exchange atoms in the core scaffolds of molecules is of fundamental interest to synthetic chemists and could be of great use to medicinal chemists seeking to rapidly modulate the parameters of lead compounds. Though atom deletions and insertions have garnered much interest in the form of ring contractions and expansions, atom exchanges have seen considerably less development. One notable exchange that has eluded chemists is the conversion of benzene to pyridine, which is of interest due to the "necessary nitrogen atom" effect: the enhancement of key pharmacological properties when an arene in a lead compound is replaced with a pyridine. To this end, we found that azides serve as effective nitrene precursors to engage arenes in a C to N atom exchange sequence featuring nitrogen atom insertion and carbon atom deletion
- Also online at
-
Online 2. Quantum controlled cold scattering between simple atoms and diatoms [2023]
- Zhou, Haowen, author.
- [Stanford, California] : [Stanford University], 2023
- Description
- Book — 1 online resource
- Summary
-
This thesis presents experimental studies of quantum-controlled cold scattering of H2 isotopologues (HD, D2) with simple rare gas atoms (He, Ne) and with D2 molecules. In these experiments, we prepare HD and D2 molecules in specific rovibrational levels (v = 2, 4, j = 2, 4) with defined alignments, and study the rotationally inelastic scattering of these state-prepared molecules at low collision temperatures. Using a time-of-flight apparatus, we extract the angular distributions of the scattered products, providing insight into the dynamics of the collision processes. By combining quantum-state control and low-energy scattering, we are able to interrogate the fundamental interactions between such simple atoms and diatoms at an unprecedented level of detail. The thesis is structured as follows. Chapter 1 introduces the subject: a brief overview of the development and progress of cold scattering is given, and various techniques to achieve cold relative temperatures as well as quantum state control are discussed. The purpose is mainly to situate our experimental techniques in the broader field, and to highlight the similarities and difficulties we face. Chapter 2 presents the theoretical treatment of the scattering problem, which serves as the necessary background for analyzing the data in later discussions and sometimes provides direct comparisons. Chapter 3 focuses on the experimental setup we use to achieve quantum-controlled cold scattering; many details are elucidated at length. In the next two chapters (Chapters 4 and 5), experimental results for rovibrationally excited HD/D2 with various scattering partners are presented and discussed. Chapter 4 describes the Δj = 2 inelastic scattering results of D2 (v = 2, 4, j = 2, 4) with the rare gas atoms He and Ne, as well as the Δj = 1, 2 relaxations of HD (v = 4, j = 2). Chapter 5 presents the scattering results between a pair of state-prepared D2 diatoms (v = 2, j = 2). In these two chapters, theoretical comparisons are provided alongside the data where available. In the last chapter (Chapter 6), we conclude the thesis with a summary and an outlook for the project. Although fully satisfactory agreement between theory and experiment has not been achieved for all of the studies we present, we hope these discussions will inspire future work of this kind
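As textbook background for the Δj notation above (standard rigid-rotor formulas, not results from the thesis), the rotational energy released in a Δj = -2 relaxation is

    \[
      E_{\mathrm{rot}}(v, j) \approx B_v\, j(j+1), \qquad
      E(v, j) - E(v, j-2) = B_v\,(4j - 2),
    \]

so, for example, a j = 2 → 0 relaxation releases 6 B_v, which must be carried away as relative translational energy of the collision pair; here B_v is the rotational constant of vibrational level v.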
- Also online at
-
Online 3. 3D scene understanding with efficient spatio-temporal reasoning [2022]
- Gwak, JunYoung, author.
- [Stanford, California] : [Stanford University], 2022
- Description
- Book — 1 online resource
- Summary
-
Robust and efficient 3D scene understanding could enable embodied agents to safely interact with the physical world in real time. Much of the remarkable success of computer vision in the last decade owes to the rediscovery of convolutional neural networks. However, this technology does not directly translate to 3D due to the curse of dimensionality: the size of the data grows cubically with resolution, so the levels of input resolution and network depth that are routine in 2D become infeasible. Based on the observation that 3D space is mostly empty, sparse tensors and sparse convolutions stand out as efficient and effective 3D counterparts to 2D convolution by operating exclusively on non-empty space. This efficiency gain supports deeper neural networks for higher accuracy at real-time inference speed. To this end, this thesis explores the application of sparse convolution to various 3D scene understanding tasks. It breaks down a holistic 3D scene understanding pipeline into the following subgoals: 1. data collection from 3D reconstruction, 2. semantic segmentation, 3. object detection, and 4. multi-object tracking. With robotics applications in mind, this thesis aims to achieve better performance, scalability, and efficiency in understanding the high-level semantics of the spatio-temporal domain while addressing the unique challenges that sparse data poses. In this thesis, we propose generalized sparse convolution and demonstrate how our method 1. gains efficiency by leveraging the sparseness of the 3D point cloud, 2. achieves robust performance by utilizing the gained efficiency, 3. makes predictions on empty space by dynamically generating points, and 4. jointly solves detection and tracking with spatio-temporal reasoning. Altogether, this thesis proposes an efficient and reliable pipeline for holistic 3D scene understanding
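To make the central idea concrete, here is a minimal sketch (ours, not the thesis implementation) of a sparse 3D convolution: features are stored only at occupied voxel coordinates and only non-empty neighbors are visited, so the cost scales with occupancy rather than with the full, cubically growing voxel grid.

    import itertools
    import numpy as np

    def sparse_conv3d(features, weights, kernel_size=3):
        """features: dict mapping (x, y, z) -> feature vector of shape (C_in,).
        weights: dict mapping each kernel offset -> matrix of shape (C_in, C_out)."""
        r = kernel_size // 2
        offsets = list(itertools.product(range(-r, r + 1), repeat=3))
        out = {}
        for coord in features:                      # iterate occupied sites only
            acc = None
            for off in offsets:
                nbr = tuple(c + o for c, o in zip(coord, off))
                if nbr in features:                 # skip empty space entirely
                    term = features[nbr] @ weights[off]
                    acc = term if acc is None else acc + term
            out[coord] = acc
        return out

    # Toy usage: two occupied voxels in an otherwise empty grid.
    rng = np.random.default_rng(0)
    feats = {(0, 0, 0): rng.normal(size=4), (1, 0, 0): rng.normal(size=4)}
    w = {off: 0.1 * rng.normal(size=(4, 8))
         for off in itertools.product(range(-1, 2), repeat=3)}
    print(sparse_conv3d(feats, w)[(0, 0, 0)].shape)  # -> (8,)

Production systems replace the dictionaries with hash-indexed coordinate tensors, but the complexity argument is the same: work is proportional to the number of occupied voxels times kernel size, independent of the bounding grid.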
- Also online at
-
Online 4. 3D vertical resistive switching memory towards ultrahigh-density non-volatile memory [2022]
- Qin, Shengjun, author.
- [Stanford, California] : [Stanford University], 2022
- Description
- Book — 1 online resource
- Summary
-
To meet exploding information processing and data storage demands, memory technology, as one of the cornerstones of modern computing systems, faces the constant challenge of advancing to the next technology node with larger capacity and higher density. As difficulties arise in the further scaling of conventional memories, new memories show the potential to bridge the performance gap between memory and storage, providing new opportunities to innovate computing systems and architectures. Among emerging memories, resistive switching memory (RSM) is a promising solution due to its high device density and reasonably fast write/read operation speed. Resistive random-access memory (RRAM), as an example of RSM, has a simple structure and uses low-temperature fabrication that is compatible with the back-end-of-the-line (BEOL) metal wiring of typical CMOS logic technology, thus potentially leading to low cost and on-chip integration with logic for high-bandwidth access. Research efforts have explored various material options, device and array structures, and chip architectures. However, to achieve an ultrahigh-density memory, a practical co-design must take all of the above considerations into account to arrive at a superior solution. In this dissertation, I present a way to realize ultrahigh-density memory with 3D vertical RRAM (VRRAM). I develop a design guideline for ultrahigh-density 3D VRSM using simulations of 3D memory arrays based on an accurate and computationally efficient model of the memory cells and the parasitic resistance of the 3D wiring. I detail the model formulation and its validation against physics-based simulations. Combined with simulation results, I discuss design specifications at different physical levels, from device to array to chip architecture, and provide a comprehensive list of design tasks for achieving an ultrahigh-density 3D VRRAM. To prioritize design tasks among these levels, I focus on the rudimentary design constraints at the device level and extend the design requirements to a ready-to-build memory device in the lab. I then experimentally demonstrate an 8-layer 3D Ru/AlOxNy/TiN VRRAM toward an ultrahigh-density memory. This 3D VRRAM satisfies the design requirements for a terabit-class memory when integrated with a proper selector. I further investigate the downscaling potential of 3D VRRAM based on experimental data and on ongoing scaling techniques and trends in the industry. Incorporating these scaling prospects, I project that 3D VRRAM can achieve much higher density and capacity than state-of-the-art 3D NAND with the same number of 3D layers and fewer bits per cell. With its structural and process flexibility (e.g., BEOL compatibility), 3D VRRAM can expand its high-density applications to on-chip integration with CMOS logic for high-bandwidth memory access
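As a toy illustration of why the parasitic resistance of the 3D wiring enters the array model described above, consider the worst-case read path in a resistive array; all parameter values below are illustrative assumptions, not the dissertation's model or numbers.

    # Worst-case read of a resistive cell: the selected cell at the far end
    # of a line sees the full accumulated wordline/bitline resistance in
    # series, so the LRS/HRS sense margin degrades as the array grows.
    def read_margin(r_cell_lrs, r_cell_hrs, r_wire_per_cell, n_cells, v_read=0.2):
        r_line = r_wire_per_cell * n_cells       # accumulated wire resistance
        i_lrs = v_read / (r_cell_lrs + r_line)   # read current, low-resistance state
        i_hrs = v_read / (r_cell_hrs + r_line)   # read current, high-resistance state
        return i_lrs / i_hrs                     # want this ratio >> 1 to sense

    for n in (64, 256, 1024):                    # cells along the line
        print(n, round(read_margin(1e4, 1e6, 50.0, n), 1))
    # margin shrinks from ~76 to ~17 as the line lengthens (toy numbers)

An array-level design model must capture exactly this kind of trade-off, which is why an accurate yet computationally cheap parasitic-resistance model is central to the co-design.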
- Also online at
-
Online 5. Accelerating numerical methods for gradient-based photonic optimization and novel plasmonic functionalities [2022]
- Zhao, Nathan Zhiwen, author.
- [Stanford, California] : [Stanford University], 2022
- Description
- Book — 1 online resource
- Summary
-
Optimizing the design and performance of photonic systems has been an active and growing area of research over the past decade, with many practical applications such as image sensors, augmented and virtual reality, on-chip photonic systems, and more. Gradient-based methods, such as the adjoint variable method (AVM), have led to very distinctive and complex designs for on-chip multiplexers, tapers, dielectric laser accelerators, and more, while yielding performance metrics that classical first-principles design approaches cannot match. However, the photonics community has dedicated less research to understanding and improving the underlying numerical methods, which are critical to the success of these applications. In this thesis, we demonstrate four key numerical advancements in the use of gradient-based design methods with frequency-domain numerical solvers of Maxwell's equations, particularly finite-difference frequency-domain (FDFD) solvers. The first is the application of domain decomposition techniques to gradient-based optimization, allowing us to reduce the effective system size for a gain in efficiency. The second exploits the physics of perturbative series expansions to efficiently determine the optimal learning rate essential to gradient-based optimization. The third leverages the fundamental similarities of the previous two methods, allowing us to combine them for a further multiplicative acceleration. The fourth exploits the choice of boundary condition in the context of perfectly matched layers to minimize overhead and optimize the efficiency of the simulations required during the optimization procedure. Furthermore, we demonstrate one novel practical application: designing a next-generation replacement for traditional filter-based image sensors that we term a 'color router'. Using a gradient-based approach, we demonstrate that we can not only overcome the traditional limitations of filter-based approaches but also approach the absolute physical limit of color separation efficiency. In the context of this problem, we also demonstrate a further novel method to accelerate optimization: an L1-like penalty method, inspired by the L1 regularization popularized in machine learning, that improves robustness to manufacturing errors and other perturbations of the device design. Finally, as a contrast to gradient-based optimization, we showcase two examples of more traditional device optimization using the theoretical principles of Maxwell's equations. The first exploits analytic continuation and the band structure of insulator-metal-insulator waveguides to design a reflector with reflection properties superior to those of a uniform metal but with lower loss (essentially a nearly metal-less metallic metamaterial). The second engineers interesting radiative properties and extraordinarily high reflection in an atomically thin monolayer graphene nanoribbon system
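The adjoint variable method underlying these advances can be sketched in a few lines for a generic linear system A(p)x = b standing in for a discretized frequency-domain operator; this is a minimal illustration of the gradient structure (one extra linear solve per objective, independent of the number of design parameters), not the thesis's FDFD solver.

    import numpy as np

    n, m = 8, 3                       # state size, number of design parameters
    rng = np.random.default_rng(1)
    A0 = rng.normal(size=(n, n)) + n * np.eye(n)      # well-conditioned base operator
    dA = [rng.normal(size=(n, n)) for _ in range(m)]  # dA/dp_k (constant here)
    b = rng.normal(size=n)
    c = rng.normal(size=n)            # linear objective f(x) = c @ x

    def solve_state(p):
        A = A0 + sum(pk * dAk for pk, dAk in zip(p, dA))
        return A, np.linalg.solve(A, b)

    def adjoint_gradient(p):
        A, x = solve_state(p)
        lam = np.linalg.solve(A.T, c)             # single adjoint solve
        return np.array([-lam @ (dAk @ x) for dAk in dA])

    p = np.zeros(m)
    g = adjoint_gradient(p)

    # Finite-difference check of the adjoint gradient.
    eps = 1e-6
    for k in range(m):
        dp = np.zeros(m); dp[k] = eps
        fp = c @ solve_state(p + dp)[1]
        fm = c @ solve_state(p - dp)[1]
        print(k, g[k], (fp - fm) / (2 * eps))     # the two columns should agree

The key point for photonic design is the cost profile: however many parameters describe the device, the gradient costs one forward solve plus one adjoint solve, which is what makes the thesis's solver-level accelerations pay off multiplicatively.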
- Also online at
-
- Qian, Jason, author.
- [Stanford, California] : [Stanford University], 2022
- Description
- Book — 1 online resource
- Summary
-
In the search for higher efficiencies, a number of unconventional aircraft planforms have been proposed to replace the conventional tube-and-wing airliner. These new configurations would have to be designed from a clean sheet, which makes for a risky proposition. For such an endeavor to succeed, high-fidelity geometry and analysis would need to be introduced earlier in the aircraft design process, so that the configuration is shown to be feasible before any component is manufactured. This thesis pursues this idea and presents several methods by which high-fidelity aircraft structures can be generated, analyzed, and optimized more quickly. Within the design process, bottlenecks occur in geometry/mesh generation and in the computationally expensive analysis modules. To speed up the generation process, a parametric framework was written that takes a set of inputs and automatically and quickly generates a finite element analysis (FEA) model of a wing structure. Fidelity is maintained by modeling all primary structural components in both conventional and unconventional aircraft configurations. This ability to rapidly create meshes is then leveraged to generate models of different resolutions, which can be tailored to the various FEA analysis types in terms of computational speed and accuracy. One of the most computationally expensive FEA analyses is finding the buckling eigenvalues. To reduce the cost of local buckling analysis, an iterative method calculates the eigenvalue of an aircraft panel together with its surrounding section. This method was found to match the eigenvalues of the corresponding global model to a high degree of accuracy at lower computational cost. To find the global eigenvalues without recomputing the local ones, a displacement constraint method was applied to limit the possible mode shapes. This guarantees that the eigenvalue solver returns only global mode shapes, making the extraction of global buckling eigenvalues easier. These methods are applied to the optimization of a conventional wingbox and to the unconventional structures of a blended wing body and a truss-braced wing, where the complete model is generated from scratch and optimized under its aeroelastic loading with stress and buckling constraints. These demonstrations show that the ability to design and optimize with high-fidelity geometry and methods can be applied at the start of the aircraft design process, including for unconventional aircraft planforms
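For readers unfamiliar with the buckling analyses described above: linearized buckling reduces to a generalized eigenvalue problem K w = λ(−K_g) w, whose smallest eigenvalue λ is the critical load factor. The sketch below is ours, using a one-dimensional Euler column rather than the thesis's wing models, to show the structure of the problem.

    import numpy as np
    from scipy.linalg import eigh

    EI, L, N = 1.0, 1.0, 200                 # bending stiffness, length, interior nodes
    h = L / (N + 1)
    D2 = (np.diag(-2.0 * np.ones(N)) + np.diag(np.ones(N - 1), 1)
          + np.diag(np.ones(N - 1), -1)) / h**2
    K = EI * (D2 @ D2)                        # stiffness operator (pinned-pinned BCs)
    Kg = -D2                                  # geometric stiffness for a unit axial load

    lam = eigh(K, Kg, eigvals_only=True)      # generalized eigenvalues, ascending
    print(lam[0], np.pi**2 * EI / L**2)       # smallest ~9.8696, the Euler load

The thesis's local and global methods attack the same eigenproblem at FEA scale, where assembling and solving it repeatedly during optimization is what dominates the cost.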
- Also online at
-
Online 7. The accidental ethnographers : race, recording technology, and nature in modern Latin American fiction [2022]
- Hernández, Daniel, author.
- [Stanford, California] : [Stanford University], 2022
- Description
- Book — 1 online resource
- Summary
-
My dissertation, "The Accidental Ethnographers: Race, Recording Technology, and Nature in Modern Latin American Fiction, " examines how fictional works of early 20th century Cuban, Colombian, and Brazilian criollo authors were shaped by these authors' use of recording technologies in their ethnographic endeavors. I show that the process of perceiving the world through cameras and tape recorders, and interacting with unfamiliar places, cultures, and cosmologies, prompted these authors to incorporate Afro-descendant and indigenous worldviews in their texts. These fictional texts enable the conception of a reality that does not necessarily subscribe to the anthropocentric division between human culture and the natural world, commonly found in 19th-century literary naturalism. With this research, I participate in contemporary debates on technology's impact on literature and, by extension, the role of technology in defining Latin American fiction
- Also online at
-
Online 8. Adaptable behavior from anatomically fixed neural circuits : investigations into the synaptic bases of learning and ethanol pharmacology [2022]
- Kaganovsky, Konstantin, author.
- [Stanford, California] : [Stanford University], 2022
- Description
- Book — 1 online resource
- Summary
-
The brain must strike a balance between reliable information processing and adaptation to an ever-changing environment. At a gross anatomical level, the brain's wiring diagram is believed to be relatively set after development. Therefore, a fundamental question arises: how does stereotyped wiring lead to flexible dynamics, computation, and behavior? This dissertation explores this question through the lens of two phenomena -- the activity-dependent strengthening of neural circuits and the pharmacology of ethyl alcohol. Chapter 1 -- Neurons are known to modify synaptic weights based on their history of coincident activity patterns, termed Hebbian plasticity. In the first study, I tested whether Hebbian potentiation of synapses in the hippocampus or striatum plays a causal role in learning. Using the most specific manipulation of synaptic potentiation that we are aware of (Stx3 cKO), I tested a battery of behaviors known to require either the hippocampus or the striatum. Much to our surprise, Stx3 cKO did not affect most behaviors we tested; however, hippocampal potentiation was critical for novelty-driven spatial learning. Further, we found that spatial/contextual coding was intact after Stx3 cKO, explaining our demonstration of normal spatial learning, whereas contextual novelty coding was greatly reduced. Lastly, there were two other deficits related to reward and novelty coding that warrant further study. Overall, our data refine the proposed role of synaptic potentiation -- from an all-encompassing learning signal to a synaptic mechanism for salience encoding. Chapter 2 -- A fixed neural circuit can update its dynamics when a psychoactive substance is applied. Indeed, the study of drugs of abuse has yielded insights into the basic biology of the brain. For example, the endogenous opioid and endocannabinoid systems were discovered through investigation of the pharmacological action of opioids and cannabinoids, respectively. In the second study, I continued this rich history by investigating the pharmacology of ethyl alcohol (EtOH). I found that EtOH reduces GABA co-release from dopamine neurons, a recently discovered property of dopamine neurons with the potential to powerfully modulate brain-wide neural activity. Digging into the mechanism revealed that EtOH was not directly affecting GABA co-release; rather, EtOH's metabolite, acetaldehyde, was outcompeting GABA's precursor for access to the enzyme ALDH1a1
- Also online at
-
Online 9. Adaptive and sensory machines : active foam and swimming rheometers [2022]
- Kroo, Laurel Anne, author.
- [Stanford, California] : [Stanford University], 2022
- Description
- Book — 1 online resource
- Summary
-
Passive adaptation and sensing are exceptionally useful attributes, enabling robustness and redundancy in the design of functional machines. We see this, for example, in biological tissues and in hierarchical network infrastructures. In this dissertation we discuss two specific examples of fully synthetic engineered systems that display these attributes of passive adaptation and sensing: active foam [1] and swimming rheometers [2]. First, in the context of studying the self-assembly of programmable soft matter, we discuss the response of 2D air-liquid foam to cyclical inflation and deflation of an embedded "active" bubble. Experimental and numerical results suggest that such volume oscillations can be used to train the foam toward local structural properties, to communicate long-range mechanical signals through the CW/CCW motion of vertex trajectories, and to actively probe properties of the surrounding network structure. We will also discuss the statistical influence of microstructural yielding events ("T1 transitions") within the material, and the role of disorder in the mechanical response. In the next example, we will discuss how an untethered robot is capable of self-propulsion at low Reynolds number only when submerged in an elastic fluid. Based on prior theoretical results, this robot consists of two counter-rotating, rotationally symmetric objects and propels itself in the direction of the larger "head" object. By controlling the relative rotation rate of this device while recording motility, the robot acts as a rheological sensor of the surrounding elastic fluid, with remarkable sensitivity (at strain rates < 1 Hz). We will discuss our experimental discovery of a non-inertial, viscoelastic jet structure responsible for propulsion, and the specific rheological properties that can be inferred by observing the device. Fundamentally, these examples demonstrate how adaptive and sensory machines can be used in engineering to enable exceptional redundancy and robustness in real-world environments. [1] Kroo, Laurel A., Matthew Storm Bull, and Manu Prakash. "Active Foam: The Adaptive Mechanics of 2D Air-Liquid Foam under Cyclic Inflation." arXiv preprint arXiv:2204.00937 (2022). [2] Kroo, L. A., et al. "A freely suspended robotic swimmer propelled by viscoelastic normal stresses." Journal of Fluid Mechanics 944 (2022)
- Also online at
-
- Huynh, Benjamin Quoc, author.
- [Stanford, California] : [Stanford University], 2022
- Description
- Book — 1 online resource
- Summary
-
Modern computational approaches promise to address issues of health equity through identifying potential disparities or interventions. However, such approaches are subject to data scarcity: structurally marginalized populations tend to have poorer-quality data, and computational tools reliant on high-quality data may exacerbate existing inequities. Through examples in humanitarian, occupational, and environmental health, we examine data science approaches to circumvent data scarcity. We investigate the use of machine learning to model forced migration in humanitarian settings, microsimulation models for high-risk occupational health contexts, and systems-level public health risk estimation for an impending environmental catastrophe. Taken together, this body of work demonstrates how unconventional data sources, novel approaches, and rigorous study design can be employed to advance health equity and environmental justice in the absence of high-quality data
- Also online at
-
Online 11. Advances in multivariate statistics and its applications [2022]
- Tuzhilina, Elena, author.
- [Stanford, California] : [Stanford University], 2022
- Description
- Book — 1 online resource
- Summary
-
My research focuses on multivariate statistics, dimension reduction, and applied statistical modeling. During my Ph.D. studies at the Department of Statistics at Stanford, I took part in various collaborative projects, developing methodology and tools for the analysis of complex phenomena in areas such as biology, genetics, and neuroscience. This dissertation covers four major branches of my research, presented in separate chapters. Conformation reconstruction is one of the main challenges in computational biology. In this study we develop a model for the 3D spatial organization of chromatin, a crucial component of numerous cellular processes (e.g. transcription). The central object in this study is the so-called contact matrix. It represents the frequency of contacts between each pair of genomic loci and thus can be used to infer the 3D structure. Most of the existing algorithms operating on contact matrices are based on multidimensional scaling (MDS) and produce reconstructed 3D configurations in the form of a polygonal chain. However, none of these methods exploit the fact that the target solution is a smooth curve in 3D: the smoothness attribute is either ignored or indirectly addressed by introducing highly non-convex penalties into the model, which typically increases the computational complexity and instability of the reconstruction algorithm. In our work we develop Principal Curve Metric Scaling (PCMS), a novel approach modeling chromatin directly by a smooth curve. We subsequently use PCMS as a building block to create more complex distribution-based models for the conformation. The resulting reconstruction technique therefore combines the advantages of MDS and smoothness penalties while being computationally efficient. Low-rank matrix approximation (LRMA) is one of the central concepts in machine learning, closely related to such areas as dimension reduction and de-noising. A recent extension of LRMA, called low-rank matrix completion, solves the LRMA problem when some observations are missing and is especially useful for recommender systems (see, for example, the famous Netflix Prize competition). In this study we consider a weighted generalization of LRMA (WLRMA). We build an algorithm for solving the weighted problem as well as two important modifications: one for high-dimensional and one for sparse data. In addition, we propose an efficient way to accelerate the WLRMA algorithm. Although our research mainly focuses on developing the WLRMA methodology, the technique has strong potential for applications. Beyond matrix completion, which it covers as a special case, it can serve as a building block for generalized linear models (GLMs) with a matrix structure. For example, in ecology, populations of species can be modeled via Poisson GLMs; the population matrices (with rows and columns corresponding to sites and species, respectively) can then be analyzed using low-rank models, where the WLRMA technique will be of great importance. Canonical correlation analysis (CCA) is one of the core approaches in multivariate statistics. It is a technique for measuring the association between two multivariate sets of variables, with a wide variety of applications. This part of the research was motivated by a study in neuroscience aimed at exploring the influence of emotional disorders on brain activity. While working with the brain imaging data we encountered the following challenges.
First, the measurements are made on a very dense grid of brain loci, leading to extremely high-dimensional data. Second, the data was collected for only a few patients, so the sample is small. Finally, the data has a structure defined by the brain geometry. To address the first two challenges we consider Regularized CCA and develop a "kernel trick" that allows us to handle the extreme data size. We subsequently incorporate brain structure into the regularization, introducing Group Regularized CCA (GRCCA), and extend the "kernel trick" to the structured data setting. The resulting GRCCA technique has demonstrated strong potential for brain imaging applications while being computationally efficient. Epidemic forecasting became an in-demand area during the COVID-19 era. In this research, we study the trajectory of the COVID-19 pandemic by means of the open-source COVIDcast dataset collected by the Delphi Group. This dataset contains a wide variety of features, such as cases, deaths, hospitalizations, and many auxiliary indicators of COVID-19 activity, and therefore opens up a wealth of research directions. In particular, we develop the multi-period forecasting (MPF) methodology, which aims to predict the number of cases for multiple "ahead" values. The MPF technique solves a multi-response regression problem where the response columns represent the same phenomenon measured at different time points. To incorporate this time dependence, it assumes the model coefficients to be smooth functions of time. We test this idea for point estimation of COVID-19 cases and subsequently extend it to predicting confidence intervals for the cases via quantile regression
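To make the weighted low-rank problem described above concrete, here is a classical EM-style baseline (iterating a truncated SVD on a weight-blended matrix); this is a textbook reference point, not the dissertation's WLRMA algorithm or its accelerated high-dimensional and sparse variants.

    import numpy as np

    def wlrma_em(X, W, rank, n_iter=200):
        """Minimize sum(W * (X - M)**2) over matrices M with rank <= rank.
        Weights W must lie in [0, 1]; W in {0, 1} recovers matrix completion."""
        M = np.zeros_like(X)
        for _ in range(n_iter):
            blend = W * X + (1.0 - W) * M       # E-step: impute with the current fit
            U, s, Vt = np.linalg.svd(blend, full_matrices=False)
            M = (U[:, :rank] * s[:rank]) @ Vt[:rank]   # M-step: truncated SVD
        return M

    rng = np.random.default_rng(0)
    truth = rng.normal(size=(30, 4)) @ rng.normal(size=(4, 20))  # rank-4 matrix
    W = (rng.uniform(size=truth.shape) < 0.6).astype(float)      # ~60% observed
    M = wlrma_em(truth, W, rank=4)
    print(np.abs(truth - M).max())   # small when the observed pattern suffices

Each iteration costs a full SVD, which is exactly the bottleneck that motivates faster and more scalable algorithms for the weighted problem.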
- Also online at
-
Online 12. Advances towards the development of an artificial pancreas [2022]
- Thomson, Ella Ainsley, author.
- [Stanford, California] : [Stanford University], 2022
- Description
- Book — 1 online resource
- Summary
-
Type 1 diabetes is characterized by autoimmune destruction of islet beta cells and insulin deficiency. Islet transplantation is a heavily pursued therapy for type 1 diabetes, as it would provide a continually replenishing source of insulin; however, current therapy requires lifelong immunosuppression. Cell encapsulation would overcome the need for immunosuppression but introduces new challenges to providing insulin release on a physiologically relevant timescale. Here, I present a novel approach for electronically actuated insulin release from encapsulated islets and beta cells via applied pressure. I demonstrate the efficacy of this release approach for insulin bolus delivery and improved glycemic control. I also discuss the development of a line of genetically engineered beta cells that release various "peptides-of-interest" targeted for a multi-hormonal artificial pancreas. Finally, I introduce the design and characterization of a miniaturized wireless potentiostat for sensing salivary glucose and lactate. Detection of salivary lactate could provide insight into exercise intensity and improve glycemic control for an artificial pancreas. Together, these developments overcome key limitations of current artificial pancreas designs
- Also online at
-
Online 13. Advancing equity in higher education : the zone of proximal self [2022]
- Nguyen, Judy, author.
- [Stanford, California] : [Stanford University], 2022
- Description
- Book — 1 online resource
- Summary
-
This dissertation is motivated by a need in the higher education learning sciences field to understand the cultural, social, and emotional processes of student learning and development in advancing equity for students from first-generation, low-income, and marginalized backgrounds. To address this need, I develop and examine a sociocultural conceptual framework called the zone of proximal self: the distance between a learner's current self and their possible selves, which can be bridged through support from institutional figures, resources, and materials across a broad learning ecology in higher education. The dissertation comprises three papers followed by an integrated discussion in the conclusion. In this series of three papers, I use the zone of proximal self as a lens to study college students' interactions in postsecondary settings during the Covid-19 pandemic. In my first paper, I use mixed methods approaches to analyze survey results from 524 students at a private four-year university to explore first-generation students' experiences with institutional resources. In my second and third papers, I draw on a study of 50 students at a public four-year university who were asked to meet with a counselor or advisor throughout an academic term and to share their experiences through diary entries, surveys, and an interview across a 14-week semester. Results from the first paper reveal how barriers persist for students in interacting with institutional resources, and they show the importance of shifting the focus from student meritocracy to institutional structures and to variability in access to interpersonal and material resources. The second and third papers highlight three pillars of effective practice by counselors and advisors — creating a brave space, validating students, and supporting students' social-emotional competencies — which are associated with higher growth in students' professional and personal goals for their possible selves. Taken together, a zone of proximal self lens illuminates how advancing equity in higher education will require humanizing notions of college student success beyond a narrow focus on academic outcomes. The findings, which surfaced from an "interpersonal plane of analysis" (Rogoff, 2003), offer design implications for formal and informal learning environments in higher education to incorporate humanizing relational practices between students and institutional figures and resources. Moving forward, this work opens up new lines of research to better theorize and assess how humanizing relational practices emerge and foster growth in the zone of proximal self for students from first-generation, low-income, and marginalized backgrounds
- Also online at
-
Online 14. Advancing the use of remote sensing data and models to understand hydrologic processes in California [2022]
- Ahamed, Aakash, author.
- [Stanford, California] : [Stanford University], 2022
- Description
- Book — 1 online resource
- Summary
-
Satellite remote sensing has emerged as a powerful tool in water resources management. However, the extent to which remote sensing data and models can be used to derive novel insights about groundwater systems remains unclear, and the full spectrum of applications for remote sensing in water resources management remains unrealized. In this thesis, we use remote sensing data and models, integrating on-the-ground datasets where appropriate, to recover hydrologic properties related to groundwater systems: (1) the change in groundwater storage, estimated across three spatial orders of magnitude through a mass balance approach and compared to independent estimates at each spatial scale, and (2) the source areas where rainfall and snowmelt strongly influence downstream baseflow, determined through application of baseflow separation, baseflow recession, signal processing, and information-theoretic methods. Remote sensing data describing precipitation, evapotranspiration, soil moisture, and snow-water equivalent within California are used, for the first time, in a mass balance approach to estimate changes in stored groundwater for study regions spanning ~1,000 km2 to >100,000 km2. Results of the remotely sensed mass balance agree across scales with independent estimates of changes in groundwater storage derived from (1) the Gravity Recovery and Climate Experiment satellites, (2) well-based measurements of the water table, and (3) regional groundwater flow models. The method is an appealing supplement to traditional methods of estimating changes in groundwater storage for several key reasons: (a) it can produce low-latency estimates, whereas well- and model-based methods lag years behind the present due to extensive on-the-ground data requirements and model calibration; (b) it quantifies uncertainty, both for estimates of changes in groundwater storage and among water balance components, whereas traditional methods produce only a single estimate of changes in storage; and (c) a growing number of satellite-based datasets can be used to accurately estimate the required parameters, as well as capture the uncertainty in water balance components and mass balance results. Promising results were obtained for three out of four study areas, but mass balance results obtained at the finest spatial scale do not agree well with independent estimates, suggesting there are important scale-dependent limitations of the remotely sensed mass balance approach. Baseflow, the persistent component of streamflow fed by groundwater discharge to stream channels, is critical for water supply, hydropower generation, and ecosystem habitat. For these reasons, it is of great interest to identify the areas that strongly influence baseflow through rainfall and snowmelt. To accomplish this, we combined remotely sensed data describing rainfall and snowmelt with ground-based streamflow estimates in a physics-guided statistical analysis to identify the areas of California's Sierra Nevada that have a prevailing influence on baseflow. An important finding suggests that the areas with the highest annual rates of rainfall and snowmelt do not necessarily exhibit the greatest influence on downstream baseflow, and that snowmelt occurring in the 3000-meter to 3700-meter elevation range has the strongest overall influence on baseflow.
Our findings provide novel ways to utilize remote sensing data and models to recover essential properties of groundwater systems, and they generally support combining remote sensing data and models with on-the-ground measurements to address problems in groundwater hydrology and water resources management. As new sensors are launched into orbit, such as the Surface Water Ocean Topography satellite in late 2022, the spectrum of possible hydrologic applications widens and the potential for remote sensing in water resources management grows
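Schematically, the mass balance used above takes the familiar water-balance form (our notation, not necessarily the thesis's symbols):

    \[
      \Delta S_{\mathrm{gw}} = P - ET - Q - \Delta S_{\mathrm{soil}} - \Delta\mathrm{SWE},
    \]

where P is precipitation, ET evapotranspiration, Q net surface outflow, \Delta S_{\mathrm{soil}} the change in soil-moisture storage, and \Delta\mathrm{SWE} the change in snow-water equivalent, each integrated over the study region and time window. The groundwater storage change is recovered as the residual, which is why uncertainty in every remotely sensed component propagates directly into the estimate.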
- Also online at
-
Online 15. The African turquoise killifish, Nothobranchius furzeri, as a model for genetic regulation of suspended animation phenotypes [2022]
- Reeves, Gregory Adam, author.
- [Stanford, California] : [Stanford University], 2022
- Description
- Book — 1 online resource
- Summary
-
Suspended animation states allow animals to persist in extreme environments while also slowing or resisting the progression of molecular damage and aging hallmarks, effectively gaining free 'biological time'. The African turquoise killifish, Nothobranchius furzeri, has evolved a form of suspended development to survive the complete drought of its natural habitat. This state, termed diapause, is extreme even among vertebrate states of suspended animation and can persist for months or even years. But the mechanisms underlying the evolution of extreme survival states are unknown. To understand the evolution of diapause, we performed integrative multi-omics, including gene expression, chromatin accessibility, and lipidomics, in the embryos of multiple killifish species. Additionally, we generated an embryonic CRISPR screening platform to evaluate the role of key genes and transcription factors during diapause. Our analyses reveal that diapause evolved by very recent remodeling of regulatory elements at very ancient gene duplicates (paralogs) present in all vertebrates. Transcription factors such as REST/NRSF, FOXOs, and PPARs are central to diapause evolution. These factors are likely implicated in a unique lipid metabolism, leading to the accumulation of specific triglycerides -- with very long chain fatty acids -- in diapause. Our work suggests a mechanism for the evolution of complex adaptations and offers strategies to promote long-term survival by activating suspended animation programs in other species
- Also online at
-
Online 16. AI-enabled palliative care : from algorithms to clinical deployment [2022]
- Avati, Anand Vishweswaran, author.
- [Stanford, California] : [Stanford University], 2022
- Description
- Book — 1 online resource
- Summary
-
Healthcare is one of the most promising application areas for Artificial Intelligence (AI) to have a positive impact on society. There has been impressive progress in predictive modeling with health data in the recent literature, even matching or exceeding expert human-level performance on a variety of tasks. Yet translating these machine learning advances into improved patient care has proven to be particularly challenging. While developing accurate and well-calibrated models (i.e. the machine learning problem) is necessary to make AI-enabled healthcare applications possible at all, a careful understanding and analysis of the healthcare problem is just as essential for bridging the gap between accurate predictions and improved clinical care for the patient. Acknowledging and addressing both of these problems is crucial for a successful AI clinical deployment. In this work, we consider the healthcare problem of improving access to palliative care for hospitalized patients. We frame it as a machine learning problem and validate that the framing is indeed appropriate for the healthcare problem at hand by conducting a prospective analysis study involving palliative care specialists. Our technical contributions include a novel survival loss (SurvivalCRPS), an evaluation metric (SurvivalAUPRC), and a gradient boosting algorithm for probabilistic prediction (NGBoost), among others. We perform a cost-benefit analysis and study the impact of various factors affecting care delivery to inform the design of a clinical workflow that increases access to palliative care services for hospitalized patients. We report on our experiences operationalizing this workflow, powered by the above algorithmic advances, in the General Medicine service line of Stanford Hospital
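For intuition about the scoring rule behind SurvivalCRPS, the closed-form CRPS of a Gaussian predictive distribution (a standard result due to Gneiting and Raftery) can be computed directly; SurvivalCRPS itself adapts this proper scoring rule to right-censored survival times, an extension this sketch does not attempt.

    import numpy as np
    from scipy.stats import norm

    def crps_gaussian(y, mu, sigma):
        """CRPS of N(mu, sigma^2) against the observed value y (lower is better)."""
        z = (y - mu) / sigma
        return sigma * (z * (2 * norm.cdf(z) - 1) + 2 * norm.pdf(z)
                        - 1 / np.sqrt(np.pi))

    # A sharp, well-centered forecast scores better than a diffuse one.
    print(crps_gaussian(1.0, mu=1.0, sigma=0.5))   # ~0.117
    print(crps_gaussian(1.0, mu=1.0, sigma=2.0))   # ~0.467

Because CRPS rewards both calibration and sharpness of the full predictive distribution, it is a natural training target for probabilistic boosting methods such as NGBoost.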
- Also online at
-
Online 17. AI-mediated communication : examining agency, ownership, expertise, and roles of AI systems [2022]
- Mieczkowski, Hannah Nicole, author.
- [Stanford, California] : [Stanford University], 2022
- Description
- Book — 1 online resource
- Summary
-
AI-Mediated Communication (AI-MC) occurs when an AI system operates on behalf of an individual in communication between people. Building on literature in psychology and human-machine communication, this dissertation aimed to answer two overarching questions: 1) how do people perceive their own agency when an AI system operates on their behalf in interpersonal communication? and 2) how do people perceive the role(s) of the system under these circumstances? Through a think-aloud study, as well as an online experiment, this dissertation revealed that people strive to maintain their agency in AI-MC, and devise various strategies to do so. Additionally, perceptions of ownership over interpersonal messages are likely just as important as maintaining one's agency in AI-MC. Moreover, even though AI systems can play many roles in a communication task, people view them as social actors instead of as extensions of the self, which impacts how they attribute agency, perceive ownership over messages, and behave in human-AI interactions. Lastly, people show increased reliance on the AI system when they lack expertise, which underlies decreased feelings of agency and ownership, as well as changes in collaboration dynamics and linguistic features of a message. Taken together, this dissertation applies research on agency and human-AI interaction to the nascent study of AI-MC. The results provide insights into the effects of AI-involvement and topical expertise on people's perceptions and behaviors, as well as how they think about the roles of AI systems
- Also online at
-
Online 18. All the lenses : large-scale hierarchical inference of the Hubble constant from strong gravitational lenses with Bayesian deep learning [2022]
- Park, Ji Won, author.
- [Stanford, California] : [Stanford University], 2022
- Description
- Book — 1 online resource
- Summary
-
Unprecedented volumes of data from upcoming sky surveys will yield precise constraints on the parameters governing the evolution history of the Universe. One parameter that has received particular attention over the past decade is the Hubble constant (H0), which describes the expansion rate of the Universe. This thesis focuses on measuring H0 from an astrophysical phenomenon called strong gravitational lensing. The Vera Rubin Observatory's Legacy Survey of Space and Time (LSST) will increase the sample size of strong lenses from ~100 to ~100,000, creating an opportunity to obtain the most precise measurement of H0 to date. Fully realizing the potential of LSST data entails rapidly extracting cosmological information from the images, tables, and time series associated with these lenses. My research has focused on developing analysis techniques using Bayesian deep learning, which combines the efficiency of deep learning with principled uncertainty quantification. These techniques promise to automate the analysis of tens of thousands of strong lensing systems in a robust manner. They constitute core methodology that can combine information from all the LSST lenses -- with varying types and signal-to-noise ratios -- into a large-scale hierarchical inference of H0
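A toy calculation (ours, not the thesis pipeline) illustrates why the LSST sample size matters: if each lens yields an approximately Gaussian constraint on H0, combining lenses shrinks the uncertainty roughly as 1/sqrt(N), so going from ~100 to ~100,000 lenses buys about a factor of 30 in precision, provided systematics are controlled hierarchically.

    import numpy as np

    rng = np.random.default_rng(42)
    h0_true, per_lens_sigma = 70.0, 7.0       # km/s/Mpc; illustrative values

    for n_lenses in (100, 10_000, 100_000):
        estimates = rng.normal(h0_true, per_lens_sigma, size=n_lenses)
        combined_mean = float(estimates.mean())           # equal-weight combination
        combined_sigma = per_lens_sigma / n_lenses**0.5   # standard error of the mean
        print(n_lenses, round(combined_mean, 3), round(combined_sigma, 3))

In practice the per-lens posteriors are non-Gaussian and share nuisance parameters, which is why the thesis uses hierarchical Bayesian inference rather than simple averaging; the scaling argument above only sets the ceiling on achievable precision.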
- Also online at
-
Online 19. All your base are belong to us (or shouldn't it?) : an empirical analysis of the digital first sale doctrine for a second-hand e-good market in the U.S. [2022]
- Terra Ibáñez, Antoni, author.
- [Stanford, California] : [Stanford University], 2022
- Description
- Book — 1 online resource
- Summary
-
The resale of physical goods is a day-to-day practice legally protected by the first sale doctrine (U.S. 1976 Copyright Act §109(a)). However, the end-user license agreements (EULAs) that "buyers"—or rather licensees—of intangible products accept at purchase invalidate the application of the digital first sale in the United States (Vernor v. Autodesk, Inc., 2010). In the European Union, on the other hand, the exhaustion-of-rights principle was extended to some digital products, making used software trade—including that transmitted online—legal (UsedSoft GmbH v. Oracle International Corp., 2012). The industry is afraid of opening the door to second-hand markets for digital goods (books, movies, TV shows, music, software, video games), which could severely damage sales of new products. Nevertheless, EULAs impose serious limitations on consumer rights with respect to the resale of intangible products and also block competition from secondary markets. In this dissertation, I first explore why such industry practices are compatible (or not) with the copyright, antitrust, and consumer protection laws of the European Union and the United States. My findings suggest that, because of the more interventionist and consumer-oriented European rationale for regulating human behavior, those EULAs are deemed illegal and unenforceable under EU law with respect to software. They are consistent with the U.S. regime, however, since the American approach to consumer matters is rather free-market-driven ('laissez faire'). The digital first sale would be expected to have a positive impact on customers. Secondary markets make it possible to price discriminate more effectively, covering more demand and generating more transactions. They also create competition for firms in the first-hand market, helping to drive prices down more quickly after release. But policy considerations on general efficiencies are not enough. This dissertation also tackles the introduction of the digital first sale from an empirical perspective, specifically through a survey-based experiment assessing whether copyright owners would benefit from, or at least be indifferent to, the creation of secondary markets for digital goods in the U.S. in terms of economic surplus. Without producer initiative, legal reform alone is unlikely to be sufficient to materialize the digital first sale, as copyright owners would find alternatives to continue enforcing their rights (e.g., streaming or subscription services). My experiment concludes that the digital first sale inherently harms copyright owners. As soon as secondary digital markets become attractive enough for a critical mass of consumers to start using them instead of purchasing from the first-hand market, producers lose significant sales of new products, which cannot be offset by charging more upfront. This outcome applies to e-books and video games. Software producers, on the contrary, would not be negatively affected by secondary e-good markets because consumers are less interested in buying used software (as compared to e-book and video game users). Those second-hand markets would actually allow software companies to price discriminate better (many users would be willing to pay higher prices for first-hand, one-off software purchases), and copyright owners may even extract some additional surplus from the (small) secondary software market.
Keywords: first sale doctrine, second-hand markets, digital products, e-goods, end-user license agreements, law and economics, experiment design, copyright, antitrust, consumer protection, comparative law
- Also online at
-
- Garrison, Catherine Elizabeth, author.
- [Stanford, California] : [Stanford University], 2022
- Description
- Book — 1 online resource
- Summary
-
Voltage-gated sodium channels (NaVs) are large transmembrane proteins critical for bioelectrical signaling. NaVs permit sodium ion influx into cells by opening in response to membrane depolarization, a process involving large protein conformational changes over millisecond response times. Channel dysregulation is associated with a number of human pathologies, including chronic pain, epilepsies, and arrhythmias. Therefore, efforts to better understand NaV dynamics and to rationally design allosteric modulators of these channels are of considerable scientific interest. NaVs are the targets of numerous natural product neurotoxins that function as allosteric ligands, including batrachotoxin (BTX), found in the skin secretions of poison dart frogs. A full channel agonist, BTX eliminates NaV inactivation and shifts the voltage dependence of channel activation, among other effects, making it a privileged tool for biophysical studies of NaVs. We have utilized BTX and related toxins to provide insight into the mechanisms by which these toxins modulate NaVs and to better understand how small molecules can influence NaV dynamics. Herein, we describe efforts to elucidate the molecular determinants of the BTX binding site and to advance our understanding of toxin structure-function relationships. Aided by computational docking, we propose a novel binding pose for BTX in the central pore of the channel, distal from the canonical local anesthetic binding site. Our studies have led to the identification of multiple amino acid residues in the NaV inner pore that discriminate between BTX and its mirror-image enantiomer. Through this work, we have identified the first known mutations in NaV that increase BTX affinity. Our efforts additionally identify a single point mutation in NaV that seemingly decouples toxin binding from its functional effects on the channel. Finally, studies in collaboration with Dr. Tim MacKenzie have produced a novel toxin derivative, BTX-yne, that eliminates fast inactivation in NaV while permitting slow inactivation. Use of this derivative has enabled direct measurement of slow inactivation in both wild-type and mutant channels, advancing our understanding of this poorly understood mechanism. Our studies of BTX have been carried out in parallel with analogous experiments using the neurotoxin veratridine (VTD). We show that VTD, a compound whose binding site overlaps that of BTX, has disparate functional effects on NaV depending on the toxin equilibration protocol. We demonstrate that this Janus-faced behavior is not isoform-specific and provide evidence that the characteristic tail current observed in VTD-agonized channels arises from so-called window current. Finally, we expand our understanding of the effects of BTX to include its influence on cultured neuronal cells. In these experiments, we find that BTX induces hyperexcitability at concentrations two orders of magnitude below the measured EC50. We provide preliminary evidence that BTX has membrane protein targets beyond NaV, with experiments studying voltage-gated calcium channels in neurons. These results provide insight into the profound lethality of this toxin and serve as a foundation for future experiments examining the influence of BTX, VTD, and related toxins on neuronal cell activity and action potentials
- Also online at
-