Problem
The PNT Defense & Threat Library is a framework implemented in Python for mapping out and visualizing the space of PNT threats and defenses. The user specifies threats, defenses (along with which threats each mitigates), and the relationships between entries.
Task
We wanted a way to identify implicit mitigation links without the user having to specify every single relationship.
Result
I implemented a function that parses the library, traverses its directed acyclic graph structure, and identifies which threats are implicitly mitigated by a selected subset of entries.
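A minimal sketch of the traversal idea, assuming the library reduces to a mapping of threat-to-sub-threat links plus per-defense mitigation lists (the names and structures below are illustrative, not the library's actual schema):

```python
from collections import deque

# Illustrative structures (not the library's real schema): threats form a DAG
# of broader threats refined into sub-threats, and each defense lists the
# threats it directly mitigates.
threat_children = {
    "jamming": ["broadband-jamming", "sweep-jamming"],
    "broadband-jamming": [],
    "sweep-jamming": [],
    "spoofing": ["meaconing"],
    "meaconing": [],
}
defense_mitigates = {
    "adaptive-antenna": ["jamming"],
    "signal-authentication": ["spoofing"],
}

def mitigated_threats(selected_defenses):
    """Return every threat covered by the selected defenses, including threats
    reached implicitly by walking down the DAG of threat relationships."""
    covered = set()
    queue = deque(t for d in selected_defenses for t in defense_mitigates.get(d, []))
    while queue:
        threat = queue.popleft()
        if threat in covered:
            continue  # already visited via another path
        covered.add(threat)
        queue.extend(threat_children.get(threat, []))
    return covered

print(mitigated_threats(["adaptive-antenna"]))
# {'jamming', 'broadband-jamming', 'sweep-jamming'}
```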
Problem
To aid in PNT Assurance work, we wanted to mathematically model the position, velocity, and time (PVT) output of PNT user equipment under different classes of threats with varying parameters.
Task
My task was to help the team improve the Julia script that performs the Monte Carlo simulation.
Result
I revamped and formalized the team's software development process to improve visibility and organization. Rather than working on separate remote repositories, everyone now works on the same remote but on separate branches, which makes it easier for the team to see changes and comment on merge requests before they reach the main branch. I also pushed for issue tracking so that discussions around new features and bugs are collected in one place and can reference specific commits, branches, and changes rather than relying on line numbers that shift over time. Finally, I created a wiki documenting how to use Git along with some best practices.
Problem
The AI Learning Track was the first of two intern events I participated in. This was an 8-hour hackathon (spread over 3 days) with a team of 5.
Task
The goal was to classify URLs as malicious or benign using the URL string along with information on the age of the domain and when the site was last updated.
Result
Our team went through the steps of feature engineering, training, validation, and testing. We generated new features, such as the length of the URL string and the number of non-alphanumeric characters it contains, that we had read were good indicators of malicious URLs. We ran our training dataset through several models: Logistic Regression (LOGIT), Linear Support Vector Machine, Random Forest, eXtreme Gradient Boosting Trees (XGBoost), and a simple feed-forward neural network (FFNN). The FFNN performed best, achieving an F1 score of 0.94 on the testing dataset.
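For illustration, a minimal sketch of the feature-engineering and training steps using scikit-learn; the file name, column names, and network size are invented stand-ins for the hackathon dataset:

```python
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier
from sklearn.metrics import f1_score

# Hypothetical dataset with 'url', 'domain_age_days', 'days_since_update',
# and a binary 'malicious' label.
df = pd.read_csv("urls.csv")

# Simple string-derived features of the kind described above.
df["url_length"] = df["url"].str.len()
df["non_alnum_count"] = df["url"].str.count(r"[^A-Za-z0-9]")

features = ["url_length", "non_alnum_count", "domain_age_days", "days_since_update"]
X_train, X_test, y_train, y_test = train_test_split(
    df[features], df["malicious"], test_size=0.2, random_state=0
)

# A small feed-forward network; the architecture we actually used differed.
model = MLPClassifier(hidden_layer_sizes=(32, 16), max_iter=500, random_state=0)
model.fit(X_train, y_train)
print("F1:", f1_score(y_test, model.predict(X_test)))
```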
Problem
AWS DeepRacer was the second of two intern events I participated in. This involved the use of cloud computing resources (provided by Amazon Web Services) to train and simulate a Reinforcement Learning model for an autonomous vehicle.
Task
The goal was to train a model that would drive the autonomous vehicle around a simulated track as quickly as possible, with time penalties for driving off track. The vehicle senses the environment with a camera and can drive at speeds of 0.5 to 1.75 m/s.
Result
The model I trained finished 3rd in the first two events (run on the same track used for training) and 2nd in the final event (a more difficult, unseen track). This may indicate that my model was more generalizable and less overfit to the training track.
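For context, DeepRacer behavior is shaped through a Python reward function that AWS calls with a dictionary of track and vehicle state. The sketch below shows the general pattern (stay centered, keep speed up, penalize leaving the track); it is not my actual competition reward logic:

```python
def reward_function(params):
    """Toy DeepRacer-style reward: favor staying near the center line at speed."""
    if not params["all_wheels_on_track"]:
        return 1e-3  # near-zero reward for leaving the track

    # Reward proximity to the center line, scaled by track width.
    distance_ratio = params["distance_from_center"] / (params["track_width"] / 2.0)
    centering_reward = max(0.0, 1.0 - distance_ratio)

    # Small bonus for higher speed (reported in m/s, capped at 1.75 here).
    speed_bonus = params["speed"] / 1.75

    return float(centering_reward + 0.5 * speed_bonus)
```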
Problem
Existing software components needed to be tested to ensure that requirements were being met.
Task
My job was to translate software component requirements into Model-in-Loop unit tests in Simulink.
Result
Created Model-in-Loop test cases for ~50 requirements using Simulink Test. I was able to identify and resolve 7 issues at the model level, preventing them from reaching in-vehicle testing. To improve testing, I created a script to automatically run all test cases and generate a report for easy identification of issues. Also created documentation in Confluence to streamline future test case generation.
Problem
A co-worker had developed a new algorithm that needed to be integrated into the Simulink environment for the controls software.
Task
My job was to work with my co-worker to implement the algorithm using the existing signals, process them as needed, and resolve issues related to Simulink Coder C/C++ code generation.
Result
I identified and suggested modifications that allowed the algorithm to work within the limitations of Simulink Coder. Along the way I learned about embedded systems, their computing limitations, and how to improve software efficiency to avoid processing delays.
Problem
Vehicle testing data needed to be processed to evaluate performance against requirements and determine controls parameters to tune.
Task
My job was to modify existing data processing scripts to work with new test data.
Result
I was able to map the signals from new test data logs to work with the existing scripts and manually calculate intermediate signals that were not recorded. This allowed me to generate plots that could easily be compared to previous test runs. From these plots, I identified potential issues that could be investigated by the controls team.
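The mapping step is conceptually simple; below is a sketch of the idea in Python/pandas (the real scripts, log format, and signal names differ):

```python
import pandas as pd

# Hypothetical mapping from new-log signal names to the names the existing
# processing scripts expect.
SIGNAL_MAP = {
    "VehSpd_kph": "vehicle_speed_kph",
    "WhlTrqFL_Nm": "wheel_torque_fl_nm",
    "WhlTrqFR_Nm": "wheel_torque_fr_nm",
}

def prepare_log(path):
    """Load a new-format test log and reshape it for the legacy scripts."""
    log = pd.read_csv(path).rename(columns=SIGNAL_MAP)

    # Recompute an intermediate signal that the new logger does not record.
    log["front_axle_torque_nm"] = log["wheel_torque_fl_nm"] + log["wheel_torque_fr_nm"]
    return log
```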
Problem
Hughes sells mobile satellite terminals that provide internet and phone service to customers. Some customers who purchased terminals in bulk (~1,000 units) required non-default settings, and manually changing the settings on each terminal was a tedious and error-prone configuration process.
Task
My job was to develop a Windows 7 / 10 program that would automate the terminal configuration process.
Result
I designed and developed a GUI program in C# that detects which terminal model is connected and uses the available interface (REST API or FTP) to copy the configuration of the master terminal. This can then be used to automatically configure subsequent terminals.
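The core of the tool is a detect-and-fall-back pattern: try the terminal's REST interface first and drop to FTP if it is not available. A rough sketch of that idea in Python (the actual program is C#, and the endpoint, file name, and credentials here are placeholders rather than the real Hughes terminal interfaces):

```python
import ftplib
import requests

def fetch_master_config(host):
    """Grab the configuration from the master terminal, preferring its REST API
    and falling back to FTP for models that only expose a file interface."""
    try:
        resp = requests.get(f"http://{host}/api/config", timeout=5)  # placeholder path
        resp.raise_for_status()
        return resp.content
    except requests.RequestException:
        pass  # this terminal model exposes FTP only

    with ftplib.FTP(host, "user", "password") as ftp:  # placeholder credentials
        chunks = []
        ftp.retrbinary("RETR config.cfg", chunks.append)  # placeholder file name
        return b"".join(chunks)
```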
Customer Feedback
"Btw the tool you sent us is helping a lot. Thank you so much."
Problem
Hughes mobile terminals allow customers to have internet and phone access in remote areas via a satellite connection. Some customers may find value in a GPS feature built into the terminal: the terminal would send GPS coordinates to a server at intervals based on time, distance, and/or velocity, as configured by the end-user.
Task
My task was to investigate a way to minimize the data packet size of GPS coordinates and to update the terminal's software to send these data packets to a server configured by the end-user.
Result
This project was assigned to me after I completed my previous project, which had been intended to last the entire internship. I determined that MQTT-SN was a good option because it simply sends a data packet to a server without waiting for an acknowledgement, and its packet header is sufficiently small. I successfully implemented the packet structure and delivery to the server in the terminal software, which is written in C. However, in my remaining time at Hughes I was unable to get the server to recognize that it had received the packet: using Wireshark I confirmed that the packet arrived, but none of the open-source MQTT-SN brokers I tried worked.
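For reference, the fire-and-forget behavior described above corresponds to MQTT-SN's QoS -1 publish, which carries only seven bytes of header ahead of the payload for short messages. The sketch below frames such a packet in Python and sends it over UDP; the topic ID, coordinate encoding, and gateway address are made up, and the production implementation was in C:

```python
import socket
import struct

def build_publish(topic_id, payload):
    """Frame an MQTT-SN PUBLISH message (v1.2 layout: Length, MsgType=0x0C,
    Flags, TopicId, MsgId, Data). Flags 0x61 = QoS -1 with a pre-defined
    topic ID; MsgId is 0 because no acknowledgement is expected."""
    header = struct.pack("!BBBHH", 7 + len(payload), 0x0C, 0x61, topic_id, 0)
    return header + payload

# Hypothetical compact fix: latitude/longitude as scaled signed 32-bit integers.
lat, lon = 38.995, -77.386
payload = struct.pack("!ii", int(lat * 1e7), int(lon * 1e7))

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.sendto(build_publish(topic_id=1, payload=payload),
            ("gateway.example.com", 1884))  # placeholder gateway and port
```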
Problem
Rivian had an existing process for analyzing bill of materials data that needed to be further developed and maintained.
Task
My job was to communicate with project management to determine and implement improvements to mass and cost analysis.
Result
I improved the existing Excel macro to check for inconsistencies between parent and child items so that mass and cost were not double-counted. Additionally, I improved the user interface by adding macro buttons that allow quick filtering of pivot tables and highlight potential discrepancies.
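The double-counting check amounts to comparing each parent item's own mass against the sum of its children; a sketch of that idea in Python/pandas (the actual implementation was an Excel/VBA macro, and the columns are invented):

```python
import pandas as pd

# Hypothetical flattened BOM: each row is an item with its parent and mass.
bom = pd.DataFrame({
    "item":    ["assembly-1", "bracket", "bolt", "assembly-2", "panel"],
    "parent":  [None, "assembly-1", "assembly-1", None, "assembly-2"],
    "mass_kg": [1.60, 1.20, 0.35, 2.00, 2.00],
})

# Roll up child masses under each parent and compare to the parent's own entry.
child_totals = bom.groupby("parent")["mass_kg"].sum().rename("child_mass_kg")
check = bom.merge(child_totals, left_on="item", right_index=True, how="inner")
check["flag"] = (check["mass_kg"] - check["child_mass_kg"]).abs() > 0.01

print(check[["item", "mass_kg", "child_mass_kg", "flag"]])
```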
Problem
Requirements management in JAMA relied on a largely manual process to check for inconsistent relationships between the different types of requirements.
Task
I proposed an automated system that would enforce relationships set by the systems engineering team and highlight issues that required manual intervention.
Result
I designed and implemented an internal-use website that collected and displayed data from business systems like JAMA. It used JAMA's REST API to automatically pull requirements metadata and enforce the determined relational rules. Additionally, I integrated the bill of materials analysis macro into the website so users would receive the processed spreadsheet via email. The front end was written in HTML and the back-end scripts in Python using the Django web framework.
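A stripped-down sketch of the kind of rule check the back end performed, using Jama's REST API via the requests library; the instance URL, credentials, project ID, and even the exact endpoint paths should be read as assumptions rather than the production code:

```python
import requests

BASE = "https://example.jamacloud.com/rest/v1"  # placeholder instance
AUTH = ("api_user", "api_password")             # placeholder credentials

def get_items(project_id):
    """Pull requirement items for a project (pagination omitted for brevity)."""
    resp = requests.get(f"{BASE}/items", params={"project": project_id}, auth=AUTH)
    resp.raise_for_status()
    return resp.json()["data"]

def has_downstream(item_id):
    """Example rule: every requirement must trace to something downstream."""
    resp = requests.get(f"{BASE}/items/{item_id}/downstreamrelationships", auth=AUTH)
    resp.raise_for_status()
    return len(resp.json()["data"]) > 0

for item in get_items(project_id=123):
    if not has_downstream(item["id"]):
        print(f"Flag for manual review: item {item['id']}")
```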
Problem
Rivian's previous IT service desk in KACE Systems Management had issues tracking the status of tickets, resulting in an SLA-met percentage of ~75%.
Task
My job was to migrate the IT service desk to Jira and improve the ticket tracking process.
Result
I created custom automation rules to assign tickets based on request type and location, along with queues and reports to track response time, workload, request types, and other metrics. These efforts resulted in an SLA-met percentage of ~95%.
Problem
Rivian needed an organized way of tracking interfaces between components and managing requirements.
Task
My job was to create system architecture models to help distribute and track requirements from the vehicle level to component level. Additionally, these requirements needed to be managed and distributed via DOORS Next Generation.
Result
I spoke with managers of various vehicle subsystems to understand cross-system signal interfaces. These conversations were translated into system architecture models so teams could see which signals they were consuming and producing. To aid cross-team communication, I also created custom report templates in DOORS Next Generation.
Problem
Rivian had collected simulation test data and needed to see the effect of changing vehicle parameters on performance metrics, without re-running expensive tests.
Task
My job was to develop a tool that could translate the collected data into a simple user interface to analyze how to optimize vehicle performance.
Result
I designed and developed a GUI program in Java that provided the user with sliders for the input vehicle parameters. The tool interpolated the performance metrics from the collected data points using a multivariate function fit by a neural network, then output plots showing the effect of changing each input on the projected performance.
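The interpolation step is essentially a neural-network surrogate model fit to the collected simulation points. A sketch of the idea in Python/scikit-learn (the actual tool was written in Java, and the parameters, metric, and data below are invented):

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# Hypothetical simulation results: vehicle parameters -> a performance metric.
rng = np.random.default_rng(0)
params = rng.uniform([1500, 0.25], [2500, 0.40], size=(200, 2))  # mass_kg, drag_cd
metric = 3.0 + 0.002 * params[:, 0] + 10.0 * params[:, 1]        # stand-in 0-60 time

# Fit a surrogate so new parameter combinations can be evaluated instantly
# instead of re-running expensive simulations.
surrogate = make_pipeline(
    StandardScaler(),
    MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=2000, random_state=0),
)
surrogate.fit(params, metric)

# Sweep one "slider" (mass) while holding drag fixed, as the GUI's plots did.
mass_sweep = np.linspace(1500, 2500, 50)
grid = np.column_stack([mass_sweep, np.full_like(mass_sweep, 0.32)])
print(surrogate.predict(grid)[:5])
```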