Semi-Supervised Learning Using a Sparse Autoencoder
Goals:
 To implement a sparse autoencoder for the MNIST dataset and plot a mosaic of the first 100 rows of the weight matrix W1 for different sparsities p = [0.01, 0.1, 0.5, 0.8].
 Using the same architecture, train a model with sparsity = 0.1 on 1000 images from the MNIST dataset (100 for each digit). Train a softmax layer on the encoder's output representation of the 1000 labeled points. Then plug the original data, encoder, and softmax layers together and fine-tune the model.
 Compare performance with a generic neural network of the same layers, with randomly initialized weights, evaluated on the remainder of the dataset, which is treated as unlabelled.
Implementation:
 The architecture and cost function are as follows:
 The model is developed using TensorFlow. The weight-matrix mosaic for a given sparsity value is as follows:

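The sparsity term in the cost function is typically the KL divergence between the target sparsity p and each hidden unit's average activation over a batch. A minimal NumPy sketch of this penalty (illustrative names; the project's TensorFlow graph computes the equivalent term):

```python
import numpy as np

def kl_sparsity_penalty(hidden_activations, p=0.1):
    """KL divergence between the target sparsity p and the mean
    activation p_hat of each hidden unit over a batch."""
    p_hat = np.mean(hidden_activations, axis=0)      # average activation per unit
    p_hat = np.clip(p_hat, 1e-8, 1 - 1e-8)           # avoid log(0)
    kl = p * np.log(p / p_hat) + (1 - p) * np.log((1 - p) / (1 - p_hat))
    return np.sum(kl)

# Total autoencoder cost is then reconstruction error plus a
# weighted sparsity term: J = MSE + beta * KL.
```

The penalty is zero when every unit's mean activation equals p and grows as the activations drift away from it, which is what pushes the learned features toward sparse codes.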
For semi-supervised learning, the same TensorFlow model was used for the initial training. The generic neural-network model, the training of the encoder-softmax model, and the fine-tuning of the input-encoder-softmax model were subsequently implemented in Keras.

Since the previously generated weights had to be reused during fine-tuning, a custom initializer class was implemented to pass them to the Keras neural-network layers.
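Such an initializer can be sketched as follows; the class and variable names here are illustrative, not the project's actual code:

```python
import numpy as np
import tensorflow as tf

class PretrainedInitializer(tf.keras.initializers.Initializer):
    """Initialize a Keras layer with weights taken from a
    previously trained model instead of random values."""
    def __init__(self, pretrained):
        self.pretrained = np.asarray(pretrained, dtype=np.float32)

    def __call__(self, shape, dtype=None):
        # The requested shape must match the pretrained weights.
        assert tuple(shape) == self.pretrained.shape, "shape mismatch"
        return tf.constant(self.pretrained, dtype=dtype)

# Usage: reuse pretrained encoder weights when building a layer.
W1 = np.random.randn(784, 200).astype(np.float32)  # stands in for trained weights
layer = tf.keras.layers.Dense(
    200, activation="sigmoid",
    kernel_initializer=PretrainedInitializer(W1))
```

Because the initializer only sets the starting point, the layer remains trainable, which is exactly what fine-tuning needs.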
Output:
 Weight-matrix and regenerated-input mosaics for different sparsity values
 Performance comparison between a generic neural network and the fine-tuned model obtained using the encoder
Formation Control of UAVs with Fourth-Order Flight Dynamics and Model Predictive Control
Goals:
To analyze and simulate consensus-based and leader-follower formation control for a multi-UAV system with fourth-order flight dynamics, and to verify the robustness of the algorithm proposed in Y. Kuriki and T. Namerikawa, "Formation Control of UAVs with a Fourth-Order Flight Dynamics," SICE Journal of Control, Measurement, and System Integration, Vol. 7, No. 2, pp. 74–81, 2014, by modifying parameters such as the sampling time, the UAV connection structure, and the weights on state variables (position, velocity). A more robust MPC-based approach to formation flight was also implemented, based on Y. Kuriki and T. Namerikawa, "Formation Control with Collision Avoidance for a Multi-UAV System Using Decentralized MPC and Consensus-Based Control."
Implementation:
 The control law and MPC proposed in the papers were implemented in Matlab. A custom cost function was developed for the MPC using the Matlab MPC Toolbox.
 The effectiveness and robustness of the algorithm for different connection structures among UAVs were tested, and an improvement to the algorithm was proposed and implemented to handle certain failure scenarios.
 UAV convergence was tested for different sampling times and beta (weight) values, and the results were published in the final report.
 The software implementation was enhanced to scale the simulation up to more UAVs by dynamically generating new positions and desired locations for each UAV added. This simple framework has been open-sourced.
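As an illustration of the consensus idea behind the control law (the project itself was implemented in Matlab; this Python sketch uses a simplified double-integrator model and illustrative parameters, not the paper's fourth-order dynamics):

```python
import numpy as np

def consensus_step(x, v, d, A, beta=1.0, dt=0.01):
    """One Euler step of consensus-based formation control for
    double-integrator agents: each UAV is driven toward its
    neighbors' positions (offset by the desired formation d)
    while matching their velocities (weighted by beta)."""
    n = len(x)
    u = np.zeros_like(x)
    for i in range(n):
        for j in range(n):
            if A[i, j]:
                u[i] -= (x[i] - d[i]) - (x[j] - d[j]) + beta * (v[i] - v[j])
    return x + dt * v, v + dt * u

# Four UAVs in a line-graph network converging to offsets 0..3
A = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]])
x = np.array([3.0, 0.5, 4.0, 1.0])   # initial positions (1-D for simplicity)
v = np.zeros(4)
d = np.array([0.0, 1.0, 2.0, 3.0])   # desired formation offsets
for _ in range(20000):
    x, v = consensus_step(x, v, d, A)
```

After enough steps the formation errors x_i - d_i agree across all UAVs, i.e. the desired formation is reached; changing the adjacency matrix A, the sampling time dt, or the velocity weight beta is exactly the kind of parameter modification the robustness study performed.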
Output:
 Control law algorithm enhancement
 Results that depict the robustness of the algorithm
 Software implementation framework.
Analysis of the Proximal Policy Optimization Algorithm Using OpenAI Gym
Goals:
 Comparing the algorithm's performance with other baseline techniques
 Exploring performance based on input-data preprocessing, different neural-network architectures, and CPU vs. GPU training
 Modifying different hyperparameters to analyze their impact on the overall performance of the algorithm
Implementation:

The model is developed using TensorFlow, and input data is collected from OpenAI Gym's MsPacman environment.

The performance of different neural-network architectures is explored.

GPU-based training was done using Google Colaboratory.

Reference: OpenAI Gym Baselines
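At the heart of PPO is the clipped surrogate objective; a minimal NumPy sketch of that term (an illustration, not the Baselines implementation):

```python
import numpy as np

def ppo_clip_objective(ratio, advantage, eps=0.2):
    """PPO clipped surrogate objective: take the elementwise
    minimum of the unclipped and clipped probability-ratio
    terms, which removes the incentive for overly large policy
    updates in a single step."""
    unclipped = ratio * advantage
    clipped = np.clip(ratio, 1.0 - eps, 1.0 + eps) * advantage
    return np.minimum(unclipped, clipped).mean()
```

During training this quantity is maximized over minibatches of collected trajectories; the clipping range eps is one of the hyperparameters whose impact the analysis above varies.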
Output:
 Different models based on the modified hyperparameters, with CPU and GPU training
 Performance-comparison plots (rewards and loss function)
Neural Network Classifier for the MNIST Dataset
Goals:
To implement a two-layer neural network for a binary classifier and a multi-layer neural network for a multiclass classifier, and to compare performance with the k-Nearest Neighbour approach for the same dataset size.
Implementation:
 The two-layer network has one hidden layer (dimension = 500) for binary classification. The multi-layer neural-network program can create and train a multi-layer network based on command-line arguments.
 Two approaches to k-Nearest Neighbour were tried: using the default Python package, and building the algorithm from scratch.
 Since the full MNIST dataset of 60,000 images is too large for batch gradient descent, training is done with 6,000 samples and testing with 1,000 samples.
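The from-scratch k-Nearest Neighbour classifier can be sketched as follows (an illustrative version, not the project's exact code):

```python
import numpy as np

def knn_predict(X_train, y_train, X_test, k=5):
    """Classify each test point by majority vote among the
    labels of its k nearest training points (Euclidean)."""
    preds = []
    for x in X_test:
        dists = np.linalg.norm(X_train - x, axis=1)  # distance to every training point
        nearest = np.argsort(dists)[:k]              # indices of the k closest
        labels, counts = np.unique(y_train[nearest], return_counts=True)
        preds.append(labels[np.argmax(counts)])      # majority vote
    return np.array(preds)
```

Sweeping k and recording the test error gives the k-value vs. error plot listed in the output below.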
Output:
 The training and testing accuracies.
 Plot of training error vs. iterations
 Plot of k-value vs. error (for kNN)