For project 2, you can propose your own data sets that relate to your research interests, or simply use existing data sets from, say, Kaggle or other public repositories.
The approach to the analysis of these new data sets should follow to a large extent what you did in project 1. That is, whether you end up with a regression or a classification problem, you should employ at least two of the methods we have discussed: linear regression (including Ridge and Lasso), logistic regression, neural networks, support vector machines (not covered during the lectures), decision trees, random forests, bagging and boosting. If you wish to venture into convolutional neural networks, recurrent neural networks or other extensions of neural networks, feel free to do so. For project 2, you should feel free to write your own code or use the available functionality of Scikit-learn, Tensorflow, etc.
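To make the requirement of at least two methods concrete, here is a minimal Scikit-learn sketch. Everything in it is a placeholder: the synthetic feature matrix X and labels y stand in for whichever data set you choose, and logistic regression plus a random forest is just one possible pair among the methods listed above.

import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
from sklearn.linear_model import LogisticRegression
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score

# Synthetic placeholder data; replace X and y with your own data set
rng = np.random.default_rng(2021)
X = rng.normal(size=(500, 8))
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.5, size=500) > 0).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)

# Scale the inputs (important for logistic regression, harmless for trees)
scaler = StandardScaler().fit(X_train)
X_train_s, X_test_s = scaler.transform(X_train), scaler.transform(X_test)

# Method 1: logistic regression
logreg = LogisticRegression(max_iter=1000).fit(X_train_s, y_train)
# Method 2: random forest
forest = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_train_s, y_train)

for name, clf in [("Logistic regression", logreg), ("Random forest", forest)]:
    print(name, accuracy_score(y_test, clf.predict(X_test_s)))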
The performance measures you used and tested in project 1 should also be included, that is the R2-score, MSE, accuracy scores, cross-validation and/or bootstrap, etc., if these are relevant (a small code sketch is given below). If possible, you should link the data sets with existing research and analyses thereof. Scientific articles which have used machine learning algorithms to analyze the data are highly welcome. Perhaps you can improve on previous analyses and even publish a new article?
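The sketch below shows how these quantities can be obtained with Scikit-learn; the Ridge model and the synthetic regression data are again only stand-ins for your own choices, and the bootstrap loop is one simple variant among several you could use.

import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.metrics import mean_squared_error
from sklearn.linear_model import Ridge
from sklearn.utils import resample

# Synthetic placeholder regression data; replace with your own
rng = np.random.default_rng(2021)
X = rng.normal(size=(200, 3))
y = X @ np.array([1.0, -2.0, 0.5]) + rng.normal(scale=0.1, size=200)

model = Ridge(alpha=0.1)

# 5-fold cross-validated MSE and R2-score
mse_cv = -cross_val_score(model, X, y, cv=5, scoring="neg_mean_squared_error")
r2_cv = cross_val_score(model, X, y, cv=5, scoring="r2")
print("CV MSE:", mse_cv.mean(), "CV R2:", r2_cv.mean())

# A simple bootstrap estimate: refit on resampled data, evaluate on the full set
n_boot = 100
mse_boot = np.empty(n_boot)
for b in range(n_boot):
    X_b, y_b = resample(X, y, random_state=b)
    mse_boot[b] = mean_squared_error(y, model.fit(X_b, y_b).predict(X))
print("Bootstrap MSE:", mse_boot.mean())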
A critical assessment of the methods, together with perspectives and recommendations, is also something you need to include. All in all, the report should follow the same pattern as in project 1, with abstract, introduction, methods, code, results, conclusions, etc.
The pulsar classification data set is obtained from Kaggle, where it was posted by Pavan Raj. It offers an interesting classification problem. In the field of radio astronomy, pulsars are among the most studied phenomena in nature. But despite astronomers' long history with pulsars, little is actually known with certainty, and much of that uncertainty likely boils down to the difficulty of confirming pulsar observations. While pulsars radiate unmistakable radio signals, these are often lost in the sheer number of radio signals observed by radio telescopes every day. Furthermore, due to the uniqueness of pulsar radio signals, classifying pulsars in large data sets of radio observations has historically been very difficult, as human supervision has been a necessity. Recent advances in machine learning and data mining have made this task much simpler by introducing classification methods that are incredibly fast compared with human inspection.
The article by Bathes et al. can serve as a reference for your discussions.
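If you go for this data set, a possible starting point is sketched below. The file name pulsar_stars.csv and the assumption that the last column is the pulsar/non-pulsar label reflect how the Kaggle download is commonly organized, but you should adjust both to the actual file you obtain.

import pandas as pd
from sklearn.model_selection import train_test_split

# Hypothetical file name and column layout; adjust to your Kaggle download
df = pd.read_csv("pulsar_stars.csv")
print(df.head())                        # inspect the candidate features
print(df.iloc[:, -1].value_counts())    # the classes are typically quite imbalanced

X = df.iloc[:, :-1].values              # summary statistics of each candidate signal
y = df.iloc[:, -1].values               # 1 = pulsar, 0 = noise/interference

# Stratified split to preserve the class balance in the test set
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, stratify=y, random_state=0)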
Alternatively, if you would like to test the various algorithms on other data sets, please feel free to do so.
We also propose an alternative to the above: a project on using machine learning methods (mainly neural networks) to solve ordinary differential equations and partial differential equations, with a final twist on how to diagonalize a symmetric matrix with neural networks.
This is a field that has attracted large interest recently, spanning from studies of turbulence in fluid mechanics and meteorology to the solution of quantum mechanical systems. As reading background you can use the slides from week 42 and/or the textbook by Yadav et al.
Here follows a suggested setup for how to structure your report and analyze the data you have opted for.
The first part deals with structuring and reading the data, much along the same lines as done in project 1. Explain how the data are produced and place them in a proper context.
You need to include at least two central algorithms. Explain the basics of the methods you have chosen to work with. This would be your theory part.
Then describe your algorithm, its implementation and the tests you have performed.
Then present your results and findings, and link them with the existing literature.
Finally, present a critical assessment of the methods you have studied and link your results with the existing literature.
For this variant of project 2, we will assume that you have some background in the solution of partial differential equations using finite difference schemes. We will study the solution of the diffusion equation in one dimension, using both a standard explicit scheme and neural networks to solve the same equation.
For the explicit scheme, you can study for example chapter 10 of the lecture notes in Computational Physics or alternative sources.
The physical problem can be that of the temperature gradient in a rod of length \( L=1 \), with endpoints at \( x=0 \) and \( x=1 \). We are looking at the one-dimensional problem
$$ \begin{equation*} \frac{\partial^2 u(x,t)}{\partial x^2} =\frac{\partial u(x,t)}{\partial t}, \quad t> 0, \quad x\in [0,L], \end{equation*} $$
or
$$ \begin{equation*} u_{xx} = u_t, \end{equation*} $$
with initial conditions, i.e., the conditions at \( t=0 \),
$$ \begin{equation*} u(x,0)= \sin{(\pi x)} \hspace{0.5cm} 0 < x < L, \end{equation*} $$
with \( L=1 \) the length of the \( x \)-region of interest. The boundary conditions are
$$ \begin{equation*} u(0,t)= 0 \hspace{0.5cm} t \ge 0, \end{equation*} $$
and
$$ \begin{equation*} u(L,t)= 0 \hspace{0.5cm} t \ge 0. \end{equation*} $$
The function \( u(x,t) \) can be the temperature gradient of a rod. As time increases, the temperature approaches a linear variation with \( x \).
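For later comparisons it is useful to note that these initial and boundary conditions admit a simple closed-form solution; inserting a separable ansatz (or simply checking by differentiation) gives
$$ \begin{equation*} u(x,t) = e^{-\pi^2 t}\sin{(\pi x)}, \end{equation*} $$
which decays towards the stationary state as time increases.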
We will limit ourselves to the so-called explicit forward Euler algorithm, with the time derivative discretized by a forward difference and the spatial derivative by a centered difference, resulting in
$$ \begin{equation*} u_t\approx \frac{u(x,t+\Delta t)-u(x,t)}{\Delta t}=\frac{u(x_i,t_j+\Delta t)-u(x_i,t_j)}{\Delta t} \end{equation*} $$
and
$$ \begin{equation*} u_{xx}\approx \frac{u(x+\Delta x,t)-2u(x,t)+u(x-\Delta x,t)}{\Delta x^2}, \end{equation*} $$
or
$$ \begin{equation*} u_{xx}\approx \frac{u(x_i+\Delta x,t_j)-2u(x_i,t_j)+u(x_i-\Delta x,t_j)}{\Delta x^2}. \end{equation*} $$
Write down the algorithm and the equations you need to implement. Find also the analytical solution to the problem.
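Inserting these two approximations into \( u_{xx} = u_t \) and solving for the value at the new time level gives the explicit update formula, with \( u_{i,j} = u(x_i,t_j) \) and \( \alpha = \Delta t/\Delta x^2 \),
$$ \begin{equation*} u_{i,j+1} = u_{i,j} + \alpha\left(u_{i+1,j} - 2u_{i,j} + u_{i-1,j}\right), \end{equation*} $$
which is applied to all interior points \( i \) for each time step \( j \), with the boundary values kept fixed at zero.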
Implement the explicit scheme and perform tests of the solution for \( \Delta x=1/10 \) and \( \Delta x=1/100 \), using \( \Delta t \) as dictated by the stability limit of the explicit scheme. The stability criterion for the explicit scheme requires that \( \Delta t/\Delta x^2 \leq 1/2 \).
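A minimal sketch of such an implementation, run at the stability limit, could look as follows; the two snapshot times in the test loop are arbitrary placeholders for the \( t_1 \) and \( t_2 \) discussed next, and the comparison uses the analytical solution quoted above.

import numpy as np

def explicit_diffusion(dx, t_final, L=1.0):
    # Explicit forward Euler scheme for u_t = u_xx on [0, L] with
    # u(x, 0) = sin(pi x) and u(0, t) = u(L, t) = 0.
    alpha = 0.5                          # run at the stability limit dt/dx^2 = 1/2
    dt = alpha * dx**2
    nx = int(round(L / dx)) + 1
    x = np.linspace(0.0, L, nx)
    u = np.sin(np.pi * x)                # initial condition
    for _ in range(int(round(t_final / dt))):
        u[1:-1] = u[1:-1] + alpha * (u[2:] - 2.0 * u[1:-1] + u[:-2])
        u[0] = u[-1] = 0.0               # enforce the boundary conditions
    return x, u

# Compare with u(x,t) = exp(-pi^2 t) sin(pi x) for two step sizes and two times
for dx in (1.0 / 10, 1.0 / 100):
    for t_snapshot in (0.05, 0.5):
        x, u = explicit_diffusion(dx, t_snapshot)
        exact = np.exp(-np.pi**2 * t_snapshot) * np.sin(np.pi * x)
        print(f"dx={dx:.2f}, t={t_snapshot}: max |error| = {np.max(np.abs(u - exact)):.2e}")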
Study the solutions at two time points \( t_1 \) and \( t_2 \) where \( u(x,t_1) \) is smooth but still significantly curved and \( u(x,t_2) \) is almost linear, close to the stationary state.
Study now the lecture notes on solving ODEs and PDEs with neural networks, and use either your own neural network code from project 2 or the functionality of tensorflow/keras to solve the same equation as in part b). Discuss your results and compare them with the standard explicit scheme. Include also the analytical solution and compare with that.
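One common way of setting this up, close in spirit to the lecture notes, is to define a trial solution \( g(x,t) = (1-t)\sin{(\pi x)} + x(1-x)\,t\,N(x,t) \), which satisfies the initial and boundary conditions by construction, and then train the network \( N \) so that the residual \( g_t - g_{xx} \) vanishes at a set of collocation points. The tensorflow sketch below is only a skeleton under these assumptions: the network size, learning rate, number of epochs and grid of collocation points are arbitrary choices, not a tuned solver.

import numpy as np
import tensorflow as tf

tf.random.set_seed(2021)

# Collocation points in (x, t) on [0, 1] x [0, 1]
x = np.linspace(0.0, 1.0, 20)
t = np.linspace(0.0, 1.0, 20)
Xg, Tg = np.meshgrid(x, t)
x_col = tf.constant(Xg.reshape(-1, 1), dtype=tf.float32)
t_col = tf.constant(Tg.reshape(-1, 1), dtype=tf.float32)

# Small feed-forward network N(x, t); size and depth are arbitrary here
model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(2,)),
    tf.keras.layers.Dense(20, activation="tanh"),
    tf.keras.layers.Dense(20, activation="tanh"),
    tf.keras.layers.Dense(1),
])

def trial_solution(x, t):
    # g(x,t) = (1-t) sin(pi x) + x (1-x) t N(x,t) fulfills u(x,0) = sin(pi x)
    # and u(0,t) = u(1,t) = 0 for any network output N
    N = model(tf.concat([x, t], axis=1))
    return (1.0 - t) * tf.sin(np.pi * x) + x * (1.0 - x) * t * N

def pde_loss(x, t):
    # Mean squared residual of g_t - g_xx at the collocation points
    with tf.GradientTape(persistent=True) as outer:
        outer.watch([x, t])
        with tf.GradientTape() as inner:
            inner.watch(x)
            g = trial_solution(x, t)
        g_x = inner.gradient(g, x)
    g_t = outer.gradient(g, t)
    g_xx = outer.gradient(g_x, x)
    return tf.reduce_mean(tf.square(g_t - g_xx))

optimizer = tf.keras.optimizers.Adam(learning_rate=1e-2)
for epoch in range(2000):
    with tf.GradientTape() as tape:
        loss = pde_loss(x_col, t_col)
    grads = tape.gradient(loss, model.trainable_variables)
    optimizer.apply_gradients(zip(grads, model.trainable_variables))
    if epoch % 500 == 0:
        print(f"epoch {epoch:5d}  loss {loss.numpy():.3e}")

# Compare with the analytical solution u(x,t) = exp(-pi^2 t) sin(pi x) at one time
t_eval = 0.1
x_eval = tf.constant(x.reshape(-1, 1), dtype=tf.float32)
u_nn = trial_solution(x_eval, t_eval * tf.ones_like(x_eval)).numpy().ravel()
u_exact = np.exp(-np.pi**2 * t_eval) * np.sin(np.pi * x)
print("max deviation from the exact solution at t=0.1:", np.max(np.abs(u_nn - u_exact)))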
Finally, present a critical assessment of the methods you have studied and discuss the potential for solving differential equations with machine learning methods.
Here follows a brief recipe and recommendation on how to write a report for each project.
The preferred format for the report is a PDF file. You can also use DOC or postscript formats, or hand in an ipython/Jupyter notebook file. As programming language we prefer that you choose among C/C++, Fortran2008 or Python. The following prescription should be followed when preparing the report:
Finally, we encourage you to collaborate. Optimal working groups consist of 2-3 students. You can then hand in a common report.
If you have Python installed (we recommend Python3) and you feel pretty familiar with installing different packages, we recommend that you install the needed Python packages via pip. For Python3, replace pip with pip3.
See below for a discussion of tensorflow and scikit-learn.
For OSX users we also recommend, after having installed Xcode, to install brew. Brew allows for a seamless installation of additional software via, for example, the brew install command.
For Linux users, with the variety of distributions like for example the widely popular Ubuntu distribution, you can use pip as well, and Python itself can be installed through the distribution's package manager.
If you don't want to install various Python packages with their dependencies separately, we recommend two widely used distributions which set up all relevant dependencies for Python, namely Anaconda and Enthought Canopy.
Popular software packages written in Python for ML are, among others, Scikit-learn and Tensorflow.
These are all freely available at their respective GitHub sites. They encompass communities of developers numbering in the thousands or more, and the number of code developers and contributors keeps increasing.