Wednesday, April 9, 2014

MOOC Review: Image and Video Processing: From Mars to Hollywood with a Stop at the Hospital

by Kuei-Ti Lu

Course Name:
Image and Video Processing: From Mars to Hollywood with a Stop at the Hospital
Link:
https://www.coursera.org/course/images
Institution:
Duke University
Instructor:
Guillermo Sapiro

-----

I took a digital image processing course during my undergraduate studies before taking this course, which might affect how I view it. Still, I learned more about partial differential equations (PDEs) in this course, a topic I had barely touched before.

Video Lectures:

The lengths of the videos varied. The longer ones often included reminders of suggested break times, which were good for those who had little idea when to take a break.

Most materials were about the basic concepts, but in the later part of the course, more math was introduced. The math was more like optional material, considering how little of it appeared in the quizzes. The materials covered spatial processing, image restoration, image segmentation, and image/video compression, which are the main topics of a typical image processing course. (Frequency domain processing was almost not mentioned at all.)

In addition to these common topics, geometric PDEs, image/video inpainting, and sparse modeling were touched on. These required advanced math, but a student could skip these topics and still obtain the certificate for the course. However, those who had no trouble with math might find these materials good introductions to more advanced areas of study.

At the end of the course was a brief introduction to medical imaging, which should be an interesting topic. Little technical material was covered for this topic (which is reasonable due to the nature of the field).

Quizzes:

The quiz problems were on the basic concepts covered in the lectures, not the technical details. Multiple attempts could be made without penalty, and a student should be able to complete them without much technical background. The exceptions were the problems on PDEs, which required at least some ability in qualitative analysis.

MATLAB:

Inside the quizzes, one could find optional programming exercises. These did not count toward the grade but served as good practice. One could obtain a special edition of MATLAB for free for a limited period of time to work on these exercises (or simply to play around with MATLAB for any legal purpose). These resources should be good for those who want to enhance their MATLAB skills.

Difficulty:

Except for the PDEs, the course should be easy unless one has almost no technical background.

Overall:

The course could serve as a survey of the digital image/video processing field. Students who want to learn more can also find resources through this course.

Miscellaneous:

One of the suggested textbooks was the one I used when I took the digital image processing course during my undergraduate studies. I read part of the textbook at that time and read more while taking this course. The textbook provides much detail for the first half of the course and also covers some topics not included in this course.

Monday, March 31, 2014

MOOC Review: Computational Neuroscience

by Kuei-Ti Lu

Course Name: 
Computational Neuroscience
Link: 
https://www.coursera.org/course/compneuro
Institution: 
University of Washington
Instructors: 
Rajesh P. N. Rao, Adrienne Fairhall

-----

Videos: 

The total length of the videos per week was typical for a MOOC, but the breadth was large because the subject is interdisciplinary. The first week was an introduction to neurobiology, with biology and chemistry as the main focus. The following weeks were mainly on mathematical/statistical models in neuroscience: some involved signal processing, some information theory, some machine learning, and so on. Week 5 involved modeling neurons using circuits, which are mainly used in electrical engineering.

Because of the variety of the areas covered, one must have a solid mathematical foundation to understand the materials. Derivations beyond the prerequisites were done in the lectures, and only simple models were covered in the course.

The instructors spoke clearly, and slides as well as subtitles were provided. Following the lectures should not be a big issue for most students (provided that they have met the prerequisites).

Homework Quizzes: 

The quizzes followed the main topics in the videos. A few problems required some MATLAB programming. For most such problems, one had to read the comments in the provided code to know what to write and what variable formats to follow. Other than that, the programming part should not be difficult.

Supplementary Materials: 

Additional materials about the topics could be found (in the form of texts, videos, and so on). Moreover, math tutorials were provided by a community TA, so people who had to learn or review the math used in the course had resources. MATLAB tutorials could also be found.

Difficulty: 

The difficulty might vary a lot for people of different backgrounds. For those from math, statistics, the sciences, and engineering, most materials should not pose problems. For those from biology and chemistry, this course might be quite challenging because it uses math not typically covered in most undergraduate biology and chemistry programs.

Overall: 

The course was a survey of computational neuroscience. The breadth of topics was large, so those who would like to learn in more depth have to find other resources. Nevertheless, this course should serve as a good introduction. It might also interest those who like applying math to different areas.

Saturday, March 22, 2014

MOOC Review: Introduction to Databases

by Kuei-Ti Lu

Course Name:
Introduction to Databases
Link:
https://class.stanford.edu/courses/Engineering/db/2014_1/about
Institution:
Stanford University
Instructor:
Jennifer Widom

-----

Before taking this course, I had learned SQL, and I took MongoDB for Java Developers while taking this course. This review might be influenced by that background.

Videos:

The lectures covered a broad range of topics related to databases. Some of them involved languages for dealing with data, including SQL. The professor mainly used examples to teach the languages and their features. This teaching method should be good for those who learn better by example. For those who, like me, prefer learning by reading, the subtitles should help. Code was provided, along with the main points about each piece of code. Therefore, it was easy to refer to the code (instead of straining one's eyes on code in the videos).

When explaining the examples, the professor introduced features of the languages and sometimes their odd corner cases. The flow of the examples was well organized.

The professor's pronunciation was clear but perhaps a little fast for those not familiar with English. Still, it should be easy to follow what she said. (For those who need help with English, check the subtitles.)

Some areas change rapidly with time, so the professor covered only the classic parts that should not change much. In the videos, she also noted which areas might change. This helped students stay aware of possible evolution in the database field.

Quizzes:

The quizzes covered the concepts from the videos. The questions usually changed with each attempt. Most of them required combining multiple concepts but were not beyond the lectures' level.

Exercises:

The exercises required using the languages covered in the lectures. There were core sets and challenge sets. The core sets required applying the main concepts covered in the lectures, while the challenge sets required applying the advanced concepts covered plus extra knowledge (which could be found online). They were not difficult, given the difficulty of the lectures. However, one had to pay attention to the type of environment and the features supported (such as Postgres vs. MySQL). Such differences were covered in the lectures, which was good.

Exams:

The exam questions were like those in the quizzes plus multiple-choice versions of the exercises. The time allotted was more than needed (for a typical student), which was good for those who preferred working through problems slowly.

Difficulty:

Although many topics were covered, the course was not difficult given the materials in the lectures. Little related background was needed, but one definitely had to think with clear logic.

Miscellaneous:

There were staff and other students helping on the discussion forums - help was available.

Optional readings could be found - good for the reading-type learners.

Overall:

Videos and online assessments were combined well in this course, supplementing each other. The level of detail in the materials was just right, and the course was self-contained. Extra learning materials, such as extra problems and readings, could be found on the website. The quality of the course, in my opinion, was high.

Sunday, March 16, 2014

MOOC Review: Cryptography I

Course Name:
Cryptography I
Link:
https://www.coursera.org/course/crypto
Institution:
Stanford University
Instructor:
Dan Boneh

-----

Some Personal Experience:

I have taken this course twice; the first time was in 2012, when I was still working toward my Bachelor's degree. I was busy with schoolwork and had little time to work on the course, but I still tried to at least learn the concepts. I did not complete any programming assignment except one and did not take the final. The result is:

The above is the certificate for that session. Although the result met my personal goal at the time, it can clearly be seen that there was room for improvement.

As a result, I took the course again, and the result looks better than the one from 2012. This time, I completed all programming assignments and took the final. My percentage for the course is now 91.7%, with distinction (the new certificate can be found at https://drive.google.com/folderview?id=0B53jbj-6Ew8QNEtMbDJUbUdfSlk&usp=sharing).

Lectures:

The lectures for the two sessions I have taken are similar. The materials covered are similar. Ciphers (including some stream ciphers and block ciphers) and attacks were introduced. How to and how not to use the ciphers, as well as why, were covered. Some message integrity mechanisms were introduced, which were combined with block ciphers for authenticated encryption. Attacks on message integrity and incorrect use of authenticated encryption mechanisms were covered.

The flow for the ciphers and attacks was quite similar throughout the course, so it was easy to get used to the lecture flow. The part furthest from this flow was the introduction to the number theory used for public key encryption, which naturally could not follow the mechanism-attack pattern (because number theory is not a cipher at all).

The most difficult part, in my opinion, was the math involved, especially the number theory, but math is almost always cute in my opinion, so I did not have any issue with it throughout the course.

As for the professor, his pronunciation was clear, so with the captions, there was no issue for me to get what he said.

There were lecture slides to view online or download, which is convenient for visual learners.

Problem Sets and Final:

The problem sets included both multiple-choice and non-multiple-choice problems. Usually, math was used to examine the security of certain uses of ciphers. The problems usually required thinking (instead of simply memorizing and picking). However, since they were based on the lectures, they were not difficult to answer if one understood the materials well.

The types of problems in the final were similar to those in the problem sets.

Programming Assignments:

They extended from the lectures. Most were about (virtually) attacking something known to be broken in terms of security. One could choose whatever language one liked for the assignments. For the number theory programming assignment, the tasks were some special algorithms. The programming assignments, like the problem sets, were not difficult if one understood the materials covered. The instructions sometimes contained useful information. The trickiest part, in my opinion, was familiarizing oneself with the libraries of the chosen language. (DO NOT implement your own cipher.) However, by using the Internet to find documentation and other articles or discussions, one should be able to use those libraries well enough to complete the assignments.

For the number theory programming assignment, I used Java to take advantage of the BigInteger class and encountered one issue, which I solved (I wrote about this in another article: http://csdspnest.blogspot.com/2014/03/bigintegersqrt.html).

Discussion Forums:

As in other MOOCs, the discussion forums were often helpful. Other students shared useful information, such as proofs of certain probabilities that the professor did not prove in the lectures.

Summary:

The course was not very different from other MOOCs in terms of teaching methods. The video quality and quality of materials were fine.

(I do not recommend this course to anyone who does not have any programming knowledge unless the person does not plan to complete the programming assignments.)

Friday, March 14, 2014

Compute the Inverse Gamma PDF, CDF, and iCDF in MATLAB Using Built-In Functions for the Gamma Distribution

I previously wrote about computing the inverse Gamma PDF and CDF in MATLAB using the known formulas. For something I am working on, I have to compute the inverse CDF (iCDF) of the inverse Gamma distribution, which is not an easy task to do directly. As a result, I thought about using the built-in gampdf, gamcdf, and gaminv functions to compute the inverse Gamma PDF, CDF, and iCDF.

PDF


Given a random variable (RV) Y ~ Gamma(a, 1/b), define the transformation
$g(Y) = 1/Y$
and let
$X = g(Y) = 1/Y$.

Based on the definitions, X ~ Inv-Gamma(a, b). Because the transformation is invertible, the PDF for X, denoted by
$P_{X}(x)$ 
can be represented using the PDF for Y, denoted by 
$P_{Y}(y)$: 
$P_{X}(x) = \frac1{|dx/dy|}P_{Y}(y)\large|_{y = g^{-1}(x)}$. 
This leads to
$P_{X}(x) = y^2P_{Y}(y)\large|_{y = g^{-1}(x)} = P_{Y}(\frac1x)/x^2$. 

Therefore, in MATLAB, the inverse Gamma PDF at x for a shape parameter a and scale parameter b can be computed using gampdf(y,a,1/b)./(x.^2) with y = 1./x, that is, gampdf(1./x,a,1/b)./(x.^2).

A function can be created for this so that similar code does not have to be rewritten every time the PDF is computed:

function [ Y ] = inversegampdfgam( X,A,B )
%inversegampdfgam Inverse gamma probability density function.
%   Y = inversegampdfgam(X,A,B) returns the inverse gamma probability
%   density function with shape and scale parameters A and B, respectively,
%   at the values in X.

%Y = B^A/gamma(A)*X.^(-A-1).*exp(-B./X);
Y = gampdf(1./X,A,1/B)./(X.^2);

end

Compare the results with those from computing the inverse Gamma PDF directly by finding the absolute difference:


In the figure above, the vertical axis represents the absolute difference between the results obtained using the formula and those obtained using gampdf. It can be seen that the difference is small (possibly resulting from estimation) and negligible.
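This relationship can also be sanity-checked outside MATLAB. Below is a small pure-Python sketch (the function names are mine, chosen for illustration) comparing the direct inverse Gamma PDF formula with the transformed Gamma PDF:

```python
from math import gamma, exp

def gamma_pdf(y, a, theta):
    """PDF of Gamma(shape a, scale theta) at y > 0."""
    return y ** (a - 1) * exp(-y / theta) / (gamma(a) * theta ** a)

def invgamma_pdf_direct(x, a, b):
    """Direct formula: b^a / Gamma(a) * x^(-a-1) * exp(-b/x)."""
    return b ** a / gamma(a) * x ** (-a - 1) * exp(-b / x)

def invgamma_pdf_transformed(x, a, b):
    """P_X(x) = P_Y(1/x) / x^2 with Y ~ Gamma(a, 1/b)."""
    return gamma_pdf(1.0 / x, a, 1.0 / b) / x ** 2

a, b = 3.0, 2.0
for x in [0.25, 0.5, 1.0, 2.0, 4.0]:
    d = invgamma_pdf_direct(x, a, b)
    t = invgamma_pdf_transformed(x, a, b)
    assert abs(d - t) < 1e-12 * max(1.0, d)  # agreement up to rounding
```

The two computations agree to floating-point rounding, mirroring the small absolute differences seen in the MATLAB comparison above.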

CDF


Given a random variable (RV) Y ~ Gamma(a, 1/b), let
$X = 1/Y$.
Based on the definitions, X ~ Inv-Gamma(a, b).

Let y = 1/x. For X ≤ x, it follows that 1/Y ≤ 1/y, which leads to Y ≥ y. As a result, the probability of X ≤ x is equal to that of Y ≥ y. Therefore, the CDF of an inverse Gamma distribution can be computed using the complementary CDF (one minus the CDF) of a Gamma distribution. In MATLAB, the inverse Gamma CDF at x for a shape parameter a and scale parameter b can then be computed using 1 - gamcdf(y,a,1/b) with y = 1./x, that is, 1 - gamcdf(1./x,a,1/b).

A function can be created for this so that similar code does not have to be rewritten every time the CDF is computed:

function [ P ] = inversegamcdfgam( X,A,B )
%inversegamcdfgam Inverse gamma cumulative distribution function.
%   Y = inversegamcdfgam(X,A,B) returns the inverse gamma cumulative
%   distribution function with shape and scale parameters A and B,
%   respectively, at the values in X. The size of P is the common size of
%   the input arguments. A scalar input functions is a constant matrix of
%   the same size as the other inputs.

%P = gammainc(B./X,A,'upper');
P = 1 - gamcdf(1./X,A,1/B);

end

Compare the results with those from computing the inverse Gamma CDF directly by finding the absolute difference:


In the figure above, the vertical axis represents the absolute difference between the results obtained using the formula and those obtained using gamcdf. It can be seen that the difference is small (possibly resulting from estimation) and negligible.
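The identity F_X(x) = 1 - F_Y(1/x) can likewise be checked numerically outside MATLAB. The pure-Python sketch below (helper names and integration bounds are my own rough choices) integrates both PDFs with the trapezoidal rule and compares the two probabilities:

```python
from math import gamma, exp

def gamma_pdf(y, a, theta):
    """PDF of Gamma(shape a, scale theta) at y > 0."""
    return y ** (a - 1) * exp(-y / theta) / (gamma(a) * theta ** a)

def invgamma_pdf(x, a, b):
    """Inverse Gamma PDF: b^a / Gamma(a) * x^(-a-1) * exp(-b/x)."""
    return b ** a / gamma(a) * x ** (-a - 1) * exp(-b / x)

def trapz(f, lo, hi, n=100000):
    """Simple trapezoidal rule for a smooth integrand on [lo, hi]."""
    h = (hi - lo) / n
    s = 0.5 * (f(lo) + f(hi))
    for i in range(1, n):
        s += f(lo + i * h)
    return s * h

a, b, x = 3.0, 2.0, 1.5
# F_X(x): integrate the inverse Gamma PDF from (near) 0 up to x
Fx = trapz(lambda t: invgamma_pdf(t, a, b), 1e-9, x)
# 1 - F_Y(1/x): integrate the Gamma(a, 1/b) PDF from 1/x upward
surv = trapz(lambda t: gamma_pdf(t, a, 1.0 / b), 1.0 / x, 40.0)
assert abs(Fx - surv) < 1e-6  # the two probabilities agree
```

Truncating the upper integral at 40 is an assumption that works for these parameters, since the Gamma tail beyond that point is negligible.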

iCDF


Based on the results for the CDF, the iCDF for the inverse Gamma distribution can be computed using the iCDF for the Gamma distribution.

Given a random variable (RV) Y ~ Gamma(a, 1/b), let
$X = g(Y) = 1/Y$.
Based on the definitions, X ~ Inv-Gamma(a, b).

Denote the CDF for X by F(x|a, b) and that for Y by F(y|a, 1/b), where a is the shape parameter and b the scale parameter for X, and y = 1/x. From the CDF result above, it is known that
$F(x|a, b) = 1 - F(y|a, 1/b)$. 

Based on this, given the CDF value p = F(x|a, b), we have F(y|a, 1/b) = 1 - p, so y = gaminv(1-p,a,1/b) and x = 1/y. Therefore, given the CDF value p, shape parameter a, and scale parameter b, the corresponding inverse Gamma RV x can be found in MATLAB using 1./gaminv(1-p,a,1/b).

A function can be created for this so that similar code does not have to be rewritten every time the iCDF is computed:

function [ X ] = inversegaminv( P,A,B )
%inversegaminv Inverse of the inverse gamma cumulative distribution
%function (cdf).
%   X = inversegaminv(P,A,B) returns the inverse cdf for the inverse gamma
%   distribution with shape A and scale B, evaluated at the values in P.
%   The size of X is the common size of the input arguments. A scalar input
%   functions is a constant matrix of the same size as the other inputs.

X = 1./gaminv(1 - P,A,1./B);

end

Given a set of CDF values, compute the corresponding iCDF results and compare the CDF values of those results with the original ones:


In the figure above, the vertical axis represents the absolute difference between the given CDF values and the CDF values recomputed from the corresponding iCDF results. It can be seen that the difference is small (possibly resulting from numerical error) and negligible.
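The inversion can also be exercised outside MATLAB. In the pure-Python sketch below, an integer shape parameter is assumed (so the regularized upper incomplete gamma has a closed form), and a bisection search stands in for MATLAB's gaminv:

```python
from math import exp, factorial

A, B = 3, 2.0  # integer shape so Q(a, z) has a closed form

def upper_reg_gamma(a, z):
    """Q(a, z) = e^(-z) * sum_{k<a} z^k / k!, valid for integer a."""
    return exp(-z) * sum(z ** k / factorial(k) for k in range(a))

def invgamma_cdf(x):
    """F_X(x) = Q(a, b/x) for X ~ Inv-Gamma(a, b)."""
    return upper_reg_gamma(A, B / x)

def gamma_cdf(y):
    """F_Y(y) for Y ~ Gamma(a, 1/b)."""
    return 1.0 - upper_reg_gamma(A, B * y)

def gamma_icdf(q, lo=1e-9, hi=100.0):
    """Invert the Gamma CDF by bisection (a stand-in for gaminv)."""
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if gamma_cdf(mid) < q:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

def invgamma_icdf(p):
    # x = 1 / gaminv(1 - p, a, 1/b), mirroring the MATLAB one-liner
    return 1.0 / gamma_icdf(1.0 - p)

for p in [0.1, 0.5, 0.9]:
    x = invgamma_icdf(p)
    assert abs(invgamma_cdf(x) - p) < 1e-9  # round trip recovers p
```

The round-trip check plays the same role as the figure: applying the CDF to the iCDF results recovers the original probabilities.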

-----

The above has not been peer-reviewed, but I will continue the project I am working on using this information for now. If it is wrong, I will definitely notice something wrong in the project (the project will not cause any harm, so it is fine for me to continue for now).

Any constructive feedback will be appreciated. 

Wednesday, March 12, 2014

Compute Inverse Gamma PDF and CDF in MATLAB

Although MATLAB does not have built-in functions for the PDF and CDF of the inverse gamma distribution, the two functions can be implemented easily using the known formulas.

PDF

The PDF of the inverse gamma distribution for a random variable (RV) x is
$P(x) = \frac{b^a}{\Gamma(a)}x^{-a-1}e^{-b/x}$,
where
a: shape parameter
b: scale parameter
Γ(•): gamma function.

The gamma function Γ(•) can be computed in MATLAB using the built-in gamma function.

The above PDF formula can be implemented as

function [ Y ] = inversegampdf( X,A,B )
%inversegampdf Inverse gamma probability density function.
%   Y = inversegampdf(X,A,B) returns the inverse gamma probability density
%   function with shape and scale parameters A and B, respectively, at the
%   values in X. The size of Y is the common size of the input arguments. A
%   scalar input functions is a constant matrix of the same size as the
%   other inputs.

Y = B^A/gamma(A)*X.^(-A-1).*exp(-B./X);

end

Examples of the results of the above function are shown in this figure:


You can compare the results with those on Wikipedia.

CDF

The CDF of the inverse gamma distribution for a random variable (RV) x is
$F(x) = \frac{\Gamma(a, b/x)}{\Gamma(a)}$,
where
a: shape parameter
b: scale parameter.

The numerator is the upper incomplete gamma function; the whole ratio is the regularized upper incomplete gamma function, which MATLAB's gammainc function computes directly.

The above CDF formula can be implemented in MATLAB as

function [ P ] = inversegamcdf( X,A,B )
%inversegamcdf Inverse gamma cumulative distribution function.
%   Y = inversegamcdf(X,A,B) returns the inverse gamma cumulative
%   distribution function with shape and scale parameters A and B,
%   respectively, at the values in X. The size of P is the common size of
%   the input arguments. A scalar input functions is a constant matrix of
%   the same size as the other inputs.

P = gammainc(B./X,A,'upper');

end

Examples of the results of the above function are shown in this figure:


You can compare the results with those on Wikipedia.
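One way to gain confidence in the two functions together: the numerical derivative of the CDF should recover the PDF. The pure-Python sketch below assumes an integer shape parameter so the regularized upper incomplete gamma has a simple closed form:

```python
from math import exp, factorial, gamma

a, b = 3, 2.0  # integer shape for the closed-form incomplete gamma

def inversegampdf(x):
    """b^a / Gamma(a) * x^(-a-1) * exp(-b/x)."""
    return b ** a / gamma(a) * x ** (-a - 1) * exp(-b / x)

def inversegamcdf(x):
    """Q(a, b/x) = e^(-z) * sum_{k<a} z^k / k! with z = b/x."""
    z = b / x
    return exp(-z) * sum(z ** k / factorial(k) for k in range(a))

# The central difference of the CDF should match the PDF.
h = 1e-6
for x in [0.5, 1.0, 2.0]:
    deriv = (inversegamcdf(x + h) - inversegamcdf(x - h)) / (2 * h)
    assert abs(deriv - inversegampdf(x)) < 1e-6
```

The agreement confirms that the PDF and CDF formulas above are consistent with each other.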

Thursday, March 6, 2014

MOOC Review: MongoDB for Java Developers

* This review is my personal review on the course and is not sponsored by MongoDB, Inc. 

Course Name:
M101J: MongoDB for Java Developers
Link:
https://education.mongodb.com/courses/10gen/M101J/2014_March/about
Institution:
MongoDB
Instructors:
Andrew Erlichson, Jeff Yemin

The most difficult part of the course, in my opinion, was the first week, in which I had to install things on Windows, which was not too easy. This was partially because the videos were mainly for... (I forgot the OS used in the demo, but it should be something like Linux.) Still, with the discussion forum and other online resources, I was able to install and use everything on Windows.

Although some things later in the course still varied from OS to OS, those cases were handled easily and well with text explanations. This is why I did not encounter many issues after the first week.

I usually learn better by reading than by watching videos, so it worried me in the first week that the course was mainly composed of videos. However, the instructors' voices were clear, and subtitles were available, so I got used to the format as the course progressed.

The videos introduced MongoDB and its Java driver. As far as I got, the non-driver part is almost the same as in two other courses offered by MongoDB University - M101JS: MongoDB for Node.js Developers and M101P: MongoDB for Developers (I did not complete those two). The Java driver part can be learned more easily with the help of the API documentation.

Most videos were short, and it was not difficult for me to pay attention for the duration of a video.

Sometimes comparisons between MongoDB and relational databases were introduced. These might be interesting to those who, like me, know relational databases.

Most videos were followed by quizzes, which helped check understanding of what the videos covered but did not count toward the certificate. They were helpful in my opinion. The types included multiple choice and scripting, and the problems were similar to some of the weekly assignment problems.

The assignments included multiple-choice, scripting, and programming problems. The scripting ones were done either in the browser or in the MongoDB shell, depending on the problem; these helped me get familiar with MongoDB shell commands. The programming ones typically involved using the Java driver and building a blog (the user interface design was provided, so one did not have to make one).

The blog project was interesting to me in that it gave me experience working on something complicated.

The final exam was like the assignments, but the problems required more thinking to deal with the data.

The last week of the course contained the final exam and case studies. The case studies were interviews with Jon Hoffman from Foursquare and Ryan Bubinski from Codecademy. It was interesting to learn more about these two websites.

In general, the course was well organized, and the quality of the materials was good. The difficulty was reasonable (except for the first week, for Windows users who are not familiar with IT matters). There was also help on the discussion forum from the staff and other participants. The course should serve as a good introduction to MongoDB.

Wednesday, March 5, 2014

MOOC Review: Artificial Intelligence Planning

Course Name:
Artificial Intelligence Planning
Link:
https://www.coursera.org/course/aiplan
Institution:
The University of Edinburgh
Instructors:
Gerhard Wickler, Austin Tate


Although I could not spend much time on this course, I managed to complete the Awareness Level of the course (rather than the Foundation Level and Performance Level, which required more commitment). Below are two badges I got from this course (I like to collect badges).


Badge awarded to all participants who start the MOOC. 


Badge awarded to all participants who successfully complete the MOOC at the Awareness Level. 

The requirements for the Awareness Level were watching certain videos and completing the final exam for the Awareness Level. For the Foundation Level, one had to take the final exams for both the Awareness Level and the Foundation Level; each level had its own minimum passing mark. As for the Performance Level, one had to take both final exams and, in addition, choose from two programming assignments and one creative challenge, again reaching the required minimum marks.

The videos included "Feature" videos and typical lecture videos. The Feature videos introduced Artificial Intelligence (AI) Planning's history and applications. These did not have much technical detail and were suitable for people of little technical background. For those who did not intend to get a certificate, simply watching the Feature videos might be good enough.

The lecture videos contained much more technical material, including introductions to algorithms and pseudocode. Even for the Awareness Level, one had to watch some of these, and they helped with the programming assignments. The instructors' voices were clear, and they paused appropriately; it was easy to follow even without the pause button or the subtitles.

The programming assignments were based on the lectures and did not have tricky problems. One simply had to apply the algorithms and concepts learned. I managed to complete the first one when I had time; it did not take long.

The creative challenge was flexible, but most people who completed it chose to introduce fields where AI planning is used or technology related to AI planning. Their works were linked on the discussion forums, and it was interesting to browse through the classmates' projects. I thought of introducing AI planning for image processing, which has been researched for more than 15 years (I do not know exactly how long the history of this field is), but I was unable to allocate time for this.

After the third week of the course was a break week, which was very useful for busy people to catch up. Without it, I would have had no time to complete even the first programming assignment.

On Twitter, the course had its own tag, #aiplan, and there were many updates and tweets about the course by the course team and students. The course team's handle is @aiplanner.

In general, the course difficulty was just right in my opinion, perhaps because the instructors were not trying to give the students a hard time. The variety of resources (such as reference readings) was good, and so was the discussion on the forums. The creative challenge works by other students were among my favorite parts of the course.

The course will start again in 2015. Maybe I will take it again to reach a higher level.

Tuesday, March 4, 2014

Ceiling of Square Root of BigInteger in Java

I was working on the programming assignment* of a MOOC dealing with big integers. I chose Java as the language and used BigInteger to handle the giant integers. In the assignment, I had to find the ceiling of the square root of a big integer (to be used elsewhere in the assignment), so I tried to find a built-in function for this. Like some people on online discussion forums, I could not find such a built-in function. As a result, I had to either use one written by someone else or implement one myself. The latter was easier for me because, by doing that, I did not have to worry about possibly lacking the background knowledge to understand a decent but difficult algorithm.

(* Because the assignment is not about finding the ceiling of the square root of an integer, the code in this article does not violate any code of conduct. In fact, finding the ceiling of the square root of an integer for this assignment can be done by programming in another language using that language's libraries.)

Because what I wanted was the ceiling, which is also an integer, a simple binary-search approximation of the square root can produce it. I wrote a function for this for non-negative integers (for the assignment's purpose, many checks were unnecessary and therefore omitted):

    /***
     * Finds the ceiling of the square root of an integer
     * @param N: the integer whose square root to find
     * @return the ceiling of the square root of N
     */
    public static BigInteger biSqrt(BigInteger N) {
        BigInteger o = new BigInteger("1");
        BigInteger t = new BigInteger("2");
        
        BigInteger u = N; // upper bound of search region
        // initial search value = ceil(0.5*N)
        BigInteger r = N.mod(t).compareTo(o) == 0 ? N.divide(t).add(o) : N.divide(t);
        BigInteger b = new BigInteger("0"); // lower bound of search region
        
        // new search value is ceiling of mid of upper and lower bounds
        while (true) {
            BigInteger r2 = r.pow(2); // square of search value
            
            if (r2.compareTo(N) > 0) {
                // too large, lower search upper bound
                u = r;
                BigInteger s = u.add(b);
                r = s.mod(t).compareTo(o) == 0 ? s.divide(t).add(o) : s.divide(t);
            } else if (r2.compareTo(N) < 0) {
                // too small, increase search lower bound
                b = r;
                BigInteger s = u.add(b);
                r = s.mod(t).compareTo(o) == 0 ? s.divide(t).add(o) : s.divide(t);
            } else {
                // find the value
                break;
            }
            
            // handle non-integer square root, 
            // whose ceiling = final upper bound
            if (u.compareTo(r) == 0) {
                break;
            }
        }
        
        return r;
    }

If executing these: 

        BigInteger z = new BigInteger("0");
        BigInteger o = new BigInteger("1");
        BigInteger t0 = new BigInteger("9");
        BigInteger t1 = new BigInteger("11");
        BigInteger t2 = new BigInteger("1099511627776"); // 2^40
        BigInteger t3 = new BigInteger("1099511627777"); // 2^40 + 1
        System.out.println(biSqrt(z));
        System.out.println(biSqrt(o));
        System.out.println(biSqrt(t0));
        System.out.println(biSqrt(t1));
        System.out.println(biSqrt(t2));
        System.out.println(biSqrt(t3));

The results are: 

0
1
3
4
1048576
1048577

The code was not reviewed by anyone other than myself, so it might have issues. Also, there are better algorithms for the same task. However, this should suffice for completing the assignment in Java. 
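As a cross-check of the expected outputs above, the same ceilings can be reproduced in a few lines of Python (math.isqrt, available since Python 3.8, returns the floor of the integer square root):

```python
from math import isqrt

def ceil_sqrt(n: int) -> int:
    """Ceiling of the square root of a non-negative integer."""
    if n == 0:
        return 0
    r = isqrt(n)  # floor of sqrt(n)
    return r if r * r == n else r + 1

values = [0, 1, 9, 11, 2 ** 40, 2 ** 40 + 1]
print([ceil_sqrt(n) for n in values])
# prints [0, 1, 3, 4, 1048576, 1048577], matching the Java results
```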

Any constructive feedback will be appreciated. 

Thursday, January 30, 2014

Data Mining with Weka: Enrolments open for a new session of Data Mining w...

Enrolments have opened for a new session of Data Mining with Weka: http://weka.waikato.ac.nz



The MOOC on data mining will start again on March 3rd, 2014. Visit its blog for more information.

Tuesday, January 28, 2014

Signal Detection Theory and Rods: Thoughts on Fred Rieke's Guest Lecture

This week, the Computational Neuroscience course on Coursera features a guest lecture by Fred Rieke, a professor in the Department of Physiology & Biophysics at the University of Washington. In the lecture, he talks about single-photon detection with rod signals and noise under dim light.

-----
My summary of his talk: 

Between the rods and the rod bipolar cell connected to them, there can be a nonlinear filter with a threshold that determines whether a photon is detected. The threshold applies to the amplitude of the received signal, which can come from the signal resulting from a photon or from noise.

How to pick a threshold that gives a higher probability of correctly determining whether a photon was received is the main topic of the talk. If one applies maximum likelihood using the probability distributions of the signal and noise amplitudes, without considering the probability of receiving a photon, the threshold picked is lower than the one obtained when that (quite low) probability is considered.

Therefore, a reasonable pick for the threshold is one at which the probability of the received signal being noise is extremely low. What is emphasized is that the prior probability matters.
-----
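As a toy numerical illustration of that last point (my own example with made-up numbers, not from the lecture): model the noise-only amplitude as Gaussian with mean 0 and the single-photon amplitude as Gaussian with mean 1, both with unit variance. The maximum-likelihood threshold sits where the two densities cross; accounting for the low prior probability of a photon pushes the threshold up:

```python
from math import exp, log, pi, sqrt

def normal_pdf(x, mu, sigma=1.0):
    """Gaussian density."""
    return exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * sqrt(2 * pi))

mu_noise, mu_photon = 0.0, 1.0
p_photon = 0.01  # photons are rare under dim light (assumed value)

# ML threshold: where the two likelihoods are equal (the midpoint, for equal variances)
ml_threshold = 0.5 * (mu_noise + mu_photon)
assert abs(normal_pdf(ml_threshold, mu_noise) - normal_pdf(ml_threshold, mu_photon)) < 1e-12

# MAP threshold: the likelihood ratio must also overcome the prior odds:
# x > midpoint + ln((1 - p) / p) / (mu_photon - mu_noise)
map_threshold = ml_threshold + log((1 - p_photon) / p_photon) / (mu_photon - mu_noise)
assert map_threshold > ml_threshold  # the rare prior raises the threshold
```

With these numbers the threshold moves from 0.5 to about 5.1, consistent with the talk's conclusion that the threshold should sit where noise alone is extremely unlikely to cross it.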

My favorite part of his lecture is the application of signal detection theory, which I used when learning basic radar detection concepts, to single-photon detection in neuroscience. For me, this is a new application of a theory I learned before, which is interesting.

From his lecture, I learned more about the rods beyond what I had learned before about the nervous system and a beautiful model (I assume it is a simplified one) about the rod signals.

The most difficult part of the lecture, for me, was the new material about the cells. Although it was not long, it took me a little time to learn about the rod and cone bipolar cells.

Thanks to my random signal processing professor, whose help enabled me to enjoy this lecture quite a lot, and thanks to the Computational Neuroscience instructors and staff, as well as Fred Rieke, for bringing me this good experience.

Thursday, January 9, 2014

Step into Databases

I finally started learning about databases. I found many useful resources online, but the resources I am using are the Stanford Online and MongoDB University MOOCs. Udacity's data science courses seem interesting, too, but I do not have time for them for now. (I have added the two websites to the MOOC Resources page.)

I had played with SQL before, but I did not have enough understanding of databases (in my own opinion). I hope these courses can help me become comfortable with databases.

The webpages of the courses I mentioned:

Stanford Online - Introduction to Databases https://class.stanford.edu/courses/Engineering/db/2014_1/about
MongoDB University - MongoDB for Java Developers https://education.mongodb.com/courses/10gen/M101J/2014_January/about