WATCH

Tuesday 29 March 2016

JavaScript Most Popular Language


According to the latest Stack Overflow developer survey, JavaScript is the most popular programming language and Rust is most loved.

Stack Overflow, the popular question-and-answer community site for developers, today released the results of its annual developer survey, which indicates, among other things, that JavaScript is the most popular programming language among respondents. More than 50,000 developers—56,033 to be exact—in 173 countries responded to the survey. Stack Overflow is so popular among developers that a developer asks a question on the site every eight seconds. In January alone, 46 million people visited Stack Overflow to get help or give help to a fellow developer.

"This is a highly impressive survey and one of its kind," said Al Hilwa, an analyst at IDC. "Stack Overflow has an incredible user base and it is great to see them survey them with such a fairly extensive survey. While I think we can learn a lot from such surveys, we have to realize that there are always limits to what can be intuited. They asked 45 questions, which is at the high-end of the scale of survey length that people—especially developers—are willing to answer."

While the 2016 Stack Overflow survey only reached 0.4 percent of the estimated 15 million developers worldwide, a large majority of respondents (85.3 percent of full-stack developers) cited JavaScript as the programming language they most commonly use. Meanwhile, 32.2 percent of respondents cited Angular as the most important technology to them and 27.1 percent cited Node.js—giving JavaScript and JavaScript-based technologies three of the top 10 slots among the most popular technologies used by developers. Angular was number five and Node.js came in at number eight.
 
"Technologies that make it easy to program in multiple locations, like JavaScript, are becoming more important," Shikhir Singh, a senior developer relations manager at Sencha, told eWEEK. "That's one of the great things about JavaScript is that you can code on the front end and the back end. You have technologies like Node that make it so easy to code the back end. And then there are front-ends like Sencha or any of the other ones that are out there. So you can hire developers with more or less one skill set, which is JavaScript today, and they can create some pretty amazing applications." 
 
Singh said that a few years ago he viewed JavaScript as a language that had just evolved from scripts, but that today it is maturing very quickly as the tooling catches up. "What's happening is the ramp up in tooling as well as standards with ECMAScript 6 is making it a lot easier to hire one developer to do everything—the back end or the front end," he said. "And that's where a lot of our customers are going."

Indeed, the Stack Overflow survey found that JavaScript is the most common programming language used by nearly every developer type—even back-end developers. The survey also showed that most developers are polyglot programmers, meaning they use more than one programming language on a regular basis. According to the survey, the average developer regularly uses between four and five major programming languages, frameworks and technologies. The most common two-technology combination is JavaScript and SQL; the most common three-technology combination is JavaScript, PHP, and SQL.

Meanwhile, the survey showed that use of the Swift programming language is exploding: Swift grew faster than any other technology last year. "We see trends like Swift going up dramatically and Objective-C is going down," Alvaro Oliveira, vice president of talent operations at Toptal, told eWEEK. Toptal provides freelance software engineers and designers to companies in need of development talent. "Swift came along and it just made the entry barrier way lower for developers focused on building iOS apps."

Thomas Murphy, an analyst with Gartner, said he is intrigued by the survey results regarding Swift. "I find the tremendous interest in Swift funny," Murphy said. "It is 'easy' and dynamic but seems like, wait, I have seen this before. Guess that is the old dude view of the hot new kid. It is a nice, clean C/Java style syntax with the dynamic nature of a Smalltalk/Lisp/Squeak system. What isn't to like? Plus it comes from Apple … and everything from Apple is cool. Rust…same kind of idea. People like dynamic languages that are easy to support the paradigms of highly agile development. I guess that the real heart of this is that language popularity is associated with computing paradigms. VB is client server … who wants to do that? JavaScript is Web application both client and server, and Swift is for iOS and the Apple ecosystem."

Other language-related findings include that developers rated Rust the most-loved programming language and Visual Basic the most dreaded, meaning that a higher percentage of the developers who program in Visual Basic (79.5 percent) don't want to continue doing so than for any other programming language. Interestingly, the most-loved languages included functional programming languages, or languages influenced by functional programming, such as F# and Scala. After Rust at number one, Swift, F#, Scala and Go rounded out the top five most-loved languages, in that order. And Clojure, another functional language, was number six.

The average developer in the survey is 25 to 29 years old, male, and located in the United States. More respondents (28 percent) consider themselves full-stack developers than any other traditional developer occupation. "The challenge is that the data is tilted by the demographic," Murphy said. "Most of the people surveyed are 25 to 29. I don't think all the 30- to 50-year-old programmers have disappeared and while they may like JavaScript … it isn't that they would hate VB. As a 25 year old, though, why would you want to jump into VB? There is no future there, it is 'old' and it is probably maintaining someone else's old application that you would rather not do." Also, while there are still far fewer female developers than there are males, survey results showed that female developers, on average, have two years less experience than their male counterparts, which may suggest that the share of female developers is growing. The survey data suggests that men and women get paid about the same as entry level developers, but the pay gap may widen, with men earning more, as both gain experience.
 
Regarding pay, the mean salary of U.S. developers based on their occupation ranged from $67,000 to $132,000, according to the survey. Cloud developers familiar with technologies like Spark and Cassandra tend to make more than the median salary for developers in the U.S. Developers versed in Spark made $125,000 and developers with Cassandra skills earned $115,000. 

"It is still hard to get at certain truths such as the intensity of skill or usage of certain language," Hilwa said. "For example, a lot of developers claim that they 'know' JavaScript, but it may be from extremely light exposure in light-weight HTML-centric web-apps. To some degree the English-like SQL is likewise a language where many people know it broadly, but only a few have mastery. For example, many may know the basic 'select' statement, but very few understand the subtleties of joins or the 'having' clause. Intuitively, there is a narrower range for languages like Java and C, which tend to be used professionally by relatively highly skilled devs."

As far as platforms go, the Windows desktop platform has seen a decline in developer use over the past four years, with Linux and Mac OS X picking up market share. However, Windows 10 was the fastest-growing desktop OS in the Stack Overflow 2016 survey, capturing almost 21 percent of developers in less than a year since its release. Today, 52 percent of developers reported using Windows (down from 60 percent in 2013), while 26 percent reported using a Mac and 22 percent Linux.

Other findings include that many developers are so passionate about code that they spend their free time working on open-source projects. Eighty-five percent of respondents said they spend at least one hour per week coding outside their regular job. Also, 52.34 percent of respondents to this year's survey said they believe in aliens.
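Hilwa's point about SQL depth is easy to illustrate: a HAVING clause filters on aggregated values after GROUP BY, something a SELECT-only user rarely encounters. A minimal sketch using Python's built-in sqlite3 module and a made-up orders table:

```python
import sqlite3

# A made-up orders table, purely for illustration.
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE orders (customer TEXT, amount REAL)")
con.executemany("INSERT INTO orders VALUES (?, ?)",
                [("alice", 120.0), ("alice", 80.0),
                 ("bob", 30.0), ("carol", 250.0)])

# WHERE filters rows before aggregation; HAVING filters the groups after.
rows = con.execute("""
    SELECT customer, SUM(amount) AS total
    FROM orders
    GROUP BY customer
    HAVING SUM(amount) > 100
    ORDER BY total DESC
""").fetchall()

print(rows)  # [('carol', 250.0), ('alice', 200.0)]
```

Bob's 30.0 never makes it past the HAVING clause, even though every one of his rows would survive a plain WHERE filter.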

Saturday 26 March 2016

Is Your Computer Stable?

Over the last 5 to 10 years, I've probably built around fifty computers. It's not very difficult, and in fact it's gotten a whole lot easier over the years as computers have become more highly integrated. Consider what it would take to build something very modern like the Scooter Computer:
  1. Apply a dab of thermal compound to top of case.
  2. Place motherboard in case.
  3. Screw motherboard into case.
  4. Insert SSD stick.
  5. Insert RAM stick.
  6. Screw case closed.
  7. Plug in external power.
  8. Boot.
Done!

It's stupid easy. My six year old son and I have built Lego kits that were way more complex than this. Even a traditional desktop build is only a few more steps: insert CPU, install heatsink, route cables. And a server build is merely a few additional steps on top of that, maybe with some 1U or 2U space constraints. Scooter, desktop, or server, if you've built one computer, you've basically built them all.
Everyone breathes a sigh of relief when their newly built computer boots up for the first time, no matter how many times they've done it before. But booting is only the beginning of the story. Yeah, it boots, great. Color me unimpressed. What we really need to know is whether that computer is stable.
Although commodity computer parts are more reliable every year, and vendors test their parts plenty before they ship them, there's no guarantee all those parts will work reliably together, in your particular environment, under your particular workload. And there's always the possibility, however slim, of getting very, very unlucky with subtly broken components.
Because we're rational scientists, we test stuff in our native environment, and collect data to prove our computer is stable. Right? So after we boot, we test.
Memory
I like to start with memory tests, since they require only bootable media and work the same on all x86 computers, even before you have an operating system. Memtest86 is the granddaddy of all memory testers. I'm not totally clear what caused the split between Memtest86 and Memtest86+, but both work similarly. The PassMark version seems to be the most up to date, so that's what I recommend.
Download the version of your choice, write it to a bootable USB drive, plug it into your newly built computer, boot and let it work its magic. It's all automatic. Just boot it up and watch it go.
(If your computer supports UEFI boot, you'll get the newest version 6.x; otherwise you'll see version 4.2.)
I recommend one complete pass of memtest86 at minimum, but if you want to be extra careful, let it run overnight. Also, if you have a lot of memory, memtest can take a while! For our servers with 128GB it took about three hours, and I expect that time scales linearly with the amount of memory.
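As a rough back-of-envelope, assuming the linear scaling holds and using the 128 GB / three-hour data point above:

```python
# Estimate one memtest86 pass, scaling linearly from an observed
# data point: our 128 GB servers took roughly 180 minutes per pass.
OBSERVED_GB, OBSERVED_MIN = 128, 180

def eta_minutes(ram_gb):
    return ram_gb * OBSERVED_MIN / OBSERVED_GB

print(eta_minutes(32))   # about 45 minutes for a 32 GB desktop
```

Your hardware will differ, so treat this as a planning estimate, not a promise.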
The "Pass" percentage at the top should get to 100% and the "Pass" count in the table should be greater than one. If you get any errors at all, anything whatsoever other than a clean 100% pass, your computer is not stable. Time to start removing RAM sticks and figure out which one is bad.
OS
All subsequent tests will require an operating system, and one basic, ironclad test of stability for any computer is whether it can install an operating system. Pick your free OS of choice and begin a default install. I recommend Ubuntu Server LTS x64, since it assumes less about your video hardware. Download the ISO and write it to a bootable USB drive. Then boot it.
(Hey look it has a memory test option! How convenient!)
  • Be sure you have the network connected for the install, with DHCP; the install goes faster when you don't have to wait for network detection to time out and nag you about network configuration.
  • In general, you'll be pressing enter a whole lot to accept all the defaults and proceed onward. I know, I know, we're installing Linux, but believe it or not, they've gotten the install bit down by now.
  • About all you should be prompted for is the username and password of the default account. I recommend jeff and password, because I am one of the world's preeminent computer security experts.
  • If you are installing from USB and get nagged about a missing CD, remove and reinsert the USB drive. No, I don't know why either, but it works.
If anything weird happens during your Ubuntu Server install that prevents it from finalizing the install and booting into Ubuntu Server … your computer is not stable. I know it doesn't sound like much, but this is a decent holistic test as it exercises the whole system in very repeatable ways.
We'll need an OS installed for the next tests, anyway. I'm assuming you've installed Ubuntu, but any Linux distribution should work similarly.
CPU
Next up, let's make sure the brains of the operation are in order: the CPU. To be honest, if you've gotten this far, past the RAM and OS test, the odds of you having a completely broken CPU are fairly low. But we need to be sure, and the best way to do that is to call upon our old friend, Marin Mersenne.
In mathematics, a Mersenne prime is a prime number that is one less than a power of two; that is, a prime number that can be written in the form M_n = 2^n − 1 for some integer n. They are named after Marin Mersenne, a French Minim friar who studied them in the early 17th century. The first four Mersenne primes are 3, 7, 31, and 127.
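This is exactly what mprime grinds through at enormous scale. For intuition, the Lucas–Lehmer test it relies on fits in a few lines (an illustrative sketch only, not the author's tool; mprime uses heavily optimized FFT arithmetic on million-digit numbers):

```python
def is_mersenne_prime(p):
    """Lucas-Lehmer test: for an odd prime p, M_p = 2**p - 1 is prime
    iff s == 0 after p - 2 iterations of s = (s*s - 2) mod M_p, s0 = 4."""
    if p == 2:
        return True  # M_2 = 3 is prime; the iteration only applies for p > 2
    m = 2 ** p - 1
    s = 4
    for _ in range(p - 2):
        s = (s * s - 2) % m
    return s == 0

# The exponents behind the first Mersenne primes 3, 7, 31, 127, 8191:
print([p for p in (2, 3, 5, 7, 11, 13) if is_mersenne_prime(p)])
```

Note that p = 11 drops out: 2^11 − 1 = 2047 = 23 × 89, so a prime exponent alone is not enough.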
I've been using Prime95 and MPrime – tools that attempt to rip through as many giant numbers as fast as possible to determine if they are prime – for the last 15 years. Here's how to download and install mprime on that fresh new Ubuntu Server system you just booted up.
mkdir mprime
cd mprime
wget ftp://mersenne.org/gimps/p95v287.linux64.tar.gz
tar xzvf p95v287.linux64.tar.gz
rm p95v287.linux64.tar.gz
(You may need to replace the version number in the above command with the current latest from the mersenne.org download page, but as of this writing, that's the latest.)
Now you have a copy of mprime in your user directory. Start it by typing ./mprime
Just passing through, thanks. Answer N to the GIMPS prompt.
Next you'll be prompted for the number of torture test threads to run. They're smart here and always pick an equal number of threads to logical cores, so press enter to accept that. You want a full CPU test on all cores. Next, select the test type.
  1. Small FFTs (maximum heat and FPU stress, data fits in L2 cache, RAM not tested much).
  2. In-place large FFTs (maximum power consumption, some RAM tested).
  3. Blend (tests some of everything, lots of RAM tested).
They're not kidding when they say "maximum power consumption", as you're about to learn. Select 2. Then select Y to begin the torture and watch your CPU squirm in pain.
Accept the answers above? (Y):
[Main thread Feb 14 05:48] Starting workers.
[Worker #2 Feb 14 05:48] Worker starting
[Worker #3 Feb 14 05:48] Worker starting
[Worker #3 Feb 14 05:48] Setting affinity to run worker on logical CPU #2
[Worker #4 Feb 14 05:48] Worker starting
[Worker #2 Feb 14 05:48] Setting affinity to run worker on logical CPU #3
[Worker #1 Feb 14 05:48] Worker starting
[Worker #1 Feb 14 05:48] Setting affinity to run worker on logical CPU #1
[Worker #4 Feb 14 05:48] Setting affinity to run worker on logical CPU #4
[Worker #2 Feb 14 05:48] Beginning a continuous self-test on your computer.
[Worker #4 Feb 14 05:48] Test 1, 44000 Lucas-Lehmer iterations of M7471105 using FMA3 FFT length 384K, Pass1=256, Pass2=1536.
Now's the time to break out your Kill-a-Watt or similar power consumption meter, if you have it, so you can measure the maximum CPU power draw. On most systems, unless you have an absolute beast of a gaming video card installed, the CPU is the single device that will pull the most heat and power in your system. This is full tilt, every core of your CPU burning as many cycles as possible.
I suggest running the i7z utility from another console session so you can monitor core temperatures and speeds while mprime is running its torture test.
sudo apt-get install i7z
sudo i7z
Let mprime run overnight in maximum heat torture test mode. The Mersenne calculations are meticulously checked, so if there are any mistakes the whole process will halt with an error at the console. And if mprime halts, ever … your computer is not stable.
Watch those CPU temperatures! In addition to absolute CPU temperatures, you'll also want to keep an eye on total heat dissipation in the system. The system fans (if any) should spin up, and the whole system should be kept at reasonable temperatures through this ordeal, or else you're going to have a sick, overheating computer one day.
The bad news is that it's extremely rare to have any kind of practical, real world workload remotely resembling the stress that Mersenne lays on your CPU. The good news is that if your system can survive the onslaught of Mersenne overnight, it's definitely ready for anything you can conceivably throw at it in the future.
Disk
Disks are probably the easiest items to replace in most systems – and the ones most likely to fail over time. We know the disk can't be totally broken since we just installed an OS on the thing, but let's be sure.
Start with a bad blocks test for the whole drive.
sudo badblocks -sv /dev/sda
This exercises the full extent of the disk (in safe read only fashion). Needless to say, any errors here should prompt serious concern for that drive.
Checking blocks 0 to 125034839
Checking for bad blocks (read-only test): done
Pass completed, 0 bad blocks found. (0/0/0 errors)
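Conceptually, badblocks is just reading every block in turn and noting the ones that error out. A toy version of its read-only pass, for illustration only (use the real tool on actual hardware; the device path in the comment is the usual /dev/sda and needs root):

```python
BLOCK_SIZE = 1024 * 1024  # read in 1 MiB chunks

def scan_readonly(path):
    """Read a file (or raw device) end to end; return offsets of blocks
    that raise an I/O error, in the spirit of badblocks' read-only pass."""
    bad = []
    with open(path, "rb", buffering=0) as f:
        offset = 0
        while True:
            try:
                chunk = f.read(BLOCK_SIZE)
            except OSError:
                bad.append(offset)           # record the unreadable block
                f.seek(offset + BLOCK_SIZE)  # and skip past it
                chunk = b"\0"                # sentinel: keep the loop going
            if not chunk:
                break
            offset = f.tell()
    return bad

# On real hardware: scan_readonly("/dev/sda"); an empty list is a clean pass.
```

The real badblocks also offers a destructive write-mode test, which is far more thorough but wipes the drive, so it is only for disks holding nothing you need.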
Let's check the SMART readings for the drive next.
sudo apt-get install smartmontools
smartctl -i /dev/sda 
That will let you know if the drive supports SMART. Let's enable it, if so, and see the basic drive stats:
smartctl -s on /dev/sda
smartctl -a /dev/sda    
Now we can run some SMART tests. But first check how long the tests on offer will take:
smartctl -c /dev/sda
Run the long test if you have the time, or the short test if you don't:
smartctl -t long /dev/sda
It's done asynchronously, so after the time elapses, show the SMART test report and ensure you got a pass:
smartctl -l selftest /dev/sda 
=== START OF READ SMART DATA SECTION ===
SMART Self-test log structure revision number 1
Num  Test_Description    Status                  Remaining  LifeTime(hours)  LBA_of_first_error
# 1  Extended offline    Completed without error       00%       100         -
Next, run a simple disk benchmark to see if you're getting roughly the performance you expect from the drive or array:
dd bs=1M count=512 if=/dev/zero of=test conv=fdatasync
hdparm -Tt /dev/sda
For a system with a basic SSD you should see results at least this good, and perhaps considerably better:
536870912 bytes (537 MB) copied, 1.52775 s, 351 MB/s
Timing cached reads:   11434 MB in  2.00 seconds = 5720.61 MB/sec
Timing buffered disk reads:  760 MB in  3.00 seconds = 253.09 MB/sec
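The dd throughput figure is just bytes over seconds, so a run is easy to sanity-check by hand (using the numbers reported above; dd reports decimal megabytes):

```python
# dd reported: 536870912 bytes copied in 1.52775 s
bytes_copied = 512 * 1024 * 1024   # bs=1M count=512
seconds = 1.52775

mb_per_s = bytes_copied / seconds / 1_000_000
print(round(mb_per_s))  # 351, matching dd's "351 MB/s"
```

If your computed figure and dd's disagree wildly, you misread one of the numbers, not the drive.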
Finally, let's try a more intensive test with bonnie++, a disk benchmark:
sudo apt-get install bonnie++
bonnie++ -f
We don't care too much about the resulting benchmark numbers here; what we're looking for is a run that completes without errors. And if you get errors during any of the above … your computer is not stable.
(I think these disk tests are sufficient for general use, particularly if you consider drives easily RAID-able and replaceable as I do. However, if you want to test your drives more exhaustively, a good resource is the FreeNAS "how to burn in hard drives" topic.)
OK, Maybe It's Stable
This is the regimen I use on the machines I build and touch, and it's worked well for me. I've identified faulty CPUs (once), faulty RAM, faulty disks, and insufficient case airflow early on, so that I could deal with them in the lab before they became liabilities in the field. That doesn't mean they won't fail eventually, but I did all I could to make sure my computers can live long and prosper.
Who knows, with a bit of luck maybe you'll end up like the guy whose NetWare server had sixteen years of uptime before it was decommissioned.
These tests are just a starting point. What techniques do you use to ensure the computers you build are stable? How would you improve on these stability tests based on your real world experience?

Monday 14 March 2016

5 Essential Principles to Consider for an Ideal Startup

There is more to a startup than the thrill and excitement of doing something new. Working for a start-up is a learning experience and a career-building opportunity for professionals like me. It is also a training ground for professionals to learn and pursue their entrepreneurial dreams. However, there are many pros and cons that come along with it.
A Go-to-Market (GTM) Strategy in Place
Whether you are a start-up or a large organization, you cannot succeed without a thoughtful go-to-market strategy. Despite the constant pressure to get to market quickly, it is rarely a good decision to launch a startup, product or service without a solid plan in place. You may manage to reach short-term milestones, but in the long run the success diminishes. Taking time to strategize and develop a vision before launching will ultimately lead to a better experience for the organization and its customers. The ocean of business is infinitely huge for a startup, but knowing which sea to tap is the actual question; what your customers need, and what you and your competition are offering, is the answer. When you are tapping larger markets with an emerging product or service, you need to segment your audience, understand each segment's needs, and build for the segment that is within your reach. (Example: in the media industry, you have segments like traditional print media, television, radio broadcasting, film entertainment, video games and advertising. See which one is convenient to reach and easy to please, and keep a sharp focus on that segment.)
Make a name
A start-up means nobody knows you; you do not exist until you have an identity. Brand identity is one of the crucial aspects for a start-up, especially in the B2B space. If you do not carry that brand identity, nobody is going to buy anything from you, even if they are in desperate need. Now, building a brand doesn't mean the organization should start spending huge amounts of money on commercial advertisements to become eye candy for its users. That just doesn't work! Building an internal brand identity is more important than external promotion. Startups tend to run with minimal resources: a knowledgeable expert in each domain, for example finance, sales, marketing, legal and operations. It is these people who will always play a crucial role in building the startup. If a start-up sees a lot of attrition during its initial couple of years, that is an alarm bell; something is going terribly wrong.
How did it get so late so soon?
A stitch in time saves nine: an old proverb, but a useful one. Time is a constraint in a start-up, and startups tend to be lean and mean when it comes to time management. With the minimum number of employees necessary to run the business, each employee is expected to give 200 percent of their actual potential. And with rapid growth in the market there is a lot of competition, particularly if you are a start-up in an emerging business. Working hours may vary: though you might leave the office on time, you still get to enjoy the burden of working on weekends and every waking hour of the day. I must say, though, that even that burden has its own sweet taste when the fruit is ripe.
Trial and Error
While starting anything new, there is always the dilemma of whether it will work in one's favor or not. Startups, in their beginning phase, face this dilemma every now and then. What works for you, and what doesn't, is always the question. What might have worked for an organization already in business might not work for a start-up, and what has worked for a start-up might not work for a fully operational MNC. This is where I learned the actual meaning of trial and error. I would say trial and error is no way to make a strategy, but one only learns from one's failures and experiences. There is always a need for someone who can dedicate his time to determining what to do, how to set it up, and verifying and implementing the results of the trial. Even if this doesn't technically require much time, it consumes a lot of the management's mental bandwidth, which is actually the scarcest resource in any organization.
Together everyone achieves more
The most important part of working for a startup is to never give up. The team needs to try and try again until it achieves results; success is a different story altogether. The road to success is a long journey for any start-up, and a start-up only achieves excellence through the passion and dedication of its hard-working people. Labor omnia vincit – hard work conquers all!

Saturday 12 March 2016

How to Ensure Security for Internet of Things (IoT) Devices




The Internet of Things is a revolution that has suddenly captured our imagination. As a technology, IoT is unique in that it has a role to play in the consumer, enterprise and industrial worlds. At the consumer level, adoption of IoT for areas including home monitoring and control, wearable tech, and connected cars has already started. At the enterprise level, the building management, fleet management, hospital management, retail, telecom and energy sectors are already adopting it for various benefits.

Not all of IoT is new. Operational technology (also called Industrial IoT) has long been used by power grids, oil and gas, utilities, nuclear plants and traffic control. In the industrial world, further benefits accrue from increased connectivity between SCADA networks and IT. IoT facilitates integrating the physical world with the virtual one to implement use cases with immense benefits; life-saving devices embedded in the human body and managed from outside, without the need for complex surgical procedures, are one such example.

Ubiquitous use of a technology in wide-ranging areas brings forth risks that range from significant to catastrophic. Nuclear facilities can be damaged overnight by compromising the IoT infrastructure; we have already seen an early avatar of this in the form of Stuxnet.

Similarly, nation-state attacks can be expected to target IoT used in power grids and other utilities, and smart cities could be paralyzed in minutes if the IoT infrastructure that automates their processes gets compromised. IoT risks are complex, since the IoT technology stack has many new components, including IoT sensors, protocols, gateways, and management platforms.

In addition, IoT uses many leading-edge technologies, including cloud, mobility, and big data, so IoT security spans many new risk areas that the cybersecurity industry is still learning to resolve. As an example, there are many IoT protocols in the market today, including Zigbee, CoAP, Advanced Message Queuing Protocol (AMQP), Data Distribution Service (DDS), and Message Queue Telemetry Transport (MQTT).

These protocols are either new or derived for IoT from an earlier version used for generic purposes. As a result, they are vulnerable unless special effort is taken to secure them. Zigbee, for example, is an extensively used IoT protocol, though it was originally conceived for low-power wireless use, and users can easily find tools to crack it (http://tools.kali.org/wireless-attacks/killerbee).

IoT management platforms, on the other hand, have web interfaces and the related protocols enabled, so they are exposed to common web application attacks. The impact of such an attack on an IoT management platform is high, since the platform can be used to subvert millions of sensors for a malicious purpose. Imagine the impact of power grid sensors being taken off the grid through a successful web-based attack on the IoT management platform.

Securing the IoT world means securing the different components on which an IoT solution is built: the cloud it leverages, the IoT protocols and sensors that are part of the solution, the related IT infrastructure, and the mobile devices that act as sensors.

One of the bigger challenges in securing IoT is the change required in IoT sensors and protocols, which have evolved from purely functional requirements; by design, they are not built with security features. The processing power and capacity of these sensors do not leave enough room to build in security features, so we are often left trying to build a fence around the sensors, which is not easy given the scale of millions of sensors that could be involved in a specific IoT solution.

At a tactical level, every IoT project can follow these security measures:

o   Build security into the IoT architecture with relevant components: Doing so provides security around the box until IoT protocols can be made secure by design. This requires adhering to fundamentals including authentication, access control, and encryption.

o   Build monitoring controls at different levels: This step covers IoT gateways, the IoT management platform, IT infrastructure, and cloud monitoring, to ensure that attacks are caught early.

o   Carry out detailed security assessments and penetration testing: These tests are imperative for a secure IoT infrastructure, both before rollout and on a periodic basis.
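As an illustration of the first measure's fundamentals, even constrained devices can usually afford a keyed hash to authenticate their readings. A minimal sketch in Python's standard library, assuming a per-device shared secret provisioned out of band (the key handling here is deliberately simplified):

```python
import hashlib
import hmac

DEVICE_KEY = b"per-device-secret"  # assumption: provisioned out of band

def sign(payload: bytes) -> bytes:
    """Sensor side: append an HMAC-SHA256 tag to the reading."""
    return payload + hmac.new(DEVICE_KEY, payload, hashlib.sha256).digest()

def verify(message: bytes):
    """Gateway side: return the payload if the tag checks out, else None."""
    payload, tag = message[:-32], message[-32:]
    expected = hmac.new(DEVICE_KEY, payload, hashlib.sha256).digest()
    # compare_digest avoids leaking the match position via timing
    return payload if hmac.compare_digest(tag, expected) else None

msg = sign(b'{"sensor": 17, "temp": 21.5}')
assert verify(msg) == b'{"sensor": 17, "temp": 21.5}'
assert verify(b"x" + msg[1:]) is None  # tampered payload is rejected
```

This authenticates messages but does not encrypt them or prevent replay; a real deployment would layer transport encryption and per-message nonces on top.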

At the macro level, securing IoT infrastructure requires collaboration among industry, academia and government for a "secure by design" rollout of IoT protocols. Such initiatives are still at a nascent stage, but they have started; as an example, OWASP has published a top ten list of IoT issues to consider. There should also be certification of the safety of IoT products and components by central authorities backed by government, much like the car safety certification we are all used to. The IoT security movement has started, but there is still a long way to go. The good news is that we can still do things to raise the barrier to attack while we wait for the industry to accelerate.

Friday 11 March 2016

Online Learning: A Bachelor's Level Computer Science Program Curriculum (Updated)



A few months back we took an in-depth look at MIT's free online Introduction to Computer Science course, and laid out a self-study timetable to complete the class within four months, along with a companion post providing learning benchmarks to chart your progress. In the present article, I'll step back and take a much broader look at the computer science course offerings freely available on the internet, in order to answer a deceptively straightforward question: is it possible to complete the equivalent of a college bachelor's degree in computer science through college and university courses that are freely available online? And if so, how does one do so?

The former question is more difficult to answer than it may at first appear. There are, of course, tons of resources relating to computer science and engineering, computer programming, software engineering, etc. that can easily be found online with a few simple searches. However, despite this fact, it is very unlikely that you would find a free, basic computer science curriculum offered in one complete package from any given academic source. The reason for this is fairly obvious. Why pay $50,000 a year to go to Harvard, for example, if you could take all the exact same courses online for free? 

Yet, this does not mean that all the necessary elements for such a curriculum are not freely accessible. Indeed, today there are undoubtedly more such resources available at the click of a button than any person could get through even in an entire lifetime of study.  The problem is that organizing a series of random lecture courses you find on the internet into a coherent curriculum is actually rather difficult, especially when those courses are offered by different institutions for different reasons and for considerably different programs of study, and so on. Indeed, colleges themselves require massive advisory bureaucracies to help students navigate their way through complicated degree requirements, even though those programs already form a coherent curriculum and course of study. But, still, it’s not impossible to do it yourself, with a little bit of help perhaps.

The present article will therefore attempt to sketch out a generic bachelor’s level curriculum in computer science on the basis of program requirements distilled from a number of different computer science departments at top universities from around the country.  I will then provide links to a set of specific college and university courses that are freely available online which, if taken together, would satisfy the requirements of our generic computer science curriculum.

A Hypothetical Curriculum  

So, what are the requirements of our hypothetical computer science program?  Despite overarching similarities, there are actually many differences between courses of study offered at different colleges and universities, especially in computer science.  Some programs are more geared toward electrical engineering and robotics, others toward software development and programming, or toward computer architecture and hardware design, or mathematics and cryptography, or networking and applications, and on and on.  Our curriculum will attempt to integrate courses that would be common to all such programs, while also providing a selection of electives that could function as an introduction to those various concentrations.  

There are essentially four major parts to any bachelor’s level course of study, in any given field: prerequisites, core requirements, concentration requirements and electives.  

Prerequisites are what you need to know before you even begin. For many courses of study, there are no prerequisites, and no specialized prior knowledge is required or presumed on the part of the student, since the introductory core requirements themselves provide students with the requisite knowledge and skills.  

Core requirements are courses that anyone in a given field is required to take, no matter what their specialization or specific areas of interest within the field may be.  These sorts of classes provide a general base-level knowledge of the field that can then be built upon in the study of more advanced and specialized topics. 

Concentration requirements are classes that are required as part of a given concentration, focus or specialization within an overall curriculum.  For example, all students who major in computer science at a given university may be required to take two general introductory courses in the field, but students who decide to concentrate on cryptography may be required to take more math classes, while students interested in electrical engineering may take required courses on robotics, while others interested in software development may be required to study programming methodologies and so on.

Finally, electives are courses within the overall curriculum that individuals may decide to take at will, in accordance with their own particular interests.  Some people may prefer to take electives which reinforce sub-fields related to their concentration, while others may elect to sign on for courses that may only be tangentially related to their concentration.

Our hypothetical curriculum will simplify this model. We will assume no prerequisites are necessary other than an interest in learning the material and a basic high school education.  Our curriculum will also not offer any concentration tracks in the traditional sense, as that would require specialized resources that are not within the scope of our current domain.  Instead, our planned curriculum shall provide for introductory courses, general core requirements, and a choice of electives that may also serve as a basis for further concentration studies.

Basic Requirements 

A quick survey of curricular requirements for programs in computer science at a number of the country’s top colleges and universities reveals a wide spectrum of possibilities for our proposed curriculum, from a ten-course minor in computer science to a twenty-five-course intensive major in the field along with an interdisciplinary concentration. (See, for example, MIT, Carnegie Mellon, Berkeley, Stanford and Columbia, or the comp-sci page for a college or university near you.)  

Our proposed curriculum will attempt to stake out a space between those two poles, and aim for a program that consists of 15 courses: 3 introductory classes, 7 core classes and 5 electives. The required topics and themes of a generic computer science degree program are fairly easy to distill from the comparison: introduction to the field, data structures, algorithms, programming languages, operating systems, networking, data communications, systems engineering, software development, and so on.  Our program will consist of university or college level courses from around the world that cover our basic requirements and are freely available in full online.
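Since we're planning a self-study program, it can help to track it the way a registrar would. Here's a minimal sketch of the 3/7/5 breakdown above as a data structure, purely illustrative: the category names and counts come from this article, but the function names and any validation logic are my own invention, not part of any actual curriculum tool.

```python
# The proposed plan: 3 introductory, 7 core, 5 electives (15 total).
REQUIRED = {"introductory": 3, "core": 7, "electives": 5}

def total_courses(requirements):
    """Total number of courses in the plan."""
    return sum(requirements.values())

def remaining(requirements, completed):
    """Courses still needed in each category, given counts completed so far."""
    return {cat: max(0, need - completed.get(cat, 0))
            for cat, need in requirements.items()}

if __name__ == "__main__":
    print(total_courses(REQUIRED))  # -> 15
    # Suppose you've finished all the intro courses and two core courses:
    print(remaining(REQUIRED, {"introductory": 3, "core": 2}))
```

A spreadsheet does the same job, of course; the point is just that a fixed count per category makes "am I done yet?" a mechanical question.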

Note: I have, unfortunately, not watched every single video from all of the courses below.  However, I have completed three of them in full, viewed a handful of lectures from a number of the other courses, and spot-checked the videos from the rest for quality.  


Introductory Courses  

Intro to Computer Science, pick two of three: 
Basic mathematics, pick one of two: 

Core Courses 

Data Structures and Algorithms, pick one of two:
Operating Systems:
Programming Languages and Methodologies:
Computer Architecture:
Networking:
Data Communications:
Cryptography and Security:

Electives 

Web Development:
Data Structures:
Systems:
Programming Languages:
Security:
Cryptography:
App Development:
Artificial Intelligence:
Graphics:
Math:
Leave any suggestions for improvements or additions in the comments!

UPDATE: There has been a ton of great feedback on this post, with suggestions for additions, critiques of the overall form, identification of "glaring holes" and more.  Thanks everyone!  However, rather than address them one by one in the comments, or fold them all into an update of some sort, I think I may just begin work on a new version of the piece which provides a more intensive track of study and tries to incorporate as many of those suggestions as possible, assuming that examples of such courses are available for free in full online from a college or university.  So be sure to check back in the future!

UPDATE II:  See also the companion post to this piece, An Intensive Bachelor's Level Computer Science Curriculum Program.