So, a short while ago, I decided to buy a MacBook Pro. I did this for a couple of reasons.
First, I was able to get it for a very reasonable price (in a closed bid). This was important to me because I try to buy for performance and substance over looks. Looks still matter, and if all else is equal, I'll buy the one I think looks better... but never so much that I'll pass up better specs or a better price.
Second, I finally decided that, being as tech- and mobile-oriented as I am, I would eventually need to be familiar with the OS X platform. That way, if I ever have to write iOS apps, I can at least skip the OS learning curve.
So, I bought the MacBook Pro from the bid, and even if it were a PC, it would probably have cost nearly the same. It is the 13" model and came with 8 GB of RAM, a Core 2 Duo @ 2.3 GHz, and a 250 GB hard drive. Then, soon after that, I was assigned a new project at work that requires OS X, so I was issued a MacBook Air. When my boss handed me the box and I saw it said 13" MacBook Air, I couldn't help but grin. It is an amazing machine. I had originally assumed I would get the cheapest configuration possible, but instead they got a very nice package: 8 GB of RAM, a 512 GB solid-state drive, and the Core i7 processor. The only thing I would have liked better would be the 13" Retina MacBook Pro (just for the resolution).
There are actually quite a few things I like and a couple I don't. First, what I like. I like the hardware. The battery life is good and it charges fast. If you read my earlier post about my HP laptop, you know that resolution is important to me, and even though the screen is only 13", the resolution isn't bad at 1440x900. The keyboard is great and has backlighting, and the machine comes with Bluetooth. I like being able to drop into a pretty familiar terminal (since I've used Linux for several years). Finally, I love the gestures. Switching between workspaces is easily the gesture I use the most (besides scrolling).
As an aside, there was recently a post by Linus Torvalds on Google+ about screen resolution, and I completely agree with him. I don't know why screen resolution stagnated, but it just now seems to be important again. Personally, I'd say this can be attributed to the Apple push on 'Retina' displays. I don't necessarily want a huge screen, just a high resolution display, and now with the Nexus 10 pushing 2560x1600 on a $400, 10" tablet, it seems a little ridiculous that it isn't on nearly every laptop out there.
Now for what I don't like. Several of the programs I use regularly aren't available. This isn't really the fault of OS X, just my inexperience with it; I'm sure that if I look around, I can find suitable replacements. It isn't a gaming machine (on the MacBook Air, the only graphics are provided by the Intel HD 4000), but since it is a work laptop, that isn't a big loss (plus the battery gains are nice). There are a few things I miss from PCs, like the delete key that removes text to the right of the cursor. I'm sure there is a way to do this; I just haven't figured it out yet. Because there isn't a ton of vertical resolution, I like to full-screen my applications when possible, but there are a few that I can't seem to get to go full screen (like GIMP). On that same note, I like my programs to take up as much space as possible, so on a PC, when I hit maximize, the window takes the whole screen. On OS X, hitting the plus button generally just resizes the window to fit its content. Holding shift while clicking it sometimes forces the window to the full size of the screen, but not in all applications.
Finally, there is the so-called "Apple tax": to get such nice hardware, you pay a premium. I don't mind paying for quality, but I don't like overpaying. Based on teardowns and component pricing, people estimate what products cost to produce, and it seems that the Nexus and Kindle devices are sold at nearly cost. Google and Amazon both make their money on app and media sales, so this isn't a big problem for them. The thing is, Apple also makes money from app and media sales, yet they still have a huge markup on hardware (30% or more, if you believe what you read on the internet). This bothers me a bit: they ding the customer on the hardware, and then take a 30% cut from app makers when you buy apps. I know they have to make their money, but it would be great if they could lower their margin just a little, since they seem to have (literally) BILLIONS of dollars lying around.
As you can probably tell from the list, these are pretty nit-picky things, and overall my experience has been quite positive. I don't know that I'd ever convert completely, but I can see myself using a Mac as my daily driver.
11.03.2012
10.13.2012
Learning new stuff
I love learning new things, especially when it is something as accessible as an app for the computer that lives in your pocket, or a webpage that can be accessed from anywhere in the world.
I know that JavaScript is not the world's most robust or efficient language, and that many people don't like the way it does this or implements that, but I personally think it would be a great Intro to Programming language. It teaches if/then statements, loops, variables, functions, and program flow. It is free and easy to set up: all you need is Notepad and a browser, and to "compile," you just reload the page. Sure, it won't teach you much about data types, inheritance is still a bit of a mess for me, and there is so much hack-job code out there that it can get ugly fast, but for accessibility, ease of setup, and how quickly you can feel powerful, I think it's wonderful.
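To make that concrete, here is a minimal sketch of the kind of first program I mean (the function and values are just an illustration): variables, an if/then, a loop, and a function, with "recompiling" being nothing more than reloading the page.

```javascript
// A first JavaScript program: variables, if/then, a loop, and a function.
// Put it in a <script> tag in any HTML file (or paste it into the browser
// console); to "recompile," just reload the page.
function describe(n) {
  if (n % 2 === 0) {
    return n + ' is even';
  }
  return n + ' is odd';
}

var lines = [];
for (var i = 1; i <= 4; i++) {
  lines.push(describe(i));
}

console.log(lines.join('\n'));
// 1 is odd
// 2 is even
// 3 is odd
// 4 is even
```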
That is one of the things I loved about it. I'd been through the computer science program and finished my associate's degree, and I still wondered when I'd get to build something usable. I'd learned a lot about concepts, theories, and algorithms, and I'd written a slew of command-line programs, but I still had no idea how any of it translated into something like a complex webpage, a phone app, or a media player on your PC. Once I took the classes on web design and databases, I could finally start to see how it all could tie together to build something powerful and useful.
Recently, I wrote a web app that relied heavily on HTML5. This meant a lot of trial and error, research, and reading random blogs on the internet. I loved it. The requirements for my app were the ability to store data offline, run on an Android tablet, and accept signatures. Other than that, it was pretty standard mobile webpage stuff. When presented with problems like this, where I'm not even sure something is possible, my first approach is to proof-of-concept each step.
Can I store data offline? I went with an offline SQLite database (Web SQL). I know it's deprecated and isn't implemented by everyone, but in my mind it's the only sane way to store any real amount of data offline. I started by creating a database, inserting some data, and reading it back out. I tested it in Chrome for desktop, the Android Browser, and then, when it was released, Chrome for Android, and it worked across all of them, so I said why not. I have control over which browser my users will use, so as long as it works in the one I tell them to use, then we are good to go.
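That first proof-of-concept step looked something like this (the database and table names here are placeholders, not the app's actual code). openDatabase is the deprecated Web SQL API, so it only exists in browsers that implement it, and the sketch checks for it first:

```javascript
// Web SQL proof-of-concept: create a database, insert a row, read it back.
// openDatabase only exists in browsers that implement the (deprecated)
// Web SQL spec, e.g. Chrome and the old Android Browser.
function webSqlAvailable() {
  return typeof openDatabase === 'function';
}

function demoOfflineStore() {
  if (!webSqlAvailable()) {
    console.log('Web SQL is not available in this environment');
    return;
  }
  // Arguments: name, version, description, estimated size in bytes.
  var db = openDatabase('demo', '1.0', 'offline demo', 2 * 1024 * 1024);
  db.transaction(function (tx) {
    tx.executeSql('CREATE TABLE IF NOT EXISTS notes (id INTEGER PRIMARY KEY, body TEXT)');
    tx.executeSql('INSERT INTO notes (body) VALUES (?)', ['hello offline']);
    // Read the data back out to prove the round trip works.
    tx.executeSql('SELECT body FROM notes', [], function (tx, result) {
      for (var i = 0; i < result.rows.length; i++) {
        console.log(result.rows.item(i).body);
      }
    });
  });
}

demoOfflineStore();
```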
Then, optimizing it for a mobile interface. This was easy in the sense that there were several options out there, but hard in that there were several options out there. The most robust option I found was jQuery Mobile. It made it a cinch to get things styled and laid out for a mobile device. Their ThemeRoller is awesome, and there are several really good prebuilt themes out there. Once I figured out how page navigation worked, things clicked, and it proved to be a good decision. It 'enhances' webpages after you've written them, so things look great, perform well, and feel much more like a native app. It even adds transitions between pages to smooth out navigation.
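For reference, a minimal multi-page jQuery Mobile document looks roughly like this (the ids, file paths, and transition here are placeholders): each div with data-role="page" is a 'page', and linking to another page's id triggers the enhanced navigation and transition instead of a full page load.

```html
<!-- Two jQuery Mobile "pages" in one document; linking to #details
     navigates with a slide transition instead of a full page load. -->
<!DOCTYPE html>
<html>
<head>
  <meta name="viewport" content="width=device-width, initial-scale=1">
  <link rel="stylesheet" href="jquery.mobile.min.css">
  <script src="jquery.min.js"></script>
  <script src="jquery.mobile.min.js"></script>
</head>
<body>
  <div data-role="page" id="home">
    <div data-role="header"><h1>Home</h1></div>
    <div data-role="content">
      <a href="#details" data-role="button" data-transition="slide">Details</a>
    </div>
  </div>
  <div data-role="page" id="details">
    <div data-role="header"><h1>Details</h1></div>
    <div data-role="content"><p>Second page.</p></div>
  </div>
</body>
</html>
```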
Finally, the signature capture. I looked at the few examples I could find online. Many seemed to be some sort of paid service, and I don't like relying on someone else like that. The best tutorial I found was by a guy named Thomas J Bradley (1). He had a working example on his webpage, downloadable source code, and it worked across browsers. My problem was that I was delivering to a controlled, mobile device. That meant I didn't need the Flash plugin (which he included for non-canvas browsers)... and I also just like to build my own (I learn so much more that way). I looked at several different ways to capture the input. First, I looked at using an SVG (2) of some sort, storing the points in an array, and then converting that to a JPEG to send to the server. This proved to be more of a hassle than I thought it was worth, so I started down the long road that is Canvas. Canvas is awesome if you know what you're doing. I had no clue. I'd never done any programmatic drawing, so the vocabulary, examples, and concepts were pretty foreign to me. Finally, after dissecting several different online drawing examples (one showed how to read touch events on a touch-screen device, another how to accept multitouch events, one how to draw the line to sign on, and one the logic behind drawing the actual signature line as the user draws), I started writing my own signature class. It seemed (relatively) simple at first, until I ran into an Android Browser bug where, when drawing on a canvas, it would occasionally throw a mouse event during a touch event... which caused it to draw a line to the middle of the canvas. This was nearly impossible to debug (because it only happened on the device, not on the desktop). I would look at the ADB log from Android, and it had some obscure error that led me to a bug on the Android website... which was quite old... and still had no solution.
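Stripped of the app specifics, the core idea ends up looking something like the sketch below (the class and function names are illustrative, not my actual code): strokes are recorded as arrays of points, each touchmove draws a segment from the previous point, and mouse events are dropped while a touch stroke is active, which sidesteps the stray-mouse-event bug.

```javascript
// Sketch of a canvas signature pad. Points are recorded separately from
// drawing, so the stroke logic works even without a canvas context.
function SignaturePad(ctx) {
  this.ctx = ctx || null; // a CanvasRenderingContext2D in the browser
  this.strokes = [];      // each stroke is an array of {x, y} points
  this.current = null;    // the stroke in progress, or null
}

SignaturePad.prototype.start = function (x, y) {
  this.current = [{ x: x, y: y }];
  this.strokes.push(this.current);
};

SignaturePad.prototype.move = function (x, y) {
  if (!this.current) return; // ignore stray events outside a stroke
  var prev = this.current[this.current.length - 1];
  this.current.push({ x: x, y: y });
  if (this.ctx) {
    // Draw a segment from the previous point to this one.
    this.ctx.beginPath();
    this.ctx.moveTo(prev.x, prev.y);
    this.ctx.lineTo(x, y);
    this.ctx.stroke();
  }
};

SignaturePad.prototype.end = function () {
  this.current = null;
};

// Browser wiring (sketch): listen for touch events, and swallow mouse
// events while a touch stroke is active (the stray-mouse-event fix).
// A real version should offset coordinates by the canvas position,
// e.g. via getBoundingClientRect().
function attachToCanvas(pad, canvas) {
  canvas.addEventListener('touchstart', function (e) {
    e.preventDefault();
    var t = e.touches[0];
    pad.start(t.clientX, t.clientY);
  });
  canvas.addEventListener('touchmove', function (e) {
    e.preventDefault();
    var t = e.touches[0];
    pad.move(t.clientX, t.clientY);
  });
  canvas.addEventListener('touchend', function () {
    pad.end();
  });
  canvas.addEventListener('mousemove', function (e) {
    if (pad.current) e.preventDefault(); // drop mouse events mid-touch
  });
}
```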
But I got around that, and then kept tweaking, simplifying, and improving my 'library'.
The reason I mention all this is just to say that I love prototyping, learning and building. The area where I have a harder time is finishing and deploying a project, and cleaning up bugs. I'll have to write about that in another post.
1. http://thomasjbradley.ca/lab/signature-pad/
2. http://mcc.id.au/2010/signature.html
9.21.2012
How I Feel About Technology Patent Law
So, there are cars. They have long been designed to generally share interfaces. When you sell your old Toyota and buy a new Chevrolet, the steering wheel doesn't move, the brake is still the pedal on the left, and it will still have headlights. You will have some kind of HUD in front of the driver, and if you buy a car that doesn't match these expectations, you would think that something was probably wrong with its manufacturer. The biggest difference is if you travel somewhere else in the world where the driver sits on the other side of the car. This is a standard interface, and if Toyota tried to sue Chevrolet for copying its "brakes-go-on-the-left" design, we would laugh at Toyota.
Now, medicinal patents. These are generally issued to a company that pours millions (or billions) of dollars into research. Between design, testing, trials, and approval, medical breakthroughs are generally very expensive. In this instance, I won't begrudge a company for charging a lot of money for its drug to recoup its investment. The thing with medical patents is that after twelve years (in the US)(1), generics can be produced. There is a very clearly defined "expiration" on this exclusivity. Once it is over, the market takes over, prices drop, and more people have access to the drug.
And now a word on artificial vendor lock-in. This is when a company uses some "feature" of their system to make it very hard to switch to somebody else's. This is good for the company in the short term, because it helps them "keep" customers longer. If you drove Fords your whole life and were used to some special interface (brakes on the steering wheel, for example), then there would be "friction" in switching to anybody else, because you would have to learn a new interface. Like I said above, this can be good in the short term for a company, but in the long run it can introduce two major problems. First, once their customers are locked in, many companies cease to innovate. They have someone "hooked" and decide to just coast. Second, customers begin to feel a desire to switch, to find another product. The problem is they have this barrier to overcome, and it creates resentment toward the vendor.
Some vendor lock-in is inevitable. When you buy an iPhone or an Android, they use fundamentally different systems. Some software written for one is not available on the other. This is not artificial; it is just how the systems are. What is artificial is when Apple used to DRM their music so it would only play on an iPod (I know they had their reasons, such as appeasing the RIAA 'gods', but as a consumer, it feels artificial). This keeps people from switching to an Android. I'm not specifically picking on Apple here; nearly all vendors have something like this, and I believe it is bad for them in the long run. (4)
Now to the part that I think is asinine... technology patent and copyright "abuse." I think that companies have a right to protect their intellectual property, but there NEEDS to be a shorter window on these things.
- First, Mickey Mouse. When Mickey was created in 1928, the copyright term appears to have been about 50 years (which still sounds ridiculous to me). Then, over the years, industry lobbying has pushed this to the life of the author plus 50, then life plus 70, or 120 years if created by a company. (2) At the rate things are going, Mickey Mouse will never enter the public domain.
- Second, technology patents. These desperately need to be short term. The computer world is fueled by... well, technology. When a company gets a patent whose life is very long, they can abuse that patent to keep customers locked to themselves. Without some sort of sharing, we would never have the technology we have today. The invention of computers has been a very iterative process. If one company owned a patent and chose to abuse that patent (either through excessive licensing fees or simply keeping others out), we would not be where we are today.
- Third, overly broad patents are also bad (3): they give very broad grounds to litigate.
Henry Ford once said "I invented nothing new. I simply assembled the discoveries of other men behind whom were centuries of work... progress happens when all the factors that make for it are ready and then it is inevitable."
Conclusions: I think patents need to be for a very specific feature. I think they need to have a very clear expiration written into them. I think that expiration should be based on the amount of time it will take to recoup the cost of the patented item (how many times has the cost to produce Mickey been recovered?). I think that companies need to wake up and see that consumers aren't stupid, and can feel vendor lock in (and it doesn't produce positive feelings towards a company).
Update:
And now I've found a much better version of this posting as a youtube video
http://www.youtube.com/watch?v=zd-dqUuvLk4
Citations/notes
7.23.2012
HP dv6t 7000 qe mini review
So, I recently purchased a new laptop in the hopes of actually getting some 'for fun' programming done at home. My last laptop had a 1.6 GHz Celeron and 1.5 GB of RAM, which is OK for browsing the web, but doesn't work so well for crunching programs. After a few months of research, I decided on the HP dv6t 7000 qe, partially because of the price and the name-brand company (as opposed to white-label resellers like Eurocom or similar with some shady-sounding reviews), and it didn't hurt that it came with a 'free' Xbox (since I'm an alumnus of a university). It came with a few upgrades, and I added a few more, so the end result is:
- 1080p matte screen - pretty much a required upgrade; the screen is every bit as gorgeous as other reviews say it is. I didn't see the stock screen, but this one is awesome.
- Blu-ray - I don't know how much I'll use it, but it was a free upgrade, so why not?
- 8 GB RAM - The RAM that comes with it is a little slow (not that it's humanly noticeable), but I'll eventually upgrade to 16 GB of the faster stuff.
- 1 TB 5400 RPM HDD - it's really slow, and I'll eventually get a 120-ish GB SSD. The space is nice, but most of the work I do doesn't take up much room, and I'd much rather have the speed. I lived off of an 80 GB drive for years and didn't have too many problems with it (except when I tried to triple boot and run a few VMs... don't ask), so 120 GB would be great.
- Backlit keyboard and onboard Bluetooth - fairly cheap upgrades, and I've always wanted a backlit keyboard.
- One requirement I had was a fast processor, and this one came with an Intel Core i7 3610, running at 2.3 GHz but turboing up to 3.3 GHz, quad-core and hyper-threaded. My only worry is that Android emulators have been single-threaded (which would ignore the extra cores and rely only on single-core clock speed), but I think that's changing.
- nVidia GT 650M 1 GB - From the reviews I've read, the 2 GB version wasn't worth the extra money in nearly every case (including mine), so I skipped that upgrade.
- And finally, the battery upgrade was discounted, so I got the 9-cell battery.
The 9-cell battery has been my only regret, though not because I don't appreciate the battery life (I haven't managed to kill it in one sitting; it lasts quite a while). No, my regret is mostly how it looks... It is larger, which is completely understandable, but it sticks out vertically (as opposed to most Dell laptops, whose larger batteries stick out behind the laptop). This means the laptop sits up off the desk, which probably helps with cooling (it also tilts the keyboard a bit). The downside is that the battery sits off-center, which makes me self-conscious about the way it looks (it's dumb, I know, but it is otherwise a very good-looking laptop). What I wish is that they had something like HP's Envy line, which has a 'sheet' battery that covers the whole bottom of the laptop and raises the whole thing up just a tiny bit. The other problem is that the bulge makes finding a case for the laptop hard. Why is hindsight always 20/20?
Anyway, other than that, I've really liked the laptop. Some reviews said the buttons on the touchpad weren't very good, but I've actually liked them. The cover over the Ethernet port is goofy, but I probably won't use that port often anyway; the same goes for the SD card slot (weird position, but I don't use it much). I've yet to run something that noticeably kicks over to the nVidia card, but other than that, it's just a really fast, nice-looking laptop at a decent price point.
6.04.2012
Work
So, today I thought I'd write about some different kinds of work. It's just something that was on my mind this morning, and I thought I'd explore it here.
First, there is the kind of work that you just slog through. This includes things like mowing a lawn, or implementing simple code. It is often something you have done hundreds of times, and know you will do hundreds more. You just have to get it done, and your brain isn't entirely needed. It is fairly easy to estimate time required for this kind of work. Previous experience can reduce the time here, because you know the tricks to get things done more efficiently or more quickly, but it still takes time.
Then there is just hard physical labor. This is one I sometimes overlook (since I try to avoid it). It actually requires mental fortitude to force yourself to keep doing it, since your body doesn't like it. This is related to the first kind above, but different in that it is hard enough that you have to focus. After doing it for a while, things here can move into category one above, but until it becomes easier, it is... well, hard.
Next, there is the research kind of work. This takes mental focus, and can be exhausting and frustrating, because there are so many dead ends you will probably encounter on your path. In most cases, it requires an indeterminate amount of time. It could be the first thing you try that works, and then you move on... or it could be something that is un-implementable, and you have to learn enough about the subject matter in order to figure this out. Foreknowledge of the topic can reduce this time because your field of research is narrowed, but it can still be a big fat question mark as to how long it will take.
Finally, related to research but subtly different, is the thinking kind of work. This can be tricky, because it generally comes into play when you create something entirely new. It involves understanding the subject matter, but also deep thought about it. Sometimes thinking about it, focusing hard, and trying to force results can make things worse. Sometimes just percolating the idea while you shower (or while doing something from the first kind of work above) lets your subconscious mull it over, and the idea will just come to you. This is also nearly impossible to estimate time for, but it can be the most rewarding kind of project. Things that truly advance humanity generally fall into this category.
To some extent, I think all of these are required in any serious profession, be it software engineering, medical doctor, CEO, or teacher.
5.30.2012