Saving, investing and making money with technology


The Vital Guide to Hiring Financial Modeling Experts

Financial modeling has advanced tremendously over the last two decades, evolving into a true science. Nobel Prizes have been awarded on the back of financial modeling and research: the 2013 Prize in Economic Sciences went to Eugene F. Fama, Lars Peter Hansen, and Robert J. Shiller “for their empirical analysis of asset prices”. Multibillion-dollar hedge funds have been built around specific financial models, such as David Li’s Gaussian copula function or the notorious Black-Scholes model, which spawned a wave of new investment vehicles looking to leverage the models’ predictions of price movements across different asset classes.

Whilst these are examples of extremely advanced financial models, the overall point remains clear: financial modeling plays a fundamental role in modern-day financial and business decisions.

Finding top financial modelers is no easy task. The ability to assess financial modeling skills comes with practice. This guide is aimed at helping you source such talent effectively by putting candidates through their diverse paces.

Organized modeler, organized model

The bread and butter of a good financial model is that it be well-structured, clean, and easy to follow. After all, these models are usually shown to or referenced by other members of an organization. If financial models are sprawling instead of streamlined, they will largely be useless. A CFO should not have to spend precious time deciphering a document instead of focusing on its outputs and takeaways.

Reflecting this, good financial modelers tend to be structured, organized people who enjoy making well-constructed, easy-to-grasp models. A love of minutiae, such as Excel formatting, is likely to reveal a top-notch financial modeler who takes pride in what they craft.

A good first step, therefore, is to use a simple modeling exercise. This is not aimed at determining their financial skills or their intrinsic Excel knowledge, but mainly their propensity to create clean-looking models that are orderly and coherent.

An appropriate simple modeling exercise could be the following:

Q: Company X has been approached by a local boutique investment bank with an opportunity to purchase a smaller competitor in an adjacent market. The CFO has asked you to create a financial model with an upside case, base case, and downside scenario to assess the growth prospects and risks of the company.

The following guidelines on a financial model’s structure will rapidly identify good financial modelers, no matter the financials:

  • Inputs: A model must clearly show where the inputs are. This is particularly important for questions of portability. As the model is handed over to colleagues, they must be able to determine where the inputs are, in case they need to perform a sensitivity analysis on the key inputs, for example.
  • Number of tabs: Financial models require interaction between many different sections (P&L, Balance Sheet, etc.). The right build will be structured so as to lead users naturally between various financial statements, sheets containing operating models, sheets that model returns, sheets that model particular functions such as capital expenditure, and more.
  • Output tabs: Financial modeling almost always involves reaching an output or conclusion. In the scenario depicted above, the CFO is looking to understand what the prospects of the company are, meaning the outcome would concern growth and profitability projections. Whatever the purpose of the model, good modelers will make sure that the outputs or conclusions are clearly laid out.
  • Key to financial terms used: It is unwise to assume every person looking at a model is fully financially literate. Capex and ROIC are common enough abbreviations, but if a model mentions NCAVPS, a key explaining that this means Net Current Asset Value Per Share can be a blessing.

Excel still rules the roost

While Excel may have a reputation as the aging veteran of financial modeling tools, there is a good reason for its staying power: the breadth of options it provides lets a modeler handle virtually any input. There are hundreds of relevant Excel formulas, and a good financial modeler needs to be familiar with a multitude of them to achieve any particular result.

The best formulas have two advantages:

  • They achieve a particular result in a lightweight way, without encumbering the model, so the file remains fairly light and quick to load.
  • They can be understood and modified by others, and they continue to work when other parts of the model are altered. This is particularly relevant when formulas are combined to achieve a particular result.

Q: Company Y is updating its business model ahead of a Series A round of fundraising. However, its first attempt at financial modeling in Excel was not done proficiently, resulting in a very heavy model that is cumbersome to use and takes a long time to recalculate. Please update, improve, and simplify it.

Examples of how a modeler can make a model more lightweight include:

  • Non-volatile formulas: Certain types of formulas in Excel are considered “volatile” because they are recalculated every time any change is made, which makes a sheet much slower. A good modeler will identify these and replace them with non-volatile alternatives; a specific example would be replacing OFFSET formulas with INDEX formulas.
  • Pivot tables: A common reason for slow Excel sheets is large amounts of data. If summary tables or output tabs are made using formulas referencing large amounts of data, the model will slow down significantly. A good way to avoid this is through the use of pivot tables.
  • Tables can hold data: Tables are a good way to store data due to their more dynamic nature. If data is added or removed, the table still works, which allows for easier referencing in formulas (for instance, by referencing a table column as opposed to the exact array of cells).
  • Macros: Macros allow for a host of added functionality. Whilst many models will not need a macro, a candidate who understands how to create and use focused macros can clearly provide more advanced Excel functionality when the time comes.
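As a concrete illustration of the volatile-formula point above, a hedged sketch (the cell ranges are invented for illustration): both formulas below sum the same twelve cells, but Excel recalculates the OFFSET version on every change anywhere in the workbook, while the INDEX version is only recalculated when its own inputs change.

```
Volatile:      =SUM(OFFSET($A$1,0,0,12,1))
Non-volatile:  =SUM($A$1:INDEX($A$1:$A$100,12))
```

The trick in the second formula is that INDEX, used in its reference form, returns a cell reference ($A$12 here), so it can serve as the endpoint of a range without triggering full-workbook recalculation.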

A real data sleuth

“An unsophisticated forecaster uses statistics as a drunken man uses lamp-posts – for support rather than illumination.”
—Andrew Lang

Financial modelers will often be tasked with producing models in the absence of much guidance or data. When faced with this situation, experienced modelers will know which type of information they need to source, where to find it, and most importantly, how to filter out data which is irrelevant to their exercise. In other words, strong modelers should be able to construct their own scenarios and guidance to produce the results they have been asked to produce.

A good way to test a candidate’s ability in this regard could be as follows:

Q: The CEO of a Fortune 500 company is considering a takeover bid for a major competitor. He has asked you to perform a valuation analysis of the takeover candidate, through a financial model, while applying the right methodology.

A financial modeling expert must be familiar with the main methods of company valuation. But more importantly, for this exercise, they should be familiar with the sources of data one would use in order to perform a company valuation. The candidate would first need to model some estimates of the company’s performance going forward.

Forward financials can be estimated by analyzing historical figures to predict future performance, learning about overall market dynamics, and using the forward estimates of comparable companies.
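As a rough illustration of this estimation step, a minimal sketch in Python. All figures, growth rates, and the 50/50 blend weight are invented assumptions for illustration, not data from any real company:

```python
# Sketch: extrapolate forward revenue from historical figures, blended
# with a market-level growth estimate. All numbers are illustrative.

historical_revenue = [100.0, 112.0, 123.0, 138.0]  # last four years, $m

# Historical CAGR as a naive baseline growth rate
years = len(historical_revenue) - 1
cagr = (historical_revenue[-1] / historical_revenue[0]) ** (1 / years) - 1

# Blend with an assumed market growth rate (e.g. from industry reports)
market_growth = 0.08
blended = 0.5 * cagr + 0.5 * market_growth

# Project five forward years from the latest actual figure
forecast = []
rev = historical_revenue[-1]
for _ in range(5):
    rev *= 1 + blended
    forecast.append(round(rev, 1))

print(f"CAGR: {cagr:.1%}, blended growth: {blended:.1%}")
print("Forecast:", forecast)
```

In a real exercise the blend would of course be judgment-driven, informed by the market research and comparable-company estimates described below.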

A good financial modeler would be expected to investigate in the following way:

  • Request specific historical company information.
  • Analyze historical performance and ask questions to understand the factors driving the company’s performance.
  • Become familiar with the market the company operates in: competitors, products, market share, etc.
  • Seek information on comparable companies, such as private industry reports (Pitchbook, Thomson One, Capital IQ, Bloomberg). For American companies, the SEC provides original filings of companies’ financials, while the likes of Google Finance and Yahoo Finance can be used to look into publicly listed companies.
  • Look for external analysis (analyst research reports, media articles) to build an understanding of the company’s reputation and overall industry drivers.

The point of the above exercise is not to have the candidate perform a valuation analysis, but to hear how they would go about sourcing the information needed to build their model and provide the analysis.

Step back and see the bigger picture

Despite the above, an extremely common pitfall in financial modeling is the inability to take a step back and see the bigger picture. It is all too easy to get bogged down in the details of a model and forget that a model is supposed to simulate a real-world situation. Top financial modelers need to constantly ask themselves the following questions:

  • Do the numbers I am showing make sense?
  • Is the outcome of my model feasible?
  • How do my numbers compare to real-life comparables?

If financial models are meant to guide senior executives in their decision-making, the modelers must anticipate what the executives need and the questions they expect the model to answer. Done well, this ensures the model is well structured, contains no superfluous data, and avoids time lost to unreasonable outcomes.

It is worth testing a candidate’s thought process at this stage.

Q: A CFO is considering investing in a new production facility, and has been soliciting bank proposals for financing. His bankers have sent him a model of what sort of financing structure they could provide as well as their forecasts on the investment’s performance. The CFO has asked you to analyze the numbers and draw some conclusions and recommendations.

In performing the analysis, a top financial modeler would be expected to do the following in order to immediately understand the structure of the model:

  • Understand where the key inputs are (pertaining to operational projections as well as to the financing structure).
  • Define the key outputs/metrics the CFO is looking for to make a decision. Here, the most appropriate metric is likely to be the Net Present Value (NPV) of the projected investment. Strong modelers will be familiar with the concept behind NPV, but should also ask which key metrics the decision will be based on, so they can feature prominently in the model.
  • Understand how the model ties everything together. Once the key inputs and outputs have been determined, the modeler should figure out how the model uses the inputs to create the desired outputs. In this case, the two key parts of the model will be the operational projections of the investment and the financing structure being proposed by the bank.
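The NPV metric mentioned above is simple to sketch. A minimal illustration in Python; the cash flows and the 10% discount rate are invented for the example, not taken from any real proposal:

```python
# Sketch: NPV of a projected investment, the kind of output metric the
# CFO might base the decision on. All figures are illustrative.

def npv(rate: float, cash_flows: list) -> float:
    """Discount each cash flow back to present value and sum them.

    cash_flows[0] is the upfront outlay at t=0 (typically negative).
    """
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(cash_flows))

# Upfront investment of 10m followed by five years of operating cash flow
flows = [-10.0, 2.5, 3.0, 3.5, 3.5, 3.5]
discount_rate = 0.10  # assumed cost of capital

result = npv(discount_rate, flows)
print(f"NPV at {discount_rate:.0%}: {result:.2f}m")
```

A positive NPV suggests the investment creates value at the assumed cost of capital; the modeler’s real job is defending the cash-flow and rate assumptions behind it.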

Once this is established, a stress testing process should be carried out to put the model through its paces.

  • Providing for variables – Once the model structure is defined, the modeler should see how key outputs change with a given change in the inputs. This also allows for forecasts to cover more extreme scenarios.
  • Trying different financing structures – In this example, the modeler can see how different financing structures might influence the outcome. For instance, a comparison of how debt and equity instruments affect the desired outputs provides the CFO with some good perspective.
  • Test operational assumptions – The model can only work if the assumptions within it make sense. Wildly optimistic assumptions on the cash flows of the investment would produce skewed results and help no one. A good modeler is likely to question these assumptions until they are satisfied.
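The stress-testing steps above can be sketched programmatically. A hedged illustration; the figures, scenarios, and the simplification of representing financing structures as different discount rates are all assumptions for the example:

```python
# Sketch: stress-test a simple investment model by varying a key input
# and comparing financing structures. All figures are illustrative.

def project_npv(rate, outlay, annual_cf, years, growth):
    """NPV of an upfront outlay followed by growing annual cash flows."""
    total = -outlay
    cf = annual_cf
    for t in range(1, years + 1):
        total += cf / (1 + rate) ** t
        cf *= 1 + growth
    return total

base = dict(rate=0.10, outlay=10.0, annual_cf=3.0, years=5, growth=0.02)

# Vary the cash-flow assumption to cover downside and upside scenarios
for scenario, cf in [("downside", 2.0), ("base", 3.0), ("upside", 4.0)]:
    print(scenario, round(project_npv(**{**base, "annual_cf": cf}), 2))

# A cheaper financing structure shows up here as a lower discount rate
for label, rate in [("all-equity", 0.12), ("mixed debt/equity", 0.09)]:
    print(label, round(project_npv(**{**base, "rate": rate}), 2))
```

Even this toy version makes the point: the decision flips between scenarios, which is exactly the sensitivity a CFO needs laid out before committing.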

To summarize, a good modeler cannot focus only on the model and its complexities. Putting themselves in the shoes of the decision-maker allows them to understand what drives these decisions and make the model reflect that.

Pride and joy in their work

As a final note, it has become very clear that the best modelers tend to really enjoy modeling. Like an architect approaching the completion of a building, they will tend to see it as an art. They will enjoy discussing modeling tips and tricks with their coworkers. And they will spend time on their own learning about new skills.

When assessing a candidate, they should be allowed to wax lyrical: to share their favorite model or speak of their proudest creation, revealing where their pride comes from or how its complex parts were put together. An interviewer should let this process flow organically, allowing themselves to be given the grand tour of a financial modeler’s portfolio. The pride and fulfillment they derive from their work is a clear benchmark of how they will approach future challenges set before them.

Financial modeling is a mission-critical task for many companies. Having strong financial modelers on the team can make a CFO’s life far easier, streamline decision-making and often help gain an edge over one’s competitors. As such, hiring expert talent in this area is fundamental. Nevertheless, judging a candidate’s ability to model is not easy. Much like a model itself, the interview process cannot be static; it must ebb and flow to test out the various pointers laid out in this guide. Dive in.

This article was originally published on Toptal, see it here.


Are We Creating An Insecure Internet of Things (IoT)? Security Challenges and Concerns

The Internet of Things (IoT) has been an industry buzzword for years, but sluggish development and limited commercialization have led some industry watchers to start calling it the “Internet of NoThings”.

Double puns aside, IoT development is in trouble. Aside from spawning geeky jokes unfit for most social occasions, the hype did not help; in fact, I believe it actually caused a lot more harm than good. IoT has a few real problems, but the baseless hype that came with all the positive coverage is one we could do without. The upside of generating more attention is clear: more investment, more VC funding, more consumer interest.


However, these come with an added level of scrutiny, which has made a number of shortcomings painfully obvious. After a couple of years of bullish forecasts and big promises, IoT security seems to be the biggest concern. The first few weeks of 2015 were not kind to this emerging industry, and most of the negative press revolved around security.

Was it justified? Was it just “fear, uncertainty and doubt” (FUD), brought about by years of hype? It was a bit of both; although some issues may have been overblown, the problems are very real, indeed.

From “Year Of IoT” To Annus Horribilis For IoT

Many commentators described 2015 as “the year of IoT,” but so far, it has been a year of bad press. Granted, there are still ten months to go, but negative reports keep piling on. Security firm Kaspersky recently ran a damning critique of IoT security challenges, with an unflattering headline, “Internet of Crappy Things”.

Kaspersky is no stranger to IoT criticism and controversy; the firm has been sounding alarm bells for a while, backing them up with examples of hacked smart homes, carwashes and even police surveillance systems. Whether a hacker wants to wash their ride free of charge, or stalk someone using their fitness tracker – IoT security flaws could make it possible.

Wind River published a white paper on IoT security in January 2015, and the report starts off with a sobering introduction. Titled Searching For The Silver Bullet, it summarizes the problem in just three paragraphs, which I will condense into a few points:

  • Security must be the foundational enabler for IoT.
  • There is currently no consensus on how to implement security in IoT on the device.
  • A prevalent, and unrealistic, expectation is that it is somehow possible to compress 25 years of security evolution into novel IoT devices.
  • There is no silver bullet that can effectively mitigate the threats.

However, there is some good news; the knowledge and experience are already here, but they have to be adapted to fit the unique constraints of IoT devices.

Unfortunately, this is where we as system security developers stumble upon another problem, a hardware problem.

U.S. Federal Trade Commission chairwoman, Edith Ramirez, addressed the Consumer Electronics Show in Las Vegas earlier this year, warning that embedding sensors into everyday devices, and letting them record what we do, could pose a massive security risk.

Ramirez outlined three key challenges for the future of IoT:

  • Ubiquitous data collection.
  • Potential for unexpected uses of consumer data.
  • Heightened security risks.

She urged companies to enhance privacy and build secure IoT devices by adopting a security-focused approach, reducing the amount of data collected by IoT devices, increasing transparency, and providing consumers with a choice to opt out of data collection.

Ramirez went on to say that developers of IoT devices have not spent time thinking about how to secure their devices and services from cyberattacks.

“The small size and limited processing power of many connected devices could inhibit encryption and other robust security measures,” said Ramirez. “Moreover, some connected devices are low-cost and essentially disposable. If a vulnerability is discovered on that type of device, it may be difficult to update the software or apply a patch – or even to get news of a fix to consumers.”

While Ramirez is spot on in most respects, I should note that the Internet went through a similar phase two decades ago. There were a lot of security concerns, and the nineties saw the emergence of internet-borne malware, DDoS attacks, sophisticated phishing, and more. Even though Hollywood depicted a dystopian future in some films, we have ended up with kittens on social networks and the occasional high-profile security breach.

The Internet is still not secure, so we can’t expect IoT to be secure, either. However, security is constantly evolving to meet new challenges; we’ve seen it before, and we’ll see it again with IoT and subsequent connected technologies.

IoT Hardware Is And Will Remain A Problem

Some of you will be thinking that the hardware issues mentioned by the FTC boss will be addressed; yes, some of them probably will.

As the IoT market grows, we will see more investment, and as hardware matures, we will get improved security. Chipmakers like Intel and ARM will be keen to offer better security with each new generation, since security could be a market differentiator, allowing them to grab more design wins and gain a bigger share.

Technology always advances, so why not? New manufacturing processes generally result in faster and more efficient processors, and sooner or later, the gap will close, thus providing developers with enough processing power to implement better security features. However, I am not so sure this is a realistic scenario.


First of all, IoT chips won’t be big money-makers, since they are tiny and usually based on outdated architectures. For example, the first-generation Intel Edison platform is based on Quark processors, which essentially use the same CPU instruction set and much of the design of the ancient Pentium P54C. However, the next-generation Edison microcomputer is based on a much faster processor with Atom Silvermont cores, found in many Windows and Android tablets today. (Intel shipped ~46m Bay Trail SoCs in 2014.)

On the face of it, we could end up with relatively modern 64-bit x86 CPU cores in IoT devices, but they won’t come cheap, they will still be substantially more complex than the smallest ARM cores, and therefore will need more battery power.

Cheap and disposable wearables, which appear to be the FTC’s biggest concern, won’t be powered by such chips, at least, not anytime soon. Consumers may end up with more powerful processors, such as Intel Atoms or ARMv8 chips, in some smart products, like smart refrigerators or washing machines with touchscreens, but they are impractical for disposable devices with no displays and with limited battery capacity.

Selling complete platforms, or reference designs for various IoT devices, could help chipmakers generate more revenue while introducing more standardisation and security. The last thing the industry needs is more unstandardised devices and more fragmentation. This may sound like a logical and sound approach, since developers would end up with fewer platforms and more resources could be allocated to security; however, security breaches would then also affect a bigger number of devices.

Money Is Pouring In, Analysts Remain Bullish, What Could Possibly Go Wrong?

One of the most common ways of tackling any problem in the tech industry is to simply throw money at it. So, let’s see where we stand right now in terms of funding rather than technology.

According to research firms IDC and Gartner, IoT will grow to such an extent that it will transform the data centre industry by the end of the decade. Gartner expects the IoT market will have 26 billion installed units by 2020, creating huge opportunities for all parties, from data centres and hardware makers, to developers and designers. IDC also expects the IoT industry to end up with “billions of devices and trillions of dollars” by the end of the decade.

Gartner’s latest comprehensive IoT forecast was published in May 2014, and it also includes a list of potential challenges, some of which I’ve already covered:

  • Security: Increased automation and digitization creates new security concerns.
  • Enterprise: Security issues could pose safety risks.
  • Consumer Privacy: Potential of privacy breaches.
  • Data: Lots of data will be generated, both for big data and personal data.
  • Storage Management: Industry needs to figure out what to do with the data in a cost-effective manner.
  • Server Technologies: More investment in servers will be necessary.
  • Data Centre Network: WAN links are optimised for human-interface applications; IoT is expected to dramatically change patterns by transmitting data automatically.

All these points (and more) must be addressed sooner or later, often at a substantial cost. We are no longer talking about tiny IoT chips and cheap toys based on such chips; this is infrastructure. This is a lot of silicon in server CPUs, expensive DDR4 ECC RAM and even bigger SSDs, all housed in expensive servers, in even bigger data centres.

That’s just the tip of the iceberg; industry must tackle bandwidth concerns, data management and privacy policies, and security. So how much money does that leave for security, which is on top of Gartner’s list of IoT challenges?

A lot of money is already pouring into the industry, VCs are getting on board and the pace of investment appears to be picking up. There were also a number of acquisitions, often involving big players like Google, Qualcomm, Samsung, Gemalto, Intel and others. There is a list of IoT-related investments on Postscapes. The trouble with many of these investments, especially those coming from VCs, is that they tend to focus on “shiny” things, devices that can be marketed soon, with a potentially spectacular ROI. These investments don’t do much for security or infrastructure, which would basically have to trail IoT demand.

Big players will have to do the heavy lifting, not VC-backed startups and toymakers. Agile and innovative startups will certainly play a big role by boosting adoption and creating demand, but they can’t do everything.

So let’s think of it this way: even a small company can build a car, or tens of thousands of cars, but it can’t build highways, roads, petrol stations and refineries. That same small company can build a safe vehicle using off-the-shelf technology to meet basic road safety standards, but it couldn’t build a Segway-like vehicle that would meet the same safety standards, nor could anyone else. Automotive safety standards could never apply to such vehicles, and we don’t see people commuting to work on Segways, so we cannot expect traditional tech security standards to apply to underpowered IoT devices, either.

Having commuters checking their email or playing Candy Crush while riding their Segways through rush hour traffic does not sound very safe, does it? So why should we expect IoT devices to be as safe as other connected devices, with vastly more powerful hardware and mature operating systems? It may be a strange analogy, but the bottom line is that IoT devices cannot be expected to conform to the same security standards as fully fledged computers.

But Wait, There Weren’t That Many IoT Security Debacles…

True, we don’t see a lot of headlines about spectacular IoT security breaches, but let me put it this way: how many security related headlines did you see about Android Wear? One? Two? None? It is estimated there are fewer than a million Android Wear devices in the wild, so they’re simply not a prime target for hackers, or a subject for security researchers.

How many IoT devices do you own and use right now? How many does your business use? That’s where the “Internet of NoThings” joke comes from, most people don’t have any. The numbers keep going up, but the average consumer is not buying many, so where is that growth coming from? IoT devices are out there and the numbers are booming, driven by enterprise rather than the consumer market.

Verizon and ABI Research estimate that there were 1.2 billion different devices connected to the internet last year, but by 2020, they expect as many as 5.4 billion B2B IoT connections.

Smart wristbands, toasters and dog collars aren’t a huge concern from a security perspective, but Verizon’s latest IoT report focuses on something a bit more interesting: enterprise.

The number of Verizon’s machine-to-machine (M2M) connections in the manufacturing sector increased by 204 percent from 2013 to 2014, followed by finance and insurance, media and entertainment, healthcare, retail and transportation. The Verizon report includes a breakdown of IoT trends in various industries, so it offers insight into the business side of things.

The overall tone of the report is upbeat, but it also lists a number of security concerns. Verizon describes security breaches in the energy industry as “unthinkable,” describes IoT security as “paramount” in manufacturing, and let’s not even bring up potential risks in healthcare and transportation.

How And When Will We Get A Secure Internet of Things?

I will not try to offer a definitive answer on how IoT security challenges can be resolved, or when. The industry is still searching for answers and there is a long way to go. Recent studies indicate that the majority of currently available IoT devices have security vulnerabilities. HP found that as many as 70 percent of IoT devices are vulnerable to attack.

While growth offers a lot of opportunities, IoT is still not mature, or secure. Adding millions of new devices, hardware endpoints, billions of lines of code, along with more infrastructure to cope with the load, creates a vast set of challenges, unmatched by anything we have experienced over the past two decades.

That is why I am not an optimist.

I don’t believe the industry can apply a lot of security lessons to IoT, at least not quickly enough, not over the next couple of years. In my mind, the Internet analogy is a fallacy, simply because the internet of the nineties did not have to deal with such vastly different types of hardware. Using encryption and wasting clock cycles on security is not a problem on big x86 CPUs or ARM SoCs, but it won’t work the same way with tiny IoT devices with a fraction of the processing power and a much different power consumption envelope.

More elaborate processors, with a bigger die, need bigger packaging and have to dissipate more heat. They also need more power, which means bigger, heavier, more expensive batteries. To shave off weight and reduce bulk, manufacturers would have to resort to using exotic materials and production techniques. All of the above would entail more R&D spending, longer time-to-market and a bigger bill of materials. With substantially higher prices and a premium build, such devices could hardly be considered disposable.


So what has to be done to make IoT secure? A lot. And everyone has a role to play, from tech giants to individual developers.

Let’s take a look at a few basic points, such as what can be done, and what is being done, to improve IoT security now:

  • Emphasise security from day one
  • Lifecycle, future-proofing, updates
  • Access control and device authentication
  • Know your enemy
  • Prepare for security breaches

A clear emphasis on security from day one is always a good thing, especially when dealing with immature technologies and underdeveloped markets. If you are planning to develop your own IoT infrastructure, or deploy an existing solution, do your research and stay as informed as possible. This may involve trade-offs, as you could be presented with a choice of boosting security at the cost of compromising the user experience, but it’s worth it as long as you strike the right balance. This cannot be done on the fly; you have to plan ahead, and plan well.

In the rush to bring new products and services to market, many companies are likely to overlook long-term support. It happens all the time, even in the big leagues, so we always end up with millions of unpatched and insecure computers and mobile devices. They are simply too old for most companies to bother with, and it is bound to be even worse with disposable IoT devices. Major phone vendors don’t update their software on 2-3 year old phones, so imagine what will happen with $20 IoT devices that might be on your network for years. Planned obsolescence may be a part of it, but the truth is that updating old devices does not make much financial sense for the manufacturer since they have better things to do with their resources. Secure IoT devices would either have to be secure by design and impervious from the start, or receive vital updates throughout their lifecycle, and I’m sure you will agree neither option sounds realistic, at least, not yet.

Implementing secure access control and device authentication sounds like an obvious thing to bring up, but we are not dealing with your average connected device here. Creating access controls, and authentication methods, that can be implemented on cheap and compact IoT devices without compromising the user experience, or adding unnecessary hardware, is harder than it seems. As I mentioned earlier, lack of processing power is another problem, as most advanced encryption techniques simply wouldn’t work very well, if at all. In a previous post, I looked at one alternative: outsourcing encryption via blockchain technology. I am not referring to the Bitcoin blockchain, but to similar crypto technologies that are already being studied by several industry leaders.
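To make the constraint concrete, one lightweight option that fits cheap devices is symmetric challenge-response authentication, which avoids expensive public-key operations entirely. A minimal sketch (the key provisioning shown here is illustrative; a real deployment needs unique per-device keys provisioned securely at manufacture):

```python
# Sketch: HMAC-based challenge-response device authentication, the kind
# of scheme a constrained IoT device could support without heavyweight
# public-key cryptography. Key handling here is illustrative only.

import hashlib
import hmac
import os

DEVICE_KEY = os.urandom(32)  # stand-in for a secret provisioned at manufacture

def device_respond(challenge: bytes, key: bytes) -> bytes:
    """Device side: prove possession of the key without ever sending it."""
    return hmac.new(key, challenge, hashlib.sha256).digest()

def server_verify(challenge: bytes, response: bytes, key: bytes) -> bool:
    """Server side: recompute the MAC and compare in constant time."""
    expected = hmac.new(key, challenge, hashlib.sha256).digest()
    return hmac.compare_digest(expected, response)

challenge = os.urandom(16)  # fresh nonce per authentication attempt
response = device_respond(challenge, DEVICE_KEY)
print("authenticated:", server_verify(challenge, response, DEVICE_KEY))
```

A SHA-256 HMAC is cheap enough for most microcontrollers, and the fresh nonce prevents replay; the hard, unsolved part remains provisioning and protecting the per-device keys at scale.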

Si vis pacem, para bellum – if you want peace, prepare for war. It is vital to study threats and potential attackers before tackling IoT security. The threat level is not the same for all devices and there are countless considerations to take into account; would someone rather hack your daughter’s teddy bear, or something a bit more serious? It’s necessary to reduce data risk, keep as much personal data as possible from IoT devices, properly secure necessary data transfers, and so on. However, to do all this, you first need to study the threat.

If all else fails, at least be prepared for potential security breaches. Sooner or later they will happen, to you or someone else (well, preferably a competitor). Always have an exit strategy, a way of securing as much data as possible and rendering compromised data useless without wrecking your IoT infrastructure. It is also necessary to educate customers, employees and everyone else involved in the process about the risks of such breaches. Instruct them in what to do in case of a breach, and what to do to avoid one.

Of course, a good disclaimer and TOS will also help if you end up dealing with the worst-case scenario.

This article was originally posted on Toptal.

Power Efficient Home Offices Can Save Money And Polar Bears

Does your father still use that fancy fishing rod you got him last spring? Did your significant other like the new watch, and that witty custom engraving? Do you enjoy working on your new 4K monitor, sharp enough to slice through a medium rare steak?

These might be the perks and gifts you could have afforded, had you paid more attention to power efficiency in your home offices. While it’s true that Toptal’s goal is to screen and select the top 3% of freelance software engineers, it’s also true that many remote developers overlook being power efficient. Also, we don’t exactly have a corporate headquarters with fancy plaques, reserved parking, and corner offices for top brass.

Most of us work from home, or from our own offices – most of us pay our own bills.

Of course, most freelancers didn’t start off that way. About 15 years ago, I was sweating and freezing at the same time as I was overseeing a green screen shoot in a cramped studio. There were a couple of dozen kilowatts worth of lighting and other equipment and the AC was on full blast. But it wasn’t enough, because the studio was like a sauna from the waist up and a fridge from the waist down. The AC couldn’t keep up and all the cold air just sank to the floor in seconds. It went on for days, and my employers weren’t too happy to see the damage at the end of the month. Our makeup artist almost ran out of supplies too.

But I wasn’t paying the electric bill, the company was.

Granted, this is a drastic example and most remote workers don’t burn hundreds of kilowatt-hours per week, but shaving a few dozen watts off your continuous draw can make a big difference at the end of the year. So, if you want to save a few hundred dollars or more and use them to treat your loved ones or yourself to something nice (rather than burning dead dinosaurs for fuel), keep reading.

Obvious Home Office Power Efficiency Tips

Let’s start with some more or less obvious tips. I won’t waste much of your time with the basics. Anyone can Google “home office power saving” and come up with loads of different guides, but I’ll save you the trouble and list the most important points:

  • Use power efficient lighting
  • Set your thermostat correctly (if available)
  • Use an air conditioning unit for heating when possible
  • Check your insulation
  • Don’t forget to turn off lights and hardware
  • Select the right hardware and set it up for efficiency
  • Use good power strips, or smart sockets

Let’s take a closer look at these points, with an emphasis on the needs of the average Toptaler.

Power Efficient Home Office Lighting

Lighting is a good place to start. New bulbs are easy to retrofit, and high quality LED lighting is available around the world and on many e-commerce sites with worldwide shipping.

Traditional incandescent lighting is dead, and has been replaced by CFL and LED lighting. Halogens are still used, but they can’t keep up with LEDs in terms of efficiency. Basically, you should focus on LED bulbs or modern fluorescent tubes. Both are available in a range of different color temperatures, so if you’re a “cool white” person like myself you should have no trouble finding something to match your needs – just make sure to check the Kelvin temperature rating and find the right one.

Since we cater to a tech savvy audience, we should also mention connected LED RGB bulbs. Philips pioneered the concept a few years ago with its Hue LED lighting range, and cheaper alternatives are starting to show up as well. These solutions allow you to change the color temperature and intensity with just a few taps on your mobile, so they are ideal for people whose living room doubles as their office. They allow you to work under daylight or cool white light, then switch to low intensity warm light when you unwind and settle onto the sofa to watch TV.

Heating and Cooling Your Home Offices

If you happen to have a big home, this may be a big item on your power efficiency list. In case you are used to heading out to the office and coming back home 10 hours later, you’ve probably set your heating and air conditioning accordingly. But what happens when you start working from home?

If you have central aircon and heating, the most obvious approach would be to keep everything on because you don’t head out for work anymore – but this may prove very expensive in the long run. If you live in a studio apartment, you don’t have much choice, but if you’re in a house you do.

While you are technically still at home, you don’t have to heat or cool every single room just because you’re working in your home office. You need to focus on one part of your home and that’s it – don’t bother with the rest and treat it as if you were out. Of course, if you have kids or share your home with other people, this is not an option.

You can try to maximize heating efficiency by double-checking insulation in your home office, maybe even investing a bit extra compared to the rest of your home, because that’s where you will be spending most of your time. You can also consider using small space heaters and portable fans to cut costs when it’s not necessary to heat or cool your whole home. Depending on the local climate, air conditioning can also be a significant expense during the summer. If you install a standalone AC unit in your home office, you can use it to efficiently cool down just one room without wasting power on the rest of your home. Modern inverter air conditioners deliver exceptional efficiency, and in addition to cooling they are also the most efficient way of heating your office, provided the temperature difference isn’t too big (this really depends on your location).

Also, if you happen to live in a warm climate and use air conditioning several months a year, you also need to take into account heat generated by all hardware in your office and heat generated by yourself. I am not going to quote Morpheus from The Matrix, because his BTU figures were a bit off and we’re looking for watts, but the average person continuously dissipates more than 100W of heat while sitting. Depending on what you do, your computer and monitor could use 100W to 500W, inefficient lighting 50W or more, and so on.

In a warm climate, all this extra heat has to be removed via ventilation or air conditioning. You will pay for each wasted watt.
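To put a rough number on that, here is a quick back-of-the-envelope calculator. The function and the coefficient of performance (COP) of 3 are my own illustrative assumptions, not figures from the article; real AC units vary.

```python
# Rough estimate of what waste heat costs when AC has to remove it.
# Assumption (illustrative): an AC coefficient of performance (COP) of 3,
# i.e. removing 3 Wh of heat costs about 1 Wh of electricity.

def cooling_cost(waste_watts, hours_per_day, days, price_per_kwh, cop=3.0):
    """Electricity cost of removing a steady waste-heat load via air conditioning."""
    heat_kwh = waste_watts * hours_per_day * days / 1000.0
    ac_kwh = heat_kwh / cop  # AC input energy needed to move that heat outside
    return ac_kwh * price_per_kwh

# ~300W of waste heat (person + PC + lighting), 8h/day, 100 cooling days a year
print(round(cooling_cost(300, 8, 100, 0.30), 2))
```

Even under these modest assumptions, a few hundred watts of waste heat adds a noticeable line to the summer bill on top of the power the hardware itself consumes.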

This wouldn’t be the Toptal blog if we didn’t mention some geekier alternatives as well, so we will mention smart tech designed to shave off a few pennies from your energy bill. The Nest thermostat is probably the best known solution out there. Home automation is the next big thing, and we will undoubtedly see a lot more connected thermostats and all sorts of clever smart home appliances that will save more energy.

Nest claims its thermostat will pay for itself in about two years, assuming it’s installed in an average American home with a programmable thermostat. In any case, Nest is just a sign of things to come. While smart technology can’t make up for efficient heating systems and good insulation, in some cases it could cut your heating bill by a few percent (and we are talking about relatively big bills).

Of course, the easiest way to keep heating costs down is to drop the temperature, but I am not going to advocate that you freeze just to save a few bucks. Setting a sedentary reminder on a smart device is a good alternative, plus it’s good for your health. Instead of sitting behind your desk for 2-3 hours in one go, make sure to take short breaks every 45-60 minutes or so. If you’re working from home, you can use this time to do some basic chores. You don’t have to exercise, but taking the trash out, doing the dishes, or something along those lines should be enough to get your blood pumping so you won’t feel nearly as cold. Plus, it’s good for your eyes, spine, and cardiovascular system.

Another geeky way of boosting power efficiency comes from Elon Musk’s Tesla. The electric car maker recently announced Powerwall, a massive lithium-ion battery pack designed for homes. It is supposed to store energy during “peak solar” and allow homeowners to reuse it when the sun goes down.

The idea is not entirely new; a similar approach has been used by power companies for decades. For example, power companies use pumped-storage hydroelectric plants to store power during the night and reuse it during the day. Powerwall allows the individual homeowner to store power in pretty much the same way. It’s not just about solar. In many cases utility companies offer better rates during the night, and sell more expensive electricity during peak hours.

The Tesla Powerwall is set to ship later this year, with prices starting at $3000. The price may be a problem, especially if you need to replace the batteries every few years. Another issue may be the whole peak rate angle – if everyone decided to use such solutions, the concept of peak rates would probably disappear altogether, as home power storage would make demand flat around the clock. Economy of scale is another question. As the world moves to sustainable power generation, using solar and wind power, we will need more power storage options. Utilities will be able to create them at a much bigger scale – you can store a lot more energy in an artificial lake than a bunch of expensive lithium ion batteries that have to be replaced every 5-10 years.

Choosing the Right Hardware

Different people use different hardware in their home offices, which raises a few problems when dealing with this issue. For example, someone who spends most of their time working in standard office applications or doing much of their work in a browser doesn’t exactly need a very powerful computer. I know I don’t, but my friend has to play around with EMC Documentum on loads of virtual machines locally, so he needs a lot more processing power, RAM, and storage. Designers need bigger and better screens, more powerful GPUs, and so on.

However, no matter what you do there is probably room for improvement.

The good news is that the latest generations of Intel and AMD processors offer good power efficiency, and in many cases their integrated graphics are good enough for a lot of users, so many people are choosing to dispense with powerful discrete graphics. In case you don’t need exceptionally fast hardware or graphics capable of running the latest games, a mid-range PC or Mac should be more than enough.

If you are tempted to keep using your ancient desktop just because it’s “good enough”, you may want to do some math and figure out whether it’s worth it or not. The PC update cycle has slowed down to a crawl, and Intel claims the age of the average PC is about four years. If you have such an antiquated system, chances are you could get the same level of performance, or superior performance from a much quieter small form factor (SFF) desktop with vastly improved power efficiency.

Actually, I am typing this on one such machine because my old desktop decided to bid farewell to this cruel world a couple of weeks ago. It’s a PC, not a Mac mini, but the hardware is basically identical – an Intel Core i5-4200U processor with integrated graphics, 8GB of RAM, SSD and HDD. I usually use this machine as my HTPC and a portable backup, but it’s faster than my dearly departed ATX desktop yet consumes about one fifth of the power, thanks to an ultra-low voltage mobile processor manufactured in a superior node (22nm vs 45nm) and modern hardware all around.

Of course, many developers use notebooks as their daily drivers, and they tend to offer the same efficiency as SFF systems based on mobile chips. This is also true of all-in-one (AIO) systems like the iMac, as they are usually based on notebook hardware mated to a big screen.

If you think you’re in the clear because you are using your notebook, it all depends on the type of processor you have inside, its age, and efficiency. Also, if you use an old external monitor, for example a five-year-old CCFL-backlit 1080p panel, chances are you are already wasting a lot more power than you would with a new LED-backlit monitor.

Even if you need a very powerful desktop, it’s always a good idea to choose the latest and most efficient components out there. They might cost a bit more than last year’s hardware, but if you will use the system for years and eight hours each day, even small efficiency gains should justify the premium. In some situations, a smart upgrade will pay for itself.

Another option is to get a secondary system and use it for only some tasks. For example, why use your office machine to watch streamed content on your TV if you can get a mini-PC or a stick PC for $100-$200? It will save power and reduce wear on more expensive components in your primary system. Intel is planning to bring more ultra-low voltage processors to the stick PC form factor, so a few years down the line these tiny machines could be used for more serious stuff.

Setting up Home Office Hardware

If you’ve decided to get a new, more efficient desktop or notebook with a brand new monitor in tow, you can expect some efficiency gains, but you also need to set everything up correctly.

I know quite a few power users who never embraced a lot of power efficiency features on their machines, not because they didn’t care but because many of these early features were immature and impractical. However, the latest versions of Windows and OS X, coupled with modern x86 platforms, offer exceptional boot and recovery times. It has a bit to do with software, a bit to do with processors and motherboards, and a lot to do with fast solid state storage. I won’t go into the details, as you need to do a bit of research depending on which platform you use and what sort of hardware you have under the bonnet, but there should be some room for improvement. Set hibernation and sleep profiles to best meet your needs, with an eye on efficiency.

Obviously, on more powerful systems there is more room for improvement. You can’t save a lot if you’re already working on a new notebook, but you can on a desktop. If you are building your own desktop system, as many enthusiasts do, choosing a quality power supply unit with a top-notch efficiency rating is important. Oh, one more thing – don’t get a huge PSU unless you absolutely need it – get one that meets your wattage needs and sports a Platinum or Gold efficiency rating.

Using high-quality power strips is also a good idea, not only because it might protect your precious hardware (and even more precious data) in case of a power surge, but because power strips can be used to save a few watts too. The most obvious way of doing this is by flicking the off switch as soon as you’re done working, but this may not be practical for most people. It is a good idea if you don’t plan to use your office for a few days or weeks. You can also consider splitting your hardware across two or more power strips, allowing you to switch off the stuff you don’t use all the time with the flick of a switch. All hardware wastes a bit of energy when it’s on standby, but we’re not talking about big numbers here.

One interesting option is to use smart sockets for some devices, effectively turning “dumb” devices like heaters or lamps into smart devices. These connected sockets can be programmed or controlled remotely, allowing you to save power and time in some situations, depending on your needs. Smart sockets and connected LED bulbs may also come in handy for security purposes. If you’re the traveling type, you can use them to create the appearance of an occupied home, which might prompt local burglars to pay a visit to your neighbor’s home instead.

So How Much Can I Save?

To be honest, I have no idea. It depends on where you’re located, what you do, and which hardware you use.

Electricity prices vary from region to region, and the differences can be staggering. This is why I tried to steer clear of definitive claims in this post. For example, a user in Denmark could save a fair amount of money by upgrading from an antiquated desktop to an SFF machine like a Mac mini or Intel NUC; but a user with the same hardware in Ukraine wouldn’t save nearly as much, so the upgrade is probably not worth it from an economic perspective.

Our audience is global, and so is our network, so we can’t offer one-size-fits-all advice. Luckily, our audience also tends to know a thing or two about tech and math, so I am sure calculating how much money can be saved by boosting efficiency won’t be a problem.

For example, a conservative 100W reduction in power consumption (8 hours a day, 25 days a month) will save you just $24 a year, assuming a kWh price of $0.10. However, in developed countries you are likely to pay a lot more than 10 cents per kWh, so if you pay 30 cents like most people in Western Europe, you are looking at $72 a year. That’s not bad for a measly 100 watts during office hours. Over the course of your computer’s life cycle of 4+ years, that’s upwards of $300, which you can invest toward more efficient hardware rather than burn it.

Of course, saving 100W eight hours per day is peanuts in the big scheme of things, but bear in mind that we are talking about individual home offices here. You pay the bill, not your employer, and the savings might extend well beyond your home office. After all, you would be investing in your home, not just your home office, so money spent on efficiency tweaks could yield much bigger returns as you would be making both your home and your office a bit more efficient in one blow.

In many cases, savings of a few hundred dollars per year should be possible, if not easy, and you could recoup the initial investment in just a couple of years (excluding computer hardware, which we all have to upgrade anyway).

Granted, many people will argue that it’s simply not a lot of money, especially for well-paid software engineers. But over the course of a decade, hundreds have a habit of turning into thousands. However, it’s not just about the money. By making your office and home more efficient, you’re also making a family of polar bears happier.

Burning one kilogram of coal or oil produces between 2.5kWh and 3kWh of electricity, so a single home office with improved efficiency can save a couple of kilos of fossil fuels each week. To generate 100kWh of power, coal produces about 120kg of CO2, which is roughly the same as a modern compact car would generate over the course of a 1000km drive.

Still, we don’t have to invoke cuddly polar bears to feel good about ourselves. Most of us work from home offices, so we use cheaper household electricity and enjoy better prices than businesses in many parts of the world. Since we don’t commute or drive to work each morning, we already save a bit of money and reduce our carbon footprint.

Plus, we don’t get stuck in traffic.

This is an article from Toptal writers.

Cryptocurrency for Dummies: Bitcoin and Beyond

Bitcoin created a lot of buzz on the Internet. It was ridiculed, it was attacked, and eventually it was accepted and became a part of our lives. However, Bitcoin is not alone. At this moment, there are over 700 altcoin implementations, which use similar principles of cryptocurrency.

So, what do you need to create something like Bitcoin?

Without trying to understand your personal motivation for creating a decentralized, anonymous system for exchanging money/information (but still hoping that it is in scope of moral and legal activities), let’s first break down the basic requirements for our new payment system:

  1. All transactions should be made over the Internet
  2. We do not want to have a central authority that will process transactions
  3. Users should be anonymous and identified only by their virtual identity
  4. A single user can have as many virtual identities as he or she likes
  5. Value supply (new virtual bills) must be added in a controlled way

Decentralized Information Sharing Over Internet

Fulfilling the first two requirements from our list, removing a central authority for information exchange over the Internet, is already possible. What you need is a peer-to-peer (P2P) network.

Information sharing in P2P networks is similar to information sharing among friends and family. If you share information with at least one member of the network, eventually this information will reach every other member of the network. The only difference is that in digital networks this information will not be altered in any way.

You have probably heard of BitTorrent, one of the most popular P2P file sharing (content delivery) systems. Another popular application for P2P sharing is Skype, as well as other chat systems.

The bottom line is that you can implement or use one of the existing open-source P2P protocols to support your new cryptocurrency, which we’ll call Topcoin.

Hashing

To understand digital identities, we need to understand how cryptographic hashing works. Hashing is the process of mapping digital data of any arbitrary size to data of a fixed size. In simpler words, hashing is a process of taking some information that is readable and making something that makes no sense at all.

You can compare hashing to getting answers from politicians. The information you provide to them is clear and understandable, while the output they provide looks like a random stream of words.

A good hashing algorithm needs to meet a few requirements:

  1. Output length of the hashing algorithm must be fixed (a good value is 256 bits)
  2. Even the smallest change in input data must produce significant difference in output
  3. Same input will always produce same output
  4. There must be no way to reverse the output value to calculate the input
  5. Calculating the HASH value should not be compute intensive and should be fast

Simple statistics tell us that we will have a limited (but huge) number of possible HASH values, simply because our HASH length is limited. However, our hashing algorithm (let’s name it Politician256) should be reliable enough that it only produces duplicate hash values for different inputs about as frequently as a monkey in a zoo manages to correctly type Hamlet on a typewriter!

If you think Hamlet is just a name or a word, please stop reading now, or read about the Infinite Monkey Theorem.
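As a stand-in for our imaginary Politician256, SHA-256 from Python’s standard library behaves exactly as the requirements above demand. Note the avalanche effect: a one-character change in the input produces a completely different fixed-length digest.

```python
import hashlib

# SHA-256 always produces a 256-bit (64 hex character) digest, and even the
# smallest change in input yields a radically different output.
print(hashlib.sha256(b"Send 10 Topcoins to Alice").hexdigest())
print(hashlib.sha256(b"Send 11 Topcoins to Alice").hexdigest())
```

Run it and compare the two lines: same length, nothing else in common, and the same input will always reproduce the same digest.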

Digital Signature

When signing a paper, all you need to do is append your signature to the text of a document. A digital signature is similar: you just need to append your personal data to the document you are signing.

If you understand that the hashing algorithm adheres to the rule where even the smallest change in input data must produce significant difference in output, then it is obvious that the HASH value created for the original document will be different from the HASH value created for the document with the appended signature.

A combination of the original document and the HASH value produced for the document with your personal data appended is a digitally signed document.

And this is how we get to your virtual identity, which is defined as the data you appended to the document before you created that HASH value.

Next, you need to make sure that your signature cannot be copied, and no one can execute any transaction on your behalf. The best way to make sure that your signature is secure is to keep it to yourself, and provide a different method for someone else to validate the signed document. Again, we can fall back on technology and algorithms that are readily available. What we need to use is public-key cryptography, also known as asymmetric cryptography.

To make this work, you need to create a private key and a public key. These two keys will be in some kind of mathematical correlation and will depend on each other. The algorithm that you will use to make these keys will assure that each private key will have a different public key. As their names suggest, a private key is information that you will keep just for yourself, while a public key is information that you will share.

If you use your private key (your identity) and original document as input values for the signing algorithm to create a HASH value, assuming you kept your key secret, you can be sure that no one else can produce the same HASH value for that document.

If anyone needs to validate your signature, he or she will use the original document, the HASH value you produced, and your public key as inputs for the signature verifying algorithm to verify that these values match.
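The simplified signing scheme described above can be sketched in a few lines. Be warned that this toy uses a plain hash over the document plus a secret string, so the verifier would need the secret itself; real cryptocurrencies use asymmetric signatures (e.g. ECDSA), where the public key alone is enough to verify. The function names are my own.

```python
import hashlib

# Toy signature: hash the document together with secret identity data.
# Real systems sign with a private key and verify with the public key instead.

def sign(document: str, private_identity: str) -> str:
    return hashlib.sha256((document + private_identity).encode()).hexdigest()

def verify(document: str, signature: str, private_identity: str) -> bool:
    return sign(document, private_identity) == signature

sig = sign("I owe Alice 10 Topcoins", "my-secret-identity")
print(verify("I owe Alice 10 Topcoins", sig, "my-secret-identity"))  # True
print(verify("I owe Alice 99 Topcoins", sig, "my-secret-identity"))  # False
```

Change a single character of the document and verification fails, which is exactly the property a signed transaction record relies on.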

How to send Bitcoin/Money

Assuming that you have implemented P2P communication, mechanisms for creating digital identities (private and public keys), and provided ways for users to sign documents using their private keys, you are ready to start sending information to your peers.

Since we do not have a central authority that will validate how much money you have, the system will have to ask you about it every time, and then check if you lied or not. So, your transaction record might contain the following information:

  1. I have 100 Topcoins
  2. I want to send 10 coins to my pharmacist for the medication (you would include your pharmacist’s public key here)
  3. I want to give one coin as transaction fee to the system (we will come back to this later)
  4. I want to keep the remaining 89 coins

The only thing left to do is digitally sign the transaction record with your private key and transmit the transaction record to your peers in the network. At that point, everyone will receive the information that someone (your virtual identity) is sending money to someone else (your pharmacist’s virtual identity).

Your job is done. However, your medication will not be paid for until the whole network agrees that you really did have 100 coins, and therefore could execute this transaction. Only after your transaction is validated will your pharmacist get the funds and send you the medication.
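A hypothetical Topcoin transaction record matching the four-item list above might look like the snippet below. The field names are invented for illustration, and the "signature" reuses the toy hash-over-secret idea rather than a real asymmetric signature.

```python
import hashlib
import json

# A toy Topcoin transaction record: balance claim, payment, fee, and change.
transaction = {
    "balance": 100,        # "I have 100 Topcoins"
    "to_pharmacist": 10,   # payment (would reference the pharmacist's public key)
    "fee": 1,              # transaction fee for the system
    "change": 89,          # coins I keep
}

# Serialize deterministically, then "sign" by hashing with a secret string.
record = json.dumps(transaction, sort_keys=True)
signature = hashlib.sha256((record + "my-private-key").encode()).hexdigest()

# The record must balance: inputs equal outputs plus fee.
assert transaction["balance"] == (
    transaction["to_pharmacist"] + transaction["fee"] + transaction["change"]
)
print(record)
print(signature)
```

The signed record and signature are what you would broadcast to your peers for validation.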

Miners – New Breed of Agents

Miners are known to be very hard working people who are, in my opinion, heavily underpaid. In the digital world of cryptocurrency, miners play a very similar role, except in this case, they do the computationally-intensive work instead of digging piles of dirt. Unlike real miners, some cryptocurrency miners earned a small fortune over the past five years, but many others lost a fortune on this risky endeavour.

Miners are the core component of the system and their main purpose is to confirm the validity of each and every transaction requested by users.

In order to confirm the validity of your transaction (or a combination of several transactions requested by a few other users), miners will do two things.

First, they will rely on the fact that “everyone knows everything,” meaning that every transaction executed in the system is copied and available to any peer in the network. They will look into the history of your transactions to verify that you actually had 100 coins to begin with. Once your account balance is confirmed, they will generate a specific HASH value. This hash value must have a specific format; it must start with a certain number of zeros.

There are two inputs for calculating this HASH value:

  1. Transaction record data
  2. Miner’s proof-of-work

Considering that even the smallest change in input data must produce a significant difference in output HASH value, miners have a very difficult task. They need to find a specific value for a proof-of-work variable that will produce a HASH beginning with zeros. If your system requires a minimum of 40 zeros in each validated transaction, the miner will need to calculate approximately 2^40 different HASH values in order to find the right proof-of-work.

Once a miner finds the proper value for proof-of-work, he or she is entitled to a transaction fee (the single coin you were willing to pay), which can be added as part of the validated transaction. Every validated transaction is transmitted to peers in the network and stored in a specific database format known as the Blockchain.
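The mining loop described above is simple to sketch. This toy counts leading zero hex digits rather than zero bits, and uses a single SHA-256 instead of Bitcoin’s double SHA-256, but the brute-force search for a proof-of-work value is the same idea.

```python
import hashlib

def mine(record: str, difficulty: int) -> int:
    """Find a proof-of-work nonce so the hash starts with `difficulty` zero hex digits."""
    nonce = 0
    while True:
        digest = hashlib.sha256(f"{record}{nonce}".encode()).hexdigest()
        if digest.startswith("0" * difficulty):
            return nonce
        nonce += 1

record = "Alice pays pharmacist 10 Topcoins"
nonce = mine(record, 4)  # 4 hex zeros: ~65,000 attempts on average
print(nonce, hashlib.sha256(f"{record}{nonce}".encode()).hexdigest())
```

Each extra required zero multiplies the expected work, while checking a claimed proof-of-work takes just one hash, which is what makes the scheme practical for validators.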

But what happens if the number of miners goes up, and their hardware becomes much more efficient? Bitcoin used to be mined on CPUs, then GPUs and FPGAs, but ultimately miners started designing their own ASIC chips, which were vastly more powerful than these early solutions. As the hash rate goes up, so does the mining difficulty, thus ensuring equilibrium. When more hashing power is introduced into the network, the difficulty goes up and vice versa; if many miners decide to pull the plug because their operation is no longer profitable, difficulty is readjusted to match the new hash rate.

Blockchain – The Global Cryptocurrency Ledger

The blockchain contains the history of all transactions performed in the system. Every validated transaction, or batch of transactions, becomes another ring in the chain.

So, the Bitcoin blockchain is, essentially, a public ledger where transactions are listed in a chronological order.

The first ring in the Bitcoin blockchain is called the Genesis Block.

To read more about how the blockchain works, I suggest reading Blockchain Technology Explained: Powering Bitcoin, by Nermin Hajdarbegovic.

There is no limit to how many miners may be active in your system. This means that it is possible for two or more miners to validate the same transaction. If this happens, the system will check the total effort each miner invested in validating the transaction by simply counting zeros. The miner that invested more effort (found more leading zeros) will prevail and his or her block will be accepted.

Controlling The Money Supply

The first rule of the Bitcoin system is that there can be a maximum of 21,000,000 Bitcoins generated. This number has still not been achieved, and according to current trends, it is thought that this number will be reached by the year 2140.

This may cause you to question the usefulness of such a system, because 21 million units doesn’t sound like much. However, the Bitcoin system supports fractional values down to the eighth decimal place (0.00000001). This smallest unit of a bitcoin is called a Satoshi, in honor of Satoshi Nakamoto, the anonymous developer behind the Bitcoin protocol.

New coins are created as a reward to miners for validating transactions. This reward is not the transaction fee that you specified when you created a transaction record, but it is defined by the system. The reward amount decreases over time and eventually will be set to zero once the total number of coins issued (21m) has been reached. When this happens, transaction fees will play a much more important role since miners might choose to prioritize more valuable transactions for validation.

Apart from setting an upper limit on the total number of coins, the Bitcoin system also uses an interesting way to limit daily production of new coins. By calibrating the minimum number of leading zeros required for a proof-of-work calculation, the time required to validate a transaction, and earn a reward of new coins, is always kept at approximately 10 minutes. If the time between adding new blocks to the blockchain decreases, the system might require that proof-of-work generates 45 or 50 leading zeros.

So, by limiting how fast and how many new coins can be generated, the Bitcoin system is effectively controlling the money supply.
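A minimal sketch of that calibration loop, assuming the difficulty is expressed directly as a required number of leading zeros and adjusted one step at a time:

```python
TARGET_BLOCK_SECONDS = 600  # the system aims for one new block roughly every 10 minutes

def adjust_required_zeros(current_zeros: int, observed_avg_seconds: float) -> int:
    """Raise the leading-zero requirement when blocks arrive too quickly,
    lower it when they arrive too slowly."""
    if observed_avg_seconds < TARGET_BLOCK_SECONDS:
        return current_zeros + 1
    if observed_avg_seconds > TARGET_BLOCK_SECONDS:
        return max(1, current_zeros - 1)
    return current_zeros
```

Real Bitcoin retargets a full 256-bit difficulty value every 2,016 blocks rather than stepping a zero count, but the feedback principle is the same.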

Start “Printing” Your Own Currency

As you can see, making your own version of Bitcoin is not that difficult. By utilizing existing technology, implemented in an innovative way, you have everything you need for a cryptocurrency.

  1. All transactions are made over the Internet using P2P communication, thus removing the need for a central authority
  2. Users can perform anonymous transactions by utilizing asymmetric cryptography, and they are identified only by their private key/public key combination
  3. You have implemented a validated global ledger of all transactions that has been safely copied to every peer in the network
  4. You have a secured, automated, and controlled money supply, which assures the stability of your currency without the need for a central authority

One last thing worth mentioning is that, in its essence, cryptocurrency is a way to transfer anonymous value/information from one user to another in a distributed peer-to-peer network.

Consider replacing coins in your transaction record with arbitrary data that might even be encrypted using asymmetric cryptography, so only the sender and receiver can decipher it. Now think about applying that to something like the Internet of Things!

A cryptocurrency system might be an interesting way to enable communication between our stove and toaster.

A number of tech heavyweights are already exploring the use of blockchain technology in IoT platforms, but that’s not the only potential application of this relatively new technology.

If you see no reason to create an alternative currency of your own (other than as a practical joke), you could apply the same or a similar approach to something else: distributed authentication, virtual currencies for games, social networks, and other applications, or a new loyalty program for your e-commerce business that rewards regular customers with virtual tokens they can redeem later on.

This article was originally posted on Toptal by Demir Selmanovic – Lead Technical Editor.
