Thursday, March 31, 2016

NYFed's Bill Dudley Speech at VAE / VMI

Mike Smitka, Washington and Lee

Tonight I had the opportunity – perhaps I should say privilege – of hearing Bill Dudley, President of the Federal Reserve Bank of New York, speak at the Virginia Association of Economists conference at Virginia Military Institute. (Tomorrow they move next door – literally – to my own Washington and Lee, in a venue 45 seconds from my office.) I've been to many such events, and he spoke largely from script: the text of his talk, The Role of the Federal Reserve—Lessons from Financial Crises is already available on the FRB New York web site. He's been here before, and prefaced his talk with details about VMI and W&L and H. Parker Willis, the first Dean of what is now the Williams School. As a friend of Congressman and then Senator Carter Glass [of Lynchburg VA], Willis helped draft the legislation that set up the Federal Reserve System in 1913, and then went on to become the first Secretary of the Federal Reserve Board. (I'm currently sitting underneath what a century ago was Willis' office.)

It was a fine talk, indeed a very good talk for being accessible to undergraduate economics majors. Dudley is also quite deft at Q&A, which at these events inevitably includes the odd (sometimes very odd) Gold Bug. Of the many central bankers I've known, he may be simultaneously the most ordinary and the most extraordinary. That's because in 2008 the global financial system came very, very close to grinding to a halt. To give one example, only a few central bankers knew enough of the nitty-gritty of markets to understand the implications of money market mutual funds "breaking the buck" – and knew enough of the Federal Reserve System and of Wall Street to understand what the Fed could legally and practically do. Dudley couldn't do it alone, and he and those around him needed high-level backing, but at the operational end he was both the expert and the person who, given the role of the New York Fed, had to oversee the actual implementation of policy. (If this is opaque, post comments and I will elaborate.)

Most of the questions were ordinary, in that he had heard them before. There was, as I said, a Gold Bug who in an audience of 200-plus had the temerity to insist on posing not one but two questions. Dudley is a gentleman and so overruled the moderator-cum-timekeeper to take one more question – and made sure it was from a student. He was consistently clear, direct and careful: the US is in its 7th year of expansion, but growth is only slightly above normal [= a lot of people are still out of work]; we're approaching but not yet at "normal" [which in an evolving economy can't be pinned down using some rule grounded in the often distant past]; and we have headwinds. So he advocated going slow with interest rate increases: the Fed can always boost rates quickly, but has limited room to move in the opposite direction. And so on. Yes, he has had a lot of practice, and as I noted he's encountered these questions before. But so have all the other Fed presidents and governors and Japanese and English central bankers I've known.

One question caused him to pause: what did you learn from your first macro "Principles" course? First, he was honest: to paraphrase, "I don't remember much." He did not stop with an answer that produced a chuckle but no real information. Instead, he continued that when he was young, economics textbooks contained very little on financial markets, and that this remains true even today, not just of textbooks but of state-of-the-art macroeconomic research. (To his credit he did not mention Dynamic Stochastic General Equilibrium models or let any other such jargon intrude.) Now his hope is that theorists will learn how to incorporate financial markets in a way that allows modeling the impact of a financial crisis. However, that was not the state of the art in the fall of 2008, and it is not the state of the art 8 years later.

Economic downturns hurt many innocent bystanders: millions of Americans lost their jobs in 2008-2010 through no fault of their own. He was honest that it rankled that in such times "too big to fail" institutions could find policy support while ordinary Americans lost their homes. I won't reiterate his discussion of "systemically important financial institutions," except to note that he never resorted to saying "SIFI". But you can read his comments for yourself, in the link above.

Can we realistically expect economic models to provide practical guidance in times of crisis? I think not, for two reasons. The first is that the next financial crisis will be unexpected. Dudley in effect made that point, expressing the hope that, thanks to "lessons learned", he would not live long enough to see another such crisis. Unfortunately models, however sophisticated, can only incorporate the features that those who construct them believe salient. If we know where the cliff is, we can – perhaps optimistically – stop short of going over it. However, while the road may appear familiar, we never take the same route twice, and can be surprised by twists or turns. Or deer in the middle of the day. To use (abuse?) another aphorism, it is simply unrealistic to think that we will be able to position the profession to fight the next war against an as yet unknown enemy. At best we can hope that in refighting the last war we will have kept alive a few skills that prove useful, without leaving the officer corps incapable when faced with the unexpected.

The second is the matter of simplicity. Advances in analytic tools and computer power are already allowing economists to build models that have more "agents" and are otherwise more complicated. We can include not just one type of consumer, but consumers who can choose their level of saving and consumers who live paycheck to paycheck. We can build financial models (if not full macro models) with "noise" traders who follow the herd, and those who look at fundamentals and get in early and bail out ahead of the crowd. So we can model a panic, even if we can't yet incorporate it in a satisfying manner in a complete macro model. ("Complete" is a technical term…) That may come with time.
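To make the noise-trader idea concrete, here is a deliberately tiny sketch in Python (my own toy, not a model from the literature): herding traders chase the most recent price move, fundamentals traders lean against deviations from a fixed fundamental value, and a one-off negative demand shock is enough to set off a self-reinforcing slide.

```python
def simulate(steps=200, n_noise=80, n_fund=20, shock_at=100):
    """Toy price path with herding 'noise' traders and fundamentals traders.

    Noise traders buy when the price just rose and sell when it just fell;
    fundamentals traders trade toward a fixed fundamental value. All the
    parameters are arbitrary, chosen only to make the mechanism visible.
    """
    fundamental = 100.0
    price, last_change = 100.0, 0.0
    path = []
    for t in range(steps):
        # herding: noise traders chase the most recent price move
        noise = n_noise * (1 if last_change > 0 else -1 if last_change < 0 else 0)
        # fundamentals traders lean against deviations from fundamental value
        fund = n_fund * (fundamental - price) * 0.5
        shock = -50.0 if t == shock_at else 0.0  # one-time negative demand shock
        change = 0.01 * (noise + fund + shock)
        price += change
        last_change = change
        path.append(price)
    return path

path = simulate()
# price is flat at 100 until the shock, which herding then amplifies
# into a sustained slide to well below the shock's one-off impact
```

The point is not the numbers but the mechanism: without the herding term the shock would knock the price down half a point and fundamentals traders would pull it back; with herding, the same small shock produces a persistent, much larger decline – a panic in miniature.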

A similar elaboration took place in macro models in the 1960s at Wharton Econometrics and elsewhere. The end result was models that were too complicated to handle the unexpected – the "supply side" shocks and inflation of the 1970s – which made them easy targets for proponents of new approaches to macro, starting with rational expectations and real business cycles. This time around the new generation could offer the excuse that they had only just finished constructing the core of the new class of DSGE models. But the reality is that adding layer upon layer of additional features will assuredly lead to a field working with unwieldy constructs that not only fail to explain the unexpected – a straw man critique – but include so many parameters that, while they produce a good fit and work well in normal times, they offer no help in handling the unexpected. We will be right back to where we were in 1975.

So back to simplicity. For a model to be useful, it needs to be simple. Such models, however, must then be either quite specific, or so generic as to be … innocuous is not quite the right word; maybe unhelpful is better. If models are specific, you then need an array of them, and the wisdom to choose the right one for the task at hand. In my mind models serve a pedagogic role. They help you learn to be more sophisticated about treating interactions seriously, to confront the conceptual and technical challenge of seeing that all the pieces add up to 100%. The problem is that the economics profession is now at the point where career incentives mandate that you work on elaborating a particular model. Only in that way can you demonstrate through your technical "chops" that you are professionally trained and yet "advance the frontier" of the field. Developing familiarity with a wide array of models does not lead to publications and tenure, or even a Ph.D.

Thankfully in 2008 we had Bill Dudley at the New York Fed. He had sufficient background in economics to be a careful thinker, to be able to communicate with the profession and (probably more important) have the respect of the staff economists who had to join the fight to contain the Lehman Shock, as it's termed in much of the world. At the same time, it was important that his knowledge was not so specialized that he had only one narrow perspective on the macroeconomy. And he had a sufficiently distinguished career in the financial sector to let him interact with the players in financial markets, who had their own narrow specializations. I can spin stories of why individuals with that particular background are likely to end up at the New York Fed. But to be honest, I think we were just very lucky. Personally I am grateful, a technical term of a different sort that comes to my mind so soon after Easter.

News coverage: I will add links as I come across more stories.

Friday, March 25, 2016

"Kei" Cars in Japan: A "Galapagos" Sector?

Mike Smitka

Under Japanese taxation, licensing and inspection systems there are 3 broad classes of cars (and light commercial vehicles): full-sized, compact and "kei" minicars. The latter must have a small engine, a narrow width and a short length (the BMW "Mini" is too big; even the Smart is too wide!). They get a different (yellow) license plate and face lower taxes and inspection fees (annual taxes of ¥10,800 – in contrast a friend pays about ¥70,000 a year for his C-class Mercedes). They also face less stringent safety regulations (basically, don't get in an accident in one). The leaders in this segment – Suzuki, Daihatsu [a subsidiary of Toyota], and Honda – spend significant resources developing new models. But as with Japanese cell phones – if you haven't seen one, there's a reason why! – is this a niche peculiar to Japan? Are "kei" cars – like Japanese cell phones – a dead-end strategy, a niche that, due to the peculiar development of the domestic market, has no market in the rest of the world? If so, it's a commercial disaster that will drag down these players, as over the next decade the level of sales confronts an aging and shrinking domestic population.

Despite the claim of a widely cited New York Times story, Japan Seeks to Squelch Its Tiny Cars (June 8, 2014), this segment has steadily increased in importance over the past 15 years. The gist of the article is that the "kei" car tax was going to be hiked 25%. Yes, but that meant from (approx) US$80 to US$100. Of course industry officials complained, but no one expected it to be a big deal, and it wasn't. (I assume that an editor rewrote the article, because the reporter whose byline is on the story is generally careful.) The segment's growth is in itself bad news for Japanese domestic carmakers, because a "kei" goes for about half the price of a comparable full-sized car: at ¥113 to the dollar the base Toyota Prius – if you can actually find a base one! – goes for $22,000 while the base Daihatsu Tanto will set you back only $11,000. The good news is that, as per the graph, sales of full-sized cars have likewise increased, and they carry full-sized margins.

Are these Galapagos products? Diesels aren't unique to Europe, but their market share is far higher there than anywhere else in the world. To my knowledge, diesels sell uniformly across all segments, so they neither directly hurt nor enhance particular products. Furthermore, virtually all European models come with a variety of drivetrains, so the same core vehicle can be marketed globally. Similarly, in the US we have our own odd category of light trucks. As with Japan's "kei" cars, there's little market elsewhere, and few exports other than crossovers and SUVs. But unlike "kei" cars, light trucks carry the biggest margins outside of the luxury segment; the money they generate keeps the entire North American industry operating in the black, in many years making the US the most profitable market in the industry.

Unfortunately "kei" cars have not done well in any market of size other than India (they're a small slice of China's market, but since it's a big market the aggregate numbers may be larger than I realize). Furthermore, several firms have exited the segment, including Mitsubishi, Nissan, Subaru and Mazda [I'll fact-check the latter three later].

Can countries encourage local R&D by creating niches unique to their market? China may attempt that by pushing the adoption of vehicles with electric & hybrid drivetrains. As long as energy prices remain low, that's a risky strategy. Doing so may require continued subsidies, and despite the market's size leave Chinese carmakers with "Galapagos" products.

Friday, March 18, 2016

Peak Oil Revisited: Did I get anything right?

Mike Smitka, Economics, Washington and Lee University

In 2014 I wrote on the arrival of peak oil. That now appears at best premature. So what did I get wrong, and what (if anything) did I get right?

In my initial post I made several key assumptions. The first was that fracking would not be that important, because it remained a high-cost method of extraction. The second was that Saudi Arabia still had market power, and therefore would respond to rising costs by cutting output. The third was that changes in demand would be modest. Finally, though not relevant then or now, I assumed that alternative energy sources would remain marginal and substitution away from petroleum small.

Let me begin with the last, irrelevant point: alternative energy. In fact there's a lot of progress, progress that may be outstripping improvements in the (mature) thermal energy sector. Both wind and solar technologies see steady improvement, while thermal technology is changing less rapidly. (The key innovation of which I'm aware, but about which I know no details, is the ability to ramp "base" generating capacity up and down quickly. That improves the business case for intermittent power sources – solar and wind – because "base" capacity can be cut or boosted within a shorter time frame.) As a result, as several papers at the 2015 Industry Studies Association conference in Kansas City argued, widespread windmill networks such as that in the Minnesota region can predict what the system as a whole can deliver 15 minutes in the future and hence bid into the short-term market. While in practice the marginal cost of solar isn't zero, it is lower than that of coal or even natural gas. Hence "green" sources can in practice lower hydrocarbon demand. At present that is a modest slice of the overall market, and current low hydrocarbon prices impede investment in additional capacity. Public policy – will governments take action in the face of global warming? – comes to the fore. I simply don't know how much is now being invested in alternative sources, and the total quantities remain modest from a global perspective. With every passing year, the likelihood that incremental demand will be met through investment in renewable capacity improves, and hence my sense is that with each passing year the cost of such capacity lowers the ceiling on long-run energy prices. The devil lies in the details, which I don't know, but at least conceptually I can make the case that the longer low prices persist, the more likely it is that they will persist even longer. [Is my prose confusing? Well, so in this case is the real world.]

...green energy's not important today, but each year the impact on tomorrow's prices becomes more significant...

Second, I overestimated the role of Saudi Arabia, in that I read what is happening to their marginal cost of extraction as reflecting such costs for the market as a whole. So while their costs of extraction are rising, their market power is falling. At the same time, the marginal cost of producing in the Bakken formations, once a well has been drilled, may be (for me surprisingly) low. When supply is in excess, so that market power considerations don't matter, it's the low-cost (marginal) producer that matters. The Saudis are no longer that. As I argued in an August 2015 post (and have long argued to classes in my teaching at W&L), OPEC in practice is not a cartel, and the Saudis are by themselves too small to be able to swing the market in their favor.

I've not tried to spell this out in a careful model; "excess" is not a term that an economist uses, because at some level supply always equals demand. What I have in mind is an oligopolistic price-leadership model, where producers collectively find it easy not to cheat too much – not pump too aggressively – when demand is strong relative to potential output levels, but devolve into a price war when faced with negative demand shocks. But it may be simpler, and result in the same bottom line, to view the industry as having zero costs once a well is in production. At the wellhead there is then no incentive to reduce production, ever. Wells do gradually dry up, and so output from an individual location declines over time. In periods of low prices, oil drillers can't cover the cost of new wells (or of investing to boost production from existing ones), so eventually the supply curve shifts in and (for a given level of demand) prices rise. But in the short run the supply curve is fixed and steep – "price inelastic" in economics jargon. Demand is also steep, but can vary in the short run. The result is volatile prices.
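The steep-curves point can be put in one line of algebra. With constant-elasticity curves Qs = s·P^εs and Qd = d·P^(−εd), setting supply equal to demand gives P = (d/s)^(1/(εs+εd)); when both elasticities are small, a modest demand shift moves the price a lot. A quick numerical check, using illustrative elasticities of 0.1 each (my choice for the sketch, not estimates):

```python
def equilibrium_price(demand_scale, supply_scale=1.0, e_supply=0.1, e_demand=0.1):
    """Equilibrium of constant-elasticity curves Qs = s*P^es, Qd = d*P^-ed.

    Setting Qs = Qd gives P = (d/s)^(1/(es+ed)); with both elasticities
    small (steep curves), modest demand shifts produce large price swings.
    Elasticities here are illustrative, not estimates.
    """
    return (demand_scale / supply_scale) ** (1.0 / (e_supply + e_demand))

p0 = equilibrium_price(1.00)   # baseline demand
p1 = equilibrium_price(0.95)   # a 5% negative demand shock
drop = 1 - p1 / p0             # price falls by roughly 23%
```

A 5% demand shock moves the price by over a fifth: that order-of-magnitude amplification, not the particular numbers, is the volatility story.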

Finally, there's that fracking problem. It's clear that there's large heterogeneity from well to well. In some spots – the center of a field – a single well will produce a lot, but as you move further out the average falls and the variance rises: you have the occasional well that does really well alongside ones that are dry. (I doubt that's the actual geology; instead this is a variation on the standard diminishing-returns story that economists employ.) So costs vary a lot, and some wells run dry quickly, but not all. Furthermore, once the core infrastructure is in place – transmission pipelines to the fields (if only to the nearest railroad) – it doesn't take long to drill a new well, and there are a lot of drilling rigs. So while on average fracking is a high-cost operation, particularly at the front end, that average hides a lot of variation. And there are a lot of oil fields.

So far fracking has been a U.S. thing. Some countries in Europe have banned it; others allow it in principle, but local permits haven't been forthcoming – the NIMBY thing. Some attractive formations are in places without a lot of water, and fracking needs a lot: environmental issues are real. But Europe is only a fraction of the earth's surface, and there is the potential for using this comparatively new set of technologies around the world. And a set of technologies it is: it's not just the ability to fan out multiple pipes from a single hole, it's not just the how-to of fracturing rock and then recovering oil from a wider variety of geologies, it's also the ability to employ seismic data to know where and how to drill.

So what I underestimated is the potential for technical change in the industry. Implicitly I assumed it was a mature industry, that the exploration of the earth's surface [but not necessarily the ocean bottom] for potential oil formations was largely complete. Similarly, within a known formation, I assumed that the ability to use seismic data to delve into local details – the ability to do a "CT scan" of rock layers 10,000 feet down – had reached its practical limit. Yet again, after a century and a half of experience, drilling itself was a static technology. Finally, the same was true for how you actually get the oil out once you've drilled.

...will "peak oil" remain forever sound in theory but irrelevant in practice?...

I was wrong in each instance. Capabilities are better; in economist-speak costs are lower. And at least in my mind the fracking revolution has really only begun. Without these technologies, no one paid attention to "tight" formations. There are likely a lot out there, in Iran and the Russian Republic and China, in Brazil and Venezuela and Mexico, some of which will have enough water and all that. Has the minimum price at which it makes sense to drill risen? Likely, but not by enough to mean "peak oil" is at hand. I do believe that we are in an era of diminishing returns, where the base price will trend up. If the economies of South Asia and Sub-Saharan Africa grow – which I pray is the case, that the next couple decades will see another billion or so people pulled out of poverty – then the demand side will encourage renewed exploration and recovery. The longer that takes, though, the greater the potential of non-petroleum energy sources, and the demand for petroleum (and natural gas) will be for chemical products and not for their energy content. In that case, "peak oil" may forever remain sound as a theory but irrelevant as a practical concern.

Sources I scan on energy issues:
  1. WTRG Economics
  2. Nick Butler at the Financial Times
  3. James Hamilton at EconBrowser

Wednesday, March 16, 2016

The Morgan EV3

The small British automaker Morgan raised a few eyebrows when it recently announced that it was making electric cars.

They have released a teaser video of the three-wheeled EV3, which looks as eccentric as their other models.

Tuesday, March 15, 2016

Goodbye analog, hello digital

Since 2008, QNX has explored how digital instrument clusters will change the driving experience.

Paul Leroux
Quick: What do the Alfa Romeo 4C, Audi TT, Audi Q7, Corvette Stingray, Jaguar XJ, Land Rover Range Rover, and Mercedes S Class Coupe have in common?

Answer: They would all look awesome in my driveway! But seriously, they all have digital instrument clusters powered by the QNX Neutrino OS.

QNX Software Systems has established a massive beachhead in automotive infotainment and telematics, with deployments in over 60 million cars. But it’s also moving into other growth areas of the car, including advanced driver assistance systems (ADAS), multi-function displays, and, of course, digital instrument clusters.

Retrofitting the QNX reference vehicle with a new digital cluster.
The term “digital cluster” means different things to different people. To boomers like myself, it can conjure up memories of 1980s dashboards equipped with less-than-sexy segment displays — just the thing if you want your dash to look like a calculator. Thankfully, digital clusters have come a long way. Take, for example, the slick, high-resolution cluster in the Audi TT. Designed to display everything directly in front of the driver, this QNX-powered system integrates navigation and infotainment information with traditional cluster readouts, such as speed and RPM. It’s so advanced that the folks at Audi don’t even call it a cluster — they call it the virtual cockpit instead.

Now here’s the thing: digital clusters require higher-end CPUs and more software than their analog predecessors, not to mention large LCD panels. So why are automakers adopting them? Several reasons come to mind:

  • Reusable — With a digital cluster, automakers can deploy the same hardware across multiple vehicle lines simply by reskinning the graphics.
  • Simple — Digital clusters can help reduce driver distraction by displaying only the information that the driver currently requires.
  • Scalable — Automakers can add functionality to a digital cluster by changing the software only; they don’t have to incur the cost of machining or adding new physical components.
  • Attractive — A digital instrument cluster can enhance the appeal of a vehicle with eye-catching graphics and features.
In addition to these benefits, the costs of high-resolution LCD panels and the CPUs needed to drive them are dropping, making digital instrument clusters an increasingly affordable alternative.

2008: The first QNX cluster
It’s no coincidence that so many automakers are using the QNX Neutrino OS in their digital clusters. For years now, QNX Software Systems has been exploring how digital clusters can enhance the driving experience and developing technologies to address the requirements of cluster developers.

Let’s start with the very first digital cluster that the QNX team created, a proof-of-concept that debuted in 2008. Despite its vintage, this cluster has several things in common with our more recent clusters — note, for example, the integrated turn-by-turn navigation instructions:



For 2008, this was pretty cool. But as an early proof-of-concept, it lacked some niceties, such as visual cues that could suggest which information is, or isn’t, currently important. For instance, in this screenshot, the gauges for fuel level, engine temperature, and oil pressure all indicate normal operation, so they don’t need to be so prominent. They could, instead, be shrunk or dimmed until they need to alert the driver to a critical change — and indeed, we explored such ideas soon after we created the original design. As you’ll see, the ability to prioritize information for the driver becomes quite sophisticated in subsequent generations of our concept clusters.
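The prioritization idea — dim what's normal, promote what isn't — reduces to a small piece of logic. Here is a hypothetical sketch in Python (an illustration of the concept only, not code from any QNX cluster; the ranges are invented):

```python
def gauge_state(value, normal, warning):
    """Classify a gauge reading for display prominence.

    `normal` and `warning` are (low, high) ranges: readings inside the
    normal band can be dimmed or shrunk, readings outside the warning
    band should be promoted to an alert. Purely illustrative logic.
    """
    lo, hi = normal
    if lo <= value <= hi:
        return "dimmed"       # nothing to report; free up visual attention
    w_lo, w_hi = warning
    if w_lo <= value <= w_hi:
        return "highlighted"  # drifting out of normal; draw the eye
    return "alert"            # critical; demand attention

# e.g. engine temperature in degrees C: 80-105 normal, 70-115 merits attention
state = gauge_state(96, normal=(80, 105), warning=(70, 115))  # "dimmed"
```

In a real cluster the same classification would drive opacity, size, or position rather than a string label, but the point is that "which information is currently important" is itself computable.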

Did you know? To create this 2008 cluster, QNX engineers used Adobe Flash Lite 3 and OpenGL ES.

2010: Concept cluster in a Chevrolet Corvette
Next up is the digital cluster in the first QNX technology concept car, based on a Chevrolet Corvette. If the cluster design looks familiar, it should: it’s modeled after the analog cluster that shipped in the 2010-era ‘Vettes. It’s a great example of how a digital instrument cluster can deliver state-of-the-art features, yet still honor the look-and-feel of an established brand. For example, here is the cluster in “standard” mode, showing a tachometer, just as it would in a stock Corvette:



And here it is again, but with something that you definitely wouldn’t find in a 2010 Corvette cluster — an integrated navigation app:



Did you know? The Corvette is the only QNX technology concept car that I ever got to drive.

2013: Concept cluster in a Bentley Continental GT
Next up is the digital cluster for the 2013 QNX technology concept car, based on a Bentley Continental GT. This cluster took the philosophy embodied in the Corvette cluster — honor the brand, but deliver forward-looking features — to the next level.

Are you familiar with the term Trompe-l’œil? It’s a French expression that means “deceive the eye” and it refers to art techniques that make 2D objects appear as if they are 3D objects. It’s a perfect description of the gorgeously realistic virtual gauges we created for the Bentley cluster:



Because it was digital, this cluster could morph itself on the fly. For instance, if you put the Bentley in Drive, the cluster would display a tach, gas gauge, temperature gauge, and turn-by-turn directions — the cluster pulled these directions from the head unit’s navigation system. And if you threw the car into Reverse, the cluster would display a video feed from the car’s backup camera. The cluster also had other tricks up its digital sleeve, such as displaying information from the car’s media player.

Did you know? The Bentley came equipped with a 616 hp W12 engine that could do 0-60 mph in a little over 4 seconds. Which may explain why they never let me drive it.

2014: Concept cluster in a Mercedes CLA45 AMG
Plymouth safety speedometer, c. 1939
Up next is the 2014 QNX technology concept car, based on a Mercedes CLA45 AMG. But before we look at its cluster, let me tell you about the Plymouth safety speedometer. Designed to curb speeding, it alerted the driver whenever he or she leaned too hard on the gas.

But here’s the thing: the speedometer made its debut in 1939. And given the limitations of 1939 technology, the speedometer couldn’t take driving conditions or the local speed limit into account. So it always displayed the same warnings at the same speeds, no matter what the speed limit.

Connectivity to the rescue! Some modern navigation systems include information on local speed limits. By connecting the CLA45’s concept cluster to the navigation system in the car’s head unit, the QNX team was able to pull this information and display it in real time on the cluster, creating a modern equivalent of Plymouth's 1939 invention.

Look at the image below. You’ll see the local speed limit surrounded by a red circle, alerting the driver that they are breaking the limit. The cluster could also pull other information from the head unit, including turn-by-turn directions, trip information, album art, and other content normally relegated to the center display:



Did you know? Our Mercedes concept car is still alive and well in Germany, and recently made an appearance at the Embedded World conference in Nuremberg.

2015: Concept cluster in a Maserati Quattroporte
Up next is the 2015 QNX technology concept car, based on a Maserati Quattroporte GTS. Like the cluster in the Mercedes, this concept cluster provided speed alerts. But it could also recommend an appropriate speed for upcoming curves and warn of obstacles on the road ahead. It even provided intelligent parking assist to help you back into tight spaces.

Here is the cluster displaying a speed alert:



And here it is again, using input from a LiDAR system to issue a forward collision warning:



Did you know? Engadget selected the “digital mirrors” we created for the Maserati as a finalist for the Best of CES Awards 2015.

2015 and 2016: Concept clusters in QNX reference vehicle
The QNX reference vehicle, based on a Jeep Wrangler, is our go-to vehicle for showcasing the latest capabilities of the QNX CAR Platform for Infotainment. But it also does double-duty as a technology concept vehicle. For instance, in early 2015, we equipped the Jeep with a concept cluster that provides lane departure warnings, collision detection, and curve speed warnings. In this image, the cluster is recommending that you reduce speed to safely navigate an upcoming curve:



Just in time for CES 2016, the Jeep cluster got another makeover that added crosswalk notifications to the mix:



Did you know? Jeep recently unveiled the Trailcat, a concept Wrangler outfitted with a 707 hp Dodge Hellcat engine.

2016: Glass cockpit in a Toyota Highlander
By now, you can see how advances in sensors, navigation databases, and other technologies enable us to integrate more information into a digital instrument cluster, all to keep the driver aware of important events in and around the vehicle. In our 2016 technology concept vehicle, we took the next step and explored what would happen if we did away with an infotainment system altogether and integrated everything — speed, RPM, ADAS alerts, 3D navigation, media control and playback, incoming phone calls, etc. — into a single cluster display.

On the one hand, this approach presented a challenge, because, well… we would be integrating everything into a single display! Things could get busy, fast. On the other hand, it presented everything of importance directly in front of the driver, where it is easiest to see. No more glancing over at a centrally mounted head unit.

Simplicity was the watchword. We had to keep distraction to a minimum, and to do that, we focused on two principles: 1) display only the information that the driver currently requires; and 2) use natural language processing as the primary way to control the user interface. That way, drivers can access infotainment content while keeping their hands on the wheel and eyes on the road.

For instance, in the following scenario, the cockpit allows the driver to see several pieces of important information at a glance: a forward-collision warning, an alert that the car is exceeding the local speed limit by 12 mph, and map data with turn-by-turn navigation:



This design also aims to minimize the mental translation, or cognitive processing, needed on the part of the driver. For instance, if you exceed the speed limit, the cluster doesn’t simply show your current speed. It also displays a red line (visible immediately below the 52 mph readout) that gives you an immediately recognizable hint that you are going too fast. The more you exceed the limit, the thicker the red line grows.
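The growing red line is just a mapping from excess speed to line width. A sketch of that mapping (the pixel scale and cap are my own invented parameters, not values from the actual cluster):

```python
def overspeed_line_px(speed, limit, px_per_mph=0.8, max_px=12):
    """Width in pixels of the red over-speed line.

    Zero at or below the limit, growing in proportion to the excess,
    and capped at max_px so the line never dominates the display.
    px_per_mph and max_px are illustrative values, not the cluster's.
    """
    excess = max(0.0, speed - limit)
    return min(max_px, excess * px_per_mph)

width = overspeed_line_px(52, 40)  # 12 mph over the limit -> about 9.6 px
```

Mapping the magnitude of the violation to a continuously growing visual cue is what spares the driver any mental arithmetic: the line's thickness is the message.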

The 26262 connection
Today’s digital instrument clusters require hardware and software solutions that can support rich graphics and high-level application environments while also displaying critical information (e.g. engine warning lights, ABS indicators) in a fast and highly reliable fashion. The need to isolate critical from non-critical software functions in the same environment is driving the requirement for ISO 26262 certification of digital clusters.

QNX OS technology, including the QNX OS for Safety, is ideally suited for environments where a combination of infotainment, advanced driver assistance system (ADAS), and safety-related information is displayed. Building a cluster with the ISO 26262 ASIL-D certified QNX OS for Safety can make it simpler to keep software functions isolated from each other and less expensive to certify the end cluster product.

The partner connection
Partnerships are also important. If you had the opportunity to drop by our booth at CES 2016, you would have seen a “cluster innovation wall” that showcased QNX OS technology integrated with user interface design tools from the industry’s leading cluster software providers, including 3D Incorporated’s REMO HMI Runtime, Crank Software’s Storyboard Suite, DiSTI Corporation’s GL Studio, Elektrobit’s EB GUIDE, HI Corporation’s exbeans UI Conductor, and Rightware’s Kanzi UI software. This pre-integration with a rich choice of partner tools enables our customers to choose the user interface technologies and design approaches that best address their instrument cluster requirements.


Thursday, March 10, 2016

The Nature of Economic Knowledge: Divergent vs Convergent

Mike Smitka, Economics, Washington and Lee University

Discourse in economics presumes that knowledge is convergent: more empirical research and better theory will together refine our knowledge and help us approach a "true" understanding. That implicit methodological assumption is not only wrong, but leads to the flawed application of economics. In practice, economics is perhaps better thought of as divergent.

The literature on dowries provides an illustration. India in the 1980s saw newspaper headlines reporting a spate of bride burnings by unhappy mothers-in-law. Coverage of rural areas was presumed spotty, so it's unclear whether this was anything new. It did, though, correlate with a rise in dowries. That contrasted with records suggesting that in the past the norm was payment of a brideprice, not a dowry.

...we should never expect consensus, even while we hope for collegiality...

Vijayendra Rao was interested in Becker's economics of the family, and investigated the issue for his PhD thesis. Men preferred brides younger than themselves. (More generally, women exhibit hypergamy.) Because of a growing population, the number of marriage-age women was larger than the number of men a few years older. The faster the population growth, the higher the "price" a husband fetched. Empirical work seemed to bear that out, in his seminal paper "The Rising Price of Husbands: A Hedonic Analysis of Dowry Increases in Rural India," Journal of Political Economy 101:4 (August 1993).

That proved too simple a story. First, a graduate student interested in dowries, Lena Edlund, wanted to use his data in her own work; she wasn't able to replicate his results. He couldn't either, which led to a comment and a politely phrased reply/retraction. (In my reading of the literature, Rao continues to be acknowledged for comments on dowry papers; no acrimony was involved.) That called for more, and more careful, empirical work. For example, Maristella Botticini (1999) worked with a set of Italian Renaissance archives: "A Loveless Economy? Intergenerational Altruism and the Marriage Market in a Tuscan Town, 1415-1436," Journal of Economic History 59:1, 104-21. There are now studies for multiple countries and time periods, including China and India.

In parallel, the theory literature expanded. Rao posited a "marriage squeeze": Indian women (or their parents) paid more to make a desirable match as the number of marriageable women expanded relative to the number of older men. This was in effect a one-parameter model. Siwan Anderson showed, however, that once women could choose men of varying ages, and not just those of a fixed age, these results vanished. Similarly, she showed that once hypergamy was made more general, introducing caste and not just age, not only did dowry inflation reappear, but it took a specific form [women (but not men) can marry up, as progeny take the father's status].

The number of researchers working on dowries expanded. Some worked under or with Aloysius Siow at the University of Toronto, others with Rao. Co-authorship and paper acknowledgments show that these students and others work with each other. There is now a community of scholars interacting in this area. Finally, they have succeeded in getting articles into highly visible outlets. There's "Why Dowries?" by Botticini and Siow in 2002 in the American Economic Review, the profession's flagship journal, and a 2007 survey article by Anderson, "The Economics of Dowry and Brideprice," in the Journal of Economic Perspectives.

But what has happened along the way? What emerges is a complex literature with a burgeoning set of models and empirical studies. In my reading, this has not led to a more precise understanding of what has happened to dowries, or why. The picture is much richer, but the result has been more questions than answers.

This is, I believe, a general result, particularly in microeconomics: knowledge does not converge on a clear image of a particular phenomenon. Rather, work diverges. This may be a good thing, in that human society is complex and we ought not find simple stories compelling. But it is not what the formal process of running regressions and testing for significance suggests. That implicitly assumes that we can over time refine our models to get a clear picture. That's not the methodology we see at play when we examine the development of a field over time.

This hearkens back to the picture of science as a social endeavor painted by Thomas Kuhn in his later work, such as the essays in the 1977 collection, The Essential Tension: Tradition and Innovation in Scientific Research. It is also consistent with the rich tapestry constructed by Paul Diesing in his 1991 book, How Does Social Science Work? – Reflections on Practice. Finally, if in fact knowledge creation in economics is a divergent process, then we should never expect consensus, even while we hope for collegiality.

Wednesday, March 9, 2016

Happy Birthday BMW

Two days ago BMW celebrated its centenary. The company is now focused on its vision for mobility over the next 100 years. As part of the celebration, it unveiled the Vision Next 100 concept car. Four key factors underpin the BMW VISION NEXT 100:

  A genuine BMW is always driver-focused.
 For autonomous driving, it’s no longer a question of ‘if’ but ‘when’. The BMW Group believes that BMW drivers will let their cars do the work – but only when the driver wants. The BMW VISION NEXT 100 remains a genuine BMW, offering an intense experience of Sheer Driving Pleasure.

  Artificial intelligence and intuitive technology become one.
 Artificial intelligence will learn from us, anticipating many of our wishes and working away in the background to perform the jobs we delegate to it. The way humans and technologies interact will be transformed: screens and touchscreens will be replaced by more intuitive forms of human-machine communication and interaction. Better yet: technology will become more human.

  New materials open up breathtaking opportunities.
 In the future, how will cars be manufactured? At some point, presses that punch out hundreds of thousands of steel parts may well become obsolete – the use of carbon may already be a first indication of the sea-change that is imminent in the world of automotive materials and production. Technologies such as rapid manufacturing and 4D printing will produce not components or objects but intelligent, networked materials and could soon replace conventional tools to open up unimagined possibilities in design and engineering.

  Mobility will remain an emotional experience.
 BMW vehicles have never been merely a means of getting from one place to the next. Far more than that, a BMW is about looking to the next bend in the road, feeling the power of the engine and enjoying the sense of speed; it’s about the sensory experience, the adrenaline rush or that intimate moment at which a journey begins. Moving into the future, that’s not set to change – because the emotional experience of mobility is firmly fixed in our collective corporate memory. By keeping the driver firmly in the foreground, the BMW VISION NEXT 100 will heighten this emotional experience in an unprecedented way.


Saturday, March 5, 2016

US Employment: Age-specific patterns of recovery

Mike Smitka, Economics, Washington and Lee University

Here's another quick cut at what's been happening over the past decade. Prior to 1997 employment as a percentage of the population in different age brackets was nearly constant. During the Great Recession older workers either didn't lose their jobs or found new ones. I show only the 55-59 age bracket; participation for older brackets actually rose, reflecting a longer-term trend. Meanwhile youth jobs vanished, but youth employment (or the desire of those of high school age to work) had also been declining; I don't show that either. The big hit came at the ages where people normally finish schooling, 20-24. Their rate of employment fell 14% relative to the base, and is now down 6%. Some of this may be an increased rate of schooling, and hence permanent, but there was no particular trend prior to 2007. Meanwhile most of the job losses were concentrated among prime-age workers, whose employment fell by 6% but is now down "only" 2.5% – which is still the lowest level over January 1994 - February 2016, the period for which data are available. If the slow rise of the past couple of years continues, by mid-2017 – shortly after the new president takes office – we'll be back at the level prevailing prior to the Great Recession.

Friday, March 4, 2016

US Employment relative to Population Growth

Mike Smitka, Economics, Washington & Lee Univ

Today's Employment Situation. Adding jobs isn't enough, because our population is growing. I've created a normalized level of employment that accounts, for example, for "boomer" retirement. At the current rate we'll be back pretty close to normal within 2 years. Since there are headwinds – basically the entire rest of the world – the Fed will be slow to raise rates, but raise them it will, so the next President will face quite a different environment. (Note: the age-specific levels of employment for the age brackets I display were stable for the 10 years preceding the start of the Great Recession in December 2007. That provides the basis for my calculations. For details see HERE.)
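The normalization described above can be sketched as follows: freeze each age bracket's pre-recession employment rate, apply it to today's population by bracket, and treat the resulting total as the "normal" benchmark. The brackets, rates and population figures below are made up for illustration; they are not the actual CPS numbers behind my charts:

```python
# A minimal sketch of the demographic normalization: hold each bracket's
# pre-2007 employment rate fixed, apply it to the current population, and
# sum. Actual employment below this benchmark measures the shortfall in a
# way that automatically discounts boomer retirements.

def normalized_employment(baseline_rates, current_pop):
    """Employment level implied by pre-recession age-specific rates."""
    return sum(baseline_rates[age] * current_pop[age] for age in baseline_rates)

baseline_rates = {"20-24": 0.72, "25-54": 0.80, "55-59": 0.68}  # hypothetical
current_pop = {"20-24": 21.0, "25-54": 124.0, "55-59": 22.0}    # millions, hypothetical
benchmark = normalized_employment(baseline_rates, current_pop)
```

Comparing actual employment to this moving benchmark, rather than to a fixed pre-recession level, is what lets the measure account for an aging population.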


Thursday, March 3, 2016

The US Market: How many brands, how many models?

Mike Smitka, Economics, Washington & Lee Univ

Toyota will eliminate the Scion brand; Ford is withdrawing from Japan. In the US, Suzuki, Hummer, Pontiac, Plymouth, Mercury, Oldsmobile, Isuzu, Renault, Yugo, Saturn, Hyundai (since returned) and others vanished years ago. Excluding upscale marques, today only two other brands have US market shares below Dodge's 3.5%: Fiat at 0.2% and Mitsubishi at 0.6%. Meanwhile Scion averaged 0.3% for the whole of 2015, and hasn't been consistently above 0.5% since hovering near the 1% level during the high gas price era that ended in 2008. The Fiat 500 had the misfortune to launch even later, in 2011. Now, my categories are admittedly arbitrary. BMW's Mini is also at 0.2%, but the base two-door starts at $20,000, while net of current rebates the Fiat 500 starts at $15,000. Even the Chrysler brand, which to me is not now particularly upscale, has a 1.6% market share.

So how many marques will the market support? That should be a function of the size of the market and the costs of maintaining a brand. Of course this assumes that incumbents' positions are assailable, and that minor brands are up to the battle. On the market-size end the answer is clear: these brands are all devoted to passenger cars, and that market ran at 7.2 million units (SAAR) in February 2016, but at 10.8 million 30 years earlier, in February 1986. Have the costs of supporting a brand changed? You still need national advertising, a dealership network and staff to support it, and a parts distribution infrastructure when you can't piggy-back on an existing brand (Mitsubishi can't; I don't know whether Fiat uses the Chrysler network). Oh, and both Fiat and Mitsubishi are imported, so they need port infrastructure, delivery services that may not be near the center of their market, and perhaps somewhat larger inventory in the system as a whole. Is marketing cheaper than in the past? Does the shift to multibrand store complexes make it easier to recruit dealers? I don't know, but I suspect not.

So let me assume that the costs of running a brand, as a share of revenue, have been constant over time. Without Scion there are then 29 brands today. In addition, basic IO (industrial organization) theory, which today might more accurately be called the economics of strategy, posits a square-root relationship between the number of players and the size of the market. We might then expect √(10.8/7.2) ≈ 1.22, or about 22% more brands in 1986. That implies the presence of 35-36 brands versus today's 29.
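The arithmetic here is worth making explicit. A one-liner applying the square-root rule of thumb, using the February SAAR figures cited above (this is a heuristic scaling, not an estimated model):

```python
# Square-root rule of thumb: the sustainable number of brands scales with
# the square root of market size. Illustrative only.
import math

def implied_brands(brands_now: int, market_then: float, market_now: float) -> float:
    """Scale today's brand count by the square root of the market-size ratio."""
    return brands_now * math.sqrt(market_then / market_now)

# 29 brands at a 7.2 million SAAR, versus a 10.8 million SAAR in 1986:
implied_1986 = implied_brands(29, 10.8, 7.2)
```

The result, roughly 35.5, is where the 35-36 brand figure comes from.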

Now my crude count – I don't have a sales table for 1986 – comes in at 34. A more meaningful comparison would eliminate luxury brands (was BMW one in 1986?) and brands that focused on light trucks. Today I count 18 such brands; in 1986 I count 26, though that includes, among others, Oldsmobile, Volvo and Saab as non-luxury marques.

These back-of-the-envelope calculations are exactly that. Brands that compete with one another might be a better indicator, but that would require a lot of detailed data and its own set of judgement calls. Another calculation would be the number of luxury brands relative to the market size, however arbitrarily "luxury" might be defined. How about Europe, where once relatively independent national markets saw competition from inside the European Common Market, particularly once the Block Exemption on cross-border shopping by consumers was eliminated? Ditto Japan, where several firms (Mazda, Mitsubishi and even Toyota) pared the number of sales channels as the relative weight of compact cars, regular cars and "kei" minicars shifted. With enough data, the relationship might not turn out to be the square root, but the sort of "power law" found in many parts of nature and social structures. And obviously, while it might be hard to come by, data on the fixed costs of a brand would be helpful, though all that might be possible is to look at the smallest brand that appears to be making a profit. (My sense is that financial statements seldom provide much detail, particularly as imports are subject to pricing-to-market, so margins vary over time.)

A more ambitious analysis would focus on models and not brands. As background, with Japan's shrinking market Mazda has announced it will cease making minivans, Subaru (FHI) is quitting "kei" minicar production, and Mitsubishi will not develop a next-generation Pajero SUV (Nikkei, Feb 29). How many mid-sized car models can a market of a given size support? How many minivans, pickup trucks, or luxury models? At what point does a segment become large enough (crossovers) to support luxury models? An alternative approach would focus on brands over some market-share threshold, to eliminate those in the process of entering or exiting the market, or those driven by the ego trips of management infatuated with being in the US (or some other particular market) that hang on despite all commercial logic. Anyway, the overall line of analysis would be the same.

As the various national auto markets evolve – China "maturing" (whatever that means), India growing, Japan shrinking, and new powertrains seeing volumes beyond the experimental levels of (say) a Tesla at the high end or a Renault Zoe at the low end, well ... there are practical implications.

Tuesday, March 1, 2016

Trump: Time to flop, er, flip: an economic analysis

Mike Smitka, Economics, Washington & Lee Univ

Trump has run a strategically brilliant campaign, playing the media to the hilt to speak to the very specific and fairly narrow section of the electorate who vote in Republican primaries. He employed his savvy for publicity to see that, no matter how much his rivals spent, Donald would be the name on the home page of all the news sites, every day.

Only some of today's votes are in, but so far that strategy is working.

from the start, Trump needed only a third of the primary vote to garner the nomination

As I've argued on this blog, using the basic Hotelling model, to win the general election candidates need to move toward the center. I'm sure that Trump recognizes that.
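For readers who haven't seen it, the Hotelling logic can be captured in a few lines: with voters spread uniformly over a left-right spectrum and each voting for the nearer candidate, whoever stands closer to the median wins, so both candidates are pulled toward the center. A stylized sketch (the uniform-voter assumption and the positions are the textbook simplification, not a claim about actual polling):

```python
# Toy Hotelling/median-voter model: voters are uniform on [0, 1] and vote
# for the nearer of two candidates. Positions are illustrative.

def vote_share(a: float, b: float) -> float:
    """Share of a uniform [0,1] electorate voting for the candidate at a,
    against a rival at b (each voter picks the nearer candidate)."""
    if a == b:
        return 0.5  # identical platforms split the vote
    cutoff = (a + b) / 2  # the indifferent voter sits at the midpoint
    return cutoff if a < b else 1 - cutoff
```

A candidate at 0.4 beats a rival at 0.7 with 55% of the vote; moving to 0.5 would do even better, which is the pull toward the center.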

I'm not a political junkie, but my recollection from the preliminaries to the primary race – before anyone had actually declared – is that in the past Trump held many positions that suggested his personal affinity lay at the conservative end of the Democratic Party. (Logically he might better be thought of as at the liberal end of the Republican Party, but that apparently no longer exists as an electoral force.) Now if he wanted to run for president, could he win a primary race against Hillary? No: he would have had to position himself as a centrist, and out-organize Hillary.

He realized, however, that he could trump the other Republican candidates, helped by the sheer number of rivals, none of whom demonstrated his knack for dominating the news. To the extent he has been consistent over the past several months, does he seriously hold the positions that have garnered him headlines? I suspect not, but again, I'm not a political junkie. His campaign has been savvy, indeed brilliant, however reprehensible I find the tactics. (See for example this February Slate article on how he test-marketed his hyperbolic statements to find which resonated best.) He's running a pragmatic, empirically driven campaign, and has simply out-politicked the professional politicians, aided by a bit of luck: there were so many rivals that he would need only a third of the primary vote to sweep up delegates. But again, he could see that when he threw his hat in the ring.

Trump will flip and flop ... but Hillary can't play the purist

To win Trump needs to move toward the middle. He clearly wants to win. So I expect him to show his true character, and by the end of March begin to flip and flop alongside the fishiest of characters. With her long life in the public sphere, however, Hillary can't play the purist against him, because her positions have shifted over time. Will Trump's current supporters be outraged? Surely they will be. But they won't opt for Hillary. He can and will ignore them.

Trump can and will ignore his current supporters

Addendum: For a very different analysis, rooted in empirical political science, see The Rise of American Authoritarianism. My analysis paints Trump as opportunistically responding to what seems to work. This Vox article paints him as responding, consciously or not, to personalities who are attracted to "authoritarianism." In contrast, Rolling Stone, in "How America Made Donald Trump Unstoppable," points to concrete problems within the Republican Party and its current candidates, including finding itself with multiple candidates who were good at raising money but terrible as campaigners, allowing a still comparatively weak candidate to march handily towards nomination as the Party's candidate. While more similar to my analysis, it is not incompatible with the story of the "activation" of that (fairly constant) swath of the human population who abhor change and love order.

Addenda: I will gradually add links here that track a backtracking Trump. These will suffer from a selection bias: mine.

  1. ...will abide by intl law which reverses many positions

Addendum: I've not noticed substantial flops over the past month (as of April 3; the original blog post was Mar 1). So ... either (i) Trump still believes the nomination is not yet his, so that he must continue to challenge the right-of-median-Republican Cruz, or (ii) he is so enthralled by the current mudslinging that he's not willing to move beyond it, or (iii) he really doesn't care about the general election. In contrast Cruz is naming advisors and so on, things that won't contribute to winning against Trump. But those he's picked for foreign policy are staking out positions Bush Jr was unwilling even to discuss.

I look at foreign policy and economic policy, both areas where I have experience. I've seen little yet on economic advisors, other than mention that former Sen. Gramm, of close Enron connections, is to be his chief advisor. His past record is one of impolitic statements and of pushing to end all financial regulation, successfully inserting an "Enron loophole." He didn't last long on McCain's campaign staff, though the reason cited – being too optimistic following the collapse of Bear Stearns – strikes me as hindsight by critics.