New Articles

How Artificial Intelligence is Driving the Memory Market for Autonomous and Connected Vehicles


One of the important technologies that have emerged over the past few years is artificial intelligence (AI). The technology is being utilized across industries to simplify processes and operations. Like other industries, the automotive industry is widely adopting AI to make vehicles safer and more secure. The technology is being used in infotainment systems that now serve as personal assistants, aiding the driver by offering efficient navigational support and responding to voice commands. This increasing utilization of AI is creating significant demand for data storage capacity.

Autonomous and connected cars generate large amounts of data, since they make extensive use of electronic functions to provide greater efficiency, greater safety, driver-assist capabilities, richer telematic and entertainment functions, and communication between local networks and vehicles. Owing to these factors, the global memory market for autonomous and connected vehicles generated a revenue of $4,310.8 million in 2019 and is predicted to advance at a 23.9% CAGR during the forecast period (2020–2030), as per a report by P&S Intelligence. The major applications of the memory market in the automotive industry are telematics, navigation, and infotainment.

Out of these, the largest amount of data was generated by navigation features in the past, largely because of the surging adoption of these systems in vehicles. Navigation systems generate data related to alternative routes, the shortest route, and traffic or checkpoints on the road, and they need efficient storage mechanisms. Apart from this, the telematics application is also predicted to create demand for data storage capacity in the coming years, particularly because of the increasing preference for autonomous and connected vehicles. The system captures data via sensors, radars, and cameras.

Different types of memory used in the automotive industry are NOT-AND (NAND) flash, dynamic random-access memory (DRAM), and static random-access memory (SRAM). Among these, the demand for DRAM has been the highest so far, owing to its efficient data storage and relatively low cost. Both commercial and passenger vehicles generate data, thereby creating a need for memory; however, the largest demand for memory was created by passenger cars in the past. This is because passenger vehicles are produced in larger numbers than commercial vehicles. Furthermore, new technologies are first implemented in passenger vehicles for testing purposes in the automotive industry.

In the past, North America emerged as the largest memory market for autonomous and connected vehicles, and the situation is predicted to remain the same in the coming years. This can be ascribed to the presence of a large number of automotive technology companies and the increasing sales of connected and autonomous vehicles in the region. Moreover, disposable income in North America is high, owing to which people are able to spend more on luxury vehicles equipped with advanced connectivity, safety, and autonomous features.

Hence, the demand for memory in autonomous and connected vehicles is growing due to the increasing demand for safety features in vehicles.

Source: P&S Intelligence

5 DevOps Trends that Demand Your Attention

One of the great things about my job is that I get to go to software developer conferences all over the world and listen to people being extremely smart. When you watch enough smart talks, read enough articles, and talk to enough people trying to get stuff done on the ground, it gets easier to spot trends—just like it’s easier to see irrigation patterns from the air than from the ground.

Here are the five trends I think you should watch for in 2020.

1. Continuous Integration and Continuous Deployment, but not Continuous Release

I was just at DeliveryConf (which was great and you should try to go next year, but in the meantime, here is a link to the talks). At the conference, companies of all sizes and maturity levels described how they were working toward the CI/CD goal of getting code into production more quickly. The hesitation we were all feeling our way around was that we want continuous deployment to production, but most consumer and B2B businesses don’t want to change the user experience that often. Simply put, we don’t want Continuous Release.

In fact, customers frequently resent change, especially when it forces them to retrain users in a new workflow. The thing a user knew how to do automatically is now moved or missing, or there is some new option that no one knows how to use effectively. Interface changes in popular software can mean that companies spend millions of dollars in retraining. Anything that interrupts a user’s unconscious competence and forces them to think about what they’re doing slows them down.

Release is a business decision, and it often is safer and cheaper and better for users if all the changes come at once, so they can all be discussed and taught at the same time. CI/CD, on the other hand, is a technical choice. But that doesn’t mean customers need to experience that cadence, as long as you can deploy without releasing.
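To make the deploy/release split concrete, here is a minimal sketch of gating new code behind a flag so that shipping the code and showing it to users become separate decisions. The flag store, flag name, and checkout functions are hypothetical placeholders; in practice the check would usually go through a feature-management service rather than an in-process dictionary.

```python
# Minimal sketch of "deploy without release": new code ships to production
# behind a flag, and the release is a separate, reversible decision.
# The flag store and all names here are hypothetical placeholders.

FLAGS = {"new-checkout-flow": False}  # deployed, but not yet released


def is_enabled(flag_name: str) -> bool:
    """Look up a flag; default to the old behavior if the flag is unknown."""
    return FLAGS.get(flag_name, False)


def legacy_checkout(cart):
    return f"legacy checkout for {len(cart)} items"


def new_checkout(cart):
    return f"new checkout for {len(cart)} items"


def checkout(cart):
    if is_enabled("new-checkout-flow"):
        return new_checkout(cart)   # code is live in production...
    return legacy_checkout(cart)    # ...but users still see the old flow


if __name__ == "__main__":
    print(checkout(["socks", "shirt"]))   # legacy path until the flag flips
    FLAGS["new-checkout-flow"] = True     # the "release": a config change, not a deploy
    print(checkout(["socks", "shirt"]))
```

The point of the sketch is that flipping the flag is a business decision that can happen on its own schedule, while the deploy already happened on the engineering team's schedule.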

2. Leveraging existing workflows

Similarly, there is no reason users should have to learn new workflows just because the tools their software group is using have changed. I think this year, we’ll see a lot of SaaS vendors work with existing enterprise tools to make those tools more powerful, without changing the user experience much, if at all.

I think of this as leverage. It doesn’t matter to a user if a form is backed by a spreadsheet that needs to be manually imported or if it’s wired directly to a CRM. The user has applied the same amount of effort, but the new tooling has moved the fulcrum point, and the user’s work is more effective.
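As a rough illustration of that leverage, the hypothetical sketch below keeps the form-handling code the user touches unchanged while the backend behind it is swapped from a manually imported spreadsheet to a direct CRM call. Every class, function, and field name here is invented for the example.

```python
# Hypothetical sketch of "leverage": the user's form submission stays the
# same, but the storage backend behind it moves from a manually imported
# spreadsheet to a direct CRM call.

import csv
from typing import Protocol


class LeadSink(Protocol):
    def save(self, lead: dict) -> None: ...


class SpreadsheetSink:
    """Old backend: append to a CSV that someone imports by hand later."""

    def __init__(self, path: str = "leads.csv"):
        self.path = path

    def save(self, lead: dict) -> None:
        with open(self.path, "a", newline="") as f:
            csv.writer(f).writerow([lead["name"], lead["email"]])


class CrmSink:
    """New backend: push straight to a (hypothetical) CRM API client."""

    def __init__(self, client):
        self.client = client

    def save(self, lead: dict) -> None:
        self.client.create_contact(name=lead["name"], email=lead["email"])


def handle_form_submission(lead: dict, sink: LeadSink) -> None:
    # The user-facing workflow never changes; only the sink behind it does.
    sink.save(lead)
```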

3. Personalization

We don’t all want the same things, as we can tell from the Dark Mode Wars. As our bandwidth and information have changed, so have our expectations about how much we can make our technology spaces personally comfortable.

A great example of this is the Google Now app on Android phones. You can tell it what sports team you follow, and then the app will deliver more news about that team and sport. But it also gives you the option to hide gameday spoilers if you’re not going to be able to watch a game right away. They aren’t hiding that information from everyone, or even from other fans of that team, but they are personalizing the experience by protecting you from knowing the score before you’ve watched the game.

Personalization gives users more control over their experiences. It also provides more options than would otherwise be feasible to present globally. We can’t be all things to all people, unless we allow people to choose which subset of all things they want, and then deliver those subsets.
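A minimal sketch of that kind of preference-driven personalization, loosely modeled on the spoiler example above, might look like the following; the preference fields, team names, and scores are all made up for illustration.

```python
# Sketch: the same data is available to everyone, but a per-user preference
# decides how it is rendered. All fields and values here are hypothetical.

from dataclasses import dataclass
from typing import Optional


@dataclass
class Preferences:
    favorite_team: Optional[str] = None
    hide_spoilers: bool = False


def render_score_card(game: dict, prefs: Preferences) -> str:
    involves_favorite = prefs.favorite_team in (game["home"], game["away"])
    if involves_favorite and prefs.hide_spoilers:
        # Same data for everyone; this user just chose not to see it yet.
        return f"{game['home']} vs {game['away']}: final score hidden"
    return f"{game['home']} {game['home_score']} - {game['away_score']} {game['away']}"


if __name__ == "__main__":
    game = {"home": "Lynx", "away": "Storm", "home_score": 88, "away_score": 79}
    print(render_score_card(game, Preferences("Lynx", hide_spoilers=True)))
    print(render_score_card(game, Preferences()))
```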

4. Accessibility

The other exciting possibility of increased personalization is better support for different accessibility needs. The US has had web accessibility standards since 2000, but they haven’t been enforced or adopted evenly. That said, we have seen some recent exceptions.

The Supreme Court recently declined to hear Domino’s appeal in a lawsuit alleging that the pizza company failed to comply with accessibility standards, letting the ruling against it stand. I’m not going to say “this changes everything”, but I will say this might be a good time to be an accessibility consultant who can help teams retool quickly.

The interesting part, and the thing that meshes with personalization, is that different people can have different accessibility needs. Someone with low vision needs solutions that may be incompatible with tab-based navigation, which again may be hard to align with screen readers. Rather than trying to make a single “accessible” page that meets none of those needs well, we’ll use personalization to tune for exactly what different people need.
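One hedged way to picture that tuning is a per-user overlay of accessibility preferences on top of sensible defaults, as in the illustrative sketch below; the setting names are assumptions for the example, not taken from any particular standard.

```python
# Illustrative sketch: per-user accessibility preferences drive the page
# settings, instead of one compromise page for everyone. The setting names
# are hypothetical.

ACCESSIBILITY_DEFAULTS = {
    "font_scale": 1.0,
    "high_contrast": False,
    "keyboard_navigation": False,
    "screen_reader_hints": False,
}


def page_settings(user_prefs: dict) -> dict:
    """Overlay a user's stated needs onto the defaults."""
    return {**ACCESSIBILITY_DEFAULTS, **user_prefs}


# A low-vision user and a keyboard-only user get different, targeted pages.
print(page_settings({"font_scale": 1.8, "high_contrast": True}))
print(page_settings({"keyboard_navigation": True, "screen_reader_hints": True}))
```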

5. Scientific thinking

This is an interesting outflow of our emphasis on data and metrics. Now that we are doing a better job of democratizing access to statistics and metrics, it’s easier for everyone in the company to understand how changes affect user behavior. Rapid releases and Progressive Delivery make it much easier for us to see how our choices work out in near-real-time. That means it’s possible for anyone—not just the UX team—to see how changes play out. With that visibility, we also can form a hypothesis about how a change will affect the data and then look to confirm or reject the hypothesis.

The scientific method is not heavily taught in most computer science programs, because it wasn’t until recently that we had the fast feedback loop that would make it useful. However, at least in the US, most schoolchildren are taught the basics in elementary school. They learn to ask critical questions like:

- What is the current state of the system?

- What change am I making?

- How can I measure a change’s impact?

- Was the impact what I expected it would be?

- Do I have any evidence for why or why not?

We need to be able to ask these questions at the team and individual level and get meaningful answers. We can then use those answers to iterate rapidly and stay attuned to what users want and find useful. What’s more, we can avoid spending months building things that virtually no one needs or wants.
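As a small illustration of that loop, the sketch below frames a change as a hypothesis about a single metric and checks whether the observed shift is larger than the noise, using a simple two-proportion z test. The metric, sample sizes, and numbers are invented for the example.

```python
# Minimal sketch of treating a change as an experiment: state a hypothesis
# about a metric, then check whether the observed difference stands out
# from the noise. The numbers below are made up for illustration.

from math import sqrt


def two_proportion_z(success_a, total_a, success_b, total_b):
    """z statistic for the difference between two conversion rates."""
    p_a, p_b = success_a / total_a, success_b / total_b
    pooled = (success_a + success_b) / (total_a + total_b)
    se = sqrt(pooled * (1 - pooled) * (1 / total_a + 1 / total_b))
    return (p_b - p_a) / se


# Hypothesis: the new onboarding flow raises signup completion.
z = two_proportion_z(success_a=480, total_a=4000,   # before the change
                     success_b=552, total_b=4000)   # after the change
print(f"z = {z:.2f}")  # |z| > ~1.96 suggests the shift is probably not just noise
```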

What do you see coming in 2020? How will this play out in your company or industry?

________________________________________________________

Heidi Waterhouse is a developer advocate at LaunchDarkly. She works at the intersection of risk, usability, and happy deployments. Her passions include documentation, clear concepts, and skirts with pockets. As a developer advocate, Heidi bridges the experiences of external and internal developers and spends time listening, thinking, and learning deeply about the business and technical challenges that face each group.