December 28, 2022

Datalynq's Market News - September 2023 Update

Datalynq Team

Corporate Demand for AI is Driving Chip Market - Sept 19, 2023

The chip market has endured a tumultuous few years in the wake of the COVID-19 pandemic. Luckily, several factors within the tech industry are paving the way for a strong recovery and pattern of growth. From IoT devices to automakers eyeing electric vehicles, products across every sector need more chips. As the AI craze reaches full swing, chipmakers are turning record profits and preparing to use this as an opportunity for bolstered growth in the coming years.  

AI, Increasing Tech Adoption Drive Chip Demand, Grow Market

The semiconductor industry appears well-positioned for a period of growth, according to a report from the IMARC Group that looks out as far as 2028. IMARC predicts the global chip market will reach $941.2 billion by 2028, expanding at a compound annual growth rate (CAGR) of 7.5% between now and then.  
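As a back-of-the-envelope check of those figures (this calculation is ours, not from the IMARC report), the projected 2028 size and the 7.5% CAGR together imply a starting market size, which lines up with the industry's rough scale today:

```python
# Hypothetical sanity check: if the market hits $941.2B in 2028 growing at a
# 7.5% CAGR over five years, discount back to get the implied 2023 size.
target_2028 = 941.2  # billions of dollars (IMARC projection)
cagr = 0.075
years = 5            # 2023 -> 2028

implied_2023 = target_2028 / (1 + cagr) ** years
print(f"Implied 2023 market size: ${implied_2023:.1f}B")  # Implied 2023 market size: $655.6B
```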

So, what’s driving this steady incline for the industry? As the global economy leans more heavily on tech-centered products, the need for semiconductors rises. Given a push for more technology in offerings across sectors, chips are increasingly in demand—even in places they haven’t been traditionally.  

The growing Internet of Things (IoT) is one major driver of growth. More and more of today’s devices are connected to the internet, enabling efficient communication and seamless collaboration. Of course, these devices require several components to stay connected, going beyond those required for their core function. Analysis firm Mordor Intelligence projects the IoT chip market to reach $37.9 billion by 2028, expanding significantly from its current $17.9 billion size.  

Meanwhile, the automotive industry is also hungry for chips thanks primarily to the rise of electric vehicles (EVs). These cars rely on more components to power their complex software and hardware. Semiconductors are essential for key EV features like battery control, power management, and charging. Even non-electric vehicles are being built with more chips today than ever before. Consumers demand flashy, attractive features, which increases the number of chips needed to support them. As some companies lean into self-driving technology and advanced driver assistance programs, more advanced chips are required.  

Indeed, the automotive industry has been one of the biggest chip buyers in recent years. Experts predict this trend will only increase as more carmakers and governments emphasize EVs, citing environmental concerns. Analysts project the auto sector will be the third largest buyer of semiconductors by 2030, accounting for $147 billion in annual revenue.  

Artificial intelligence (AI) is perhaps the most influential technology in the world right now. Surging demand has chipmakers fighting to keep up as companies race to invest in AI. The rise of ChatGPT sparked interest in generative AI, catching the eye of major tech players like Microsoft and Google. Meanwhile, other machine learning applications are being explored across every industry, driving demand for high-power data center chips. Benzinga projects the AI chip market will grow to $92.5 billion by 2028, a staggering increase from the $13.1 billion it was valued at in 2022.  

Nvidia has a massive head start on the market and currently makes the overwhelming majority of AI chips thanks to its prowess in the GPU market. Memory chipmakers are also benefiting, as AI applications require high-speed DRAM. The explosive growth of AI is driving demand for HBM3 memory chips and their successors.  

Finally, an increased desire for high performance across devices has put chipmakers in a favorable position. Emerging technologies such as 5G and edge AI computing require advanced silicon and additional components to enable connectivity.

As the world continues to embrace technology in every facet of daily life, the chip industry must be ready. Addressing critical workforce shortages and ensuring manufacturing capacity is sufficient are key areas to watch in the coming years. Moreover, creating a more diverse supply chain that is more resilient in the face of economic disruption is essential. By working to solve these challenges, the chip industry can put itself in a place to succeed throughout the remainder of the decade.  

Corporate Demand for ChatGPT is an Excellent Opportunity for Chipmakers

ChatGPT, the flagship offering of OpenAI, has revolutionized the way the world thinks about artificial intelligence (AI). The technology has started evolving at a rapid pace—and the industry powering it isn’t far behind. As tech giants gear up for an AI arms race over the next decade, chipmakers are racing to meet demand.  

Now, OpenAI is working to monetize its AI golden child. The Microsoft-backed startup reportedly brings in $80 million each month, but it isn’t content to stop there. OpenAI recently introduced a new business offering, ChatGPT Enterprise, which gives corporate clients stronger privacy safeguards and additional features. It comes via a premium subscription, though the company has not yet released pricing details.  

OpenAI reportedly worked with more than 20 companies to test the product and find the most marketable features. Zapier, Canva, and Estee Lauder were all involved and remain early customers of the product. Beyond those testers, OpenAI claims that over 80% of Fortune 500 companies have used its software since its launch late last year.  

For the chip industry, AI represents a turning point. The technology is already being used to make chip manufacturing more efficient. Startups have tasked AI-powered computer vision programs with spotting defects in wafers during production. This method is both faster and more accurate than manually reviewing each wafer, resulting in greater revenue and production for chipmakers.  

Elsewhere, researchers are beginning to explore how applying machine learning principles to the intricacies of chip design could one day result in more efficient, more powerful semiconductors than humans can create. Samsung is a pioneer in this area. The South Korean firm is already employing generative AI in hopes of competing with TSMC by increasing wafer yields.  

Despite these promising applications, data from McKinsey shows just 30% of companies that currently use AI and machine learning see value from it. But this is expected to change quickly as more businesses experiment with AI and learn how to utilize it effectively. The same report suggests the use of AI could generate $85 billion to $95 billion in value over the long term.  

For chipmakers, using AI in-house isn’t the only factor to consider. As the world’s largest firms scramble to gain an advantage in the AI gold rush, their need for high-performance chips is dire. Without the right hardware, it’s impossible to train AI models and use them to generate and analyze profitable data. Firms that can provide the needed silicon will benefit tremendously.  

As ChatGPT is so often an indicator of the wider AI market’s direction, don’t be surprised to see a big push to include AI in the office in the coming months. Startups of all sizes will introduce their offerings to businesses seeking to improve their efficiency and processes. Each day, more “real-world” uses for AI will crop up as startups who have hungrily devoured capital seek to start turning a profit. For chipmakers, the winners of the AI race aren’t what matters. Rather, the success or failure of AI as a concept and as a useful technology will dictate much of what the future looks like for chips. With some luck, it will be a key growth driver for the industry over the next ten years.



Semiconductors in Space! - September 5, 2023

The future is bright as technology continues to advance more rapidly than anyone can predict. AI leads the way as countries around the world strive to become proficient and find the next breakthroughs. Meanwhile, startups are looking to the stars as dreams of manufacturing higher-quality products in space inch closer to reality.

For the components industry, these developments are part of a larger trend of adaptation. As technology dictates how the world moves forward, the need for components is evolving but always present. Finding innovative ways to meet the demands of companies and countries pursuing advanced technology is paramount for years ahead.  

U.K. Invests Millions in New AI Silicon but Eyes Chip Diplomacy as Path Forward

As the world turns its focus toward the exciting future of artificial intelligence (AI), every nation is racing to strengthen its digital capabilities. The U.K. recently announced an initiative that will see roughly $126 million poured into AI chips from AMD, Intel, and Nvidia. This move comes as part of a pledge made earlier this year to invest over $1.25 billion to bolster its domestic chip industry over the next 10 years.  

However, critics of the move claim the government isn’t investing enough compared to other nations. Indeed, the U.K.’s latest investment is minuscule compared to those made elsewhere. The U.S. has committed $52 billion to its domestic semiconductor industry through the CHIPS Act, while the E.U. has invested some $54 billion. Even so, the scope of the U.K.’s recent move shouldn’t be surprising, given that the country accounts for just 0.5% of global semiconductor sales.  

The recent injection of taxpayer money will be used to build a national AI resource. This will give AI researchers access to powerful computing resources and high-quality data to advance their work and the field. Other countries, including the U.S., are establishing similar programs to further their domestic AI capabilities.  

The primary line item of the U.K.’s recent investment is reportedly an order of 5,000 GPUs from Nvidia, which are used to power generative AI data centers and are essential to running the complex calculations demanded by AI applications. The U.K. government is reportedly in advanced talks with Nvidia to secure these chips amid the company’s massive surge in international demand.  

U.K. Prime Minister Rishi Sunak notes that Britain will focus on playing to its strengths rather than delving too far into areas where it is outmatched. For instance, the U.K. will devote a significant portion of its chip resources to research and design rather than building new fabs like many of its European neighbors.  

Moreover, the U.K. seems poised to put itself in the center of the raging discussion surrounding AI safety. It recently announced that a long-awaited and highly publicized international AI safety summit will take place early this November. The meeting will include officials from “like-minded” governments as well as researchers and tech leaders. The group will convene at historic Bletchley Park between Oxford and Cambridge, home of the National Museum of Computing and the site where Colossus, one of the first digital computers, was put to work breaking codes during World War II.  

Interestingly, while the U.K.’s comparatively small investment will likely hinder its domestic chip ambitions, leadership in the regulatory space could be a fitting role. Britain reportedly aims to be a bridge between the U.S. and China for tense chip and AI safety discussions.

In a statement, a government spokesperson said, “We are committed to supporting a thriving environment for compute in the U.K. which maintains our position as a global leader across science, innovation, and technology.”  

Meanwhile, China is racing to buy billions of dollars of GPU chips to further its own AI ambitions ahead of U.S. bans slated to go into effect in early 2024. At this time, it’s unclear if the U.K. will invite China to participate in its upcoming summit.  

This will be an important development to watch as the U.K. aims to secure its position as a chip leader despite investing far less than other nations. While the U.S., Japan, Taiwan, and South Korea continue to dominate manufacturing, Britain could play a vital role in the future of the industry as a global moderator and champion of regulatory discussions.

How Manufacturing Chips and Drugs in Space Could Revolutionize Life on Earth

Outer space presents an environment for unique experiments that are simply impossible to perform on Earth. Astronauts living aboard the International Space Station (ISS) have been conducting such research for years. More recently, though, interest in manufacturing products in outer space has taken off.  

From new pharmaceuticals to pure materials for semiconductors, the possibilities are endless. As a result, experts believe the space manufacturing industry could top $10 billion as soon as 2030. Startups and governments alike are racing to push the limits of this sector.

Manufacturing certain products on Earth, especially at a microscopic scale, is limited by factors like gravity and the difficulty of producing a reliable vacuum. In space, high radiation levels, microgravity, and a near-perfect natural vacuum give researchers ample opportunity to produce materials or use research methods not available on Earth.  

A Wales-based startup called Space Forge aims to revolutionize chip manufacturing by taking it into orbit. The company’s ForgeStar reusable satellite is designed to create advanced materials in space and return them safely to Earth.  

Since crystals grown in space are of far higher quality than those grown on Earth, producing semiconductor materials in space leads to a final product with fewer imperfections. Andrew Parlock, Space Forge’s managing director of U.S. operations, said in an interview, “This next generation of materials is going to allow us to create an efficiency that we’ve never seen before. We’re talking about 10 to 100x improvement in semiconductor performance.”  

The startup plans to manufacture chip substrates using materials other than silicon. In theory, this could lead to chips that outperform anything the world has seen to date while also running more efficiently.  

As for concerns about manufacturing at scale, Space Forge CEO Josh Western says, “Once we’ve created these crystals in space, we can bring them back down to the ground and we can effectively replicate that growth on Earth. So, we don’t need to go to space countless times to build up pretty good scale operating with our fab partners and customers on the ground.”  

As the semiconductor industry seeks new ways to make chips more efficient with current manufacturing technology nearing its limits, new materials made in space could be the answer. Though many years of research and testing will be needed, space manufacturing is a promising path for chip companies to explore.  

Meanwhile, manufacturing drugs in space has also caught the attention of investors and researchers alike. Varda Space Industries is banking on the unique ability to research and produce high-quality protein crystals in space. Crystallization in microgravity allows scientists to better understand a protein’s crystal structure so they can optimize drugs to be more effective, more resilient, and less prone to side effects.  

Varda co-founder and president Delian Asparouhov says, “You’re not going to see us making penicillin or ibuprofen… but there is a wide set of drugs that do billions and billions of dollars a year of revenue that actively fit within the manufacturing size that we can do.”  

He notes that of all the millions of doses of the Pfizer COVID-19 vaccine given in 2021 and 2022, “the actual total amount of consumable primary pharmaceutical ingredient of the actual crystalline mRNA, it was effectively less than two gallon milk jugs.”  

Once again, this alleviates concerns about producing drugs in space at scale. Rather than making the entire drug in space, companies like Varda will focus on making the most important components, then ship those back to Earth to complete the manufacturing process.  

Thanks to recent advancements in spaceflight technology, such as reusable rockets that make missions to orbit cheaper, dreams of in-space manufacturing are inching closer to reality. Advancements in the coming years will lay the groundwork for what could become the new normal for manufacturing, one that allows humanity to go beyond the limits of what we can create on our planet.


AI is Running the Chip Industry – August 25, 2023

No technology currently has a greater influence on the semiconductor industry than artificial intelligence (AI). From generative models like ChatGPT to massive data centers powering in-house algorithms, AI has sent demand for high-end silicon through the roof.

With demand soaring, chipmakers are scrambling to keep up and expand their production capacities. Cutting-edge AI solutions demand high levels of performance and optimized power efficiency. So, churning out advanced chips is a top priority for manufacturers across segments. Some chipmakers are even turning to an unlikely source for answers to stringent design challenges for future chips—the AI algorithms themselves.

As the chip industry grapples with the possibilities and limitations of AI, the technology’s influence is already redefining the landscape.  

AMD Assures Production Capacity for Key AI Chips Despite Tight Market

AMD CEO Lisa Su had reassuring words for analysts and investors during the company’s second-quarter earnings call. Amid a booming market for AI chips, Su admitted that production capacity is tight, but that her company is poised to meet demand following numerous discussions with supply chain partners in Asia earlier this year.  

She said in the earnings report, “Our AI engagements increased by more than seven times in the quarter as multiple customers initiated or expanded programs supporting future deployments of Instinct accelerators at scale.”  

Large language models (LLMs) like ChatGPT have brought AI, particularly generative AI, to the forefront of the public’s eye. Companies in every sector are scrambling to get their hands on the necessary tech to keep up. For AMD, the generative AI application market and data centers are key strategic focal points.  

With many AMD customers reportedly interested in the MI300X, it’s no surprise they hope to deploy the solution as soon as possible, even though the MI300-series GPUs were only recently announced. In the meantime, AMD is working closely with its buyers on joint design progress to ensure those deployments go smoothly as it begins sampling the new line.  

Su noted in the Q2 earnings call that AMD has secured the necessary production capacity to manufacture its MI300-level GPUs, from front-end wafer manufacturing to back-end packaging. The firm has also locked in capacity across adjacent parts of the supply chain, including high-bandwidth memory (HBM) and TSMC’s CoWoS advanced packaging. Over the next two years, AMD plans to rapidly scale up its production capacity to meet soaring customer demand for AI hardware.  

In the earnings report, Su said, “We made strong progress meeting key hardware and software milestones to address the growing customer pull for our data center AI solutions and are on track to launch and ramp production of MI300 accelerators in the fourth quarter.”  

AMD plans to begin early deployments of the MI300-series GPUs in the first half of 2024, with broader rollouts throughout the latter half of the year as a higher volume of MI300 units becomes available.  

Importantly, Su also commented on the AI chip sector’s stiff competition, citing Nvidia’s market domination and Intel’s recent successes. She noted that the MI300’s flexible design allows it to handle both training and inference workloads. Notably, this capability is attractive to customers across multiple segments, including supercomputing, AI, and LLMs, giving AMD an opportunity to succeed in all these markets.  

Simply put, Nvidia’s head start in the AI market won’t last forever. Although the chipmaker is far ahead of everyone right now, it won’t be the only major supplier of AI chips in the long run. AMD is well-positioned to grab a significant chunk of the market and succeed alongside its competitors with its forthcoming MI300 series. With reassurances of secured production capacity in a tight market, we’ll likely see these chips in the real world sooner rather than later.  

Researchers Are Using AI to Optimize Chip Design. But What Comes Next?

Computers designing themselves sounds like something from a science fiction movie. But it’s already happening inside the offices of Google’s AI-focused DeepMind, where researchers are using AI algorithms to solve chip design problems.  

For years, experts have predicted that the end of Moore’s Law is near as chips continue to shrink and layouts become more complicated. In September 2022, Nvidia CEO Jensen Huang even declared the decades-old adage dead. Thanks to AI, though, the concept of doubling a chip’s transistor count every other year could get a breath of new life.  

In a recent blog post, DeepMind researchers discussed how they are using AI to accelerate chip design. The novel approach treats chip design like a neural network, which consists of a series of interconnected nodes bridging the gap between inputs and outputs on either edge. To translate this into chip design, the DeepMind team created “a new type of neural network which turns edges into wires and nodes into logic gates, and learns how to connect them together.”  

The result? Circuits optimized for speed, energy efficiency, and size. Using this method, the team won the IWLS 2023 Programming Contest, finding the ideal solution to 82% of circuit design problems during the challenge. By later adding a reward function to reinforce the algorithm for positive design decisions, the team has seen “very promising results for building a future of even more advanced computer chips.”  
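The nodes-as-gates, edges-as-wires idea above can be made concrete with a toy sketch: a combinational circuit represented as a graph whose nodes are logic gates and whose edges are wires feeding them. This is purely illustrative and not DeepMind’s method or code; the half-adder circuit and helper names are invented for the example.

```python
# Toy illustration: a circuit as a graph. Nodes are logic gates, edges are
# wires. Not DeepMind's actual approach; names here are hypothetical.
from typing import Callable, Dict, List, Tuple

# Each gate is (gate function, names of the nodes wired into it)
Gate = Tuple[Callable[..., int], List[str]]

def evaluate(circuit: Dict[str, Gate], inputs: Dict[str, int]) -> Dict[str, int]:
    """Compute the value at every node of an acyclic gate network."""
    values = dict(inputs)

    def value_of(node: str) -> int:
        if node not in values:
            fn, fan_in = circuit[node]
            # Recursively resolve each wire feeding this gate, then apply it
            values[node] = fn(*(value_of(src) for src in fan_in))
        return values[node]

    for node in circuit:
        value_of(node)
    return values

# A 1-bit half adder: sum = a XOR b, carry = a AND b
half_adder = {
    "sum":   (lambda a, b: a ^ b, ["a", "b"]),
    "carry": (lambda a, b: a & b, ["a", "b"]),
}

out = evaluate(half_adder, {"a": 1, "b": 1})
print(out["sum"], out["carry"])  # 0 1
```

An optimizer in this framing would search over which gates exist and how the wires connect them, scoring each candidate network on speed, energy, and size.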

It seems DeepMind researchers foresaw the future when they wrote in a 2021 paper, “Our method has the potential to save thousands of hours of human effort for each new generation.”  

While AI isn’t ready to start designing chips all on its own, the promising results speak volumes about this vastly underexplored technique. Semiconductor leaders like Nvidia and Samsung are already using reinforcement learning algorithms to maximize the efficiency of their chips. Numerous startups are also exploring their own methods for using the power of AI to optimize semiconductor layouts.  

However, it’s unclear whether the latest AI craze—generative AI—will play a role. Several companies and researchers are exploring how the technology could be used to optimize chip design, but analysts are doubtful.  

Gartner VP analyst Gaurav Gupta said in a recent interview regarding the use of generative AI, “There is very limited evidence that I have seen so far.”  

This partially calls back to the larger issue in the generative AI space of who owns what. Generative AI models like ChatGPT are trained on massive datasets to gain their skills. That data comes from internet sources, as does the data for most generative AI applications. But whether the resulting models can be used to create new designs that are then considered proprietary is uncertain. Earlier this week, a U.S. district judge ruled that works of art created by AI cannot be copyrighted. While the ruling doesn’t apply directly to semiconductors, it could set an important precedent for other industries as the world grapples with how to regulate AI.  

Even so, experts believe generative AI could still have a place in chip design—just not creating designs from scratch. The technology could be used to augment human-made designs or identify new ways to make a circuit layout more efficient. For now, though, it appears reinforcement learning algorithms will lead the way.  

Over the coming years, more chipmakers will join the fray, making AI chip design more commonplace. As designs continue to shrink, AI will be a powerful tool for the industry to use to continue innovating and improving semiconductor performance and efficiency. Moreover, as demand for more capable chips to power AI applications grows, the technology could become a bit of a self-fulfilling prophecy, as older iterations are used to improve future functionality.


Artificial Intelligence Continues to Attract Chipmakers - August 14, 2023

Original component manufacturers (OCMs) and semiconductor equipment manufacturers are working overtime to prepare for the incoming tidal wave of artificial intelligence (AI) demand.  

The world has quickly fallen head-over-heels for the capabilities of AI. The popular generative AI model, ChatGPT, has been a trailblazer within the AI market, inspiring numerous copycat programs from big and small technology companies. As the competition heats up with ChatGPT equivalents, the need for higher computing power will become a top priority.

After all, these chatbots won’t be able to generate anything without these chips.  

An AI Equipment Boom is on the Way

Artificial intelligence applications have exploded in use since the introduction of ChatGPT, and as consumers and businesses grow impressed with generative AI’s capabilities, the market sector is rapidly expanding. With consumer demand and competition on the rise, semiconductor and chipmaking equipment maker Tokyo Electron sees a boom in equipment sales on the horizon.  

The semiconductor industry has been experiencing a significant glut of excess electronic component inventory. When consumer demand wilted under increased prices driven by inflation and high energy costs, there were worries over how long it would take for the industry to recover. To the surprise and delight of many, generative AI, which quickly captured worldwide attention, is poised to be the driving force behind the glut recovery.  

With this explosion of popularity, the leading generative AI application ChatGPT is already driving growth within the semiconductor manufacturing equipment market sectors. The current impact is still relatively minor, but Tokyo Electron expects that to blossom into significant gains come 1H2024.

The semiconductor industry beyond equipment manufacturers will likely receive a significant boost in demand as technology companies and manufacturers work to dethrone the king of AI chips, Nvidia.  

Tokyo Electron’s senior vice president Hiroshi Kawamoto said OCMs are already contacting the company regarding its GPU-making equipment product lines. Kawamoto told Nikkei Asia reporters, “The trend is still limited in scope, but I think we will start seeing a difference in revenue through April - September of 2024.”

“The number of semiconductors needed for generative AI servers will likely increase,” Kawamoto continued. “This technology could become our next driver for growth.”

Like many equipment makers, Tokyo Electron has faced a steady decline in demand during 2023’s chip glut. Unsurprisingly, demand for memory chip-related equipment has seen the steepest declines. Kawamoto doesn’t believe that will last for much longer.  

“DRAM-related equipment will start looking up as early as the end of 2023,” Kawamoto said. “A real rebound won’t start until the next fiscal year.” He expects steady growth over the next fiscal year and beyond thanks to the wide range of end uses for mature-node manufacturing equipment, primarily in the automotive and industrial sectors. These industries are less volatile than their advanced-node counterparts, which can be subject to more extreme market shifts.

This is good news for the semiconductor industry, which is currently entering the peak period of excess electronic component inventory. It means a faster recovery and a chance to sell off excess stock before storage costs begin to mount for manufacturers.  

Chinese GPU Suppliers Eye AI Market

As semiconductor equipment manufacturers prepare for the oncoming AI equipment boom, so are graphics processing unit (GPU) suppliers, many of which are eager to grab a portion of Nvidia’s expansive market share within the AI sector.  

Artificial intelligence applications, inference workloads, and other programs require enormous computing power. ChatGPT’s underlying model, GPT-3, has 175 billion parameters. By comparison, AMD’s latest accelerator, the MI300X, can hold a model of roughly 40 billion parameters on a single chip. Large-scale AI programs therefore need many high-performance GPUs working together to run significant tasks like content generation.  
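Rough arithmetic shows why a GPT-3-scale model can’t fit on one accelerator. This is an illustrative sketch with assumed numbers (16-bit weights, an 80 GB accelerator, and ignoring activations, optimizer state, and KV cache), not figures from AMD or OpenAI:

```python
import math

# Back-of-the-envelope memory footprint for a 175B-parameter model.
# Assumptions (hypothetical): fp16/bf16 weights, 80 GB per accelerator.
params = 175e9          # GPT-3-scale parameter count
bytes_per_param = 2     # 16-bit weights

weights_gb = params * bytes_per_param / 1e9
print(f"Weights alone: ~{weights_gb:.0f} GB")  # Weights alone: ~350 GB

gpu_memory_gb = 80      # assumed high-end data center GPU capacity
print(f"Minimum GPUs just to hold the weights: {math.ceil(weights_gb / gpu_memory_gb)}")
```

At 350 GB for weights alone, even a generous single accelerator falls far short, which is why large models are sharded across clusters of GPUs.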

GPUs are the secret sauce behind many large language models (LLMs) like ChatGPT and other popular AI tools. With the sudden rise in popularity, GPUs have sprung into high demand overnight. Nvidia dominates, but OCMs of all shapes and sizes are working overtime to grab a share of the AI market.  

Chinese GPU suppliers and startups are no different, especially now that harsh U.S. sanctions seek to limit China’s access to new AI-capable electronic components.  

The Shanghai-based startup Denglin has received funding and support from the China Internet Investment Fund (CIIF), the State Cyberspace Administration of China, and the Ministry of Finance to develop CUDA- and OpenCL-compatible chips. CUDA is Nvidia’s parallel computing platform, which lets developers tap the GPU’s hardware features in their applications. The software has been extremely popular among U.S. developers.  

The chips Denglin has been funded to develop are reportedly aimed primarily at the HPC and AI markets thanks to their general-purpose GPU (GPGPU) capabilities. Denglin has announced four products this year for gaming and AI training. Its most popular solution is the Goldwasser chip, described as “designed for AI-based compute acceleration,” with support for edge and cloud gaming platforms to follow.  

According to Denglin, its GPUs have been in high demand since last year when its 2nd generation GPU production capacity was completely booked. This new GPU will likely face the same high demand as it promises to deliver a 3 to 5 times greater performance gain for transformer-based models. Likewise, it is reported to greatly reduce hardware costs for ChatGPT and generative AI workloads, making it strong competition for Nvidia.

Predictably, a GPU supplier that offers the same CUDA and OpenCL capabilities for AI processing would be immensely popular in China’s domestic market, especially now that restrictions are keeping Nvidia from making a splash within it. Nvidia GPUs’ hefty price tags are nothing to scoff at, either.  

So far, thirteen other GPU developers within China are vying for the top spot. However, whether Denglin can prove popular enough to beat Nvidia on a global scale is far more uncertain.  


Supercomputers and China's AI Rules - July 28, 2023

The world of artificial intelligence (AI) is on an unstoppable upward trajectory of popularity and developmental breakthroughs. OpenAI’s ChatGPT lit the fuse for generative AI and other AI applications. Now, tech companies and chipmakers across the board are working hard to develop their own generative AI chatbots and AI-capable components.

With these discoveries come concerns. Industry leaders, including Elon Musk, are warning others of the potential dangers of unrestricted artificial intelligence. Equally concerned are government lawmakers who are now beginning to draft the first set of laws to safeguard the public from AI-generated misinformation.  

Tesla’s Upcoming Supercomputer Entering Production

Earlier this year, Tesla CEO Elon Musk called for a pause on artificial intelligence, citing concerns that mismanaged design could lead to “civilization destruction.” Musk’s critiques of artificial intelligence are nothing new, and his contradictory stance, condemning the technology publicly while expanding AI development within his companies, has been well-known for years.  

Musk, who helped found the AI lab in 2015, severed ties with OpenAI after learning of its relationship with Twitter, through which ChatGPT was trained on users’ tweets.

Recently, Musk has continued expressing his concerns about artificial intelligence and is reported to be working alongside Microsoft’s president to approach the EU and discuss the best strategies for AI regulation.

Meanwhile, Tesla’s AI team recently announced on Twitter the ongoing progress of Tesla’s upcoming custom supercomputer platform Dojo. According to the tweet, the computer will go into production in July 2023 and is expected to be “one of the world’s five most advanced supercomputers by early 2024.”  

Reports by Electrek and Teslarati detail Dojo’s progress as another significant step by Tesla to carve out a spot within the AI market. Tesla is also working to reduce its dependence on traditional chipmakers like Nvidia, whose A100 and H100 GPU chips dominate AI applications, including some of Tesla’s own. Dojo, by contrast, runs machine learning on Tesla-designed chips and infrastructure, training its neural networks on Tesla’s video data.

Since its launch in 2021, Dojo has continued to be developed with the goal of supporting Tesla’s vision technology and achieving fully autonomous driving. Musk has been working hard to make this area of AI a reality. Despite his critiques of other AI developments, Musk is pleased with the progress and advancements made by the Tesla AI team in both software and hardware.

Dojo is expected to be Tesla’s first step in creating a powerful computing center capable of handling many AI tasks. Tesla’s previous supercomputer, which processes FSD autonomous driving data, uses Nvidia’s GPUs. Dojo should be able to process more video data, accelerating iterative computing in Tesla’s Autopilot and FSD systems. Eventually, Dojo could provide the significant computing power required for Tesla’s other projects, including its humanoid robot Optimus.

Dojo’s supercomputers will greatly aid Tesla’s growing workload as the company strives for independence by designing its own chips and AI applications. With the data-driven insights provided by Dojo through Tesla’s video data, Tesla could come closer to making autonomous driving a success. By extension, the industries that could benefit from Dojo’s insights would be extensive, especially in a more interconnected world.  

However, it will be interesting to see how upcoming AI regulatory efforts shape the market’s development.

China is First to Begin Major AI Regulation

Microsoft and Tesla leaders are heading to the EU to discuss the need for tech leaders to be involved in establishing AI regulation. China, by comparison, has already moved ahead with new requirements for generative AI. As one of the first countries to regulate the generative AI behind popular tools like ChatGPT, China may see other countries use its laws to guide their own regulations.

The Cyberspace Administration of China (CAC) recently announced updated guidelines for the growing generative AI industry. These new rules are set to take effect on August 15th, and the “interim measures” document appears to relax several previously announced provisions.

The announcement comes after regulators fined fintech giant Ant Group a little under $1 billion. This fine followed a series of regulatory strikes against other tech giants, including Alibaba and Baidu, all of which are launching their own AI chatbots, much like Microsoft and Google.

The new regulations will apply only to services available to China’s general public. Technology developed in research institutions or created for users outside of China is now exempt. Language indicating punitive measures, including hefty fines, has been removed so as not to limit AI’s ongoing development within China.

In the document, the CAC “encourages the innovative use of generative AI in all industries and fields by supporting the development of secure and trustworthy chips, software, tools, computing power, and data sources.” Beijing urged tech platforms to “participate in formulating international rules and standards” regarding generative AI. A regulatory body would aid continued monitoring and help hold participants accountable.

It will be interesting to see how other countries follow suit, especially now that tech leaders are courting lawmakers to collaborate on AI regulations. Going forward, it will be important for tech companies and governments to work together to form a robust yet flexible foundation, one that encourages AI development rather than hindering it while safeguarding users.


AI Rules and Regulations on Big Tech’s Mind - July 14, 2023

In May 2023, a few months after the massive boom of OpenAI’s ChatGPT, Geoffrey Hinton, dubbed “the Godfather of AI,” quit Google. With his departure came a warning that artificial intelligence could soon grow smarter than humans and possibly manipulate them. Unfortunately, much of the general public saw those concerns and thought of AI turning on its human creators and overrunning the world with an army of metallic soldiers.

No, that’s not what Hinton and other tech leaders like Microsoft’s Brad Smith are concerned about with artificial intelligence (AI).

In an interview with CNN, Hinton discussed current challenges and rising problems within AI that are becoming more notable as development accelerates. Specifically, Hinton’s immediate concerns are that “the internet will soon be flooded with fake text, pictures, and videos that regular people won’t be able to distinguish from reality. This could, he said, be used by humans to sway public opinion in wars or elections. He believes that A.I. could sidestep its restrictions and begin manipulating people to do what it wants by learning how humans manipulate others.”

These concerns are not unfounded. Presently, some natural language processing chatbots hallucinate, delivering users false information. Meanwhile, artists have expressed concern over being replaced or having their work copied and regurgitated by AI image generators faster and cheaper. However, far-reaching and general restrictions will not fix the issues with AI that concern Hinton and other AI critics.

Better monitoring through regulations based on conversations between lawmakers and tech leaders will be vital in establishing guidelines to prevent the spread of false AI information.  

Microsoft President Discusses AI Regulation with the EU

Artificial intelligence has been making headlines ever since OpenAI’s ChatGPT came out, and with its rise, so has demand for the chips that make these AI feats possible. As the possibilities within AI, specifically generative AI, are explored, the benefits have been marred by growing concerns.

Attention-grabbing headlines calling the rise of AI the doom of humanity, exploring the idea of artificially intelligent automatons enslaving their human creators, are more fiction than fact. However, for every dystopian sci-fi article that dramatizes the dangers of AI, there is an article that rightly discusses the concerns about the growing use of artificial intelligence, mainly regarding areas where AI still falls short.

Artificial intelligence is, without a doubt, a fantastic tool. The capabilities of AI will be important for all organizations in the next cycle of growth, aiding the work and development of human staff. Automation, predictive analytics, market intelligence, generative AI, and other developments will help companies refine processes, increase operational efficiency, and improve employee morale.

That said, AI is not perfect. On debut, Google’s Bard AI made a factual error about the James Webb Space Telescope, and Microsoft’s Bing AI created fictional financial information. These specific errors are part of an inherent problem with AI called artificial hallucination. Hallucinations are “generated content [that is] nonsensical or unfaithful to the provided source content.” There are several possible causes. In natural language processing models like Bing, Bard, and ChatGPT, hallucinations mostly stem from divergence from the training source material or from filling in gaps that were not within the training data.

The biggest concern for large language models that scour internet data is that the internet is full of false information, the very information that trains and feeds many of these AI bots.

Brad Smith, Microsoft’s president, believes that regulating artificial intelligence can benefit AI development. Alongside Tesla CEO Elon Musk, Smith is courting regulators and lawmakers with calls to regulate AI and how big tech companies, like Microsoft and Musk’s Twitter, can help refine the guidelines.  

The timing is perfect as the European Union works out the rules for its coming AI Act, which could set a benchmark for other countries to follow.  

In many countries, it is well known that lawmakers are exceedingly tech-illiterate. Earlier this year in the U.S., the world watched in silent awe as Congress grilled TikTok CEO Shou Zi Chew for hours. More recently, many waited with bated breath as the U.S. Supreme Court ruled on two cases involving Google and Twitter, which many believed had dire implications for the future of AI. The Court ultimately ruled in favor of Google and Twitter.

Having tech giants work alongside lawmakers to create AI regulations would help both parties create a strong but fair system. With big tech’s help, lawmakers could better understand and pass requirements that are stringent enough to protect the public without being draconian and inflexible.

"Our intention is to offer constructive contributions to help inform the work ahead," Smith said. He addressed Microsoft's five-point blueprint for governing AI, which includes government-led AI safety frameworks, safety brakes for AI systems that control critical infrastructure, and ensuring academic access to AI aligns with the EU's proposed legislation.

Furthermore, Smith urged the EU, the United States, G7 countries, India, and Indonesia to work together on AI governance in line with their shared values and principles. With AI growing in use, further collaboration between countries will hopefully produce a standardized set of requirements, making it far easier for AI developers to comply.

Digitalization Optimizes Manufacturing for Modern-Day Demand

Experts have said digital transformation is critical to an organization’s success in an increasingly tech-integrated world. Utilizing digital technologies in existing processes increases efficiency, agility, and profitability. The smoother your operations run, the higher customer satisfaction becomes. Staying competitive today requires digitalization.

In an article by Forbes, data from an L2L survey via Plant Engineering found that 24% of manufacturing companies have a digital strategy. Out of those companies, over 40% have yet to take the first step to begin implementation. This is especially surprising when one considers that manufacturing stands to benefit the most from digitalization.  

According to the L2L survey, one of the biggest struggles that prevent organizations from initiating a digital transformation is moving away from decades-old legacy systems. Especially since the old adage often rings true, “If it ain’t broke, don’t fix it,” right? Except, these legacy systems contribute to the same challenges that feed the flames of chip shortages and other disruptions.  

Unfortunately, manual systems are more prone to errors or inaccuracies than digitalized processes. The leading cause is human error. This is especially true in procurement for electronic components due to the extensive information stored on Excel sheets. With AI and other digital tools, this information is stored instantly and, most importantly, accurately.  

Increased amounts of accurate data provide better supply chain visibility. With these data-driven insights, organizations can make more informed decisions faster without wasting the most valuable commodity in any industry: time. Better visibility gathered through digitalization can give a detailed picture of market opportunities, allowing companies to optimize manufacturing processes and beat competitors by large margins.

In 2020, a Deloitte survey found that “greater levels of digital maturity are major drivers for annual net revenue growth up to three times higher than competitors with a less-developed digital transformation strategy.” In Microsoft’s 2019 study on Tech Intensity, 75% of the 700 executives surveyed believed that harnessing tech intensity, the rate of tech adoption and capability building, is the most effective way to build a competitive advantage today.

That percentage is only increasing. In a Forbes article, research states, “Digitalization is the best way to boost productivity, production quality, revenue growth, hiring appeal, and future market share.” The longer organizations rely on legacy systems, the further the gap grows between those that have digitalized and those that haven’t. A digital transformation will only become more complex and costly for those who do not take the first step soon.

One of the simplest ways for organizations to begin their digital transformation is by utilizing a digital market intelligence tool such as Datalynq. Upon signing up for a free trial, Datalynq can instantly deliver data-driven insights on the electronic component supply chain. Find out how to strategically prepare and create a more resilient supply chain today.


AI is Redefining Chip Making - June 30, 2023

Artificial intelligence is changing the way we work. No, AI isn’t here to replace us or take our jobs. AI still has a long way to go before it can accomplish the complex thought process of the human mind, despite recent leaps and strides. Artificial intelligence is, however, a magnificent assistant.  

Nvidia’s GPUs are powering some of the most fascinating advances within the field of artificial intelligence, specifically in generative AI. Currently, Nvidia stands alone in its dominance. The company’s GPUs have been swiftly gobbled up by tech giants such as Apple and Microsoft. Many chipmakers have quickly shifted gears to focus on producing their own coveted AI chips, but none have come close enough to pose a challenge. Until now, that is.

AMD Challenging Nvidia with New Chip

Nvidia is currently secure as the king of artificial intelligence chips, but it might not be that way for long. Ever since OpenAI’s ChatGPT kicked off the latest artificial intelligence revolution, chipmakers have been seeking a way to take Nvidia’s crown. Among these upcoming competitors is AMD and its latest AI-capable chip.  

According to experts, it might be the strongest competition Nvidia has yet.

It’s called the MI300X, and AMD says it is its most advanced GPU for artificial intelligence; it will become available later this year. Nvidia dominates the market for AI chips with over 80% of the market share, thanks in part to its use by modern AI firms such as OpenAI. This new chip could steal some of that audience.

Traditionally, AMD is known for its computer processors. However, AMD CEO Lisa Su believes that AI could be the “largest and most strategic long-term growth opportunity” for the company. Tapping into the AI market could help AMD see similar growth to Nvidia. Su believes that from 2023 to 2027, the AI market could see over a 50% compound annual growth rate with its current pace. It’s no wonder chipmakers are eager to throw their hats in the ring.

In a statement on the MI300X, AMD said the chip’s CDNA architecture was designed for large language models and other cutting-edge AI workloads. Given the popularity of GPUs within generative AI, a subset of artificial intelligence, the MI300X’s fit for that niche makes it a strong competitor to Nvidia’s GPUs.

This is seen especially in memory: AMD’s MI300X can store up to 192GB, compared to Nvidia’s H100, which supports 120GB. This makes AMD’s chip one of the few that can fit bigger AI models in memory, which is especially worthwhile for large language models, as these applications need extra memory for their algorithms’ vast number of calculations. Most large language models currently use multiple GPUs to achieve this.

During the MI300X’s announcement, the chip ran a 40-billion-parameter model called Falcon. For comparison, GPT-3, the model behind ChatGPT, has 175 billion parameters, making numerous GPUs a necessity. Like Nvidia and Google, AMD plans to offer its Infinity Architecture, combining eight chips into one system, for AI applications. Likewise, the chips should come with AMD’s own software package, called ROCm, similar to Nvidia’s CUDA.
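A rough rule of thumb shows why multiple GPUs are a necessity: at 16-bit precision each parameter takes two bytes, so the weights alone set a memory floor before activations are even counted. A minimal sketch of that arithmetic (the helper function below is illustrative, not from any vendor toolkit):

```python
def min_weight_memory_gb(n_params: float, bytes_per_param: int = 2) -> float:
    """Lower bound from model weights alone at 16-bit precision;
    activations and working memory push real requirements higher."""
    return n_params * bytes_per_param / 1e9

# Falcon's 40 billion parameters need roughly 80 GB for weights alone,
# already near the memory ceiling of a single high-end GPU.
print(min_weight_memory_gb(40e9))   # 80.0

# GPT-3's 175 billion parameters need roughly 350 GB,
# which is why such models are split across many GPUs.
print(min_weight_memory_gb(175e9))  # 350.0
```

By this back-of-the-envelope math, a 192GB chip can hold Falcon-class models whole, while GPT-3-class models still require several accelerators working together.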

This software would give AI developers the capabilities to enable the chip’s core hardware features, something many developers prefer.  

As of now, there has been no word on what the cost of AMD’s chips will be. Nvidia’s popular GPUs can run upwards of $30,000, making them an expensive investment. If AMD’s MI300X comes with a smaller price tag but the same capabilities that endear Nvidia’s GPUs to developers, AMD will likely gain a large and excited audience.  

Samsung Applies AI to Chip Manufacturing

Automating production lines is one of the simplest but most effective tools to both reduce human error and save on operation costs. Unsurprisingly, after the pandemic shut down semiconductor fabrication plants for weeks, many original component manufacturers (OCMs) have begun automating their lines. Samsung Electronics is no different.  

Samsung Electronics is implementing AI and big data into its chipmaking process to enhance productivity and refine product quality. The president and head of Samsung’s semiconductor department, Kyung Kye-Hyun, is leading the charge to improve wafer manufacturing yields and catch up to foundry leader TSMC.  

It might be the only way to get on an even playing field with TSMC, as sources say the chipmaker uses an AI-enhanced chip design method called AutoDMP, powered by Nvidia’s popular DGX H100 systems. This process will likely be used for TSMC’s upcoming 2nm process, as this AI-enhanced method optimizes chip designs 30 times faster than other techniques, something Samsung is eager to recreate.

Through a partnership between the Samsung Advanced Institute of Technology (SAIT) and Samsung’s Device Solutions (DS) division, which oversees its semiconductor business, Kye-Hyun will lead the effort to broaden and improve AI methods within the company’s chip manufacturing process. According to those familiar with the matter, Samsung will specifically use AI for “DRAM design automation, chip material development, foundry yield improvement, mass production, and chip packaging.”

One benefit of utilizing AI that Samsung is specifically targeting is limiting unnecessary wafer losses and determining their source while analyzing and removing DRAM product defects. Since AI excels at pattern recognition, having it assist on production lines to eliminate errors before they propagate will be a considerable improvement.

AI is becoming more essential for chipmakers in ultra-fine processes as the narrower the circuit width on a wafer, the higher the chances of transistor interference and leakage current. Kye-Hyun said earlier this year that “the gap between companies that utilize AI and those that don’t will widen significantly.”  

For smaller or mid-sized OEMs that do not have the bandwidth to optimize production lines with AI, other digital tools can help streamline processes so staff can devote more time to maintaining operations. Market intelligence tools that utilize real-time market data and historical trends can aid manufacturers in strategizing for possible component obsolescence and future disruptions and in creating more resilient BOMs. Datalynq is one such tool, and it offers a 7-day free trial for those ready to see how AI can help their organization succeed.


Artificial Intelligence is the Word - June 16, 2023

The bird isn’t the word anymore! Artificial intelligence (AI) is, and it continues taking the world by storm. While AI opens new avenues within business thanks to data-driven insights and other capabilities, AI is useful in dozens of applications outside traditional information technology solutions. One of those industries is automotive.  

AI has been growing in use within the automotive industry for years and has only blossomed since the dream of a fully autonomous vehicle took hold. The feasibility of such a vehicle might still be years away, but in the meantime, AI is set to boom within vehicles over the next ten years. With that growth, the industry expects to see new capabilities develop thanks to data attained by automotive AI.

First, however, the chips able to perform such feats must be created, and it doesn’t sound as if they are too far off.  

AI Takes Centerstage at Computex

Computex Taipei, or the Taipei International Information Technology Show, is an annual computer expo that has taken place in Taiwan since 1981. As one of the world's largest computer and technology trade shows, it continually spotlights new and emerging technology areas, many of which grow into great importance as they enter mass production.

Over the past few years, Computex has focused mainly on the evolution of PC and cloud-centric applications that support the Internet of Things (IoT) and artificial intelligence of things (AIoT). These applications have continued to amplify the importance of edge computing, prompting many organizations to catch up to the continued growth of the cloud.  

With computing’s deep ties to electronics, it's no surprise that numerous semiconductor manufacturers also attend, including Arm, MediaTek, Cirrus Logic, and other small-to-medium IC design firms. This year, Nvidia took the lead front and center, with CEO Jensen Huang kicking off the show with a keynote speech.  

Qualcomm gave its own speech, sharing its innovative new product lines for AI applications powered by its mighty Snapdragon chips. For eager audience members, Qualcomm once again teased its highly anticipated Oryon CPU, which was unveiled last year. The Oryon CPU is expected to increase performance and efficiency in AI applications for smartphones, digital cockpits, and advanced driver assistant systems (ADAS).

“AI technology is experiencing a remarkable boom, and the profound impact it has had on our daily lives is nothing short of astonishing,” Qualcomm said.

NXP Semiconductors announced a new processor, the i.MX 91, for industrial, medical, consumer, and IoT applications. The new processor is part of NXP’s product longevity program, which ensures qualified products are available for at least ten years, a necessity for the automotive, telecom, and medical markets. The i.MX 91 can help smart home devices communicate with one another, increasing the capabilities of emerging smart home technologies.

Texas Instruments shared its plans for the future of embedded processing and AI. TI’s VP for processors, Sam Wesson, said, "Based on what we see in the industry and what we hear from our customers, three trends stand out as crucial capabilities that embedded designers are looking for: more integrated sensing capabilities; enabling edge AI in every embedded system; and ease of use so customers can get to market faster. At TI, we are innovating to enable new capabilities in embedded design while reducing system cost and complexity."

The emerging use of AI applications, especially generative AI such as large language models (LLM), has thrust AI into the spotlight of both the semiconductor industry and the world. Thanks to decades of investments in AI, Nvidia is currently leading the race in chips equipped to handle the growing demands of AI innovations. Computex proved that attention is now laser-focused on AI developments, and chipmakers are quickly following suit.

As other original component manufacturers (OCMs) begin their foray into AI, Nvidia continues to build its already impressive portfolio. After all, Nvidia’s growth will “largely be driven by data center, reflecting a steep increase in demand related to generative AI and large language models,” Colette Kress, Nvidia’s finance chief, said. OpenAI’s GPT-4 large language model, trained with Nvidia GPUs on extensive online data, is at the core of ChatGPT.

Based on the news shared at Computex, TI, NXP, STMicroelectronics, Nvidia, and others are expected to continue increasing their share of the AI marketspace with chips capable of handling AI’s impressive demands.

The Growing Use of AI in Automotive Applications

AI is growing in use across many industries. From smarter homes to radiography assistants screening mammograms for breast cancer, artificial intelligence is expanding into organizations of all kinds. It should come as no surprise that AI is booming within the automotive market, where chipmakers and automotive original equipment manufacturers (OEMs) alike are looking to produce fully autonomous vehicles.

The current market size is a respectable $9.3 billion for 2023. It is anticipated to grow at a robust CAGR of 55%, booming to a value of $744.39 billion by 2033. That is a significant amount of growth over ten years. As expected, the reason for this explosive increase can be linked to autonomous vehicles.
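Those two figures are mutually consistent: compounding $9.3 billion at 55% a year for ten years lands almost exactly on the 2033 projection. A quick sanity check (the function name is ours; the figures are from the forecast above):

```python
def compound(value_bn: float, cagr: float, years: int) -> float:
    """Project a market size forward at a constant compound annual growth rate."""
    return value_bn * (1 + cagr) ** years

# $9.3B in 2023 grown at a 55% CAGR through 2033
projection = compound(9.3, 0.55, 10)
print(round(projection, 2))  # 744.39 (billions USD)
```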

ADAS and auto-driving modes are utilizing AI solutions to expand performance and efficiency within automotive technology. Incorporating AI has led to the transformation and creation of services such as guided park assist, lane locator, lane assist, and other technologies. Research and development programs have increased automotive AI sales as the automotive industry works to re-implement smart features after cutting them during the worst of the global chip shortage.

People like personalization. AI features satisfy users’ desire to customize their devices to suit their needs while delivering ease of use. AI-enabled applications for autonomous operations are expanding the demand for AI-integrated automotive systems. As a result, automotive engineers designing new cars with newer and better operating systems that utilize AI will drive that explosive market growth.

AI machine learning integration within the automotive market is also helping engineers develop better systems based on data-driven insights gathered from drivers. This includes AI-integrated systems that observe driving patterns to aid in developing advanced guidance and assistance. This pattern recognition is aided by machine learning and natural language processing (NLP), the same type of AI that powers the world’s new favorite chatbot, ChatGPT.

For the future of AI in automotive applications, it is hard to say today what the vehicles of tomorrow will look like. Based on the current market trend, machine learning will become an integral part of vehicle production, significantly aiding engineers in making smarter and more sustainable technology. Many automakers, like Tesla, will strive to develop a truly autonomous vehicle requiring no human interaction.

Likewise, AI integration will only continue to flourish as electric vehicles (EVs) continue to rise in use thanks to government incentives and environmental measures. As it stands, EVs produce large amounts of data on their own. AI can help track, monitor, and report such data to users and manufacturers.  

However, the more chips a vehicle uses, the more prone the industry becomes to shortages. Limited application of sensors and equipment within the vehicle would weaken AI and machine learning systems. Both need a lot of computing power to support them. Though, with countries passing incentive plans for semiconductor manufacturing, the supply of chips will increase as new facilities come online.  

It is important for OEMs and other manufacturers to monitor market conditions and strategize for possible shortages in advance to prevent production stalls from undersupply. Forming partnerships with chipmakers to secure production capacity year after year will also help solidify supply and improve relationships through collaboration. The former can be accomplished with help from Datalynq’s market intelligence.


GPUs for AI and Semiconductor Manufacturing - June 2, 2023

The semiconductor market is recovering, but a recent trend might push one chipmaker back to the beginning of the global chip shortage. Nvidia has been in the spotlight recently for its latest product debuts in artificial intelligence (AI), and its specialized microprocessors have taken the world by storm for their benefits to AI applications.

As AI booms, so too does demand for Nvidia’s products, and with Nvidia the sole supplier of many of them, the more AI grows, the scarcer supply becomes. If AI is going to be the one to bring us out of the current chip glut, it could turn out to be a very sharp double-edged sword.

Meanwhile, options are expanding in electric vehicles (EVs) as EV use grows. Japanese chipmaker Renesas announced its plans to begin production of its own silicon carbide (SiC) chips for EV applications. While it is not the only competitor within that market, its products will likely be a hit among many EV manufacturers.

Nvidia GPUs Are the Hot Ticket Item of the Year

Nvidia’s graphics processing units (GPUs) are powering a new evolution in technology and chip manufacturing. After years spent betting on the success of artificial intelligence, Nvidia’s long-term strategy is starting to pay off, thanks in large part to its GPUs.

GPUs are accelerators. CPUs have long dominated most workloads in both AI and manufacturing processes, but GPUs are specialized components that excel at running many tasks simultaneously.
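The distinction is one of programming model as much as hardware. A serial loop touches one element at a time, while a data-parallel formulation expresses the whole computation as a single array operation, the shape of work a GPU spreads across thousands of cores at once. A small illustrative sketch using NumPy on the CPU (a real GPU version would use CUDA or a framework like JAX or PyTorch):

```python
import numpy as np

def scale_serial(xs, k):
    """CPU-style: process one element at a time, in order."""
    out = []
    for x in xs:
        out.append(x * k)
    return out

def scale_data_parallel(xs, k):
    """Data-parallel style: one operation over the whole array.
    On GPU hardware, this is the expression that fans out across many cores."""
    return np.asarray(xs) * k

data = [1.0, 2.0, 3.0, 4.0]
assert scale_serial(data, 2.0) == list(scale_data_parallel(data, 2.0))
```

Both produce the same result; the difference is that the second form describes the computation in a way parallel hardware can exploit.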

GPUs are even accelerating the same manufacturing process that creates them. TSMC, ASML Holding, and Synopsys have used Nvidia's accelerators to speed up computational lithography. Meanwhile, KLA Group, Applied Materials, and Hitachi are now using deep-learning code running on Nvidia’s parallel-processing silicon for e-beam and optical wafer inspection.

In computational lithography, manufacturers etch chips into silicon by projecting specific wavelengths of light through a photomask. The smaller transistors on silicon dies become, the more creative engineers have to be to prevent distortion from blurring those tiny features. These photomasks are so ornate that they are generated on massive computer clusters and can take weeks to complete. Or at least, they used to.

According to The Register, Nvidia's CEO Jensen Huang claims this process can be sped up by 50x by GPU acceleration. "Tens of thousands of CPU servers can be replaced by a few hundred DGX systems, reducing power and cost by an order of magnitude.”

But improving manufacturing processes has only been the tip of the GPU benefit iceberg. As AI booms, it will become more dependent on high-performing GPUs to accomplish its many tasks. Most large-scale AI depends on GPUs to keep up with the numerous algorithms it is composed of. At the ITF semiconductor conference, Huang spoke of AI and GPUs' potential to breathe new life into Moore's Law: "Chip manufacturing is an ideal application for Nvidia accelerated and AI computing.”

Semiconductor manufacturing facilities are already highly automated, but Nvidia is branching out, using those tasks as a base, to move into robotics, autonomous vehicles, chatbots, and even chip manufacturing. Recently Huang teased a new product, VIMA, a multimodal "embodied AI model trained to perform tasks based on visual text prompts, like rearranging objects to match a scene…I look forward to physics-AI robotics and Omniverse-based digital twins helping to advance the future of chipmaking.”

Despite the downturn in chip demand, Nvidia is poised not only to come out relatively well-off but possibly to find itself unable to meet the growing demand, especially now that large foundry operators, including Samsung Electronics, SK Hynix, Intel, and TSMC, are pushing ahead with new projects. New government incentives, such as the US and EU Chips Acts, have spurred new facility plans which, in turn, require more GPUs to automate the manufacturing and design processes via artificial intelligence and machine learning.

More facilities will likely use AI/ML going forward, primarily due to many countries' lack of skilled labor. To keep costs low and production high, the path forward is to automate numerous tasks and have the small talent pool oversee them.

The AI boom brought on by ChatGPT has caused demand for high-computing GPUs to soar. DigiTimes reported that “Nvidia has seen a ramp-up in orders for its A100 and H100 AI GPUs, leading to an increase in wafer starts at TSMC, according to market sources. Coupled with the US ban on AI chip sales to China, major Chinese companies like Baidu are buying up Nvidia's AI GPUs.”

As a result, Nvidia components could experience a shortage. This would be incredibly impactful because Nvidia is currently the sole supplier of most AI-capable GPUs. The likelihood of a shortage continues to rise as OCMs increase their use of Nvidia’s GPUs, and that is unlikely to stop because, on certain workloads, Nvidia’s GPUs can even outperform today’s quantum computers.

Tracking lead times and availability will be a top priority to keep abreast of the shifting market and the possibility of Nvidia being affected by future shortages. Datalynq can help you plan.

Renesas To Begin Making SiC Chips for EVs

As Nvidia leads the charge in AI, Renesas is introducing new components for the growing EV industry. By 2025, Renesas will begin producing next-generation silicon carbide (SiC) power semiconductors. These new semiconductors will be manufactured in Takasaki in Japan's Gunma prefecture, northwest of Tokyo.

The site currently produces silicon wafers but will begin the shift later in the year. The SiC chips are expected to serve as the MOSFETs in EV inverters, working alongside a new gate driver IC Renesas announced in early January 2023. Compared to traditional silicon, SiC chips offer better heat resistance and reduced power loss, contributing to longer driving ranges.

Numerous chipmakers, including Infineon Technologies and STMicroelectronics, have explored SiC semiconductor capabilities over the last few years. Renesas is behind the curve in this area. Renesas President Hidetoshi Shibata isn’t too concerned with playing catchup, as the chip giant has done it once before.  

Renesas "was a latecomer in conventional power semiconductors, but now [our products] are valued for their high efficiency," Shibata told reporters on Friday during the announcement. "The same can be done with SiC."

Despite Tesla’s announced 75% reduction in SiC chip use across its future vehicle line-up, the EV industry isn’t wavering in its adoption. SiC has proven itself a more efficient component in EV applications, and it will likely continue to be used in power-saving solutions for its benefits over traditional silicon, especially now that silicon carbide is becoming more affordable as its manufacturing process is further refined.


Lack of Digitalization Costs Companies Far More than Revenue - May 19, 2023

The digital age is upon us. However, many organizations are still unprepared to adapt. For those who don’t upgrade in the ever-evolving digital landscape, financial losses are the least of their worries.

Leading the charge in technological developments is artificial intelligence (AI), thanks to recent AI platforms making waves. AI shouldn’t be viewed as a terrifying boogeyman out to replace humans but as a tool that helps excellent staff perform even better. Chip giant Nvidia is one of the manufacturers at the forefront of helping organizations of all sizes implement AI into their workflows.

Nvidia’s AI Tech is Transforming Enterprise Workflows

2023 is quickly becoming the year of AI. After OpenAI’s ChatGPT popularity boom in December 2022, the last few months have been nothing but AI launch after launch. Experts believe that growing interest in AI technology will keep semiconductor demand up throughout 2023, with expectations it will ease the chip glut the industry is currently struggling with. Most of that recovery is forecasted to begin in late 2023.

While Google and Microsoft work to perfect their ChatGPT equivalents after some troublesome debuts, one original component manufacturer (OCM) is helping companies kick off their AI journey.

Earlier this year, Nvidia launched several AI inference platforms to aid developers in building specialized, AI-powered applications. These platforms pair Nvidia’s latest inference software with its NVIDIA Ada, NVIDIA Hopper, and NVIDIA Grace Hopper processors. Each platform can tackle AI video, image generation, large language model deployment, or recommender inference.

According to a recent article by The Motley Fool, Nvidia will benefit significantly from this oncoming adoption of AI systems. As a company heavily invested in AI, with numerous products critical for AI solutions, such as its graphics processing units (GPUs), Nvidia will be one of the front runners of the AI revolution within modern industries, mainly due to how well GPUs fit AI workloads.

AI supercomputers need thousands of GPUs working in parallel to solve computationally intense workloads quickly and efficiently. For those that can’t build their own supercomputers, Nvidia offers Nvidia DGX Cloud, which allows clients to utilize a specified amount of AI-computing power. Furthermore, its AI training service provides multiple tools to aid an organization’s AI development, from workload management models to chatbots for customer support.

PC sales and cloud services are expected to continue trending downward through Q2 2023 and perhaps the first half of Q3, which will put a damper on AI implementation throughout the year. Still, as companies formulate their AI strategies, sales are expected to pick up around the end of the year, when most inventory correction for excess stock finalizes.

AI is set to become one of the more beneficial tools that can aid human staff in completing their tasks. One of the best ways to see how greatly your organization can benefit from AI is by using it on a smaller scale before investing in creating your own.

Don’t Plan on Digitalizing? It Will Cost You

The semiconductor industry and the greater electronic component supply chain have a problem. For an industry that supplies manufacturers with cutting-edge technology that brings modern society closer to an almost sci-fi Hollywood-like reality, it is exceedingly traditional.  

Traditional practices within the electronic components industry mean hundreds of Excel sheets to track the market data generated daily. No matter how resistant a company is to embracing digital tools, it is impossible not to possess some form of digitalization, whether in the shape of email, instant messengers, e-commerce sites for component procurement, or, yes, Excel. The problem lies in the resistance to further digitalizing and improving existing digital tools.

Why is that a problem? There’s a multi-pronged reason why resisting further digitalization and relying on tradition hurts companies far more than any theoretical cost savings.

Data is invaluable. It provides hard evidence of whether a product is working, where it succeeds, and where it needs improvement. Within the electronic component industry, the supply chain generates oceans of data daily. Some data is less insightful than other data; it depends on what function it needs to serve.

So, what’s the issue with data? The electronic components industry is filled with a lot of sensitive data. Semiconductors and other electronic components are among the most critical products integral to modern society. They power computers, cars, medical devices, and defense products, among thousands of other things. Therefore, chip designers are very particular about who can access their data.  

That data is at serious risk if companies fail to digitalize or adopt new technology.  

Unfortunately, the world has some bad eggs in it. In a recent article by Forbes, writers discussed why technology companies must take digital stewardship seriously. Digital stewardship is the ethical and responsible management of digital assets, including data, privacy, and cybersecurity. For companies that collect vast amounts of data, the consequences of digital negligence or failing to manage data properly go far beyond monetary cost.

This isn’t just a problem for tech companies. The electronic components industry, from OCMs to electronic manufacturing service providers (EMS) and original equipment manufacturers (OEMs) to everyone in between, produces and hosts quite a large amount of data. Failing to protect this information opens organizations up to data breaches, cyberattacks, privacy violations, financial losses, and damaged reputations.  

IBM studied the effect data breaches have on companies. The average data breach costs $4.35 million globally, but the more considerable loss comes from the damaged trust between customers and a company. Since data breaches can contribute to the loss of intellectual property, legal fees, and regulatory fines, many former clients will cease existing relationships due to diminished trust. If a company fails to protect secured data, whether it is just transactional information from a purchase or intellectual property shared between an original design manufacturer (ODM) and an OCM, the repercussions of having that information accessed by an unwanted third party are enough to destroy any previous partnership.

That loss goes far beyond the cost of legal fees and investments in a new cybersecurity system.

Furthermore, the lack of digitalization can quickly put a company in hot water with various government bodies worldwide. The European Union’s General Data Protection Regulation (GDPR) and California’s Consumer Privacy Act (CCPA) are two laws that require companies to protect user data and privacy. Violations can result in significant fines and legal fees. Possessing old digital tools that have not been updated, for example, an old email server, can put even the largest company at risk for data breaches in the form of phishing emails, especially if staff aren’t adequately trained on the risks.

Digitalization is an important step for companies, not only for the competitive edge it gives them but for the protection provided to sensitive data. The cost of digital negligence is too great a threat to ignore in this day and age. It’s time to start bringing your organization into the digital age.  


If it Ain’t Broke, You Still Might Need to Fix it - May 5, 2023

Is digitalization right for everyone?

That’s a popular refrain in the industry today: digitalization is a necessity, not an option; if you want to stay competitive, you have to digitalize. Yet the semiconductor industry has used traditional operational methods for decades and outlasted numerous shortages. If these methods have worked so well, why must the industry change them?

The truth is that the semiconductor industry should have left certain operating methods in the past a long time ago. Just because they continued to work long past their “best by” date doesn’t mean they weren’t broken. Like the chips we depend on for our products, tools have a short window of time in which they function at their peak. After a while, and a lot of use, their performance degrades and decays until they break.

The fact of the matter is the semiconductor industry should have digitalized a long time ago.  

Tradition Fails, New Procurement Processes Take the Lead

There’s an old saying: “If it ain’t broke, don’t fix it.” It’s an American adage often attributed to Thomas Bertram “Bert” Lance, a close adviser to Jimmy Carter during his 1976 campaign who became director of the Office of Management and Budget (OMB) in Carter's administration. Lance was hardly the inventor of the phrase, which originated long before Pulitzer Prize-winning columnist William Safire chronicled Lance’s use of it.

The phrase is, most often, true. If something works adequately, leave it alone; only fix it if it is broken.

But adages aren’t something manufacturers can rely on when inaction results in disaster. A method may work well for years, but evolution often trumps adequacy.

Before the Covid-19 pandemic, the traditional methods within the semiconductor supply chain worked. If it ain’t broke, don’t fix it. Just-in-time (JIT) scheduling was the primary method of scheduling shipments. Original equipment manufacturers (OEMs), contract manufacturers (CMs), and electronic manufacturing service providers (EMS) would only order from original component manufacturers (OCMs) or electronic component distributors when they needed components. OCMs would determine production capacity for component lines based on previous yearly sales. Market indicators were sometimes taken into account. Collaboration and transparency between manufacturers were almost non-existent.  

After all, why share designs with an OCM or discuss future stock security if it was never needed? Short-term gains were prioritized over long-term strategies, and when the pandemic hit, the semiconductor supply chain faced one of the worst shortages ever.  

Semiconductor bottlenecks are born from limited capacity, high demand, and over-ordering. These bottlenecks will likely last throughout 2023, despite continued efforts to mitigate the problem. The chip glut is here, and excess inventory won’t resolve until late 2023 at the earliest. Traditional ordering and even sourcing methods are not cutting it anymore. Many manufacturers are unwilling to adapt right now because time is scarce.  

Traditional chip sourcing methods are notoriously tedious. Long hours are spent comparing chip offers, with numerous calls to different sales representatives. Keeping these methods in place for short-term gains will only delay the inevitable. New methods are more resilient, cost-effective, and time-efficient once integrated into a workflow.

It doesn’t even take a lot of time to learn.  

Many companies are starting to integrate artificial intelligence tools into their organization’s workflows. AI can easily aid companies by quickly collecting and sorting through data to find credible insights for procurement teams. Machine learning algorithms can be customized in their approach to data collection, finding new opportunities traditional sourcing methods can overlook. Organizations that utilize AI are more agile, resilient, and able to routinely run aggressive inventory strategies that benefit both short-term and long-term goals.
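As a deliberately simplified illustration of what such automation replaces, the rule-based sketch below screens hypothetical component offers the way a procurement analyst would by hand. The offer data, field names, and thresholds are all invented for this example; real tools would layer machine learning on top of rules like these:

```python
# Minimal rule-based screening of component offers. All data, field
# names, and thresholds here are hypothetical, for illustration only.

offers = [
    {"mpn": "STM32F103C8T6", "price": 3.80, "lead_weeks": 6,  "source": "franchised"},
    {"mpn": "STM32F103C8T6", "price": 2.90, "lead_weeks": 30, "source": "franchised"},
    {"mpn": "STM32F103C8T6", "price": 3.10, "lead_weeks": 8,  "source": "broker"},
]

def viable(offer, max_price=4.00, max_lead_weeks=12, trusted=("franchised",)):
    """Apply the simple sourcing rules an analyst would otherwise check by hand."""
    return (offer["price"] <= max_price
            and offer["lead_weeks"] <= max_lead_weeks
            and offer["source"] in trusted)

# Shortlist viable offers, cheapest first, so a human reviews only the best few.
shortlist = sorted(filter(viable, offers), key=lambda o: o["price"])
for o in shortlist:
    print(o["mpn"], o["price"], o["lead_weeks"])
```

Here only the first offer survives: the second fails the lead-time rule and the third fails the trusted-source rule, which is exactly the triage a sourcing team spends long hours doing manually.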

Manufacturers that use AI can offer greater transparency to OCMs, allowing them to accurately determine chip production capacity based on a manufacturer’s future needs.  

One of AI's newest features is the ability to create a digital twin. This AI-powered solution can optimize end-to-end supply arrangements, from the availability of parts to advanced production planning and logistics. AI-powered tools will help optimize the supply chain’s complex web of producers and products while boosting time management—something the semiconductor industry desperately needs.  

The future shortages the supply chain will face are expected to make the 2020-2022 global shortage look like a walk in the park. Fortifying your supply chain with AI now will prevent a shortage's damaging effects from impacting your organization to the same extent again.

How to Mitigate the Shortage’s Impact

Recovery is coming, but based on expert forecasts, it won’t be until late 2023 that we can all breathe a sigh of relief. For now, working on building resiliency and supply diversification will help speed up recovery.

Government incentives and subsidy programs can only do so much, especially when bringing new fabs online takes years. Demand could outpace supply, despite being in a period of excess, with how rapidly digitalization is taking the world by storm. Popular tools like generative AI and the growing Internet of Things (IoT) are expected to feed the demand that brings the semiconductor industry out of the current chip glut.  

To continue negating the impact of the shortage, you can do a few things to outlast the pain.  

1. Collaborate with Distributors

Transparency is key. One of the factors that exacerbated the shortage was the traditional practice of OCMs and OEMs not working with one another. During the design phase for a product, distributors, OCMs, and OEMs should collaborate on what is needed for upcoming production lines. That way, stock can be secured for future capacity so it is available when a product is ready for market.

2. Always Check for Alternates

Sole source components are a dangerous threat that often goes unnoticed or overlooked. The more sole sources your product requires, the more vulnerable it is to shortages. You’ll have a stronger product design if a component has plenty of form-fit-function (FFF) alternates, a diverse lineup of manufacturers, and currently active alternates. You can confirm this through Datalynq’s Multi-Source Availability Score, which takes several factors into account before assigning a component a score based on its availability.
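Datalynq doesn’t publish its scoring formula, so the sketch below is a purely hypothetical illustration of the underlying idea: more FFF alternates and more active manufacturers push a component toward the safe end of a 1-to-5 scale, while a sole source pins it at the risky end.

```python
# Hypothetical illustration of a multi-source availability score.
# This is NOT Datalynq's actual formula, just a sketch of the idea that
# more FFF alternates and active manufacturers mean lower sourcing risk.

def availability_score(fff_alternates, active_manufacturers):
    """Map sourcing diversity onto a 1 (sole-source risk) to 5 (diverse) scale."""
    if active_manufacturers <= 1:
        return 1  # sole source: highest risk regardless of alternates
    # Weigh alternates and manufacturers equally, capping each contribution.
    raw = min(fff_alternates, 8) / 8 * 2 + min(active_manufacturers, 4) / 4 * 2
    return 1 + round(raw)  # clamp into the 1-to-5 band

print(availability_score(0, 1))   # sole-sourced part: worst score
print(availability_score(6, 3))   # reasonably diversified part
print(availability_score(8, 4))   # many alternates, many makers: best score
```

The caps and weights are arbitrary; the point is only that a single number can summarize several sourcing-diversity factors at a glance.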

3. Be Ready to Adapt

The semiconductor shortage was one of the more difficult challenges of the recent decade for the semiconductor industry. By adapting to emerging technologies and new methods, manufacturers can create a more robust supply chain that can withstand some of the worst disruptions. Traditional methods acted as kindling, and Covid-19 was the spark. Digital tools give supply chain managers and other key decision-makers more visibility into the global state of the supply chain and market.

The shortage’s impact will last for years to come. It was a perfect storm of circumstances that led to the years-long drought of chips, and relying on old methods will only ensure the next shortage is worse. As manufacturers mitigate the damage of the 2020-2022 shortage, it is essential to implement new technology to better prepare for the long-term changes coming to the electronic component industry.

You can explore what AI tools can do for you with Datalynq’s 7-day free trial today.  


Those That Don’t Embrace Digitalization Will Lose Valuable Data - April 21, 2023

Chip demand is in a slump. While it’s low, chipmakers and market experts are urging original equipment manufacturers (OEMs), electronic manufacturing service providers (EMS), and others to invest in transforming their supply chains now. Supply chain managers are working hard to diversify, fortify, and make their supply chains more resilient. The way to accomplish this goal is through digitalization.

New artificial intelligence (AI) and machine learning tools are helping organizations across all industries uncover better insights from their data to prepare for future disruptions. While diversifying your supply chain is vital to a supply chain’s strength, strategically planning for possible disruptions through data insights will give companies the competitive edge they need.  

Moreover, integrating AI into workflows is vital to overcoming future hardships.  

Chip and Research Giants Give the Same Advice: Digitalize

Digital transformation is what the supply chain needs. The current demand slump has left the combined market value of the top 10 global chip companies down 34% from the $2.9 trillion peak of November 2021. Many issues, including rising interest rates, high inflation, lower consumer confidence, and tech-led stock market retreats, have caused the downturn. Pair these economic challenges with more significant disruptions, including Covid-19 lockdowns, seaport congestion, Europe’s energy crisis, and the war in Ukraine, and you get the current state of the semiconductor supply chain. U.S. sanctions on advanced semiconductor technologies in China will likely impact the industry throughout 2023.

To combat these problems, original component manufacturers (OCMs) have been cutting costs, reducing their workforce, and lowering production output where possible. These strategic measures are working to mitigate the effects of the shortage-turned-glut and have varying levels of success depending on the niche of the chip market. Memory OCMs, such as Samsung Electronics and SK Hynix, are experiencing some of the worst quarters since 2008-2009 as DRAM and NAND prices fall. Production cuts, scaled-back capacity, and low spending are helping mitigate some of the damage.

But everyone, including chip giant TSMC, faces pitfalls from the current slump.

The industry is working quickly to solve these rising problems, and based on experts’ forecasts, these plans might pay off. The last chip shortage is expected to be resolved by early 2024. Those facing excess inventory challenges should see demand pick up in late Q3 and Q4 of 2023, helping digest the remaining surplus. All in all, the bullwhip from shortage to glut could have been worse. Thankfully, it wasn’t.

However, it could have been a lot better.  

Industry-leading business consulting firm Deloitte believes that despite the predicted growth in the latter half of the year, the semiconductor industry requires transformation through 2023. According to Deloitte, the U.S. and European Union’s plan to diversify all portions of their supply chains, from fabrication to assembly and testing, will be the first necessary step. Had these steps been taken in the past, many missteps by OCMs and OEMs alike could have been avoided.  

There are different ways to make a supply chain more resilient. The more diverse your supply chain is, the less vulnerable it is to shortages, geopolitical tension, and other hurdles. Deloitte’s article states that digital transformation with data-driven supply chain networks is the best way to fortify your supply chain against future disruptions.

According to Deloitte’s report, “This is where integrated data platforms, next-generation ERP, planning, and supplier collaboration systems along with artificial intelligence and cognitive technologies are expected to make OSAT processes more efficient and help sense and preemptively plan for future supply chain shocks.”  

Sharing real-time data and intelligence across the supply chain with digital tools helps OCMs and OEMs address numerous issues from sales and partner organizations and proactively plan for warehouse storage and logistics. It also keeps the entire process transparent, which aids collaboration across the supply chain. Deloitte advises semiconductor supply chain members to adopt industry 4.0 solutions, such as Nvidia’s Omniverse which utilizes digital twins to aid in process visualization. Predictive analytics can assist decision-makers in perfecting mitigation strategies before disruptions occur, preventing the significant losses that OEMs are currently dealing with.  
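A toy version of such a predictive signal might watch a component's lead-time history and raise an alert when the recent trend breaks well above its baseline. The weekly figures and threshold below are invented for illustration; production systems use far richer models than a moving average:

```python
# Minimal sketch of a predictive disruption signal: alert when recent
# lead times trend well above the historical baseline. The weekly
# lead-time figures (in weeks) are made up for this example.

lead_times = [10, 11, 10, 12, 11, 13, 16, 19, 24, 28]  # hypothetical history

def disruption_alert(series, window=3, threshold=1.5):
    """Alert when the recent moving average exceeds baseline * threshold."""
    baseline = sum(series[:-window]) / len(series[:-window])
    recent = sum(series[-window:]) / window
    return recent > baseline * threshold, baseline, recent

alert, baseline, recent = disruption_alert(lead_times)
print(f"baseline {baseline:.1f} wks, recent {recent:.1f} wks, alert={alert}")
```

In this made-up series, the baseline sits near 12 weeks while the last three weeks average nearly 24, so the alert fires, which is the kind of early warning that lets a buyer secure stock before a disruption fully lands.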

Deloitte continued, “In 2023, semiconductor companies need to modernize their ERP systems and integrate diverse data sources such as customer data, manufacturing data, financial and operational data.”  

AI-Gathered Data Improving Processes Across Industries

It’s hard to stay competitive without artificial intelligence. Today, AI adoption is a necessity, not an option. To keep pace with the ever-changing and more technology-integrated world, AI is needed to help pick up the slack so talented human workers can better aid organizations. Numerous enterprises have embarked on their journey to utilize AI in transforming their businesses.  

AI is a general term for many technologies, all of which represent some form and degree of digitalization. For some, it is automating assembly line robots, which use AI to perform a predetermined function. For others, it is implementing predictive analytics to help forecast future disruptions based on historical trends or aid assembly robots in the detection of possible defects in products.  

It’s important to note that the more technology is integrated into the world, the more data is produced. The vast quantities of data generated daily can no longer be analyzed swiftly and efficiently by human staff alone. Organizing and understanding that data is best accomplished through AI, which aids workers by using big data analysis to quickly surface the relevant data that supports an organization’s objective. Other AI software platforms should function similarly.

Each tool should serve a specific function that supports the overall business goal. One such tool, generative AI (exemplified by ChatGPT), is new on the market, and business leaders are now seeking ways to integrate it into their workflows to maximize its benefits. To determine whether a new piece of software will benefit your company, first assess your existing data strategy and how it is maintained. An organization that implements the latest and greatest tool without a proper understanding or strategy in place can quickly become bogged down and inflexible.

Supply chain managers should implement strategic data sourcing, which requires a defined objective, decision, and outcome for the data being collected. AI can be utilized here, much like a data scientist, to seek out internal and external data that is relevant, appropriate, high-quality, and permissible to use. Better yet, AI aids data scientists and other staff in this task by quickly identifying high-quality data through its given rule set. This forces data management teams to set transparent objectives and rules that determine which data is necessary. Once trained, AI can quickly sort through mountains of real-time data, finding the pieces that fit the predetermined rules.

Once completed, data gathered by AI helps define and drive digital transformation. Datalynq can help digitalize your supply chain through predictive analytics, a solution that uses AI to forecast market trends, prices, and possible disruptions through real-time market data. You can get started with a 7-day free trial to see how Datalynq can help you today.  


New Day, New Tech - April 7th, 2023

2023 is quickly becoming the year of new technology. After several years of chip shortage, companies are making up for lost time. With every advancement in intelligent tech, the number of chips needed to support it increases. Now that advanced chips are more available, thanks to consumer demand slowing under inflation, artificial intelligence is rising in popularity and application.

Next-Generation IoT Prioritizes Connectivity

Let’s begin with a simple fact: technology today makes it hard to imagine a world where we aren’t connected. Connectivity is important to us, whether between people, organizations, or devices. It should come as no surprise that as technology develops, the focus falls on connectivity and how a device connects users with other devices. Smart, interconnected devices are intertwined with daily life because they help make mundane tasks easier, allowing users and others to benefit from connected devices more productively and efficiently.

Connectivity is an inherent piece of smart technology. Smart technology is “a technology which uses big data analysis, machine learning, and artificial intelligence to provide cognitive awareness to the objects which were in the past considered inanimate.” AI and other learning algorithms within smart technology usually involve a connection with, at minimum, another platform capable of data collection or hosting. As the technology learns and interacts with the data, it communicates results to users or other devices, following predetermined actions programmed into it based on patterns in the dataset.

The Internet of Things (IoT), a network whose purpose is to connect numerous devices and their data, prioritizes connectivity as it develops. With each technical advance in lower-power chips, AI, and machine learning, new uses for IoT emerge. Their advanced connectivity allows for better and faster data sharing. In a recent report, McKinsey and Company expects that by 2030, IoT products and services will create between “$5.5 trillion and $12.6 trillion in value.” While today’s IoT is still somewhat fragmented, thanks to its variety of tech ecosystems, IP, and standards, insights from today’s challenges can uncover new solutions.

As a result, IoT solutions are being utilized in numerous industries that rely on quick and efficient data sharing. Manufacturing, healthcare, and transportation are some of the key markets expected to benefit immensely from improved networks that create connectivity between applications where none previously existed. By 2030, McKinsey expects factories and medical sectors to account for 36% to 40% of that value.

Rob Conant, vice president of software ecosystems at Infineon, told McKinsey and Company that IoT applications are spreading from industry to industry and into more diverse businesses. “Connectivity is extending into more and more applications,” Conant said. “Pool pumps are becoming connected, light bulbs are becoming connected, even furniture is becoming connected. So, all of a sudden, companies that were not traditionally tech companies are becoming tech companies because of the value propositions they can deliver with IoT. That’s a huge transformation in those businesses.”  

According to McKinsey, the best way to improve IoT solutions faster is to enhance collaboration between software, hardware, and chip manufacturers alongside IoT product makers. Alignment of goals through each member of the production chain in bringing these solutions to fruition will lead to innovation at a faster pace. If the focus on new IoT products is to improve connectivity between devices and users, it should also be the priority of those who make them.  

Collaboration and transparency in all sectors of a supply chain, from design to market, can give the organizations working together a competitive edge that propels them far beyond their competition.

Nvidia Launches AI Interfaces to Help Integration in Workflows

The future is here. 2023 is quickly becoming the year of AI. OpenAI’s ChatGPT, Microsoft’s Bing, and Google’s Bard kicked off 2023, and the train of AI applications is only just getting started. On March 21st, Nvidia launched a series of inference platforms for large language models and generative AI workloads. With AI rapidly growing in popularity, Nvidia’s latest inference platforms are optimized to aid developers in quickly building specialized, AI-powered applications.

Each inference platform pairs Nvidia’s latest inference software with NVIDIA Ada, NVIDIA Hopper, and NVIDIA Grace Hopper processors, including the NVIDIA L4 Tensor Core GPU and the H100 NVL GPU. The four platforms are equipped to tackle AI video, image generation, large language model deployment, and recommender inference.

With AI’s use rising in prominence among numerous industries, and only poised to continue its vast reach, Nvidia CEO Jensen Huang knows it is only limited by imagination. “The rise of generative AI requires more powerful inference computing platforms,” Huang said in Nvidia’s product announcement. “The number of applications for generative AI is infinite, limited only by human imagination. Arming developers with the most powerful and flexible inference computing platform will accelerate the creation of new services that will improve our lives in ways not yet imaginable.”  

With the steady increase and utilization of AI in numerous industries, Nvidia is attempting to position itself as the go-to partner for AI development. Partnering with Oracle Cloud, Microsoft Azure, and Google Cloud, Nvidia is making its AI supercomputers available as a cloud service through Nvidia DGX Cloud. The goal is to help enterprises train models for generative AI and other AI applications based on the quick rise and experimental implementation of other generative AIs, like ChatGPT.  

This announcement and these new products come on the heels of Nvidia’s Omniverse improvements. Omniverse is an industrial metaverse that provides “digital twins” of processes and products to help predict how they will perform over time. Nvidia’s new, more powerful GPUs help better visualize these simulations.  

Nvidia’s tools are a small indication of a greater change across modern industry. AI in any form, generative or otherwise, significantly increases organizational innovation, productivity, and efficiency. The further these tools improve, the more they aid workflows and help decision-makers strategize from the seas of data their organizations generate. Given how greatly AI can improve parts of an organization, implementing it will soon become a necessity, not an option.

The best way to begin digitalizing your business through AI is to adopt a tool that offers multiple solutions for tedious tasks. Datalynq is equipped with market intelligence scores that distill large amounts of data into a 1-to-5 scoring system covering component availability, price, risk, and more. Its predictive analytics and alerts warn users of upcoming disruptions well in advance, giving organizations time to prepare and document everything through Datalynq’s case management system while staff focus on larger, more complicated projects.  

Want to increase innovation and productivity in your company’s workflows? Try out Datalynq’s 7-day free trial! Gain better insights and visibility throughout your supply chain today.

Radiology in medicine

Data Analytics and AI, How Components Are Changing Everything from Healthcare to Manufacturing - March 24, 2023

Digitalizing your organization is not just for tech industries. Artificial intelligence (AI), machine learning, big data, predictive analytics, and more are changing how organizations operate in the 21st century. After the 2020-2022 chip shortage, AI development has resumed its rapid pace, and each improvement brings greater benefits through enhanced accuracy and faster results.  

Healthcare, notorious for its complex and niche fields, benefits immensely from AI. Studies have shown how efficiently work gets done when predictive analytics, and AI in general, operate in tandem with human staff.

Predictive Analytics is a Great Tool for Manufacturing and Precision Medicine

Technology truly is a beautiful thing. Each innovation benefits everyone, from an organization's production lines right down to the consumer. Every industry saves time and cost while increasing efficiency with the aid of digital tools. Any business sector, from retail to healthcare, can operate more smoothly with the aid of AI and the many tasks it can accomplish.  

A rising star within AI is predictive analytics. While it is not necessarily a new feature, its aid in numerous industries has recently risen in popularity and further implementation. That includes healthcare.  

While AI in general is not new to medicine, predictive analytics, a proven tool for manufacturers, is now on the rise there. Predictive analytics identifies patterns and trends through data analysis, providing alerts and possible solutions to human decision-makers. Its biggest boon in the semiconductor manufacturing supply chain is sorting through mountains of data quickly and accurately. The semiconductor supply chain is a large, usually complicated web of original chip manufacturers (OCMs), original equipment manufacturers (OEMs), distributors, clients, and more, and each member produces proverbial oceans of data daily.  
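To make the idea concrete, here is a minimal sketch of trend detection on component lead-time data. The function names, sample data, and alert threshold are illustrative assumptions, not Datalynq's actual model:

```python
# Minimal sketch of trend-based alerting on component lead times.
# Names, data, and threshold are illustrative, not Datalynq's model.

def lead_time_trend(weeks: list[float]) -> float:
    """Least-squares slope of lead time (in weeks) per reporting period."""
    n = len(weeks)
    xs = range(n)
    mean_x = sum(xs) / n
    mean_y = sum(weeks) / n
    num = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, weeks))
    den = sum((x - mean_x) ** 2 for x in xs)
    return num / den

def shortage_alert(weeks: list[float], threshold: float = 0.5) -> bool:
    """Flag a part when lead times climb faster than `threshold` weeks/period."""
    return lead_time_trend(weeks) > threshold

print(shortage_alert([12, 13, 15, 18, 22]))  # steadily rising lead times: True
```

A real system would weigh many more signals, but the shape is the same: fit a pattern to historical data and raise an alert when it crosses a risk threshold.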

Predictive analytics is not limited to supply chain forecast management. It can also be utilized to predict when robotic assembly tools will go down for maintenance. This ensures that operations can be accurately scheduled around that time frame. Predictive analytics in chip manufacturing can also quickly forewarn manufacturers of possible risks in the form of sole source components. Sole source components face higher prices, longer lead times, and obsolescence, as there are no form-fit-function (FFF) alternates to replace them in the future.  
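The maintenance-forecasting idea can be sketched just as simply. This toy example schedules service before a machine reaches its historical mean time between failures (MTBF); the safety margin and hour figures are invented, and real predictive-maintenance systems learn from live sensor telemetry rather than a fixed threshold:

```python
# Toy maintenance forecast: flag machines whose runtime since last service
# approaches the historical mean time between failures (MTBF).
# The safety margin and hours are invented for illustration.

def hours_until_service(hours_since_service: float, mtbf_hours: float,
                        safety_margin: float = 0.8) -> float:
    """Hours remaining before maintenance should be scheduled."""
    return max(0.0, mtbf_hours * safety_margin - hours_since_service)

def needs_service(hours_since_service: float, mtbf_hours: float) -> bool:
    return hours_until_service(hours_since_service, mtbf_hours) == 0.0

print(hours_until_service(600, 1000))  # 800-hour threshold: 200.0 hours left
print(needs_service(900, 1000))        # past the threshold: True
```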

With the boon predictive analytics provides to manufacturing, is it any surprise it has also become an excellent medical assistant? In precision medicine, predictive analytics can build personalized treatment plans and assess risks from a patient’s genetic history, lifestyle, and environmental data. It can alert providers to specific conditions and diseases, such as cancer or heart disease, long before symptoms appear, helping treat patients proactively before malignant tumors or illnesses strike. Does that sound familiar?  

Healthcare is just as complex as, if not more complex than, the semiconductor supply chain. The amount of data a program must sort through to accurately predict future events and outcomes is immense and only continues to grow. Luckily, most developments within AI are happening at the same rapid pace that both medicine and technology have set, making it sensible to integrate predictive analytics into an organization so it can absorb these advancements as they arrive.  

AI is Being Utilized to Detect Cancer

Pattern recognition within AI has been a function steadily perfected over decades. It’s a vital component of most AI systems, as machines can easily identify patterns in data. Once it recognizes a pattern, AI can make decisions or predictions using specific algorithms. This crucial component of AI can be utilized in many fields and industries.  

Radiology is not a new field for AI. In a 2018 article published by the National Institutes of Health, a team of doctors examined the relationship between AI and radiology, discussing the general understanding of AI and how it can be applied to image-based tasks. Recent advances led the authors to conclude that deep learning could allow AI models to exceed human performance and reasoning on complex tasks, as current models already surpass humans in narrow, task-specific areas. In areas of medicine such as radiology, the early detection of cancer can make a big difference in patient mortality.  

In the 2018 article, the authors noted that using AI in mammography, a particularly challenging area to interpret expertly, could help identify and characterize microcalcifications in tissue. In 2023, AI is being used to help successfully identify breast cancer in mammogram screenings. This AI is utilizing an advanced form of pattern recognition to assist radiologists in analyzing the images’ details.  

The AI used is called computer-assisted detection (CAD). Studies have shown that CAD helps review images, assess breast density, and flag high-risk mammograms that radiologists might have missed. It also alerts technologists to mammograms that need to be redone. A study published last year found that CAD was just as effective as a human radiologist, if not more so, and faster.  

One doctor who spoke with the New York Times stated, “AI systems could help prevent human error caused by fatigue, as human radiologists could miss life-threatening cancer in a scan while working long hours.” While doctors and AI development teams agree that AI can never replace doctors, AI-human teams reduce the workload of radiologists by having an automated system quickly and accurately provide a second opinion.  

This partnership holds true for every industry that incorporates AI. The goal of utilizing AI shouldn’t be to replace human staff but to help them accomplish goals faster and more accurately. Continued advances show that AI is versatile and can be trained to perform numerous tasks, from detecting cancer to spotting product defects. Implementing AI into your organization can be a time- and cost-effective strategy.

Production assembly line

Why Case Management is Necessary for Any Design Strategy - March 10, 2023

The digital tools we use are only getting smarter. Machine learning is quickly becoming part of organizational workflows, but it shouldn’t stop there. Machine learning can provide a strong foundation for several manufacturing operations, decreasing costs while improving production line efficiency.  

Component case management is an easily overlooked strategic aspect in the electronic component industry. For many original equipment manufacturers (OEMs), the only element given any strategic overview is sourcing and when to schedule orders. But none of that matters if you aren’t investing in a tool that can aid you in successfully managing components necessary for your products from start to finish.  

Component case management should be done during the initial design phase so that risks are discovered and mitigated long before they become a problem.  

Why You Need Component Case Management

The traditional reactive approach to component obsolescence, shortages, and other disruptions no longer works in a post-pandemic supply chain. Today’s supply chain demands proactive component management. Problems once solved through simple communication with suppliers can no longer be handled that way in a complex, global supply chain that continues to diversify. Strategizing for component risk factors must begin as early as the design phase.  

Otherwise, one can lose time and money reacting to problems through damage control, much as the automotive industry had to during the pandemic.  

Manufacturing case management is a dynamic process that assesses, plans, implements, coordinates, monitors, and evaluates to improve outcomes, experiences, and value. In electronic component manufacturing, case management means actively pre-planning for scenarios, the expected downtime, and the cost of the planned response. That could be anything from planning a last-time buy (LTB) before a manufacturer ceases production of an obsolescing component to a complete product redesign around it.  

Component case management is most often used to plan a documented strategy to mitigate and resolve component obsolescence. However, OEMs can use case management for many problems beyond obsolescence. Documented plans that coordinate strategies lead to more efficient resolutions. For many defense OEMs, such case management documentation is also required.  

Datalynq offers case management that helps users identify issues with parts, open a case on those problems, and take action to resolve them. When you open a case in Datalynq, you can add the expected impact date, the case status, the government case number if acquired, the number of days production will be impacted, the impact rate of logistics and repairs, and more. Once you’ve initiated a case, all your pertinent case information is documented in an audit trail.

You can also add information for potential resolutions, their cost, the summary of the mitigation plan, and even your confidence that the mitigation strategy will work. These documents provide full transparency to government agencies, like the Department of Defense (DoD), which require it. For other OEMs, visibility into product case management helps smooth the resolution process and can be shared easily with the departments needed to implement such a significant change.  
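A case record with an append-only audit trail might be modeled roughly as below. The field names are hypothetical stand-ins for the kinds of fields described above, not Datalynq's actual schema:

```python
# Hypothetical model of a component case with an append-only audit trail,
# loosely based on the fields described above (not Datalynq's actual schema).
from dataclasses import dataclass, field
from datetime import date

@dataclass
class ComponentCase:
    part_number: str
    expected_impact_date: date
    status: str = "open"
    days_production_impacted: int = 0
    audit_trail: list[str] = field(default_factory=list)

    def log(self, entry: str) -> None:
        # Entries are only ever appended, never edited or removed.
        self.audit_trail.append(entry)

    def add_resolution(self, plan: str, cost: float, confidence: str) -> None:
        self.log(f"resolution: {plan} (${cost:,.0f}, confidence: {confidence})")

case = ComponentCase("MCU-1234", date(2023, 9, 1))
case.log("case opened: sole-source MCU entering EOL")
case.add_resolution("last-time buy of 18-month stock", 45000, "high")
print(case.audit_trail)
```

The append-only log is the key design choice: because entries are never rewritten, the record doubles as the transparent paper trail that agencies like the DoD require.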

If you want to see how easily Datalynq’s case management system can improve manufacturing processes effectively, Datalynq’s 7-day free trial lets you take control.

Machine Learning Transforms Manufacturing

As part of the greater umbrella of artificial intelligence (AI), machine learning is “the use and development of computer systems that can learn and adapt without following explicit instructions by using algorithms and statistical models to analyze and draw inferences from patterns in data.” Machine learning is an intelligent program that learns through studying data to predict, detect, and provide analyses through its algorithm.  

Machine learning usually comes to mind when people think of AI, as it is often used in natural language processing (NLP) chatbots to better imitate human speech. The popular ChatGPT relies on machine learning: its algorithms are pre-trained on data. That pre-training then helps it generate text, whether short chat responses or entire articles, that reads close to human speech.  

Beyond chatbots, machine learning is a rapidly expanding field thanks to its endless potential in many applications. Any industry can utilize machine learning, from retail to healthcare. It is particularly helpful in manufacturing applications. OEMs can use machine learning within manufacturing for quality control through defect detection, automation of repetitive work in production lines, and customization of products.  

Utilizing training algorithms to help identify product defects from images and other data sources can help reduce the cost of quality control while improving inspection accuracy. Along those same lines, machine learning can be paired with another subset of AI called predictive analytics. Together these programs can detect, predict, and forecast when automated production lines need maintenance and how long they’ll be nonoperational.  
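As a rough illustration of the defect-detection idea, the sketch below flags a unit whose measurement falls outside the distribution learned from known-good parts. Real systems train on image features; the scalar measurements and three-sigma threshold here are simplifying assumptions:

```python
# Toy defect detector: flag a unit whose measurement deviates from the
# distribution of known-good parts by more than k standard deviations.
# A scalar stand-in for the image-based models described; numbers are invented.
from statistics import mean, stdev

def train(good_samples: list[float]) -> tuple[float, float]:
    """Learn the mean and spread of measurements from known-good parts."""
    return mean(good_samples), stdev(good_samples)

def is_defective(x: float, mu: float, sigma: float, k: float = 3.0) -> bool:
    return abs(x - mu) > k * sigma

mu, sigma = train([10.0, 10.1, 9.9, 10.05, 9.95])
print(is_defective(11.5, mu, sigma))   # far outside tolerance: True
print(is_defective(10.02, mu, sigma))  # within tolerance: False
```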

The repetitive nature of production lines makes machine learning the perfect tool for automation. Assembly line robots can run off machine learning algorithms trained to perform many tasks, from welding to part fabrication. Automation with machine learning cuts down on operational costs while increasing efficiency. Automating production lines also frees human staff from tedious but necessary simple tasks so they can put their time and attention into more innovative projects.  

Customizing products with automated production lines trained through machine learning is far easier. Time spent customizing products through manual labor and individualized assembly lines would no longer be necessary. Machine learning can abide by the data that comprises these custom designs without incurring additional costs of individualized production lines. It simply needs to be told when and how to do it before it starts creating.  

Machine learning technologies will continue to be implemented far into the future, beyond ChatGPT’s small forays into general workflows. With how competitive the current global supply chain is, reducing operating costs and time to manufacture will give companies an edge many hesitate to take advantage of. It’s time to get started.  

Automated production line

Data is Transforming the Semiconductor Supply Chain - February 24, 2023

The world is becoming more connected, and as it does, the amount of data these connections produce grows too. The global supply chain generates a significant amount of data every year, yet industry members still lack the visibility to glean all the critical insights within it. Data analytics and artificial intelligence are vital to collecting and managing this supply chain information.  

As data-driven tools become more advanced, so does the information they deliver to users. To stay competitive and keep further disruptions from rippling through the world’s supply chain, original equipment manufacturers (OEMs) must adopt these tools. If not, manufacturers might miss important red flags that warn of future disruptions, like those brought on by the pandemic.

Semiconductor Supply Chain Issues Are Mitigated by Data Analytics and AI

It should be no surprise that data-driven analytics and artificial intelligence (AI), among other digital tools, help mitigate supply chain disruptions. After experiencing unexpected events derailing plans over the last several years, supply chain managers are eager to keep history from repeating. Rohit Tandon, the managing director and global AI and analytics services leader at Deloitte, explained that the only way to prevent future disruptions is to know what they are and plan accordingly.

“The Covid-19 pandemic vividly illustrated unexpected events’ impact on global supply chains. However, AI can help the world avoid similar disruptions in the future,” Tandon said. “AI can predict various unexpected events, such as weather conditions, transportation bottlenecks, and labor strikes, helping anticipate problems and reroute shipments around them.”  

The supply chain is a complicated beast that needs dedicated monitoring, something only a program can sufficiently manage today. AI and other machine learning algorithms accomplish this by crunching through the massive amounts of data the electronic component supply chain generates daily. That data, far too much for a human team to sort through and analyze as quickly as AI can, grows in volume and specificity each year.  

This increased transparency can lead to improvements in operating efficiency, which, in turn, boosts working capital management with fewer supply disruptions. “Manufacturers that are using AI for visibility,” said Tandon, “can better respond to potential disruptions to avoid delays and pivot if needed…organizations can leverage data analytics for deeper insights across the supply chain.”

Even better, these tools are designed to improve demand prediction and support data sharing with customers and partners, so everyone benefits from the insights. The increased transparency also helps fortify supply chain resilience and builds trust in the output of analytics and AI processes. As these tools develop, it becomes easier to identify trends and patterns that guide customers through market conditions years into the future. They can flag design risks in specific components, such as sole-source or end-of-life (EOL) parts, preventing costly redesigns or elevated shortage risk down the road.  

The most crucial factor to consider is finding a tool that provides the most accurate data so that when information is shared, it helps, not hinders. The most capable market intelligence tool that combines real-time market data and predictive analytics with other management algorithms is Datalynq.  

Cyber-Manufacturing Improves the Global Supply Chain

The CHIPS and Science Act is making OCMs approach manufacturing in the U.S. a little differently. The U.S. needs more skilled labor to become a chip-manufacturing powerhouse. Fortunately, there are two solutions to that problem.

At the start of 2023, tech giants in Silicon Valley began laying off staff in massive waves thanks to a slowdown in consumer demand. As a result, the talent pool of skilled candidates grew. This influx of experienced labor is not as large as the talent pools in India and Vietnam, but its experience with Silicon Valley’s tech leaders can kill two birds with one stone. First, these workers can support new facilities in research, development, and manufacturing with their technical expertise. Second, they have the expertise to manage a cyber-manufacturing line efficiently.

What is cyber-manufacturing? It refers to a modern manufacturing system that offers an information-transparent environment to facilitate asset management, provide reconfigurability, and maintain productivity. In practice, cyber-manufacturing uses automation, artificial intelligence (AI), the Internet of Things (IoT), and other data-driven analytics to provide transparency. Most industry experts believe this modern manufacturing model will further embrace automation tools for production lines to save on labor expenses. The future workforce at more competitive OEMs will be tech-savvy specialists who use predictive analytics to forecast machine downtime and maintenance.  

Covid-19 showed how dangerously unplanned and unexpected failures complicate the global supply chain. Further developments in cloud and quantum computing, through AI and machine learning, are expected to provide additional visibility into production line health. Vulnerabilities will be easier to spot, and the costs to maintain this type of production will be lower in the long run.  

The more manufacturers embrace cyber-manufacturing, the more resilient the world's supply chain becomes. The fewer unpredictable events occur on automated lines, the less likely they are to impact other manufacturers and suppliers further down the chain. The best way to get started is with digital tools that support case management for design components. Datalynq does this task effectively and only grows in accuracy.

Datalynq Multi-Source Availability Window

Data Can Manage Your Stock Better, But Sole Sources Can Undo That Step - February 10, 2023

Too much stock, too little stock, and no stock because the sole source manufacturer cannot meet demand or is otherwise impacted. Many of us have dealt with that over the past several years. Unexpected demand highs and lows paired with disruptions turned buying stock into a gamble. No matter what original chip manufacturers (OCMs) and original equipment manufacturers (OEMs) tried, everyone wound up with the short end of the stick.

But what else can you do? In the face of unpredictable consumer demand and weather, how can you accurately predict what will be short one day and excessive the next? Fortunately, there are digital tools available that can do all that and more.  

Predictive Analytics is the Key to Prevent Backlogs and Excess

Over the last few years, manufacturers have experienced one of two things—inventory backlogs or excess stock. Many of us have been without supply-demand stability for a while, and we might be in for one or two more years. Why?

At the pandemic's start, automotive original equipment manufacturers (OEMs) cut chip orders. In response, original chip manufacturers (OCMs) cut capacity for automotive chips. Chip capacity is determined by order volume, and with no orders, there was no reason to keep capacity for automotive chips. But automotive demand spiked long before automotive OEMs expected, and no OCMs had the chips to meet it.  

Meanwhile, since the start of the pandemic, many OEMs of personal electronics and other white goods continued to place large orders, even double-ordering products, to satisfy eager consumers. OCMs, as a result, increased chip capacity for them. Then, in July 2022, demand dropped quickly as recession fears mounted.  

OEMs tried to cancel orders, but it didn’t work. Many were left with six months of stock and no product demand.  

Many other instances of missing data, limited market visibility, and absent historical trend analysis have led to frustration from either backlogs or excess stock. The result is lost money and time, production stalls, delayed product development, and more. It is imperative to prevent either scenario. But how are OEMs expected to stay on top of market trends when disruptions make time scarcer than the components themselves?  

Predictive analytics is the solution.  

Predictive analytics is exactly what it sounds like: it predicts customer demand and forecasts future market trends. Rather than relying on manually tracked inventory data, predictive analytics analyzes mountains of real-time data, from market trends to weather patterns and other variables, to determine future demand.  
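One of the simplest forecasting techniques behind this kind of tool is exponential smoothing, sketched below with illustrative demand figures and an arbitrarily chosen smoothing factor:

```python
# Simple exponential smoothing, one of the basic techniques behind demand
# forecasting. The smoothing factor and demand history are illustrative.

def forecast_next(demand: list[float], alpha: float = 0.5) -> float:
    """Smooth past demand; the final smoothed level is the next-period forecast."""
    level = demand[0]
    for d in demand[1:]:
        # Each period blends new demand with the running level.
        level = alpha * d + (1 - alpha) * level
    return level

print(forecast_next([100, 120, 110, 130]))  # 120.0
```

Production-grade forecasting layers in seasonality, external variables, and machine learning, but all of it rests on this same idea of weighting recent history to project the next period.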

Datalynq, a market intelligence tool built on predictive analytics, combines component sales data from the global open market with machine learning to deliver insights on future part availability. You’ll never be caught off-guard when making strategic orders months or years out. It’s time to stop being caught by surprise when Datalynq is ready to use.  

How the Shortage Reminded Us of the Danger of Sole Sources

The 2020-2022 semiconductor shortage was a rough ride. The shortage has finally eased, but there are still areas of chip scarcity that could continue for a few more years, especially among automotive components. The effects are extensive and could last years into the future. While there are ways we can mitigate such devastating effects through predictive analytics and market intelligence, these methods are worthless if one simple step isn’t taken.  

It’s keeping sole-source components out of your product designs.  

What are sole sources? A sole-source component has no form-fit-function (FFF) alternates and is manufactured by only one OCM. Sole-source components are an inherently dangerous design risk, for a reason the shortage proved over and over throughout its course: there is no backup if bad weather, logistics issues, geopolitical strife, raw material shortages, or other disruptions impact the supply chain for that component. A manufacturer must wait for stock to become available again or, if lucky, buy excess inventory from another.  

Once a sole-source component enters obsolescence, manufacturers have no choice but to redesign. Because no alternate component mimics the form, fit, and function well enough, manufacturers have little wiggle room. Redesigns are time-consuming and costly even during normal market conditions. In a period of shortage, those prices shoot up.

Sole-source components are also usually far costlier than multi-source components. Because they are unique and scarce by nature, their prices generally run well above those of multi-source parts. While predictive analytics can warn manufacturers of upcoming shortages and rising price trends, it does little to ease either challenge for sole-sourced parts.

Another misstep manufacturers can make when trying to avoid sole-source components in their BOMs is failing to identify who manufactures the alternates and where they’re based. While a component might have existing alternates, they could all be produced by the same manufacturer. The alternates make it a little easier to secure stock, but if that OCM is impacted, OEMs, contract manufacturers (CMs), and others still wind up with nothing.  

A related concern is failing to check whether alternate components are active. If all the alternates for a component are inactive, it's no better than having no alternates at all.
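Taken together, these checks amount to a simple rule: a part is at risk unless at least one active FFF alternate comes from a different manufacturer. A hypothetical sketch (the data structures here are invented for illustration):

```python
# Sketch of the sole-source checks described above: a part is at risk unless
# at least one *active* FFF alternate comes from a *different* manufacturer.
# The data structures are invented for illustration.
from dataclasses import dataclass

@dataclass
class Alternate:
    manufacturer: str
    active: bool

def sole_source_risk(part_manufacturer: str,
                     alternates: list[Alternate]) -> bool:
    """True when no active alternate from a different manufacturer exists."""
    viable = [a for a in alternates
              if a.active and a.manufacturer != part_manufacturer]
    return len(viable) == 0

# Alternates exist, but one is inactive and the other is from the same OCM,
# so the part is still effectively sole-sourced.
alts = [Alternate("AcmeChips", True), Alternate("OtherCo", False)]
print(sole_source_risk("AcmeChips", alts))  # True
```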

The more resilient your BOM is, the easier it will be to mitigate minor and major supply chain disruptions. You need a tool that measures your BOM and design risk for sole-source components.  

Datalynq’s Multi-Source Availability Risk Score uses real-time data to assess numerous attributes, from the number of unique manufacturers to active FFF and DIR alternates. It then condenses the information into a brief, easy-to-understand window for quick, confident decisions. Take back control of your product design now to prevent tomorrow’s problems with help from Datalynq.
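How such attributes might collapse into a 1-to-5 score can be sketched as below; the banding is invented here for illustration and is not Datalynq's actual methodology:

```python
# Hypothetical sketch of condensing multi-source attributes into a 1-to-5
# score; the banding below is invented and not Datalynq's methodology.

def multi_source_score(active_fff_alternates: int,
                       unique_manufacturers: int) -> int:
    """1 = highest risk (sole source), 5 = lowest risk (broad multi-sourcing)."""
    if active_fff_alternates == 0:
        return 1                      # sole source: no active alternates
    if unique_manufacturers <= 1:
        return 2                      # alternates exist, but one OCM makes all
    return min(5, 2 + unique_manufacturers)

print(multi_source_score(0, 1))  # 1
print(multi_source_score(4, 3))  # 5
```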

Circuit board manufacturing

Component Obsolescence is About to Make Waves - January 27, 2023

The market is in a precarious position. The shortage is easing but not over, and a chip surplus is beginning but only for some. Outside of chip excess and scarcity, the global supply chain is still in an odd flux. Automakers want the capacity for legacy nodes to increase while chipmakers cut what is no longer in demand.  

What that means is component obsolescence, and a lot of it is just around the corner.

Germany Takes Steps to Decrease Obsolescence Risks During Future Disruptions

In a 2018 article by Supply Chain Connect, two years before the 2020-2022 shortage, industry leaders described obsolescence as a supply chain threat. The reason is simple: managing component obsolescence takes time, money, and logistics. Obsolescence in electronic components is a persistent, never-ending challenge.  

It is not a matter of “if” obsolescence will occur. It is a matter of when.

Component obsolescence is an inherent complication with only a few solutions. One is to procure a sizeable last-time buy of components as they enter the end-of-life (EOL) stage. Another is to find form-fit-function (FFF) alternates to replace the obsolete component. Sometimes neither a last-time buy nor an FFF alternate is available, especially when the components come from a sole supplier or are used in industry-specific devices. That requires a redesign.

It is a lot easier to redesign a phone than it is a defibrillator. The latter requires time-consuming and costly testing to ensure it follows stringent regulations. Medical devices, among other products, must have every part approved. Even replacing a component with an FFF alternate is costly.  

Obsolescence becomes much more complicated when the global supply chain is in a period of shortage or excess. If managing a natural part of a component's lifespan was considered a supply chain threat even before the shortage hit, it is pertinent to take proper steps to minimize its effects. To help prevent such far-reaching consequences, Germany is fortifying its resilience against these risks.  

“The German Electronics Design and Manufacturing Association (FED) and the Component Group Deutschland (COGD) signed a cooperation agreement at Electronica, Munich,” Evertiq reported. “The agreement ensures coordinated representation of interest with political decision-makers and networking in research and development.”

Further, both organizations plan to develop training courses and lectures to better inform others of proactive and strategic obsolescence management. The goal is to bring attention to obsolescence by creating long-term strategies.

“Efficient, proactive obsolescence management starts in the design. If risky components or materials are used at this early stage, the subsequent effort required to correct the problem is all the greater,” said Dr. Wolfgang Heinbach, honorary chairman of the COGD board, on COGD’s collaboration with FED and the importance of obsolescence management.  

Current geopolitical uncertainties and other imponderables will make obsolescence risks more significant and frequent. These challenges extend beyond electronic components to their raw materials and to software products.  

“We have now reached a point where obsolescence can pose a significant risk not only to individual companies,” Dr. Heinbach warned, “but also, worst case, to entire sections of our national economy.”

2023 Obsolescence Outlook: What Are the Challenges?

The shortage might be easing, but the supply-demand balance has not yet returned. With automotive components still scarce and advanced chips piling up, the global supply chain is still a year away from stabilization. As excess stock builds, manufacturers across the supply chain are moving into the next stage.

That stage is inventory correction. As consumer demand deteriorates, so does the need for chips. Without demand to fund it, capacity for specific chips will shrink as original chip manufacturers (OCMs) cut back production. This newly limited capacity brings a worrying fact to the table.  

A lot of components are about to enter obsolescence.  

The semiconductor market worldwide lost $240 billion in value last year. Excess stock is forecast to be a problem throughout 2023, until late Q3 and possibly early Q4. In late 2022, ten of TSMC’s top clients canceled orders that had been placed when product demand was still high and were meant to carry them through 2023. Many original equipment manufacturers (OEMs) are now stuck with six months of stockpiles.  

That demand is now gone. In the face of production stalls and another year of revenue losses, many OCMs are digesting what inventory they can. OCMs will cut capacity to avoid losses. With this limited capacity and no demand, plenty of advanced chips will become obsolete, possibly before product demand picks back up.

Innovation doesn’t stop. TSMC, as an example, has already begun production of its 3nm nodes. Samsung’s roadmap puts 1.4nm into full production by 2027, beginning with 3nm in 2024 and 2nm in 2025. TSMC is considering price cuts on its 3nm node during this inflationary period to attract more buyers and increase capacity. As newer nodes arrive, older components enter obsolescence to make room.

Overcoming obsolescence will be complicated by the factors currently impacting the supply chain, including shortages, excess inventory, macroeconomic pressure, inflation, and more. Being proactive now, by finding form, fit, and function (FFF) alternates, sourcing supply elsewhere, and even redesigning existing projects while there is still time, will make the difference. Datalynq can help mitigate obsolescence’s effects starting now.

Datalynq

Welcome to 2023 From Datalynq! - January 13, 2023  

Welcome to the new year! We have many exciting adventures planned for 2023, starting with this new blog dedicated to market news. Updated biweekly, this blog will bring you the latest market news, obsolescence management guidance, component lead times, and more.

As a digital market tool, Datalynq condenses large amounts of data across the supply chain and millions of parts into one easy window. After the twists and turns of the past year, it’s hard to keep track of everything if you’re used to storing information in traditional spreadsheets. That’s why we’re ready to give you a crash course in why digitalization and digital tools will be key to coming out of 2023 on top.

Datalynq and Market News

The supply chain suffered setbacks over the last few years. Now, it’s in a tumultuous state of extremes. The automotive industry can’t get enough parts to keep production lines open. Consumer electronics manufacturers are up to their necks in components they no longer need. While no significant disruptions are affecting the global supply chain now, it would only take one to throw this delicate balance into chaos.

That’s where Datalynq comes in. It replaces traditional spreadsheets with information that’s easy to digest. Datalynq’s market intelligence monitors fluctuations in the electronic components industry. It makes obsolescence management easy, lead time tracking a breeze, and preparation for future component risks straightforward. Excel spreadsheets and hours of phone tag are no longer needed.

Datalynq accomplishes this with its Market Availability Score rating system, which supplies a comprehensive view of component obtainability based on information compiled from historical trends, suppliers, and market forecasts. The Design Risk Score rating system also uses insights from supply data, lead times, and product lifecycles to determine the risks of designing a part into a project. These ratings are scored from 1 to 5 for easy legibility, with 5 being the worst and 1 the best.
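To illustrate the idea of a 1-to-5 availability score (this is a hypothetical sketch, not Datalynq’s actual methodology; the signals, weights, and thresholds below are assumptions), a score can be derived by combining a few supply signals and bucketing the result:

```python
def availability_score(stock_ratio, lead_time_weeks, active_suppliers):
    """Map supply signals to a 1 (best) to 5 (worst) availability score.

    stock_ratio:      available stock divided by projected demand
    lead_time_weeks:  current quoted lead time
    active_suppliers: count of suppliers actively stocking the part

    All weights and cutoffs are illustrative assumptions.
    """
    risk = 0.0
    # Understocked parts add risk proportional to the shortfall.
    risk += 0.0 if stock_ratio >= 1.0 else (1.0 - stock_ratio)
    # Long lead times add risk, capped at one year.
    risk += min(lead_time_weeks / 52.0, 1.0)
    # Fewer than three active suppliers adds concentration risk.
    risk += 0.0 if active_suppliers >= 3 else (3 - active_suppliers) / 3.0
    # Bucket the 0..3 risk total into the 1..5 scale.
    return min(5, 1 + int(risk / 3.0 * 4.999))

# A well-stocked, multi-sourced part scores 1; a sole-sourced,
# out-of-stock part with year-long lead times scores 5.
print(availability_score(1.2, 4, 5))   # healthy supply
print(availability_score(0.0, 60, 0))  # severe risk
```

The same bucketing approach could serve a design-risk score by swapping in lifecycle-stage and redesign-cost signals.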

Technical data for components is at your fingertips and readily available on Datalynq, so you can ensure a part is the best fit for any project. Along with pricing and inventory trends, users will always make the most informed choice with Datalynq’s aid. It doesn’t take long to learn how to use it, either. With straightforward analysis, users need only a few minutes to understand how to best optimize their projects and supply chain.

Digitalization is the Way to Go

It’s time to invest in digitalization. Manufacturers like Qualcomm are encouraging OEMs, CMs, ODMs, and EMS providers to invest in digital tools as the new year begins. Whether through automation, artificial intelligence (AI), machine learning, or other digital tools, the theme of 2023 should be expanding your digital footprint. Why? The more you invest in implementing these tools, the more you’re bound to benefit.  

Digital tools make it easier to prevent production stalls, navigate shortages, and strategize for component obsolescence. Machine learning powers predictive analytics: algorithms that forecast disruptions by analyzing historical trends and market data. Sifting through that amount of data would be far too time-consuming and tedious for human analysts to do as accurately in the short time it takes machine learning tools.
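As a toy example of the underlying idea (a minimal sketch, not a production forecasting model; the part data and alert threshold are invented), even a simple least-squares trend fit over historical lead times can flag a component drifting toward trouble:

```python
def linear_trend(values):
    """Least-squares slope of evenly spaced observations.

    Returns the change per period (e.g. weeks of lead time per month).
    """
    n = len(values)
    xs = range(n)
    mean_x = sum(xs) / n
    mean_y = sum(values) / n
    num = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, values))
    den = sum((x - mean_x) ** 2 for x in xs)
    return num / den

# Monthly quoted lead times (in weeks) for a hypothetical part number.
history = [10, 11, 13, 14, 16, 18]
slope = linear_trend(history)

if slope > 0.5:  # illustrative alert threshold
    print(f"Lead time rising ~{slope:.1f} weeks/month; consider buffering stock")
```

Real predictive-analytics tools combine many such signals across millions of parts, which is exactly the scale where spreadsheets and manual review break down.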

AI can give users greater supply chain visibility, which many lacked in early 2020 when decreasing consumer demand at the beginning of the pandemic led to canceled orders en masse. This visibility helps users make more informed demand decisions and prevents the kind of miscommunication seen in early 2020. It took only a few months for that demand to reverse and skyrocket, leading to the 2020-2022 chip shortage.

Automation is imperative beyond production lines. Automating processes saves time and frees staff to focus on more critical tasks. OCMs like Qualcomm are taking steps to automate semiconductor manufacturing. As the process is labor-intensive, automation is necessary to avoid production stalls if another event like the pandemic, which kept staff home and unable to continue producing chips, occurs.

These are just a few of the ways to begin your digital journey. It’s not a one-size-fits-all solution, as different tools are more beneficial to some than others. The industry is becoming far more reliant on data, especially within the EV sector. To stand apart from the competition, EV makers will need to quickly analyze mountains of data to design better features. The same is true for consumer electronics, medical, industrial, and other industries.

While beginning that transformation may sound like a monumental task, there are quick and easy tools to get you started. Datalynq is one of them.

How exactly does digitalization benefit an organization, and what specifically does it prevent? All that and more in this deep dive.

Market Trends