System-on-Chip

One Chip To Run Them All


The big bang of technology has had an intense impact on the lives of ordinary people. The personal computers that came earlier did not have a direct effect on most people's lives: the machines were large and confined to the air-conditioned rooms of business and research organizations. It all began when Bell Laboratories presented the prototype of the first mobile phone about 90 years ago. Around 40 years ago, the first cell phones became commercially feasible and accessible. But their computing power was modest, and they were limited to the mere functionalities of calling and texting.

When Neil Armstrong and Buzz Aldrin first set foot on the lunar surface, NASA had less computing power than an average smartphone possesses today. From LSI (Large-Scale Integration) to VLSI and VVLSI, transistors have found a home alongside billions of others on the same small substrate of a chip. Post-PC era devices like the iPad did not follow the legacy PC-era architecture but were instead designed from the ground up. There was no longer a need for a separate CPU to manage all generic operations and computations. The cell phones had become smart.

The demands on cell phones are ever increasing and ever more challenging. Once used for simple tasks, they now cannot be imagined without video streaming, Global Positioning System (GPS) navigation, Artificial Intelligence, and compute-heavy features like face recognition and fingerprint detection. More versatile and powerful SoCs are the players behind it all.

ASICs (Application-Specific Integrated Circuits) and ASSPs (Application-Specific Standard Products) have long been developed for specific and more general purposes, with ASSPs being sold to multiple design houses. A System-on-Chip (SoC) is a chip that contains one or more microprocessors (MPUs), microcontrollers (MCUs) and/or digital signal processors (DSPs), along with hardware accelerators, on-chip memory, peripheral functions and (potentially) various other components. Put simply, if an ASIC or an ASSP contains one or more processing cores, it is an SoC.

There is no doubt that SoCs have enormous technological benefits. Everything needed to run a mobile phone can be developed on a single chip and manufactured in high volumes. Hundreds of semiconductor Intellectual Property (IP) companies are emerging, hoping to ride the SoC tidal wave and leave traditional semiconductor companies in their wake.

Let’s delve deeper and explore the world of SoCs and the way they have disrupted modern-day technology.

Fabrication of System-on-Chip


Full Custom Fabrication:

This is a design methodology for integrated circuits in which the layout of each individual transistor and the interconnections between them are specified by hand. This approach maximizes the performance of the chip and minimizes its area.

Standard Cell:

This method is used to design application-specific integrated circuits (ASICs) with mostly digital-logic features. The initial design of a standard cell is developed at the transistor level, in the form of a transistor netlist or schematic view. The schematic view is created with any of a number of Computer-Aided Design (CAD) programs that provide a Graphical User Interface (GUI). A physical representation of the standard cell (also called the layout) is then designed so the chips can be fabricated. From a manufacturing perspective, the standard cell’s VLSI layout is the most important view, since it is the closest thing to an actual “manufacturing blueprint” of the cell. After the layout is created, additional CAD tools are used to perform validation checks.
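
To make the idea of a transistor-level netlist concrete, here is a minimal Python sketch (purely illustrative, with invented names) that represents a CMOS inverter cell as a list of devices and the nets their terminals touch; in practice these views are produced and checked by CAD tools, not written by hand.

    # Toy transistor netlist for a CMOS inverter standard cell.
    # Each device records its type and the nets its terminals connect to.
    inverter_netlist = {
        "cell": "INV_X1",                      # hypothetical cell name
        "pins": ["A", "Y", "VDD", "VSS"],
        "devices": [
            {"name": "MP1", "type": "PMOS",
             "gate": "A", "drain": "Y", "source": "VDD", "bulk": "VDD"},
            {"name": "MN1", "type": "NMOS",
             "gate": "A", "drain": "Y", "source": "VSS", "bulk": "VSS"},
        ],
    }

    def nets_of(netlist):
        """Collect every net referenced by the devices (a tiny connectivity view)."""
        nets = set()
        for dev in netlist["devices"]:
            nets.update([dev["gate"], dev["drain"], dev["source"], dev["bulk"]])
        return nets

    print(sorted(nets_of(inverter_netlist)))   # ['A', 'VDD', 'VSS', 'Y']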

Field-Programmable Gate Array:

A Field-Programmable Gate Array (FPGA) is an integrated circuit designed to be configured by a customer or a designer after manufacturing. It is generally specified using a hardware description language (HDL), similar to that used for an ASIC. The basic structure of an FPGA consists of programmable logic blocks along with a hierarchy of reconfigurable interconnects that allow the blocks to be “wired together”. These blocks can be configured to form the various components of an SoC and to perform complex combinational functions.
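
As a rough illustration of how an FPGA logic block works, the sketch below models a 4-input look-up table (LUT) in Python: the ‘configuration bits’ are just a 16-entry truth table, and loading different bits reprograms the block to compute a different combinational function. This is a simplified behavioural model, not vendor tooling.

    # Toy model of a 4-input LUT: the block's behaviour is entirely
    # determined by the 16 configuration bits loaded into it.
    def make_lut4(config_bits):
        assert len(config_bits) == 16
        def lut(a, b, c, d):
            index = (a << 3) | (b << 2) | (c << 1) | d   # the inputs form the address
            return config_bits[index]
        return lut

    # Configure the LUT to implement a 4-input XOR (parity) function.
    xor4_bits = [bin(i).count("1") % 2 for i in range(16)]
    xor4 = make_lut4(xor4_bits)

    print(xor4(1, 0, 1, 0))   # 0 (even number of 1s)
    print(xor4(1, 0, 0, 0))   # 1 (odd number of 1s)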



Working and Architecture


The vast technological advancements in chip fabrication have resulted in a single chip containing billions of transistors. Transistor gates are now measured in nanometers, something that couldn’t have been thought of 10 years ago. As a result, it has become feasible to integrate all the components of a conventional Printed Circuit Board into a single chip, in the form of an SoC. The key challenge now is designing the communication and integration between the different entities of an SoC.

SoCs typically use a surface-mount packaging technology known as Ball Grid Array (BGA). The package carries tiny interconnection pads on both its top and bottom surfaces. Manufacturers solder the lower pads to connect the SoC to the board, and use the ones on top to connect to a memory package stacked above (an arrangement often called Package-on-Package). This gives them more flexibility, as they can use memory from different vendors. In general, memory is not integrated into the core of the SoC and is left as an individual component.

Structure & Functionalities of Components of an SoC


Control Unit:

The Control Unit consists of a Central Processing Unit and various communication buses. The Central Processing Unit is the heart and brain of every computer. Every single operation that one performs on a computer is processed by the CPU, and a robust CPU ensures better performance and faster execution times. In a broad sense, there are two types of Central Processing Units, distinguished by the number of cores present in the processor:
Single-Core Processors: A single-core processor has one core on the chip and runs a single thread at any one time. For a long time, processors remained single-core, until it became practically impossible to achieve further performance gains from increased clock speed, transistor count, pipeline depth, larger CPU caches and/or additional execution units. The problem was that to increase clock speeds, the silicon transistors on the chip had to switch faster; these higher speeds required higher input voltages and semiconductor manufacturing processes that resulted in greater leakage current, both of which increased power consumption and heat output. This led to the development of multi-core processors.
Multi-Core Processors: A multi-core processor is a single component with two or more independent cores, which read and execute program instructions in parallel. The cores may be coupled tightly or loosely: they may or may not share caches, and they may implement message-passing or shared-memory inter-core communication, as in the sketch below.
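
As a minimal sketch of what multiple cores buy you, the Python snippet below (standard library only) fans a CPU-bound task out across all available cores; the prime-counting workload and chunk sizes are arbitrary, chosen just to illustrate the parallel execution.

    # Minimal sketch of multi-core parallelism using Python's standard library.
    # Each worker runs in a separate OS process, so independent cores can
    # execute the work in parallel (the Pool hides the inter-process details).
    from multiprocessing import Pool, cpu_count

    def count_primes(limit):
        """CPU-bound task: count primes below `limit` by trial division."""
        count = 0
        for n in range(2, limit):
            if all(n % d for d in range(2, int(n ** 0.5) + 1)):
                count += 1
        return count

    if __name__ == "__main__":
        chunks = [20000] * 8                             # eight independent work items
        with Pool(processes=cpu_count()) as pool:
            results = pool.map(count_primes, chunks)     # fanned out across the cores
        print(sum(results))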

Memory Block:

Read-Only Memory (EPROM/EEPROM), Random Access Memory and Flash are the basic memory units inside an SoC. Just as in a computer, memory is required to process and perform the various tasks a smartphone is capable of. Data at the various stages of processing is stored in the memory blocks and retrieved as needed.
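
One common way these memory regions appear to software is through an address map. The following Python sketch is purely hypothetical: the base addresses and sizes are invented, and it simply shows ROM, flash and RAM occupying separate address ranges that a lookup can resolve.

    # Hypothetical SoC memory map: each region gets a base address and a size.
    memory_map = {
        "ROM":   {"base": 0x0000_0000, "size": 64 * 1024},       # e.g. boot code
        "FLASH": {"base": 0x0800_0000, "size": 1024 * 1024},
        "RAM":   {"base": 0x2000_0000, "size": 256 * 1024},
    }

    def region_of(address):
        """Return which memory region an address falls into, if any."""
        for name, r in memory_map.items():
            if r["base"] <= address < r["base"] + r["size"]:
                return name
        return None

    print(region_of(0x2000_1000))   # RAM
    print(region_of(0x0800_0004))   # FLASH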

Graphics Processing Unit:

The Graphics Processing Unit (GPU) is a specialized electronic circuit designed to render images in a frame buffer intended for output to a display device. The GPU is responsible for processing complex graphics including 3-D games on smartphones/tablets. Smartphones and tablets mostly have a dedicated GPU which renders games and other high-quality animation for a good user experience.
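
To show what “rendering images in a frame buffer” means at the simplest level, the sketch below treats the frame buffer as a plain 2D array of pixels and draws a rectangle into it; a real GPU performs this kind of work massively in parallel in dedicated hardware.

    # A frame buffer is just memory laid out as width x height pixel values.
    WIDTH, HEIGHT = 16, 8
    framebuffer = [[0 for _ in range(WIDTH)] for _ in range(HEIGHT)]

    def fill_rect(fb, x, y, w, h, colour):
        """'Render' a filled rectangle by writing a colour value into the buffer."""
        for row in range(y, min(y + h, len(fb))):
            for col in range(x, min(x + w, len(fb[0]))):
                fb[row][col] = colour

    fill_rect(framebuffer, x=2, y=1, w=6, h=4, colour=1)
    for row in framebuffer:                      # crude text 'scan-out' of the buffer
        print("".join("#" if px else "." for px in row))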

Arithmetic Logic Unit:

An ALU is a digital circuit for performing arithmetic and logic operations. It takes operands and an opcode, and performs the requested operation on the input data. After the data is processed, signals are sent to the CPU to request the next operation, and the result of the last operation is forwarded to the other units as required and directed by the Control Unit.
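
Here is a minimal sketch of the opcode-plus-operands idea in Python: the ‘ALU’ looks up the requested operation, applies it to the two operands, and returns the result along with simple status flags. The opcode encoding and flag names are invented for illustration.

    # Toy ALU: an opcode selects the operation applied to two operands.
    OPS = {
        0x0: ("ADD", lambda a, b: a + b),
        0x1: ("SUB", lambda a, b: a - b),
        0x2: ("AND", lambda a, b: a & b),
        0x3: ("OR",  lambda a, b: a | b),
        0x4: ("XOR", lambda a, b: a ^ b),
    }

    def alu(opcode, a, b, width=32):
        name, fn = OPS[opcode]
        result = fn(a, b) & ((1 << width) - 1)        # wrap to the register width
        flags = {"zero": result == 0, "negative": bool(result >> (width - 1))}
        return result, flags

    print(alu(0x0, 2, 3))    # (5, {'zero': False, 'negative': False})
    print(alu(0x1, 5, 5))    # (0, {'zero': True, 'negative': False})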

Timing Unit:

Oscillators and Phase-Locked Loops (closed-loop frequency control systems) are used in the timing unit of an SoC. The timing unit ensures that the SoC achieves the minimum possible clock cycle time for a given configuration.
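
For a feel of what a PLL-based clock generator does, the snippet below computes the output frequency of a simple integer-N PLL (output = reference × N / (R × OD)); the divider values and the 24 MHz crystal are assumptions chosen only to illustrate the arithmetic.

    # Integer-N PLL output frequency: f_out = f_ref * N / (R * OD)
    # f_ref : reference oscillator, N : feedback divider,
    # R : input divider, OD : output (post) divider.
    def pll_output_hz(f_ref_hz, n, r=1, od=1):
        return f_ref_hz * n / (r * od)

    # Hypothetical example: a 24 MHz crystal multiplied up to 1.8 GHz for the CPU.
    cpu_clk = pll_output_hz(24_000_000, n=150, r=1, od=2)
    print(f"CPU clock: {cpu_clk / 1e9:.2f} GHz")   # 1.80 GHz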

Radios:

Today, mobile phones are used for much more than sending text messages and making voice calls. Most mobile phones support WiFi, GPS/GLONASS and Bluetooth, each of which requires its own radio module. A faster communication standard, LTE (4G), requires a specialized radio module either inside the SoC or as an independent chip; this module offers up to 10x better communication rates.

Analog interfaces, external interfaces following industry standards such as USB, USART and SPI, voltage regulators and power management units form the basic interfaces of an SoC. Besides these, two components known as the Northbridge and the Southbridge handle communication between the components and various I/O functionalities.
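
As a taste of how one of these interfaces works, here is a small behavioural simulation of an SPI-style exchange in Python: the master and the slave each hold an 8-bit value and swap one bit per clock, most-significant bit first. It is a sketch of the protocol’s shift-register idea, not a driver for real hardware.

    # Behavioural sketch of a full-duplex SPI transfer (MSB first):
    # on each clock, the master and the slave exchange one bit.
    def spi_transfer(master_byte, slave_byte, bits=8):
        master_in, slave_in = 0, 0
        for i in reversed(range(bits)):
            mosi = (master_byte >> i) & 1          # master shifts out its current MSB
            miso = (slave_byte >> i) & 1           # slave does the same
            slave_in = (slave_in << 1) | mosi      # each side samples on the clock edge
            master_in = (master_in << 1) | miso
        return master_in, slave_in                 # what each side received

    rx_master, rx_slave = spi_transfer(0xA5, 0x3C)
    print(hex(rx_master), hex(rx_slave))           # 0x3c 0xa5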

Design Flow of an SoC


Hardware and Software Modules:

Hardware blocks of an SoC are developed from pre-qualified hardware elements, together with software modules, which are aggregated and integrated using various development environments. Hardware description languages like Verilog, VHDL and SystemC are used to write and develop these hardware modules. However, hardware is not the only focus during SoC design: the chips must also be supported by software drivers that control the operation of the hardware. And since an SoC has to manage networking as well, protocol stacks have to be written along with the drivers.
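
Since a driver ultimately just reads and writes a peripheral’s memory-mapped registers, the sketch below simulates that pattern in Python against an invented UART-like register map; a real driver would be written in C against the actual hardware addresses and bit definitions.

    # Simulated memory-mapped peripheral: registers live at fixed offsets.
    class FakeUart:
        CTRL, STATUS, TXDATA = 0x00, 0x04, 0x08    # invented register offsets
        ENABLE_BIT, TX_READY_BIT = 0x1, 0x2        # invented bit definitions

        def __init__(self):
            self.regs = {self.CTRL: 0, self.STATUS: self.TX_READY_BIT, self.TXDATA: 0}

        def write(self, offset, value):            # stands in for a volatile store
            self.regs[offset] = value

        def read(self, offset):                    # stands in for a volatile load
            return self.regs[offset]

    def uart_send(dev, byte):
        """Driver routine: enable the peripheral, wait until ready, write the data."""
        dev.write(dev.CTRL, dev.ENABLE_BIT)
        while not dev.read(dev.STATUS) & dev.TX_READY_BIT:
            pass                                   # poll the status register
        dev.write(dev.TXDATA, byte)

    uart = FakeUart()
    uart_send(uart, 0x41)
    print(hex(uart.read(uart.TXDATA)))             # 0x41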

Functional Verification:

Functional verification is a very important task in SoC development. It is the process of verifying that the hardware follows the logic intended by the designer, and it involves testing the hardware against numerous permutations and combinations of simulated inputs and situations. SoCs are verified for logical correctness before being sent to the foundry for fabrication.
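
In spirit, functional verification compares the design’s behaviour against a golden reference model over many stimulus combinations. The Python sketch below does this for a hypothetical 16-bit adder block by driving random inputs and checking every result.

    # Toy verification loop: drive random stimulus, compare DUT vs reference model.
    import random

    def dut_adder(a, b):
        """Stand-in for the design under test (e.g. an RTL adder run in a simulator)."""
        return (a + b) & 0xFFFF                    # 16-bit wrap-around

    def reference_adder(a, b):
        """Golden model describing the intended behaviour."""
        return (a + b) % (1 << 16)

    random.seed(0)
    for _ in range(10_000):
        a, b = random.getrandbits(16), random.getrandbits(16)
        assert dut_adder(a, b) == reference_adder(a, b), (a, b)
    print("all stimulus passed")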

Verify Hardware & Software Designs:

There are various checks, such as the Design Rule Check (DRC), which verify that the design meets the foundry's rules and other layout requirements. The nodal connections of the netlist extracted from the layout are then compared to those of the schematic netlist with a Layout vs Schematic (LVS) procedure, to verify that the two connectivity models are equivalent.
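
Conceptually, an LVS check confirms that the layout and the schematic describe the same connectivity. The sketch below compares two tiny terminal-to-net mappings in Python; production LVS tools do far more, including device recognition and parameter matching.

    # Each netlist maps a (device, terminal) pair to the net it touches.
    schematic = {("MN1", "gate"): "A", ("MN1", "drain"): "Y", ("MN1", "source"): "VSS",
                 ("MP1", "gate"): "A", ("MP1", "drain"): "Y", ("MP1", "source"): "VDD"}

    layout = {("MN1", "gate"): "A", ("MN1", "drain"): "Y", ("MN1", "source"): "VSS",
              ("MP1", "gate"): "A", ("MP1", "drain"): "Y", ("MP1", "source"): "VDD"}

    def lvs_compare(a, b):
        """Report terminals whose connectivity differs between the two views."""
        return [k for k in a.keys() | b.keys() if a.get(k) != b.get(k)]

    mismatches = lvs_compare(schematic, layout)
    print("LVS clean" if not mismatches else mismatches)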

Place and Route:

Finally, powerful Place and Route (PNR) tools pull everything together and synthesize VLSI layouts in an automated fashion. The first step decides where to place all the electronic components, circuitry and logic elements in a generally limited amount of space. This is followed by routing, which decides the exact paths of all the wires needed to connect the placed components. This step implements all the desired connections while following the rules and limitations of the manufacturing process.
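
For a very rough flavour of the placement step, the sketch below places a handful of blocks on a grid and estimates total wirelength with the half-perimeter (bounding-box) metric that placers commonly minimize; the block names, positions and nets are invented, and real PNR is vastly more sophisticated.

    # Toy placement: block name -> (x, y) grid position, plus the nets joining blocks.
    placement = {"CPU": (0, 0), "GPU": (4, 0), "RAM": (0, 3), "DSP": (4, 3)}
    nets = [("CPU", "RAM"), ("CPU", "GPU"), ("GPU", "DSP"), ("RAM", "DSP")]

    def half_perimeter_wirelength(placement, nets):
        """Sum of bounding-box half-perimeters, a standard wirelength estimate."""
        total = 0
        for net in nets:
            xs = [placement[b][0] for b in net]
            ys = [placement[b][1] for b in net]
            total += (max(xs) - min(xs)) + (max(ys) - min(ys))
        return total

    print(half_perimeter_wirelength(placement, nets))   # 14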

Challenges in designing SoCs


  1. Architecture Strategy
    The architecture of the processor used in the SoC is an important factor that has to be considered throughout the design. We also need to choose the kind of bus that will be implemented.

    The ARM vs x86 CPU Decision
    ARM (Advanced RISC Machine) is a family of reduced instruction set computer (RISC) architectures for computer processors as well as SoCs. In the beginning, the ARM architecture was developed specifically for use in a personal computer. In the late 1980s, the ARM2 was one of the simplest 32-bit processors of its time, with only around 30,000 transistors, less than half the roughly 68,000 of the Motorola 68000. The lower transistor count, coupled with the efficient RISC architecture, allowed the ARM2 to outperform Intel’s 80286 while consuming less power.

    The Intel 8086 CPU, launched in 1978, was a 16-bit microprocessor and was followed by several successors whose names also ended in “86”, which led to the christening of this family as the x86 series. This is perhaps Intel’s most successful line of processors, and most desktop machines and laptops still use the x86 architecture as their processor core, although it is not the preferred choice for mobile SoCs.

    Today ARM processors have a big advantage in mobile devices: they need less energy to do their work. This matters in smartphones and tablets because battery technology improves only slowly, so increasing the autonomy of these devices requires components that draw less power. For now, Intel is some steps behind in power efficiency, so manufacturers prefer ARM CPUs in mobile devices. This is largely due to the backward compatibility of the x86 architecture that Intel is forced to maintain: x86 chips contain a larger number of transistors and thus consume more power.

  2. Backend Synthesis Strategy and Integration Strategy
    Effects like IR drop, crosstalk, 3D noise, antenna effects, SAR and EMI need to be taken care of while designing an SoC. To tackle these issues, chip planning, power planning, DFT planning, clock planning, and timing and area budgeting are required at an early stage of the design (a rough IR-drop sketch follows this list). The detailed description of fabrication comes under the Integration Strategy.

  3. Test Strategy and Validation Strategy
    Checking for physical defects, verifying the individual cores and verifying the integration of the complete system are the major challenges here.
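
To give a sense of what ‘IR drop’ means, the sketch below applies Ohm’s law along a simplified power rail: each segment has a small resistance, blocks draw current at taps along it, and the supply voltage seen by the farthest block sags accordingly. All the numbers are illustrative assumptions.

    # Simplified IR-drop estimate along a single power rail.
    # The current through each segment is the sum of everything drawn
    # downstream of it (Kirchhoff's current law), and V = I * R per segment.
    VDD = 1.0                                   # volts at the supply pin
    segment_resistance = 0.02                   # ohms per rail segment (illustrative)
    tap_currents = [0.15, 0.10, 0.20, 0.05]     # amps drawn by blocks along the rail

    voltage = VDD
    for i, _ in enumerate(tap_currents):
        downstream_current = sum(tap_currents[i:])          # current in this segment
        voltage -= downstream_current * segment_resistance  # drop across the segment
        print(f"tap {i}: {voltage:.3f} V")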



Current market frontrunners


Apple

Apple SoCs prioritize graphics performance over everything else, both to support the large number of games available on the iOS platform and as part of the general push towards the high-resolution display panels Apple is known for. They generally contain less raw CPU power and RAM than other flagship mobile phones. The latest offering by Apple is the A11 Bionic, a 64-bit hexa-core ARM-based SoC. It uses a 2+4 core configuration in the spirit of ARM’s ‘big.LITTLE’ approach. The high-performance cores (codenamed Monsoon) have been benchmarked as 25% faster than those of the Apple A10, and the four high-efficiency cores (codenamed Mistral) are up to 70% faster than the energy-efficient cores in the A10. The A11 also contains a Neural Engine that can perform up to 600 billion operations per second and is used for Face ID, which is currently an Apple-exclusive feature!

Qualcomm

Qualcomm is hands-down the biggest player in the mobile chip-making game right now. It is one of the few companies that design their own CPU and GPU architectures rather than simply licensing them from ARM or others. Its custom CPU designs, codenamed ‘Krait’ and, more recently, ‘Kryo’, have been faster clock-for-clock than ARM’s stock Cortex cores. Its GPU series, named ‘Adreno’, is among the most widely used mobile GPUs on the market. In fact, most US versions of flagships from major players like Samsung, Motorola and OnePlus ship with Qualcomm chips.

Modern Advancements


The contribution of SoCs to the world of technology is incomparable, and it has caused one of the most dramatic outpourings of technological progress in human history.

The post-PC era saw the advent of smartphones and tablets, and the computing paradigm shifted: overall user experience became a critical benchmark independent of the performance of the underlying technology. Form factor, cost and power became the critical drivers and increased the importance of on-chip integration of functional hardware. This shift to power-constrained, low-cost chips with increased system-level integration changed the traditional semiconductor landscape, and SoCs came into the picture. Qualcomm, Apple, Nvidia, Samsung, Texas Instruments, Intel and MediaTek are a few of the leading SoC manufacturers, and the innovations brought about by these companies offered the first significant platform for SoC technology to demonstrate its potential and compete with traditional CPUs.

Recently, manufacturers have been bringing much greater diversity to chip configurations. Innovations like multi-chip modules, also known as 2.5D integration, and System-in-Package (SiP) are on the way. These pack components closely together without the complete, end-to-end integration of an SoC.

Though smartphones have been the overwhelming driver of innovation in the technology industry, the growth rate is slowing. There’s a new boss in town — Internet of Things. Over the next decade, this industry is expected to produce billions of connected sensor devices. These will be used in every corner of the world, to gather new insights to help us live and work better. And at the heart of it all will be an SoC.

Google, Microsoft, Samsung and Intel are a few of the frontrunners in the IoT industry, and the innovations brought about by these companies are slowly reshaping the hardware industry and paving the way to more varied and powerful SoCs. Faster, more versatile and more powerful chips, along with declining costs, are driving exponential growth in the IoT industry, and we can expect a lot more ‘smart’ devices in the future and even more innovation in the chip sector.

With the advent of the Internet of Things and other developments, Artificial Intelligence and Machine Learning have become the new talk of the town, and manufacturers are trying to produce more versatile chips that can handle far more complex computations.

Recently ARM unveiled its next generation of processor technology, a new microarchitecture named ‘DynamIQ’ which focuses on AI and machine learning. It will allow for more powerful systems-on-chip, with processors that are better suited to these computations. Google, meanwhile, has developed a custom-built chip called the Tensor Processing Unit (TPU) that helps drive its AI services, including its image recognition and machine translation tools. A successor, dubbed TPU 2.0, is designed to both train and execute deep neural networks, powering everything from image and speech recognition to automated translation and robotics.

Another field where system-on-chip innovation is on the rise is quantum computing. By reworking the architecture of microprocessors, a team of Australian scientists from the University of New South Wales (UNSW) created the first design of a quantum computer chip that allows quantum calculations to be performed using silicon-based material. The new chip design is meant to handle millions of ‘qubits’ and would offer exponentially more processing power. In reality, however, scientists have so far been able to pack fewer than 50 stable qubits onto a chip. The reason for this limitation is that qubits exist in a delicate, zen-like state of superposition, which makes them extremely fragile, unstable and vulnerable to environmental interference. If scientists are able to overcome this hurdle, it will open up a new dimension in the world of computing.



Conclusion


Until recently, the traditional approach to electronics, and especially to computing devices, was to create systems that ran on separate, independent parts. Computers and laptops are examples of such systems, built out of distinct components connected together. However, with the advent of VLSI and VVLSI, the relentless miniaturization of everything around us means that we increasingly rely on smaller, better and more power-efficient systems; this perfectly fits the bill for the current and, perhaps, future generations of the System on a Chip.

The disruptive potential of SoCs took the world by storm, and it is showing no signs of dying down anytime soon. Smartphones, tablets, wearable gadgets and even IoT (Internet of Things) devices prove that the System on a Chip is an important part of the future of all electronics, and innovations are yet to come that will transform the world of technology. So get ready to see some big changes in the ‘silicon’ of Silicon Valley.