Intel has been making graphics chips for a long time now, and the display controller has been updated in nearly every generation. Early chips supported VGA, then DVO was added to allow boards to include DACs to drive other output types (e.g. LVDS or TV). Over time, more functionality was pulled into the display controller, including LVDS, TV, and DVI support, along with an improved add-on interface called SDVO. The introduction of Arrandale and Core HD graphics is arguably the most significant change we’ve seen though, since rather than being a relatively straightforward integration of an existing interface, it splits display controller functionality between the CPU and PCH. In the past, both the GPU and display controller were part of the GMCH (graphics and memory controller hub), but with the memory controller moving onto the CPU, something had to give. So starting with Core HD graphics (code named Arrandale), most display functionality moved to the PCH (platform controller hub), with the display planes and GPU remaining on the CPU (in the same package at first, but on-die as of second generation Core processors).
The diagram above roughly illustrates the split. As you can see, one output controller, for eDP (embedded DisplayPort), remains in the CPU, while the rest of the display functionality has migrated to the PCH. Between the CPU and PCH sits FDI (flexible display interface), used by the CPU when something other than the built-in eDP output needs to be driven. Having eDP integrated into the CPU provides both programming and power advantages on platforms that use it (i.e. most eDP laptops without switchable graphics): fewer clocks and links have to be configured (meaning less can go wrong on the programming side), and both FDI and the PCH display logic can be kept powered down, significantly reducing power consumption.
The CPU contains all the display data related functionality: display planes & pipes, framebuffer compression, panel fitting, cursor planes & control, video overlays, palette handling, color control, watermarks & FIFOs, and display interrupts. As mentioned last week, the display planes feed data to the display pipes, which in turn provide data to either the integrated eDP link or to FDI for passing to the PCH.
Pipes & Planes
On current Intel hardware, there are two display planes, each of which can drive pixels to its corresponding pipe. This allows two independent displays to be driven simultaneously, as in an extended desktop configuration for example. However, given clock and output limitations, sometimes both planes & pipes are needed for cloned configurations as well (e.g. eDP plus anything on the PCH).
Before a pipe/plane combination can be active, however, a clock source must be enabled. This clock drives the data transfer and other activity of the pipe/plane pair. The PCH supplies it to the CPU, providing a reference clock for both the on-die eDP controller and any FDI links that may be active. Spread spectrum clocking is also available for configurations where noise and interference may be a problem.
FBC and power
The display plane's contents can optionally be compressed by the framebuffer compression (FBC) unit. If additional memory is provided to the FBC unit, it will periodically read the pixels in the display plane, compress them, and write them to a compressed framebuffer. If enabled, pixels will be fetched from this compressed buffer when the pipe needs data, rather than from the main display plane. This reduces memory traffic, saving power. Keeping memory idle is especially important when the memory can enter self-refresh (possible when the CPU is idle and no devices are performing DMA), since memory in self-refresh mode consumes far less power than it would otherwise. The display plane FIFO watermarks affect power consumption in a similar way: if set too conservatively, the frequency of memory traffic increases, preventing long periods of self-refresh. However, if set too aggressively, FIFO underruns can occur, leading to display corruption.
Assuming a given configuration needs to drive something other than the on-die eDP link, FDI must be configured and enabled to provide data to the PCH. FDI is similar to DisplayPort, and despite being a fixed-frequency, on-board link, it requires link negotiation and training, including vswing and pre-emphasis configuration. Fortunately, this is generally a very quick operation, so it doesn't contribute noticeable delays to the mode setting sequence (especially not compared to some of the other delays involved, like panel power up). First the receiver and transmitter clocks are enabled, then the link is trained and enabled, allowing pixels to flow from the CPU to the pipe (called the transcoder) on the PCH, which in turn drives the configured display output. Although FDI is fixed frequency, minimizing the number of enabled lanes prevents unnecessary power consumption.
The PCH can drive many types of outputs directly: DisplayPort, LVDS, HDMI, and VGA. In addition, it can drive an optional SDVO link; this allows for other types of outputs to be connected, for example a TV encoder. The PCH also has logic for controlling an attached panel (whether eDP or LVDS), providing interfaces for both backlight and power control. Audio interfaces are provided as well, to support HDMI and DP audio functionality. The PCH contains some interrupt handling logic as well, which feeds to the master CPU interrupt status registers. This allows the PCH to notify the driver of hotplug events, FDI related features, DP AUX events, and transcoder related errors (e.g. FIFO underruns).
To me it seems it would have made much more sense to have no PCH at all in most cases, and instead two eDP ports on the CPU.
Then, for the systems that want it, have a PCH part that can convert DP to VGA/HDMI/LVDS/etc. That new PCH could be used for things other than Intel CPUs too, and systems that don't need more than two DP ports wouldn't need it at all. (I'm assuming eDP is compatible with DP; if not, that would be really stupid.)
Paul, thanks. We definitely haven't gone away, everything is still supported. I've just been slacking on blogging about it. :) In fact, there's a lot of fun stuff going on: new GLSL compiler for Mesa, lots of work around Wayland (Qt and GTK+ porting, device & driver interoperability, new display work, etc.).
Currently I am working on an original microkernel OS, and I am trying to create a simple graphics driver for it. I plan to start by writing this driver for Intel HD chips (I have gen5/6 graphics in i3/i5 CPUs). Looking at the Intel documentation and the Linux driver for these chips, I ran into some trouble porting. Basically, the OS I am working on has no KMS or DRM infrastructure, so I need to write a driver almost from scratch. But my primary goal seems simple to me: I need to be able to set up the video card and to switch display modes on it. No 3D or video acceleration.
I have several years' experience in driver programming (NIC, HDD, floppy), but no experience with display drivers.
So now, looking at the Linux display driver for Intel HD graphics, I need to understand what I have to do and what I can skip.
I am now searching for some kind of guide to the minimum setup sequence for display mode setting. Something like: read the PCI configuration, map PCI resources, parse the Intel HD card's BIOS, calculate CRTC registers, set CRTC registers, write to video memory, and get a picture on the display. As far as I understand from your blog, I also need to deal with the GTT (some kind of page tables on the GPU side?), encoders, pipes, ...
Please, can you explain the minimum setup sequence a driver needs to perform on an Intel HD graphics card to be able to draw a pixel on the screen?