Linux 6.8 Brings More Sound Hardware Support For Intel & AMD, Including The Steam Deck (Tue, 16 Jan 2024)

The post Linux 6.8 Brings More Sound Hardware Support For Intel & AMD, Including The Steam Deck appeared first on Linux.com.

Waiting to be pulled into the mainline kernel once Linus Torvalds is back online following Portland’s winter storms are the sound subsystem updates for Linux 6.8, which include a lot of new sound hardware support.

Linux sound subsystem maintainer Takashi Iwai at SUSE describes the new sound hardware support for Linux 6.8 as:

“Support for more AMD and Intel systems, NXP i.MX8m MICFIL, Qualcomm SM8250, SM8550, SM8650 and X1E80100”

Read more at Phoronix

Maintainer confidential: Opportunities and challenges of the ubiquitous but under-resourced Yocto Project (Wed, 11 Jan 2023)

The post Maintainer confidential: Opportunities and challenges of the ubiquitous but under-resourced Yocto Project appeared first on Linux.com.

By Richard Purdie

Maintainers are an important topic of discussion. I’ve read a few perspectives, but I’d like to share mine as one of the lesser-known maintainers in the open source world.

Who am I, and what do I do? I have many job titles and, in many ways, wear many hats. I’m the “architect” for the Yocto Project and the maintainer and lead developer for both OpenEmbedded-Core and BitBake. I’m the chair of the Yocto Project Technical Steering Committee (TSC) and a member of the OpenEmbedded TSC. I am also a Linux Foundation Fellow, representing a rare “non-kernel” perspective. The fellowship was partly a response to an industry-wide desire for me to work in a position of independence for the good of the projects and communities I work with rather than any one company.

The different roles I’ve described hint at the complexities that are part of the everyday work of maintaining a large open source project. To many, it could look like a labyrinth of relationships, directions, and decisions to balance.

What the Yocto Project is

I still need to tell you more about what I do, so I should explain what the Yocto Project does. Most people realize Linux is all around us but have yet to think much about how it gets there or how to maintain or develop such systems. There is much more to a Linux system than just a kernel, and there are many use cases where a traditional desktop Linux distribution isn’t appropriate. In simple terms, the Yocto Project allows people to develop custom Linux (and non-Linux) systems in a maintainable way.

For a sense of scale, around 65% of the world’s internet traffic runs through devices from a specific manufacturer, and they have hundreds of millions of devices in the field. Those devices have software derived from the Yocto Project. The copy of Linux in Windows, “Windows Subsystem for Linux”, originally derived from the Yocto Project. Alongside the main operating system, most servers have a baseboard management controller (BMC), which looks after the server’s health. The openBMC project provides that software and builds on the Yocto Project. A similar situation exists for cars using Automotive Grade Linux, which derives from the Yocto Project as well. The Comcast RDK is an open source UI software stack built using the project and is widely used on media devices such as set-top boxes, and LG’s webOS TV operating system is also built with the Yocto Project. We’ve even had a Yocto Project built system orbiting Mars!

Those examples are the tip of the iceberg, as we only know some of the places it is in use; being open source, they don’t have to tell us. The Yocto Project feeds into things all around us. The fact that people don’t know about it is a sign we’ve done a good job—but a low profile can also mean it misses out on recognition and resourcing.

The premise of the Yocto Project is to allow companies to share this work and have one good shared toolset to build these custom systems in a maintainable, reproducible, and scalable way.

How we got here

Now, we come to my role in this. I’m the crazy person who thought this project was possible and said so to several companies just over a decade ago. Then, with the support of some of them, many very talented developers, and a community, I took some existing open source projects and grew and evolved them to solve the problem, or at least go a significant way to doing so! 

The project holds the principle of shared contributions and collaboration, resulting in a better toolset than any individual company or developer could build. Today, I keep this all working.

It may sound like a solved problem, but as anyone working with a Linux distribution knows, open source is continually changing, hardware is continually changing, and the “distro” is where all this comes together. We must work to stay current and synchronized with the components we integrate. 

The biggest challenge for us now is being a victim of our success. The original company sponsorship of developers to work on Yocto understandably scaled back, and many of those developers moved on to other companies. In those companies, they’re often now focused on internal projects/support, and the core community project feels starved of attention. It takes time to acquire the skillsets we need to maintain the core, as the project is complex. Everyone is hoping someone else helps the project core.

I’m often asked what features the project will have in its next release. My honest answer is that I don’t know, as nobody will commit to contributions in advance. Most people focus on their own products or projects, and they can’t get commitment from their management to spend time on features or bug fixing for the core, let alone agree to any timescale to deliver them. This means I can’t know when or if we will do things.

A day in my life as the Yocto Project architect 

I worked for a project member company until 2018, which generously gave me time to work on the project. Times change, and rather than moving on to other things, I made the rather risky decision to seek funding directly from the project, as I feared for its future. Thankfully, it worked out, and I’ve continued working on it.

Richard Purdie, Linux Foundation Fellow and Yocto Project architect

There are other things the project now funds. This includes our “autobuilder” infrastructure, a huge automated test matrix to find regressions. Along with the autobuilder and admin support to keep it alive, the project also funds a long-term support (LTS) release maintainer (we release an LTS every two years), documentation work, and some help in looking after incoming patch testing with the autobuilder, integrating new patches and features. 

There are obvious things in my day-to-day role, such as reviewing patches, merging the ones that make sense, and giving feedback on those with issues. Less obvious things include needing to debug and fix problems with the autobuilder. 

Sadly, no one else can keep the codebase that supports our test matrix alive. The scale of our tests is extensive, with 30+ high-power worker machines running three builds at a time, targeting the common 32- and 64-bit architectures with different combinations of core libraries, init systems, and so on. We test under qemu and see a lot of “intermittent” failures in the runtime testing where something breaks, often under high load or sometimes once every few months. Few people are willing to work on these kinds of problems, but, left unchecked, the number of them makes our testing useless as you can’t tell a real failure from the random, often timing-related ones. I’m more of a full-time QA engineer than anything else!

Bug fixing is also an interesting challenge. The project encourages reporting bugs and has an active team to triage them. However, we need help finding people interested in looking into and fixing identified issues. There are challenges in finding people with both the right skills and time availability. Where we have trained people, they generally move on to other things or end up focused on internal company work. The only developer time I can commit is my own.

Security is a hot topic. We do manage to keep versions of software up to date, but we don’t have a dedicated security team; we rely on the teams that some project users have internally. We know what one should do; it is just unfortunate that nobody wants to commit time to do it. We do the best we can. People love tracking metrics, but only some are willing to do the work to create them or keep them going once established.

Many challenges arise from having a decent-sized team of developers working on the project, with specific maintainers for different areas, and then scaling back to the point where the only resource I can control is my own time. We developed many tools currently sitting abandoned or patched up on an emergency basis due to a lack of developer resources to do even basic maintenance. 

Beyond the purely technical, there are also collaboration and communication activities. I work with two TSCs, the project member organizations, people handling other aspects of the project (advocacy, training, finance, website, infrastructure, etc.), and developers. These meetings add up quickly to fill my calendar. If we need backup coverage in any area, we don’t have many options besides my time to fall back on.

The challenges of project growth and success

Our scale also means patch requirements are more demanding now. Once, when the number of people using the project was small, the impact of breaking things was also more limited, allowing a little more freedom in development. Now, if we accept a change commit and something breaks, it becomes an instant emergency, and I’m generally expected to resolve it. When patches come from trusted sources, help will often be available to address the regressions as part of an unwritten bond between developers and maintainers. This can intimidate new contributors; they can also find our testing requirements too difficult.

We did have tooling to help new contributors—and also the maintainers—by spotting simple, easily detected errors in incoming patches. This service would test and then reply to patches on the mailing list with pointers on how to fix the patches, freeing maintainer time and helping newcomers. Sadly, such tools require maintenance, and we lost the people who knew how to look after this component, so it stopped working. We formed plans to bring it back and make the maintenance easier, but we’ve struggled to find anyone with the time to do it. I’ve wondered if I should personally try to do it; however, I just can’t spend the chunk of time needed on one thing like that, as I would neglect too many other things for too long.

I wish this were an isolated issue, but there are other components many people and companies rely upon that are also in a perilous state. We have a “layer index,” which allows people to search the ecosystem to find and share metadata and avoid duplicating work. Nobody is willing and able to spend time to keep it running. It limps along; we do our best to patch up issues, but we all know that, sooner or later, something will go badly wrong, and we will lose it. People rely on our CROPs container images, but they have no maintainer.

I struggle a lot with knowing what to do about these issues. They aren’t a secret; the project members know, the developers know, and I raise them in status reports, in meetings, and wherever else I can. Everyone wants to work elsewhere as long as these things “kind of work” or aren’t impacting someone badly. Should I feel guilty and try to fix these things, risking burnout and giving up a social life, so I have enough time to do so? I shouldn’t, and I can’t ask others to do that, either. Should I just let these things crash and burn, even if the work in rebuilding would be much worse? That will no longer be a choice at some point, and we are slowly losing components.

Over the holiday period, I also realized that project contributions have changed. Originally, many people contributed in their spare time, but many are now employed to work on it and use it daily as part of their job. There have been more contributions during working hours than on weekends or holidays. During the holiday period, some key developments were proposed by developers having “fun” during their spare time. Had I not responded to these, helping with wider testing, patch review, and feedback, they likely would have stalled and failed, with people no longer having time when back outside the holiday period. The contributions were important enough that I strongly felt I should support them, so I did, the cost being that I didn’t get so much of a break myself.

As you read this blog and get a glimpse of my day, I want you to leave with an understanding that all projects, large and small, have their own challenges, and Yocto isn’t alone. 

I love the project; I’m proud of what we’ve done with it, together with companies and a community. Growth and success have their downsides, though, and we see some issues I never expected. I am confident that the project can and will survive one way or another, come what may, as I’ve infused survival traits into its DNA.

Where the Yocto Project is going

There is also the future-looking element. What are the current trends? What do we need to adapt to? How can we improve our usability, particularly for new users? There is much to think about.

Recently, after I raised concerns about feature development, the project asked for a “five-year plan” showing what we could do in that timeframe. It took a surprising amount of work to pull together the ideas and put cost/time estimates against them, and I put a lot of time into that. Sadly, the result doesn’t yet have funding. I keep being asked when we’ll get features, but there needs to be more willingness to fund the development work needed before we even get to the question of which developers would actually do it!

One question that comes up a lot is the project’s development model. We’re an “old school” patches-on-a-mailing-list project, similar to the kernel. New developers complain that we should have GitHub workflows so they can make point-and-click patch submissions. I have made submissions to other projects that way, and I can see the attraction of it. Equally, it does depend a lot on your review requirements. We want many people to see our patches, not just one person, and we greatly benefit from that comprehensive peer review. There are benefits in what we do, but having to keep re-explaining the reasons for staying the course is unhelpful and gets a bit worn over time! Our developer/maintainer base is used to mailing list review, and changing that would likely result in one person looking at patches, to the detriment of the project. Maintainers like myself also have favored processes and tools, and changing them would likely at least cause productivity issues for a while.

Final thoughts: The future?

Governments are asking some good questions about software and security, but there are also very valid concerns about the lifecycle of hardware and sustainability issues. What happens to hardware after the original manufacturer stops supporting it? Landfill? Can you tell if a device contains risky code?

The project has some amazing software license and SBoM capabilities, and we collaborate closely with SPDX. We’re also one of the few build environments that can generate fully reproducible binaries and images down to the timestamps for all the core software components straight out of the box.

Combining these technologies, you can have open and reproducible software for devices. That means you can know the origin of the code on the device, you can rebuild it to confirm that what it runs is really what you have instructions/a manifest for, and if—or, in reality, when—there is a security issue, you have a path to fixing it. There is the opportunity for others to handle software for the device if the original provider stops for whatever reason, and devices can avoid landfill.

I dream of a world where most products allow for this level of traceability, security, and sustainability, and I believe it would drive innovation to a new level. I know a build system that could help it become a reality!

Get involved to help the Yocto Project community grow

Basic survival isn’t my objective or idea of success. I’d love to see more energy, engagement, and collaboration around new features, to establish that security team, and to see the project play a more prominent role in the broader FOSS ecosystem.

Help can take different forms. If you already use the Yocto Project, say so publicly, or let us list you as a user! We’re open to developer help and new contributors too, whether for features, bug fixing, or maintainership.

The project is also actively looking to increase its number of member companies. That helps us keep doing what we’re doing today, but it might also let us fund development in the critical areas we need it and allow us to keep things running as the ecosystem has grown to expect. Please contact us if you’re interested in project membership to help this effort.

About the author: Richard Purdie is the Yocto Project architect and a Linux Foundation Fellow.

How to Manage Linux Endpoints with Automation (Thu, 14 Apr 2022)

The post How to Manage Linux Endpoints with Automation appeared first on Linux.com.

Endpoint security is traditionally treated separately from the broader network security plan and usually falls under the responsibility of the IT admin team rather than the security team. However, endpoints are becoming a more critical part of the extended network ecosystem, as many organizations will continue encouraging remote work after the Great Office Return.

The IT admin approach not only limits visibility and control but also makes it difficult to assess a device’s security level. It’s challenging to take the necessary automated steps in the event of a compromise due to a lack of access to vital threat intelligence. These challenges are even greater for Linux users; Linux is the preferred system of many developers and DevOps-led organizations.

Stack Overflow’s 2020 developer poll projects that the number of professional developers will grow by more than 28 million by 2024. Long-term integration and automation of Linux systems and infrastructure into IT operations is therefore an even bigger priority for organizations moving forward.

Why organizations lack control and visibility over their Linux endpoint devices

Unfortunately, Linux infrastructure is not generally straightforward to automate. Without extra tooling, some administrators may face a long road to achieving their automation targets in the first place. To automate Linux systems, IT administrators must have complete control over their security and configuration settings. They must also possess the ability to monitor systems afterward to ensure everything is running smoothly.

Challenges of Linux endpoint management

Many endpoints currently connected to corporate networks are not official corporate assets. Because IT departments don’t own these devices, they can’t quickly assess or monitor them to ensure they get the updates and patches they need. This makes them vulnerable to threats, and it also makes them a relatively unknown threat vector, posing a risk to the entire fleet of devices.

Another significant barrier to visibility is mobility. Endpoint devices were once considered corporate assets kept behind the corporate firewall. Users of these endpoint devices today can connect to corporate resources, access corporate data, and even work on it using a variety of applications. They don’t need to be connected to a VPN to access physical or cloud-based resources. This is becoming more common across organizations of all sizes.

These devices spend the majority of their time connected to non-corporate network resources, which significantly reduces IT visibility. According to a 2020 Ponemon Institute report titled “The Cost of Insecure Endpoints,” two-thirds of IT professionals admit to having no visibility into endpoints that regularly connect to the network when those endpoints operate outside of it.

There is also the challenge of Shadow IT. Employees can easily install and run traditional and cloud-based applications on their phones and computers, and on corporate-owned assets assigned to them, without having to go through IT. If IT administrators don’t have insight into all of the programs operating on these devices, they won’t be able to verify that essential access controls are in place to mitigate threats or govern the spread of data and other business assets. Relying on endpoints to self-manage compliance and security is not ideal, Linux included.

Why manage your Linux devices in real-time?

Having complete visibility over IT asset inventory for security and productivity monitoring is critical to helping identify and eliminate unauthorized devices and apps.

What should IT teams monitor in real-time? Important metrics to keep an eye on include the number of unknown, checked-in, and total devices in the fleet, as well as installed, outdated, and rarely used apps. IT professionals should look for a tool that keeps a constantly updated and monitored inventory of IT assets, including Linux.

Maintaining endpoint health with security controls is another advantage of managing Linux devices in real-time. Every day, numerous activities take place at an endpoint. It is critical to keep an eye on everything, including suspicious activity.

IT practitioners need a tool that conducts regular endpoint health checks, enforces firewall policies, quarantines or isolates unnecessary devices, kills rogue processes and services, hardens system configurations, and performs remote system tune-ups and disk clean-ups. This will help identify and eliminate unauthorized devices and applications.

Otherwise, allowing any random device or application onto the network will punch a hole in IT security and employee productivity. That’s why disabling or blocking unauthorized devices and programs from entering your network is critical.

Moreover, continuous monitoring and remediation must be enabled. Continuous monitoring of your endpoints requires security tasks to be executed periodically. Chef Desktop helps achieve this without worrying about connectivity and maintenance issues, and helps ensure that endpoints remain in the desired state.

Conclusion

Long-term integration of Linux systems and infrastructure into IT operations is common in organizations that have them. Continuous monitoring of endpoints requires security tasks to be executed even remotely, without relying on physical access to devices. To automate Linux systems, IT administrators must have complete control over their security and configuration settings, as well as the ability to monitor systems afterward to ensure everything runs smoothly.

IT managers must reduce costs and optimize time by moving away from manual processes. Instead, they should configure the entire Linux fleet in a consistent, policy-driven manner. This boosts efficiency and productivity and maintains detailed visibility into the overall status of the Linux and desktop fleet. Easy-to-implement configuration management capabilities can assist IT teams in managing and overcoming some of the challenges they face when managing large Linux laptop fleets.

Sudeep Charles

AUTHOR BIO

Sudeep Charles is a Senior Manager, Product Marketing at Progress. Over a career spanning close to two decades, he has held various roles in product development, product marketing, and business development for application development, cybersecurity, fintech and telecom enterprises. Sudeep holds a Bachelor’s degree in Engineering and a Master’s in Business Administration. 

Hacking the Linux Kernel in Ada – Part 3 (Thu, 07 Apr 2022)

The post Hacking the Linux Kernel in Ada – Part 3 appeared first on Linux.com.

For this three-part series, we implemented ‘pedal to the metal’ GPIO-driven flashing of an LED, in the context of a Linux kernel module for the NVIDIA Jetson Nano development board (kernel v4.9.294, arm64), in my favorite programming language … Ada!

You can find the whole project published at https://github.com/ohenley/adacore_jetson. It is known to build and run properly. All instructions to be up and running in 5 minutes are included in the accompanying front-facing README.md. Do not hesitate to file a GitHub issue if you find any problem.

Disclaimer: This text is meant to appeal to both Ada and non-Ada coders. Therefore I try to strike a balance between code story simplicity, didactic tractability, and feature density. As I said to a colleague, this is the text I would have liked to come across before starting this experiment.

Binding 101

The binding thickness

Our code boundary to the Linux kernel C functions lies in kernel.ads. For an optional “adaptation” opportunity, kernel.adb sits before the concrete C binding. Take printk (the printf equivalent in kernel space), for example. In C, you would call printk("hello\n"). Ada strings are not null-terminated; they are an array of characters. To make sure the passed Ada string stays valid on the C side, you expose specification signatures (.ads) that make sense from an Ada point of view and “adapt” in the body implementation (.adb) before calling directly into the binding. Strictly speaking, our exposed Ada Printk qualifies as a “thick” binding, even though the adaptation layer is minimal. This is in opposition to a “thin” binding, which is a one-to-one mapping of the C signature, as implemented by Printk_C.

-- kernel.ads
procedure Printk (S : String); -- only this is visible for clients of kernel

-- kernel.adb
procedure Printk_C (S : String) with -- considered a thin binding
    Import        => true,
    Convention    => C,
    External_Name => "printk";

procedure Printk (S : String) is -- considered a thick binding
begin
   Printk_C (S & Ascii.Lf & Ascii.Nul); -- because we ‘mangle’ for Ada comfort
end;

The wrapper function

Binding to a wrapped C macro or static inline is often convenient: you potentially inherit fixes and upgrades happening inside the macro implementation, and the result is, depending on the context, potentially more portable. create_singlethread_workqueue, used in printk_wq.c as seen in Part 1, makes a perfect example. Our driver has a C home in main.c, where you create a C wrapper function calling the macro.

/* main.c */
extern struct workqueue_struct * wrap_create_singlethread_wq (const char* name)
{
   return create_singlethread_workqueue(name); /* calling the macro */
}

You then bind to this wrapper on the Ada side and use it. Done.

-- kernel.ads
function Create_Singlethread_Wq (Name : String) return Workqueue_Struct_Access with
   Import        => True,
   Convention    => C,
   External_Name => "wrap_create_singlethread_wq";

-- flash_led.adb
...
Wq := K.Create_Singlethread_Wq ("flash_led_work");

The reconstruction

Sometimes a macro called on the C side creates something in place that you end up needing on the Ada side. You can probably always bind to this resource, but I find it often impedes the code story. Take DECLARE_DELAYED_WORK(dw, delayed_work_cb), for example. From an outside point of view, it implicitly creates struct delayed_work dw in place.

/* https://elixir.bootlin.com/linux/v4.9.294/source/include/linux/workqueue.h */
#define DECLARE_DELAYED_WORK(n, f)					\
	struct delayed_work n = __DELAYED_WORK_INITIALIZER(n, f, 0)

Using this macro, the only way I found to get hold of dw from Ada without crashing (returning dw from a wrapper never worked) was to call DECLARE_DELAYED_WORK(n, f) globally in main.c and then bind to dw alone. Having to maintain this from C, making it magically appear in Ada, felt like “breadboard wiring” to me. In the code repository, you will find that we fully reconstructed this macro under the procedure of the same name, Declare_Delayed_Work.

The pointer shortcut

Most published Ada-to-C bindings implement full definition parity. This is ideal in most cases, but it also comes with complexity: it may generate many third-party files, sometimes buried deep, with out-of-sync definitions, etc. What can you do when complete bindings are missing or you just want to move lean and fast? If you are making a prototype, you want minimal dependencies, and the binding part is peripheral (e.g. you may only need a quick native window API). You get the point.

Depending on the context, you do not always need the full type definitions to get going. Anytime you are strictly dealing with a handle pointer (not owning the memory), you can take a shortcut. Let’s bind to gpio_get_value to illustrate. Again, I follow and lay out all the C signatures found in the kernel sources leading to concrete functions where we can bind.

/* https://elixir.bootlin.com/linux/v4.9.294/source(-) */
/* (+)include/linux/gpio.h */
static inline int gpio_get_value(unsigned int gpio)
{
	return __gpio_get_value(gpio);
}

/* (+)include/asm-generic/gpio.h */
static inline int __gpio_get_value(unsigned gpio)
{
	return gpiod_get_raw_value(gpio_to_desc(gpio));
}
/* (+)include/linux/gpio/consumer.h */
struct gpio_desc *gpio_to_desc(unsigned gpio);            /* bindable */

int gpiod_get_raw_value(const struct gpio_desc *desc);    /* bindable */

/* (+)drivers/gpio/gpiolib.h */
struct gpio_desc {
	struct gpio_device	*gdev;
	unsigned long		flags;
...
	const char		*name;
};

Inspecting the C definitions, we find that gpiod_get_raw_value and gpio_to_desc are the functions available for binding. We note that gpio_to_desc uses a transient pointer of type gpio_desc *. Because we never touch or own a full gpio_desc instance, we can happily skip defining it in full (along with any dependent types, e.g. gpio_device).

By declaring type Gpio_Desc_Acc is new System.Address; we create an equivalent to gpio_desc *. After all, a C pointer is a named system address. We now have everything we need to build our Ada version of gpio_get_value.

-- kernel.ads
package Ic renames Interfaces.C;

function Gpio_Get_Value (Gpio : Ic.Unsigned) return Ic.Int; -- only this is visible for clients of kernel

-- kernel.adb
type Gpio_Desc_Acc is new System.Address; -- shortcut

function Gpio_To_Desc_C (Gpio : Ic.Unsigned) return Gpio_Desc_Acc with
   Import        => True,
   Convention    => C,
   External_Name => "gpio_to_desc";
 
function Gpiod_Get_Raw_Value_C (Desc : Gpio_Desc_Acc) return Ic.Int with
   Import        => True,
   Convention    => C,
   External_Name => "gpiod_get_raw_value";

function Gpio_Get_Value (Gpio : Ic.Unsigned) return Ic.Int is
   Desc : Gpio_Desc_Acc := Gpio_To_Desc_C (Gpio);
begin
   return Gpiod_Get_Raw_Value_C (Desc);
end;

The Raw bindings, “100% Ada”

In most production contexts we cannot recommend reconstructing unbindable kernel API calls in Ada. Wrapping the C macro or static inline is definitely easier, safer, more portable, and more maintainable. The following goes full-blown Ada for the sake of illustrating some interesting nuts and bolts and to show that it is always possible.

Flags, first take

Given the willpower, you can always reconstruct the targeted macro or static inline in Ada. Let's come back to create_singlethread_workqueue. If you take the time to expand its macro using GCC, this is what you get.

$ gcc -E [~ 80_switches_for_valid_ko] printk_wq.c 
...
wq = __alloc_workqueue_key(("%s"),
                          (WQ_UNBOUND |
                           __WQ_ORDERED |
                           __WQ_ORDERED_EXPLICIT |
                          (__WQ_LEGACY | WQ_MEM_RECLAIM)),
                          (1),
                          ((void *)0),
                          ((void *)0),
                          "my_wq");

All arguments are straightforward to map except the OR‘ed flags. Let’s search the kernel sources for those flags.

/* https://elixir.bootlin.com/linux/v4.9.294/source/include/linux/workqueue.h */
enum {
   WQ_UNBOUND             = 1 << 1,
   ...
   WQ_POWER_EFFICIENT     = 1 << 7,

   __WQ_DRAINING          = 1 << 16,
   ...
   __WQ_ORDERED_EXPLICIT  = 1 << 19,

   WQ_MAX_ACTIVE          = 512,     
   WQ_MAX_UNBOUND_PER_CPU = 4,      
   WQ_DFL_ACTIVE          = WQ_MAX_ACTIVE / 2,
};

Here are our design decisions for reconstruction

  • WQ_MAX_ACTIVE, WQ_MAX_UNBOUND_PER_CPU, WQ_DFL_ACTIVE are constants, not flags, so we keep them out.
  • The enum is anonymous, let’s give it a proper named type.
  • The __WQ prefix is probably a convention, but at the same time usage is mixed, e.g. WQ_UNBOUND | __WQ_ORDERED, so let's flatten all this.

Because we do not use these flags elsewhere in our code base, the occasion is perfect to show that in Ada we can keep all this modeling local to our unique function using it.

-- kernel.ads
package Ic renames Interfaces.C;

type Wq_Struct_Access is new System.Address;      -- shortcut
type Lock_Class_Key_Access is new System.Address; -- shortcut
Null_Lock : Lock_Class_Key_Access := 
Lock_Class_Key_Access (System.Null_Address); -- typed ((void *)0) equiv.

-- kernel.adb
type Bool is (NO, YES) with Size => 1;       -- enum holding on 1 bit
for Bool use (NO => 0, YES => 1);            -- "represented" by 0, 1 too

function Alloc_Workqueue_Key_C ...
   External_Name => "__alloc_workqueue_key";      -- thin binding

function Create_Singlethread_Wq (Name : String) return Wq_Struct_Access is
   type Workqueue_Flags is record
      ...
      WQ_POWER_EFFICIENT  : Bool;
      WQ_DRAINING         : Bool;
      ...
   end record with Size => Ic.Unsigned'Size;
   for Workqueue_Flags use record
      ...
      WQ_POWER_EFFICIENT  at 0 range  7 ..  7;
      WQ_DRAINING         at 0 range 16 .. 16;
      ...
   end record;
   Flags : Workqueue_Flags := (WQ_UNBOUND          => YES,
                               WQ_ORDERED          => YES,
                               WQ_ORDERED_EXPLICIT => YES,
                               WQ_LEGACY           => YES,
                               WQ_MEM_RECLAIM      => YES,
                               Others              => NO);
   Wq_Flags : Ic.Unsigned with Address => Flags'Address;
begin
   return Alloc_Workqueue_Key_C ("%s", Wq_Flags, 1, Null_Lock, "", Name);
end;
  • In C, each flag is implicitly encoded as an integer literal shifted left by some amount. Because the __alloc_workqueue_key signature expects flags encoded as an unsigned int, it is reasonable to use Ic.Unsigned'Size to hold a Workqueue_Flags.
  • We build the representation of the Workqueue_Flags type much like we modeled registers in Part 2. Compared to the C version we now have NO => 0, YES => 1 semantics and no need for bitwise operations.
  • Remember, in Ada we roll with strong user-defined types for the greater good. Therefore something like Workqueue_Flags does not match the expected Flags : Ic.Unsigned parameter of our __alloc_workqueue_key thin binding. What should we do? You create a variable Wq_Flags : Ic.Unsigned and overlay it at the address of Flags : Workqueue_Flags, which you can now pass to __alloc_workqueue_key.
Wq_Flags : Ic.Unsigned with Address => Flags'Address; -- voila!

Ioremap and iowrite32

The core work of the raw_io version happens in Set_Gpio. Using Ioremap, we retrieve the kernel mapped IO memory location for the GPIO_OUT register physical address. We then write the content of our Gpio_Control to this IO memory location through Io_Write_32.

-- kernel.ads
type Iomem_Access is new System.Address;

-- led.adb
package K renames Kernel;
package C renames Controllers;

procedure Set_Gpio (Pin : C.Pin; S : Led.State) is

   function Bit (S : Led.State) return C.Bit renames Led.State'Enum_Rep;

   Base_Addr : K.Iomem_Access;
   Control   : C.Gpio_Control := (Bits  => (others => 0), 
                                  Locks => (others => 0));
   Control_C : K.U32 with Address => Control'Address;
begin
   ...
   Control.Bits (Pin.Reg_Bit) := Bit (S); -- set the GPIO flags
   ...
   Base_Addr := Ioremap (C.Get_Register_Phys_Address (Pin.Port, C.GPIO_OUT),
                         Control_C'Size); -- get kernel mapped register addr.
   K.Io_Write_32 (Control_C, Base_Addr);  -- write our GPIO flags to this addr.
   ...
end;

Let’s take the hard paths of full reconstruction to illustrate interesting stuff. We first implement ioremap. On the C side we find

/* https://elixir.bootlin.com/linux/v4.9.294/source(-) */
/* (+)arch/arm64/include/asm/io.h */
#define ioremap(addr, size) \
   __ioremap((addr), (size), __pgprot(PROT_DEVICE_nGnRE))

extern void __iomem *__ioremap(phys_addr_t phys_addr, size_t size, pgprot_t prot);                       

Flags, second take

Here we are both lucky and unlucky. __ioremap is low-hanging fruit while __pgprot(PROT_DEVICE_nGnRE) turns out to be a rabbit hole. I skip the intermediate expansions and report only the final result.

$ gcc -E [~ 80_switches_for_valid_ko] test_using_ioremap.c
…
void* membase = __ioremap(  
   (phys_addr + offset),
   (4),
   ((pgprot_t) {
      (((((((pteval_t)(3)) << 0) |
      (((pteval_t)(1)) << 10) |
      (((pteval_t)(3)) << 8)) |
      (arm64_kernel_unmapped_at_el0() ? (((pteval_t)(1)) << 11) : 0)) |
      (((pteval_t)(1)) << 53) |
      (((pteval_t)(1)) << 54) |
      (((pteval_t)(1)) << 55) |
      ((((pteval_t)(1)) << 51)) |
      (((pteval_t)((1))) << 2)))
   }))

Searching for definitions in the kernel sources: (meaningful sampling only)

/* https://elixir.bootlin.com/linux/v4.9.294/source(-) */
/* (+)arch/arm64/include/asm/pgtable-hwdef.h */
#define PTE_TYPE_MASK       (_AT(pteval_t, 3) << 0)
...
#define PTE_NG		    (_AT(pteval_t, 1) << 11) 
...
#define PTE_ATTRINDX(t)     (_AT(pteval_t, (t)) << 2)    

/* (+)arch/arm64/include/asm/mmu.h */
static inline bool arm64_kernel_unmapped_at_el0(void)   
{
   return IS_ENABLED(CONFIG_UNMAP_KERNEL_AT_EL0) &&
   cpus_have_const_cap(ARM64_UNMAP_KERNEL_AT_EL0);
}

/* (+)arch/arm64/include/asm/pgtable-prot.h */
#define PTE_DIRTY           (_AT(pteval_t, 1) << 55)    

/* (+)arch/arm64/include/asm/memory.h */
#define MT_DEVICE_nGnRE     1                           

The macro pattern _AT(pteval_t, x) can be cleared up right away. IIUC, it exists to support being called both from assembly and from C. When only the C case matters, as it does for us, it boils down to x, e.g. (((pteval_t)(1)) << 10) becomes 1 << 10.

arm64_kernel_unmapped_at_el0 is partly kernel-configuration dependent and defaults to 'yes', so let's simplify our job and always include PTE_NG, the true branch of the conditional, (((pteval_t)(1)) << 11), in all cases.

(((pteval_t)((1))) << 2))) turns out to be PTE_ATTRINDX(t) with MT_DEVICE_nGnRE as input. Inspecting the kernel sources, there are four other values intended as input to PTE_ATTRINDX(t). PTE_ATTRINDX behaves like a function, so let's implement it as such.

type Pgprot_T is mod 2**64; -- type will hold on 64 bits 

type Memory_T is range 0 .. 5;
MT_DEVICE_NGnRnE : constant Memory_T := 0;
MT_DEVICE_NGnRE  : constant Memory_T := 1;
...
MT_NORMAL_WT     : constant Memory_T := 5;

function PTE_ATTRINDX (Mt : Memory_T) return Pgprot_T is
   (Pgprot_T(Mt * 2#1#e+2)); -- base # based_integer # exponent

Here I want to show another way to replicate C behavior, this time using bitwise operations. Something like the PTE_TYPE_MASK value, ((pteval_t)(3)) << 0, cannot be approached like we did before: 3 takes two bits and is somewhat of a magic number. What we can do is improve on the representation. We are building bit masks, so why not express them using binary numbers directly? It even makes sense graphically.

PTE_VALID      : Pgprot_T := 2#1#e+0;
...
PTE_TYPE_MASK  : Pgprot_T := 2#1#e+0 + 2#1#e+1; -- our famous 3
...
PTE_HYP_XN     : Pgprot_T := 2#1#e+54;

-- kernel.ads
type Phys_Addr_T is new System.Address;
type Iomem_Access is new System.Address;

-- kernel.adb
function Ioremap (Phys_Addr : Phys_Addr_T; 
                  Size      : Ic.Size_T) return Iomem_Access is
...         
   Pgprot : Pgprot_T := (PTE_TYPE_MASK or
                         PTE_AF        or
                         PTE_SHARED    or
                         PTE_NG        or
                         PTE_PXN       or
                         PTE_UXN       or
                         PTE_DIRTY     or
                         PTE_DBM       or
                         PTE_ATTRINDX (MT_DEVICE_NGnRE));
begin
   return Ioremap_C (Phys_Addr, Size, Pgprot);
end;

So what is interesting here?

  • Ada is flexible. The original Pgprot_T values arrangement did not allow record mapping like we previously did for type Workqueue_Flags. We adapted by replicating the C implementation, OR‘ing all values to create a final mask.
  • Everything has been tidied up by strong typing. We are now stuck with disciplined stuff.
  • Representation is explicit, expressed in the intended base.
  • Once again this typing machinery lives at the most restrictive scope, inside the Ioremap function. Because Ada "scoping" has few special rules, refactoring up/out of scopes usually boils down to a simple block-swapping game.

Emitting assembly

Now let's take a look at ioread32 and iowrite32. It turns out those are, again, a cascade of static inlines and macros ending up directly emitting GCC assembly directives (we detail only iowrite32).

/* https://elixir.bootlin.com/linux/v4.9.294/source(-) */
/* (+)include/asm-generic/io.h */
static inline void iowrite32(u32 value, volatile void __iomem *addr)
{
   writel(value, addr);
}
/* (+)include/asm/io.h */
#define writel(v,c)     ({ __iowmb(); writel_relaxed((v),(c)); })
#define __iowmb()       wmb()    

/* (+)include/asm/barrier.h */
#define wmb()           dsb(st) 
#define dsb(opt)        asm volatile("dsb " #opt : : : "memory")

/* (+)arch/arm64/include/asm/io.h */
#define writel_relaxed(v,c) \
   ((void)__raw_writel((__force u32)cpu_to_le32(v),(c)))
   
static inline void __raw_writel(u32 val, volatile void __iomem *addr)   
{
   asm volatile("str %w0, [%1]" : : "rZ" (val), "r" (addr));
}

In Ada it becomes

with System.Machine_Code;
...
procedure Io_Write_32 (Val : U32; Addr : Iomem_Access) is
   use System.Machine_Code;
begin
   Asm (Template => "dsb st",
        Clobber  => "memory",
        Volatile => True);

   Asm (Template => "str %w0, [%1]",
        Inputs   => (U32'Asm_Input ("rZ", Val), 
                     Iomem_Access'Asm_Input ("r", Addr)),
        Volatile => True);
end;

This Io_Write_32 implementation is not portable as we rebuilt the macro following the expansion tailored for arm64. A C wrapper would be less trouble while ensuring portability. Nevertheless, we felt this experiment was a good opportunity to show assembly directives in Ada.

That’s it!

I hope you appreciated this moderately dense overview of Ada in the context of Linux kernel module development. I think we can agree that Ada is a really disciplined and powerful contender when it comes to system, pedal-to-the-metal programming. Thank you for your time and attention. Do not hesitate to reach out and, happy Ada coding!

I want to thank Quentin Ochem, Nicolas Setton, Fabien Chouteau, Jerome Lambourg, Michael Frank, Derek Schacht, Arnaud Charlet, Pat Bernardi, Leo Germond, and Artium Nihamkin for their different insights and feedback to nail this experiment.


Olivier Henley

The author, Olivier Henley, is a UX Engineer at AdaCore. His role is exploring new markets through technical stories. Prior to joining AdaCore, Olivier was a consultant software engineer for Autodesk. Prior to that, Olivier worked on AAA game titles such as For Honor and Rainbow Six Siege in addition to many R&D gaming endeavors at Ubisoft Montreal. Olivier graduated from the Electrical Engineering program in Polytechnique Montreal. He is a co-author of patent US8884949B1, describing the invention of a novel temporal filter implicating NI technology. An Ada advocate, Olivier actively curates GitHub’s Awesome-Ada list.

The post Hacking the Linux Kernel in Ada – Part 3 appeared first on Linux.com.

]]>
Hacking the Linux Kernel in Ada – Part 2 https://www.linux.com/audience/hacking-the-linux-kernel-in-ada-part-2/ Thu, 07 Apr 2022 21:17:00 +0000 https://www.linux.com/?p=584061 For this three part series, we implemented a ‘pedal to the metal’ GPIO driven, flashing of a LED, in the context of a Linux kernel module for the NVIDIA Jetson Nano development board (kernel-based v4.9.294, arm64) in my favorite programming language … Ada! Part 1. Review of a kernel module, build strategy, and Ada integration. […]

The post Hacking the Linux Kernel in Ada – Part 2 appeared first on Linux.com.

]]>
For this three-part series, we implemented a 'pedal to the metal', GPIO-driven flashing of an LED, in the context of a Linux kernel module for the NVIDIA Jetson Nano development board (kernel v4.9.294, arm64) in my favorite programming language … Ada!

You can find the whole project published at https://github.com/ohenley/adacore_jetson. It is known to build and run properly. All instructions to be up and running in 5 minutes are included in the accompanying front-facing README.md. Do not hesitate to file a GitHub issue if you find any problem.

Disclaimer: This text is meant to appeal to both Ada and non-Ada coders. Therefore I try to strike a balance between code story simplicity, didactic tractability, and feature density. As I said to a colleague, this is the text I would have liked to come across before starting this experiment.

Pascal on steroids, you said?

led.ads (specification file, Ada equivalent to C .h header file) is where we model a simple interface for our LED.

with Controllers;
package Led is -- this bit of Ada code provides an interface to our LED

  package C renames Controllers;

  type State is (Off, On);
  type Led_Type (Size : Natural) is tagged private;

  subtype Tag is String;

  procedure Init       (L : out Led_Type; P : C.Pin; T : Tag; S : State);
  procedure Flip_State (L : in out Led_Type);
  procedure Final      (L : Led_Type);

private

  for State use (Off => 0, On => 1);

  function "not" (S : State) return State is
      (if S = On then Off else On);
  type Led_Type (Size : Natural) is tagged record
      P     : C.Pin;
      T     : Tag (1 .. Size);
      S     : State;
  end record;

end Led;

For those new to Ada, many interesting things happen for a language operating at the metal.

  • First, types are user-defined and strong. Therefore compile-time analysis is super-rich and checking extremely strict. Many bugs do not survive compilation. If you want to push the envelope, move to the SPARK Ada subset; you can then start to prove your code for the absence of runtime errors. It's that serious.
  • We with the Controllers package. Ada with is a stateless semantic inclusion at the language level, not just preprocessor text inclusion like #include. E.g. no more redefinition contexts, accompanying guard boilerplate, and whatnot.
  • Led is packaged. Nothing inside Led can clash outside. It can then be with’ed, and use’d at any scope. Ada scoping, namespacing, signature, etc. are powerful and sound all across the board. Explaining everything does not fit here.
  • Here renames is used as an idiom to preserve absolute namespacing while keeping the code story succinct. In huge codebases, tractability remains clear, which is very welcome.
  • Ada enum State has full-image and range representation. We use a numeric representation clause, which will serve later.
  • A tagged record lets you inherit a type (like in OOP) and use the “dot” notation.
  • We subtype a Tag as a String for semantic clarity.
  • out means the procedure must initialize the parameter before it returns; in out means the passed-in, already initialized parameter may be read and modified before returning.
  • We constrain the record (loosely a C struct equivalent) by specifying our Tag Size through the discriminant.
  • We override the “not” operator for the State type as a function expression.
  • We have public/private information visibility that lets us structure our code and communicate it to others. A neat example: because a package is at the language level, you remove any type in the public part, add data in the body file, and you end up with a Singleton. That easy.

The driver translation

The top-level code story resides in flash_led.adb. As soon as the module is loaded by the kernel, Ada_Init_Module executes, called from our main.c entry point. It first imports the elaboration procedure flash_ledinit generated by GNATbind, runs it, Inits our LED object, and then sets up/registers the delayed work queue.

with Kernel;
with Controllers;
with Interfaces.C; use Interfaces.C;
...
   package K renames Kernel;
   package C renames Controllers;

   Wq             : K.Workqueue_Struct_Access := K.Null_Wq;
   Delayed_Work   : aliased K.Delayed_Work; -- aliased: a pointer may designate it

   Pin            : C.Pin := C.Jetson_Nano_Header_Pins (18);
   Led_Tag        : Led.Tag := "my_led";
   My_Led         : Led_Type (Led_Tag'Length); -- 'Length (characters), not 'Size (bits)
   Half_Period_Ms : Unsigned := 500;
...
procedure Ada_Init_Module is
   procedure Ada_Linux_Init with
      Import        => True,
      Convention    => Ada,
      External_Name => "flash_ledinit";
begin
   Ada_Linux_Init;
   My_Led.Init (P => Pin, T => Led_Tag, S => Off);
   ...

   if Wq = K.Null_Wq then -- Ada equal
      Wq := K.Create_Singlethread_Wq ("flash_led_wq");
   end if;

   if Wq /= K.Null_Wq then -- Ada not equal
      K.Queue_Delayed_Work(Wq, 
                           Delayed_Work'Access, -- an Ada pointer
                           K.Msecs_To_Jiffies (Half_Period_Ms));
   end if;
end;

In the callback, instead of printing to the kernel message buffer, we call the Flip_State implementation of our LED object and re-register to the delayed work queue. It now flashes.

procedure Work_Callback (Work : K.Work_Struct_Access) is
begin
   My_Led.Flip_State;
   K.Queue_Delayed_Work (Wq,
                         Delayed_Work'Access, -- An Ada pointer
                         K.Msecs_To_Jiffies (Half_Period_Ms));
end;

Housekeeping

If you search the web for images of “NVIDIA Jetson Development board GPIO header pinout” you will find such diagram.

Right away, you figure there are five data fields describing a single pinout:

  • Board physical pin number (#).
  • Default function (name).
  • Alternate function (name).
  • Linux GPIO (#).
  • Tegra SoC GPIO (name.#).

Looking at this diagram we find hints of the different mappings happening at the Tegra SoC, Linux, and physical pinout levels. Each "interface" has its own addressing scheme. The Tegra SoC has logical naming and offers default and alternate functions for a given GPIO line; Linux maintains its own GPIO numbering of the lines, as does the physical layout of the board.

From where I stand, I want to connect a LED circuit to a board pin and control it without fuss, using any addressing scheme available. For this we created an array of variant record instances, modeling the pin characteristics for the whole header pinout. Nothing cryptic or ambiguous, just precise and clear structured data.

type Jetson_Nano_Header_Pin is range 1 .. 40; -- Nano Physical Expansion Pinout
type Jetson_Nano_Pin_Data_Array is array (Jetson_Nano_Header_Pin) of Pin_Data;

Jetson_Nano_Header_Pins : constant Jetson_Nano_Pin_Data_Array :=
      (1   => (Default => VDC3_3, Alternate => NIL),
       2   => (Default => VDC5_0, Alternate => NIL),
       3   => (Default       => I2C1_SDA, 
               Alternate     => GPIO, 
               Linux_Nbr     => 75, 
               Port          => PJ, 
               Reg_Bit       => 3, 
               Pinmux_Offset => 16#C8#),
       4   => (Default => VDC5_0, Alternate => NIL),
...
      40   => (Default       => GPIO,      
               Alternate     => I2S_DOUT,  
               Linux_Nbr     => 78,  
               Port          => PJ,  
               Reg_Bit       => 6, 
               Pinmux_Offset => 16#14C#));

Because everything in this Jetson_Nano_Header_Pins data assembly is unique and unrelated, it cannot be generalized further; it has to live somewhere, plainly. Let's check how we model a single pin as Pin_Data.

type Function_Type is (GPIO, VDC3_3, VDC5_0, GND, NIL, ..., I2S_DOUT);
type Gpio_Linux_Nbr is range 0 .. 255;        -- # cat /sys/kernel/debug/gpio
type Gpio_Tegra_Port is (PA, PB, ..., PEE, NIL);
type Gpio_Tegra_Register_Bit is range 0 .. 7;

type Pin_Data (Default : Function_Type := NIL) is record
   Alternate: Function_Type := NIL;
   case Default is
       when VDC3_3 .. GND =>
           Null; -- nothing to add
       when others =>
           Linux_Nbr     : Gpio_Linux_Nbr;
           Port          : Gpio_Tegra_Port;
           Reg_Bit       : Gpio_Tegra_Register_Bit;
           Pinmux_Offset : Storage_Offset;
   end case;
end record;

Pin_Data is a variant record, meaning that, based on a Function_Type, it will contain "variable" data. Notice how we range over the Function_Type values to describe the switch cases. This gives us the capability to model every pin configuration.

When you consult the Technical Reference Manual (TRM) of the Nano board, you find that the GPIO register controls are laid out following an arithmetic pattern. Using some hardware entry point constants and the specifics of a pin's data held in Jetson_Nano_Header_Pins, one can resolve any register needed.

Gpio_Banks : constant Banks_Array := 
   (To_Address (16#6000_D000#), 
    ...         
    To_Address (16#6000_D700#));

type Register is (GPIO_CNF, GPIO_OE, GPIO_OUT, ..., GPIO_INT_CLR);
type Registers_Offsets_Array is array (Register) of Storage_Offset;
Registers_Offsets : constant Registers_Offsets_Array := 
   (GPIO_CNF     => 16#00#, 
    ... , 
    GPIO_INT_CLR => 16#70#);

function Get_Bank_Phys_Address (Port : Gpio_Tegra_Port) return System.Address is
   (Gpio_Banks (Gpio_Tegra_Port'Pos (Port) / 4 + 1));

function Get_Register_Phys_Address (Port : Gpio_Tegra_Port; Reg : Register) return System.Address is
   (Get_Bank_Phys_Address (Port) + 
    Registers_Offsets (Reg) + 
   (Gpio_Tegra_Port'Pos (Port) mod 4) * 4);

In this experiment, it is mainly used to request the kernel memory mapping of a given GPIO register.

-- led.adb (raw io version)
Base_Addr := Ioremap (C.Get_Register_Phys_Address (Pin.Port, C.GPIO_CNF), Control_C'Size);

Form follows function

Now, let’s model a common Pinmux register found in the TRM.

package K renames Kernel;
...
type Bit is mod 2**1;      -- will hold in 1 bit
type Two_Bits is mod 2**2; -- will hold in 2 bits

type Pinmux_Control is record
   Pm         : Two_Bits;
   Pupd       : Two_Bits;
   Tristate   : Bit;
   Park       : Bit;
   E_Input    : Bit;
   Lock       : Bit;
   E_Hsm      : Bit;
   E_Schmt    : Bit;
   Drive_Type : Two_Bits;
end record with Size => K.U32'Size;

for Pinmux_Control use record
   Pm         at 0 range  0 ..  1;  -- At byte 0 range bit 0 to bit 1
   Pupd       at 0 range  2 ..  3;
   Tristate   at 0 range  4 ..  4;
   Park       at 0 range  5 ..  5;
   E_Input    at 0 range  6 ..  6;
   Lock       at 0 range  7 ..  7;
   E_Hsm      at 0 range  9 ..  9;
   E_Schmt    at 0 range 12 .. 12;
   Drive_Type at 0 range 13 .. 14;
end record;

I think the code speaks for itself.

  • We specify types Bit and Two_Bits to cover exactly the binary width conveyed by their names.
  • We compose the different bitfields over a record size of 32 bits.
  • We explicitly layout the bitfields using byte addressing and bit range.

You can now directly address bitfields by name and not worry about any bitwise arithmetic mishap. Ok, so what about logically addressing bitfields? You pack them inside arrays. We have an example in the modeling of the GPIO register.

type Gpio_Tegra_Register_Bit is range 0 .. 7;
...
type Bit is mod 2**1;  -- will hold in 1 bit
...
type Gpio_Bit_Array is array (Gpio_Tegra_Register_Bit) of Bit with Pack;

type Gpio_Control is record
   Bits : Gpio_Bit_Array;
   Locks : Gpio_Bit_Array;
end record with Size => K.U32'Size;

for Gpio_Control use record
   Bits  at 0 range 0 .. 7;
   Locks at 1 range 0 .. 7; -- At byte 1 range bit 0 to bit 7
end record;

Now we can do.

procedure Set_Gpio (Pin : C.Pin; S : Led.State) is
   function Bit (S: Led.State) return C.Bit renames Led.State'Enum_Rep;
   -- remember we gave the Led.State Enum a numeric Representation clause.

   Control : C.Gpio_Control := (Bits  => (others => 0),  -- init all to 0
                                Locks => (others => 0));
   ...
begin
   ...
   Control.Bits (Pin.Reg_Bit) := Bit (S); -- Kewl!
   ...
end;

Verbosity

I had to give you a feel for what there is to gain by modeling in Ada. To me, it is about semantic clarity, modeling affinity, and structural integrity. Ada offers flexibility through a structured approach to low-level details. Once you set foot in Ada, domain modeling becomes easy because, as you saw, you are given provisions to incisively specify things using strong user-defined types. The stringent compiler constrains your architecture to fall in place on every iteration. From experience, it is truly amazing how the GNAT toolchain helps you iterate quickly while keeping technical debt in check.

Ada is not too complex, nor too verbose; those are mundane concerns.

Ada demands that you demonstrate your modeling makes sense, over thousands of lines of code; it is code production under continuous streamlining.

What’s next?

In the last entry, we will finally meet the kernel. If I kept your interest and you want to close the loop, move here. Cheers!



Olivier Henley


The post Hacking the Linux Kernel in Ada – Part 2 appeared first on Linux.com.

]]>
Hacking the Linux Kernel in Ada – Part 1 https://www.linux.com/audience/developers/hacking-the-linux-kernel-in-ada-part-1/ Thu, 07 Apr 2022 20:18:39 +0000 https://www.linux.com/?p=584012 For this three part series, we implemented a ‘pedal to the metal’ GPIO driven, flashing of a LED, in the context of a Linux kernel module for the NVIDIA Jetson Nano development board (kernel-based v4.9.294, arm64) in my favorite programming language … Ada! Part 1. Review of a kernel module, build strategy, and Ada integration. […]

The post Hacking the Linux Kernel in Ada – Part 1 appeared first on Linux.com.

]]>
For this three-part series, we implemented a 'pedal to the metal', GPIO-driven flashing of an LED, in the context of a Linux kernel module for the NVIDIA Jetson Nano development board (kernel v4.9.294, arm64) in my favorite programming language … Ada!

You can find the whole project published at https://github.com/ohenley/adacore_jetson. It is known to build and run properly. All instructions to be up and running in 5 minutes are included in the accompanying front-facing README.md. Do not hesitate to file a GitHub issue if you find any problem.

Disclaimer: This text is meant to appeal to both Ada and non-Ada coders. Therefore I try to strike a balance between code story simplicity, didactic tractability, and feature density. As I said to a colleague, this is the text I would have liked to come across before starting this experiment.

What’s in the Ada name?

Delightfully said by Rod Chapman in his great SPARKNaCl presentation https://blog.adacore.com/sparknacl-two-years-of-optimizing-crypto-code-in-spark-and-counting, the Ada programming language is "Pascal on steroids". Though, I would argue the drug is healthy. Thriving on strong typing and packaging, Ada has excellent modeling scalability yet remains on par with C performance.

It compiles to native object code using a GCC front-end or an LLVM back-end, respectively called GNAT and GNAT-LLVM. This leads us to an important reminder: Ada, at least through GNAT, has an application binary interface (ABI) compatible with C on Linux.

Linux driver experiment in Ada, but why?

  • First, clearing the technical plumbing in Ada facilitates moving to SPARK/Ada, where significant value can be added by improving driver implementations using contracts, pointer ownership, advanced static analysis, and associated proof technologies.
  • Share Ada’s bare-metal capability, expressivity, and productivity as a system/embedded programming language.
  • Demonstrate that Ada sits well at the foundation of heterogeneous real-world technology stacks.

Note that as Ada code is not accepted in the upstream kernel sources, and the Linux team has made it clear it is not interested in providing a stable kernel API, writing Linux drivers in Ada/SPARK means you will have to adapt those drivers to every kernel version you are interested in, a task outside the scope of this document. For single kernel versions, proofs-of-concept, or organizations with enough firepower to maintain and curate their own drivers, this is not an issue though.

Kernel module headfirst

Let’s discuss our overall driver structure from an orthodox C perspective. This will let us clear up some important know-how and gotchas. The following C kernel module (driver) implements a one-second delayed work queue that repeatedly re-registers a callback writing “flip_led_state” to the kernel message buffer. Please note the usage of the preprocessor macros.

/* printk_wq.c */

#include <linux/module.h>
#include <linux/workqueue.h>
#include <linux/timer.h>

void delayed_work_cb(struct work_struct* work);
struct workqueue_struct* wq = 0;
DECLARE_DELAYED_WORK(dw, delayed_work_cb);   /* heavy lifting 1. */

void delayed_work_cb(struct work_struct* work)
{
   printk("flip_led_state\n");
   queue_delayed_work(wq, &dw, msecs_to_jiffies(1000));
}

int init_module(void)
{
   if (!wq)
       wq = create_singlethread_workqueue("my_wq"); /* heavy lifting 2. */
   if (wq)
       queue_delayed_work(wq, &dw, msecs_to_jiffies(1000));
   return 0;
}

void cleanup_module(void)
{
   if (wq){
       cancel_delayed_work(&dw);
       flush_workqueue(wq);
       destroy_workqueue(wq);
   }
}

MODULE_LICENSE("GPL");
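
As a side note, here is what a minimal out-of-tree Kbuild makefile for this module might look like. This is a sketch rather than the article's build setup (the experiment's actual Kbuild integration is covered later); it assumes the headers for the running kernel are installed under /lib/modules/$(uname -r)/build.

```makefile
# Tell Kbuild to build printk_wq.c into the printk_wq.ko module
obj-m := printk_wq.o

all:
	$(MAKE) -C /lib/modules/$(shell uname -r)/build M=$(CURDIR) modules

clean:
	$(MAKE) -C /lib/modules/$(shell uname -r)/build M=$(CURDIR) clean
```

Running make, then sudo insmod printk_wq.ko, loads the module; dmesg should then show the repeated flip_led_state lines, and sudo rmmod printk_wq unloads it.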

When building a kernel module on Linux, the produced Executable and Linkable Format (ELF) object code file bears the *.ko extension. If we inspect the content of the working printk_wq.ko kernel module, we can sketch out the gist of binding to the kernel for module programming.

$ nm printk_wq.ko
...
                 U __alloc_workqueue_key
                 U cancel_delayed_work
00000000000000a0 T cleanup_module
0000000000000000 T delayed_work_cb
                 U delayed_work_timer_fn
                 U destroy_workqueue
0000000000000000 D dw
                 U flush_workqueue
0000000000000000 T init_module
                 U _mcount
0000000000000028 r __module_depends
                 U printk
                 U queue_delayed_work_on
0000000000000000 D __this_module
0000000000000000 r __UNIQUE_ID_license45
0000000000000031 r __UNIQUE_ID_vermagic44
0000000000000000 r ____versions

First, we recognize function/procedure names used in our source code, e.g. cancel_delayed_work, and find that they are undefined (U). It is important to realize that these are symbols from the kernel’s sources and that their object code will be resolved dynamically at driver/module load time. Correspondingly, all those undefined signatures can be found somewhere in the kernel source code headers of your target platform.

Second, some functions we explicitly called, e.g. create_singlethread_workqueue, are missing from the symbol table. This is because they are not, in fact, functions/procedures but convenience macros that expand either to concrete implementations named differently, potentially with a different signature altogether, or to static inline functions not visible outside. For example, laying out the explicit macro expansion of create_singlethread_workqueue from the Linux sources makes this clear (follow the listing order, not the preprocessor expansion order):

/* https://elixir.bootlin.com/linux/v4.9.294/source/include/linux/workqueue.h*/

#define create_singlethread_workqueue(name)				\
	alloc_ordered_workqueue("%s", __WQ_LEGACY | WQ_MEM_RECLAIM, name)
...
#define alloc_ordered_workqueue(fmt, flags, args...)			\
	alloc_workqueue(fmt, WQ_UNBOUND | __WQ_ORDERED |		\
			__WQ_ORDERED_EXPLICIT | (flags), 1, ##args)
...
#define alloc_workqueue(fmt, flags, max_active, args...)		\
	__alloc_workqueue_key((fmt), (flags), (max_active),		\
			      NULL, NULL, ##args)
...
extern struct workqueue_struct *
__alloc_workqueue_key(const char *fmt, unsigned int flags, int max_active,
	struct lock_class_key *key, const char *lock_name, ...) __printf(1, 6);

Now everything makes sense. __alloc_workqueue_key is marked ‘extern’ and we find its signature in the printk_wq.ko symbol table. Note that we moved from create_singlethread_workqueue taking a single parameter to __alloc_workqueue_key taking more than five arguments. The logical conclusion, as deduced while following the explicit expansion above, is that the extra arguments are all baked in at the preprocessing stage. ‘Baking’ parameters through macro chaining offers polymorphism opportunities to kernel developers: compiling for arm64 may expand the macros differently than for RISC-V, while both platforms retain a unified create_singlethread_workqueue(name) call for device driver developers, the clients of this ‘kernel API function’.

To get an equivalent Ada implementation of this driver, I see three choices when faced with a binding:

  • The signature you want to bind is extern and concrete; you bind directly by importing it.
  • You reconstruct, from only those undefined (U) “final” symbols, the entirety of the functionality provided by the “top” macro. This is useful when macros create things in place and you need to get hold of them on the Ada side.
  • You write a concrete C function wrapping the macro and then bind by importing this wrapper function.

I will present an example of each in subsequent parts.

Platform driver and device driver

A Linux kernel module’s code structure is fairly simple: you implement an init function and a deinit function. There are other requirements, such as supporting code reentry (e.g. an entry function may be called many times asynchronously) and not stalling (e.g. you do not run a game loop inside any kernel driver function). Optionally, if you are writing a platform (subsystem) driver, you need to register callbacks that specialize a kernel interface of your choice. There is more to it, but you can get a long way within just this structure.

If you were to replace the shipped GPIO platform driver on your target machine without breaking anything, your driver code would need to provide a concrete implementation of the methods exposed in the linux/gpio/driver.h API. Below is some of the Tegra GPIO platform driver implementation code. If you start from the end, subsys_initcall(tegra_gpio_init), you will find that registering the driver sets a probe callback, which in turn sets tegra_gpio_direction_output as the gpio_chip direction_output concrete code.

/* linux/gpio/driver.h */
struct gpio_chip {
	int	(*direction_output)(struct gpio_chip *chip, 
                                unsigned offset, int value);
}

/* drivers/gpio/gpio-tegra.c */
struct tegra_gpio_info {
	struct gpio_chip		gc;
};
static int tegra_gpio_direction_output(struct gpio_chip *chip, 
                                       unsigned offset, int value)
{
...
	return 0;
}
static int tegra_gpio_probe(struct platform_device *pdev)
{
	tgi->gc.direction_output = tegra_gpio_direction_output;
}
static struct platform_driver tegra_gpio_driver = {
	.probe		= tegra_gpio_probe,
};
static int __init tegra_gpio_init(void)
{
	return platform_driver_register(&tegra_gpio_driver);
}
subsys_initcall(tegra_gpio_init);

subsys_initcall is used only when building statically linked modules and serves to implement platform drivers. init_module can be used to initialize a built-in or loadable module, but subsys_initcall is guaranteed to execute before init_module. For this experiment we implemented a device driver making use of init_module.

To step into an Ada implementation, we had to concede by creating our driver entry point in C first:

  • The needed MODULE_LICENSE() expansion turned out to be hardly portable to Ada, as it expands to a complex annotation scheme.
  • Kbuild, the Linux kernel build system, uses this ‘main’ C file to produce the meta information it depends on before/while building the .ko object.

From there we declare the external ada_init_module and ada_cleanup_module functions, where we will pick up, fully in Ada, to implement the delayed work queue structure seen previously and all subsequent modeling of our flashing LED driver.

/* main.c */

#include <linux/module.h>

extern void ada_init_module (void);
extern void ada_cleanup_module (void);

int init_module(void)
{
   ada_init_module();
   return 0;
}

void cleanup_module(void)
{
   ada_cleanup_module();
}

MODULE_LICENSE("GPL");

The need for a restricted runtime

If you compile the following C code using your default Linux desktop compiler toolchain

STR='#include <stdio.h>\nint main() { printf("%d", 42); return 0; }' && echo -e "$STR" | gcc -o output.o -xc -

And inspect its symbol table

$ nm output.o 
...
0000000000400510 T __libc_csu_init
                 U __libc_start_main@GLIBC_2.2.5
00000000004004f6 T main
                 U printf@GLIBC_2.2.5
...

You will find references to libc that you did not explicitly ask for. Be warned: those undefined (U) symbols won’t be resolved at kernel module loading. Much of libc is implemented at the userspace level in ways incompatible with kernel operation, so it is forbidden altogether.

Using the system’s default GCC, with make calling Kbuild through its special syntax, Kbuild will automatically strip those libc dependencies for you to produce a valid kernel module (*.ko). But what happens when you link object code ‘compiled as usual’ from another rich and complex language like Ada into your kernel module? The object code will most certainly contain machinery from the language runtime: complex routines that end up tapping into libc, or other operations forbidden in the kernel context. This is where you need a constrained, reduced runtime for your language of choice.

What is cool about Ada, though, is that the GNAT infrastructure architects the runtime as a separate, swappable component. Using AdaCore codebases, you can build your own runtime, embarking just what you want/need in it to link against. GNAT Ada runs on countless bare-metal platforms, so the runtime granularity and dependency problems have, most of the time, already been handled for you. To initialize this runtime properly, you are given sensible control over where and when to run some elaboration code; more on that later when we cover the Ada side of things.

For this experiment, we built a light aarch64-linux native runtime compatible with running in kernel space while retaining convenient aspects of the language, e.g. the secondary stack. Using the https://github.com/AdaCore/bb-runtimes scripts, we added a new aarch64-linux target and built the runtime. Learning how to do it took a while; building it takes seconds. You can find and use this runtime in the experiment repository under rts-native-light/ when cross-compiling using GNAT Pro. If you are building using the platform GNAT FSF, the runtime is found under rts-native-zfp/.

Kbuild integration

Kbuild is somewhat flexible, so GNAT object code can be linked into the kernel driver without too much effort. As implied previously, make understands special syntax to leverage and activate Kbuild: e.g. to produce our driver called flash_led.ko, it starts from a transient flash_led.o that depends on obj/bundle.o to build. Our module makefile uses this special syntax:

obj-m := flash_led.o
flash_led-y := obj/bundle.o

You can ‘trick’ Kbuild/make by providing already existing .o files, as long as you also provide the *.o.cmd intermediary files Kbuild depends on. We leverage such substitution by coordinating GPRbuild (the GNAT build system), Kbuild/make, and touch using Python. There are two phases: generate and build.

Generate

$ python make.py generate config:flash_led_jetson_nano.json

1. Build, in the background, a bare-minimum, known-to-be-valid main_template.c kernel driver and extract the compilation switches used by make/Kbuild to successfully produce this main_template.ko guinea-pig module. There are around ~80 such GCC switches captured and used to generate a basic, valid *.ko for this kernel v4.9.294, arm64 platform. This ‘buried deep inside Kbuild’ knowledge extraction turned out to be key in stabilizing the production of valid kernel object code. Note that this trick should work well for any platform because it extracts the platform’s specifics.

2. Generate the GPRbuild project file, injecting those ~80 switches for the compilation of our project’s main.c along with all Ada source files, using the project configuration data found in the JSON file.

3. Generate the Makefile using configuration data compliant with Kbuild syntax (cross-compiler location, project name, etc., found in the JSON file).

You can inspect the different templates and their substitution markers, e.g. <replace_me_token>, by looking inside the template folder of the project repository.

Build

python make.py build config:flash_led_jetson_nano.json rts:true

1. Build the GNAT runtime library (RTS) libgnat.a by driving its runtime_build.gpr project file. (optional on subsequent passes)

2. Build our driver project standalone library libflash_led.a using the generated GPRbuild project file.

3. Link our custom RTS libgnat.a with our project libflash_led.a into a tidy bundle.o object.

4. Create missing *.o.cmd intermediary files to keep Kbuild happy. Remember we are swapping already built objects under its nose!

5. Finally, launch the makefile to cross-compile our flash_led.ko driver for the Jetson Nano aarch64 platform!

The Ada driver

For this experiment we did two implementations of Led.adb (a body file, the Ada equivalent of a C .c source file): one at src/linux_interface/led.adb, the other under src/raw_io/led.adb. You specify which driver implementation you want to build by setting the “module_flavor” value in flash_led_jetson_nano.json. make.py will inject the proper source paths into the project driver flash_led.gpr file during the generate phase.

The first implementation of the LED interface binds to the standard kernel API functions Gpio_Request, Gpio_Direction_Output, Gpio_Get_Value, and Gpio_Free exposed in include/linux/gpio.h. This is rather straightforward, as the binding is mostly one-to-one with the C functions. In this linux_interface version, as soon as you bind, you end up executing the C concrete implementation of the shipped GPIO driver.

Circumventing most of the Linux machinery, the second, raw_io version of the LED interface is more interesting, as we control the GPIO directly by writing to IO memory registers. Akin to bare-metal programming, directly driving GPIOs is a matter of configuring some IO registers mapped in physical memory. Remember that an OS serves as a hardware orchestrator and consequently acts as if it has implicit ownership of your hardware. Tapping directly into physical memory in a kernel context therefore often requires crossing some red tape.

Here Linux requires (strongly suggests?) that you write/read kernel-mapped memory instead of physical memory directly. First, you acquire the kernel-mapped physical address using the (in)famous ioremap call. Using the mapped address, we read and write our GPIO registers using ioread32 and iowrite32 respectively. This is the only Linux machinery involved in this raw_io version. As you can probably figure, this is closer to a peek at what one would code inside a driver responsible for providing the concrete implementations of the functions offered by something like include/linux/gpio.h. We will even end up writing assembly code from Ada to achieve pure rawness!

What‘s next?

I had to set the table for writing Linux kernel modules in Ada by first talking about C, object code, Kbuild, constrained runtimes, and the overall build strategy. The streamlined fun begins as we cross the Ada fence. If I piqued your curiosity and you are ready to dig into Ada, meet me here. Cheers!

I want to thank Quentin Ochem, Nicolas Setton, Fabien Chouteau, Jerome Lambourg, Michael Frank, Derek Schacht, Arnaud Charlet, Pat Bernardi, Leo Germond, and Artium Nihamkin for their different insights and feedback to nail this experiment.


Olivier Henley

The author, Olivier Henley, is a UX Engineer at AdaCore. His role is exploring new markets through technical stories. Prior to joining AdaCore, Olivier was a consultant software engineer for Autodesk. Prior to that, Olivier worked on AAA game titles such as For Honor and Rainbow Six Siege, in addition to many R&D gaming endeavors at Ubisoft Montreal. Olivier graduated from the Electrical Engineering program at Polytechnique Montreal. He is a co-author of patent US8884949B1, describing the invention of a novel temporal filter implicating NI technology. An Ada advocate, Olivier actively curates GitHub’s Awesome-Ada list.


The post Hacking the Linux Kernel in Ada – Part 1 appeared first on Linux.com.

]]>
Understanding Bluetooth Technology for Linux https://www.linux.com/news/understanding-bluetooth-technology-for-linux/ Thu, 13 Jan 2022 14:59:47 +0000 https://www.linux.com/?p=583815 This article was written by Martin Woolley of the Bluetooth SIG. Linux has been around in various forms for about 30 years, and the kernel is the basis of other operating systems such as Android and Chrome OS. Supercomputers use it at one end of the computing spectrum and in embedded devices at the other. […]

The post Understanding Bluetooth Technology for Linux appeared first on Linux.com.

]]>
This article was written by Martin Woolley of the Bluetooth SIG.

Linux has been around in various forms for about 30 years, and the kernel is the basis of other operating systems such as Android and Chrome OS. It is used by supercomputers at one end of the computing spectrum and in embedded devices at the other, and on laptops, desktop computers, and servers in between these extremes.

And it’s also used in single-board computers — this category includes popular devices like the Raspberry Pi.

Figure 1 – Raspberry Pi 4 running Linux

Therefore it’s fair to say that Linux has been widely adopted.

While microcontrollers and lean, mean software frameworks necessarily dominate small electronic products that are generally single-purpose devices with modest processing requirements, Linux meets the needs of another important subset: products with multiple features that need to be available concurrently. Some of these require significant processor power and RAM measured in gigabytes rather than the kilobytes more typically found in microcontrollers. IP security cameras, for example, are based on Linux. They can stream live video, respond to motion detection events, identify human faces in video streams in real time, record video to an SD card, transfer files over FTP, and host a web server for management and configuration purposes. That mix of concurrently available functionality requires both sufficiently powerful hardware and an operating system that supports multiple processes and threads, provides a capable file system, and has a wide selection of applications readily available for it. Linux is a perfect fit. And it’s open source and free.

Bluetooth Technology and Linux

Bluetooth® technology can be used on Linux. The controller part of the Bluetooth stack is typically a system on a chip that is either an integral part of the mainboard or implemented in a peripheral like a USB dongle. The host part of the Bluetooth stack runs as a system service, and the standard Linux Bluetooth host implementation is called BlueZ.

BlueZ supports both the Bluetooth LE Peripheral and Central roles using GAP and GATT, as well as Bluetooth mesh, provided the underlying controller supports the required Bluetooth features. Its multi-process architecture means that multiple Bluetooth applications can run simultaneously on a single device, which offers some exciting possibilities.

But for a developer, working with Bluetooth technology on Linux for the first time can be challenging. BlueZ defines a straightforward, logical API, but the way a developer must use it in applications is dissimilar to how a developer works with Bluetooth APIs on most other platforms. This is a consequence of the system’s architecture, which, whilst not unique, is typically very visible to the developer and usually needs to be well understood before those logical BlueZ APIs can be used.

The Architecture of a Linux System using BlueZ

BlueZ APIs are not called directly by applications. Instead, Linux applications that run as independent processes make inter-process communication (IPC) calls to BlueZ APIs via an IPC broker named D-Bus. D-Bus is a system service and a type of message-oriented middleware which provides IPC support for many Linux applications and services, not just BlueZ.

BlueZ runs as a system daemon, either bluetoothd to provide applications with support for GAP and GATT or bluetooth-meshd when the physical device is to be used to run applications that act as Bluetooth mesh nodes.

Figure 2 – Architecture

Using D-Bus, applications can send messages which cause methods implemented in remote services or applications to be called and the results returned in another message. Applications and system services can also communicate events that have happened in the system to other applications by emitting special messages known as signals.

Figure 3 – DBus messages and signals

Applications work with BlueZ by sending and receiving DBus messages and signals, so developers generally need some knowledge (or perhaps a lot of knowledge) of DBus programming.

You may have noticed that we are not making the most definite statements here. Why did we say that the developer usually needs a solid understanding of the architecture, rather than always? Why do they generally need some knowledge of D-Bus programming and only sometimes a lot? The answer lies in the very nature of Linux and the Linux ecosystem.

Developers of Android or iOS applications typically use one or two programming languages favored by the operating system (o/s) owner, in this case either Google or Apple. The APIs are designed and documented by the o/s owner, and there’s a wealth of supporting information to help developers achieve results. But the world of Linux is not like that. It’s very modular and open, which means there’s an enormous choice of programming languages that can be used, and for any given language there may be a choice of different APIs for the exact same purpose, provided by different supporting libraries from different originators.

The degree to which the APIs for different languages abstract the architecture, hiding details so that an application developer feels they’re working directly with BlueZ APIs rather than making remote method calls using D-Bus messages, varies. It’s not uncommon for the developer to have to deal directly with D-Bus from their code and to need a thorough understanding of D-Bus IPC.

Some BlueZ or D-Bus APIs are well documented, while others are not, adding to the learning curve developers need to ascend. And in some cases there’s no documentation at all, leaving the developer to figure things out by searching the web, scrutinizing library source code, and so on. This is fine if you like that kind of thing, and OK if you have the luxury of all the time in the world to finish your project. But for most people, life’s not like that.

The Bluetooth Technology for Linux Developers Study Guide

To help Linux developers quickly ascend the BlueZ learning curve, we’ve created an educational resource known as a study guide to add to our growing collection.

It’s modular and includes hands-on exercises so you can test your growing understanding of the theory by writing code and testing the results.

Figure 4 – Hands-on coding exercises included
Figure 5 – Testing

If you’re completely new to Bluetooth® Low Energy (LE), there’s a primer module that will explain the key concepts to get you started. Subsequent modules explain how Bluetooth technology works on Linux, DBus programming concepts and techniques, how to develop LE Central devices, and how to develop LE Peripheral devices, in both cases using BlueZ and Python. The appendix provides step-by-step instructions for configuring your Linux kernel and for building and installing BlueZ from the source.

After completing the work in this study guide, you should:

  • Be able to explain basic Bluetooth LE concepts and terminology such as GAP Central and GATT client
  • Be able to explain what BlueZ is and how applications use BlueZ in terms of architecture, services, and communication
  • Understand the fundamentals of developing applications that use DBus inter-process communication
  • Be able to implement key functionality, typically required by GAP Central/GATT client Bluetooth devices

Download the Bluetooth for Linux Developers Study Guide today.


]]>
Download the 2021 Linux Foundation Annual Report https://www.linux.com/news/download-the-2021-linux-foundation-annual-report/ Wed, 08 Dec 2021 23:42:44 +0000 https://www.linux.com/?p=583675 In 2021, The Linux Foundation continued to see organizations embrace open collaboration and open source principles, accelerating new innovations, approaches, and best practices. As a community, we made significant progress in the areas of cloud-native computing, 5G networking, software supply chain security, 3D gaming, and a host of new industry and social initiatives. Download and read […]

The post Download the 2021 Linux Foundation Annual Report appeared first on Linux.com.

]]>

In 2021, The Linux Foundation continued to see organizations embrace open collaboration and open source principles, accelerating new innovations, approaches, and best practices. As a community, we made significant progress in the areas of cloud-native computing, 5G networking, software supply chain security, 3D gaming, and a host of new industry and social initiatives.

Download and read the report today.


]]>
Linux as a Screensaver for Windows: The Gift of Open Source Games and SBOMs for the Holidays https://www.linux.com/news/linux-as-a-screensaver-for-windows-the-gift-of-open-source-games-and-sboms-for-the-holidays/ Tue, 07 Dec 2021 16:00:13 +0000 https://www.linux.com/?p=583645 Abstract: Construct and package a Linux® Live DVD to install using the standard Microsoft® Windows® install process and operate as a classic Windows screensaver.  Introduction Back in 2005, IBM wanted to promote Linux, so developerWorks was offering $1000 per article to IBMers who wrote articles for the Linux Zone. The 2005 article is no longer […]

The post Linux as a Screensaver for Windows: The Gift of Open Source Games and SBOMs for the Holidays appeared first on Linux.com.

]]>
Abstract: Construct and package a Linux® Live DVD to install using the standard Microsoft® Windows® install process and operate as a classic Windows screensaver. 

Introduction

  • Back in 2005, IBM wanted to promote Linux, so developerWorks was offering $1000 per article to IBMers who wrote articles for the Linux Zone. The 2005 article is no longer online from IBM but is available on ResearchGate https://www.researchgate.net/publication/272094609_Linux_screensaver_for_Windows for the interested reader.
  • This software still works and is still fun to use and to decorate your Windows desktop.
  • Since 2005, there have been improvements and changes. Debian is now used instead of the original KNOPPIX. Additionally, full mouse integration now works between Windows and the screensaver due to kernel contributions.
  • Future possibilities probably lie with the integration of hardware virtualization acceleration.
  • Like all software of significant size, many components need tracking. The modern standard for this is SPDX and SBOM; as this screensaver is built fully from public source code, it makes a cool demo for SPDX and SBOM, which anyone may use.
  • Though running Linux as a screensaver is a very interesting idea, there is a bit of a downside: power consumption. Screensavers were originally proposed to protect the screen by keeping pixels moving (activating different pixels to avoid pixel burn-in) when the user is not at their screen. If the power/energy options are not set properly, it may draw more power/energy [1]. Basically, the Linux system’s power governors would prevent the OS from entering the deep power states where there are lots of opportunities to save energy when the system is idle.

Answering the most common concern about open source software, this article shows that, yes, Linux will run under Windows. 

So why should you read this article? Why, indeed, should I write it? My motive is to help remove two obstacles to the wider adoption of free and open source software. 

Those obstacles are: 

  • The perceived difficulty and disruptive effects of installing Linux
  • The uncertainty of hardware support for Linux 

Most computer users are familiar with a Microsoft Windows environment and the variety of screensavers available to prevent unauthorized access to the data on the computer when unattended. The good news is that there is plenty of free and open source software available nowadays to enable Linux to install and run as a Windows screensaver. This article shows you how to construct an appropriate package, and in doing so, demonstrates that the “free” and “non-free” sides of the software Grand Canyon are not so far apart after all. 

Running Linux under Windows as a Screensaver App

But which Linux? Without knowing what a client intends to do, it would be irresponsible to make a blanket recommendation. However, on December 25, 2021, the demand for games will be great, and the delivery capability will be sufficient. And if you configure it as a screensaver, even the possibility of pressing the wrong key to start it is eliminated.

Making it work: Nuts, bolts, and screws 

Getting the ISO to run under another operating system requires an open source PC emulator (such as QEMU version 6.1.0), including an open source BIOS and an open source virtual graphics adapter. The emulator enables you to set up a virtual PC within a real one. To construct a screensaver, the best way is to configure it with a virtual DVD drive, keyboard, screen, and mouse, but without any virtual disks. This all runs using the magic of software emulation, but modern PC hardware is sufficiently fast for the task (which we originally designed in 2005). Some corporate environments would require the virtual PC not to have a network adapter, since you can run Firefox in the screensavers here. This package has a network adapter, but it is simple to change this if required, since all source code is supplied.

Here are the steps to make this work. 

QEMU 

You can build QEMU from the source available at https://www.qemu.org/download/, but there is a suitable prebuilt QEMU for Windows available at https://qemu.weilnetz.de/. This example was built and tested with QEMU 6.1.0.

It is necessary to write a small stub program to go into the C:\WINDOWS\SYSTEM32 directory as an SCR file, which runs QEMU with appropriate parameters. https://github.com/tjcw/screensavers/blob/master/packaging/crunqemu-usb.c is sufficient for this; it runs QEMU with 1024 MB of memory, one processor, and the mouse connected as if it were a USB tablet.

This stub can be built with mingw64, from the Cygwin open source package, or presumably (though untested) with a commercial Windows C compiler.

Disabling the network adapter in the virtual PC can be done with parameter “-nic none” on the QEMU command line.

Inno Setup

Inno Setup is an open-source packaging/installation tool for Windows, available at https://jrsoftware.org/isinfo.php. I used version 6.0 for this example. Packaging with Inno Setup results in a warning from Microsoft Defender when installing the screensaver; this warning can be overridden with two mouse clicks. A future version of this blog will explain how to package with Microsoft-licensed (non-open-source) tooling to eliminate this warning.

Prebuilt screensaver distribution

The screensavers are available here on this torrent feed: 

https://linuxtracker.org/index.php?page=downloadcheck&id=1185c790b15b92b039d616ed742e873ae57db6ce

You will need a torrent client, such as Transmission, to download it. It is especially important to check the sha256sum values, as this channel is not under the control of the Linux Foundation.

After downloading, you should check the ‘sha256sum’ of the files. This validates that you have indeed got the files the author intended. For Windows, there is a no-charge ‘Hash Tool’ in the Microsoft app store that will do the job; for Linux, you use the command line:

$ sha256sum *
b483ed3250fbfdb91c3bace04f46ad9ad0b507a9890e3a58185c3342e6711441  QemuSaverOpen-1-6.zip
95f3a8d6217f2ff93932ab5ac6d8a2a30a4d0ea09afe3096f148f5be17961428  QemuSaverOpenGames-1-4.zip
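The mechanics of checksum verification can be sketched as follows. The file here is a stand-in created on the spot, not one of the real downloads, but `sha256sum -c` works the same way against a list of published values like the ones above:

```shell
# Create a stand-in file and a checksum list, then verify with sha256sum -c.
printf 'hello\n' > QemuSaverDemo.zip          # stand-in for a downloaded file
sha256sum QemuSaverDemo.zip > SHA256SUMS      # the publisher supplies this list
sha256sum -c SHA256SUMS                       # prints "QemuSaverDemo.zip: OK"
```

If a download were corrupted or tampered with, `sha256sum -c` would report the file as FAILED instead of OK.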

Extract the two zip files using the built-in Windows extract feature and run the installer .exe files. Then go to the Windows screensaver selection screen and select either ‘fr2’ or ‘gk2’ as appropriate.

There will be a four-minute hiatus in the middle of startup while the X server initializes, so be patient.

‘QemuSaverOpen-1-6.zip’ is the required base package containing the educational screensaver named fr2, and ‘QemuSaverOpenGames-1-4.zip’ is an optional extension package containing the games screensaver named gk2.

The source code for all components is available on the public Internet, and these links will lead you to it.

The screensavers can be uninstalled with the standard Windows uninstall tool.

File structure for the extracted zip file

The following file structure is used in the extracted zip files:

  • The .exe file is the installer.
  • Files in /qemu are the installable QEMU files, which are copied to C:\Program Files\qemusaver.
  • Files in /extras are the screensaver itself and the built live Linux ISO.
  • Files in /screensavers are a clone of my git repository. They are not used by the installed screensaver but are provided for the convenience of anyone who wants to explore how it works.

Creating the ISO image 

The live-build package does the hard work of building the ISO on Debian Testing (there is currently a bug in the Debian 11 version of live-build). You will need to install a real or virtual machine with the Debian Testing image available here:

https://www.debian.org/devel/debian-installer/

A script https://github.com/tjcw/screensavers/blob/master/bin/do_oi wraps this to provide a simple interface; see https://github.com/tjcw/screensavers/blob/master/README.md for a short guide on how to use it.

The ISO is bootable, so it is also possible to write it to a USB key and boot your system from there. Rufus https://rufus.ie/en/ is a suitable open source tool for doing this under Windows. You will need a USB key of 16 GB or larger to try this option.
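Under Linux, the equivalent of Rufus is a plain dd copy. The sketch below only echoes the command rather than running it, because /dev/sdX is a placeholder (writing to the wrong device destroys its contents) and the ISO file name is an assumption about live-build's output:

```shell
# Write the bootable ISO to a USB key (dry run: the command is only echoed).
ISO=live-image-amd64.hybrid.iso   # assumed name of the live-build output
echo sudo dd if="$ISO" of=/dev/sdX bs=4M status=progress conv=fsync
```

Replace /dev/sdX with the real device node of the USB key (check with `lsblk` first) and drop the leading `echo` to perform the write.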

That’s really all it takes to install Linux from a zip file to run as a screensaver on a Windows machine.

Future directions

The screensaver could usefully be enhanced to exploit hardware virtualization acceleration, using HAXM on an Intel processor or WHPX on an AMD processor. This requires changing a BIOS setting and some configuration in the internals of Windows, so it is not currently suitable for a simple screensaver application.

As Linux and Windows march forward, it may be necessary to rebuild the screensaver package from time to time, mainly to pick up new certificates for web browsing.

Software Bill of Materials (SBOM) for the Live DVD

To further the goal of improving education around open source software and raising awareness of how to minimize security vulnerabilities and exposure in the software supply chain, we wanted to update this article with a short tutorial on generating a Software Bill of Materials (SBOM) using the SPDX toolset.

Here is how it is done. The first step is the script that needs to be injected into the screensaver build process:

#!/bin/bash -x
# Copy the live-build content into place, then record the installed-package
# inventory and dependency details from the apt archive cache.
cp -pr live-build/config/content/. .
cd /var/cache/apt/archives && (
  dpkg --version >/tmp/dpkg.version
  COLUMNS=100 dpkg -l >/tmp/dpkg.dependencies
  # Column 2 of "dpkg -l" output is the package name.
  awk '{ print $2 }' </tmp/dpkg.dependencies >/tmp/dpkg.inslist
  for p in $(</tmp/dpkg.inslist)
  do
    dpkg --info $p* | grep Depends
  done >/tmp/dpkg.deplist
  for p in $(</tmp/dpkg.inslist)
  do
    dpkg -p $p
  done >/tmp/dpkg.depdetail
) </dev/null

This results in five files (dpkg.version, dpkg.dependencies, dpkg.inslist, dpkg.deplist, and dpkg.depdetail) that need to be fed to the SPDX/SBOM tool. The script is in place in the ‘screensavers’ repository above; the files end up in /tmp inside the screensaver, also visible as chroot/tmp on the screensaver build system.
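To make the inventory step concrete, here is the awk extraction from the script run against a couple of fabricated `dpkg -l` style lines (real output also has header rows and many more packages):

```shell
# Demonstrate the package-name extraction used above on sample dpkg -l lines.
cat > /tmp/dpkg.dependencies <<'EOF'
ii  bash        5.1-2   amd64  GNU Bourne Again SHell
ii  coreutils   8.32-4  amd64  GNU core utilities
EOF
# Column 2 is the package name; write one name per line to the install list.
awk '{ print $2 }' </tmp/dpkg.dependencies >/tmp/dpkg.inslist
cat /tmp/dpkg.inslist   # prints "bash" and "coreutils", one per line
```

The resulting install list then drives the per-package `dpkg --info` and `dpkg -p` loops in the script.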

Then it is a simple matter to run the SPDX/SBOM tooling over these files to generate the dependency list for the ISO in the standard SPDX format.


Author: Chris Ward, Sr. Programmer, IBM
Co-authors: Nirav Patel, Vice President and Chief Architect, Linux Foundation and Eun Kyung Lee, Manager Hybrid Cloud Infrastructure Software Research, IBM

The post Linux as a Screensaver for Windows: The Gift of Open Source Games and SBOMs for the Holidays appeared first on Linux.com.

Support OLF and Possibly Win a Prize (Linux.com, December 1, 2021)


OLF, previously known as Ohio Linuxfest, has been one of the most popular community-run open source events for nearly two decades. The event brings together individuals from around the country and world to gather and share information about Linux and open source software. This year’s event takes place December 3-4 in Columbus, Ohio, and The Linux Foundation is proud to be one of the event sponsors.

Even if you cannot join us in Columbus, you can help support the event and community by entering an online raffle fundraiser. You can purchase tickets for the raffle and choose the prize you would like to win. The raffle will take place at 7 pm Eastern on December 4. The Linux Foundation has donated the following prizes to the raffle:

  • Entry-level certification exam package including the Linux Foundation Certified IT Associate (LFCA) and Kubernetes & Cloud Native Associate (KCNA) exams
  • Kubernetes Fundamentals training course plus the Certified Kubernetes Administrator (CKA) exam
  • Open Source Management and Strategy seven-course training series

Prizes from other sponsors include a Raspberry Pi kit, original penguin artwork, and more. Purchase your tickets today and help support this great community event!

The post Support OLF and Possibly Win a Prize appeared first on Linux.com.
