Open Mainframe Summit Call for Papers Now Open

Open Mainframe Project announces Co-Located Events with IBM TechXchange in September and Open Source in Finance Forum in November

SAN FRANCISCO, June 7, 2023 – The Open Mainframe Project, an open source initiative that enables collaboration across the mainframe community to develop shared tool sets and resources, today announced the launch of the Call for Proposals (CFP) for the 4th annual Open Mainframe Summit. This year, the premier mainframe event will be co-located with two industry conferences – IBM TechXchange Conference 2023, hosted in Las Vegas on September 11-14, and Open Source in Finance Forum, hosted in New York City on November 1.

“As mainframe technology and events evolve and mature, it becomes a more natural evolution to align Open Mainframe Projects and activities with other industry events,” said John Mertic, Director of Program Management at the Linux Foundation and Executive Director of the Open Mainframe Project. “This year, by partnering with IBM and FINOS, we are offering attendees the opportunity to enhance their experience with unique presentations and targeted conversations with industry experts.” 

“As open source becomes the default development model for many enterprises, mainframe customers are looking to embrace community developed code for their mainframe environments,” said Steven Dickens, Vice President and Practice Leader at the Futurum Group. “The Open Mainframe Project has established itself as the go-to community for mainframe developers, enterprises and vendors alike.  The events announced today are a key part of how the community will gather to advance code on the mainframe.” 

Open Mainframe Summit aims to connect and inform all those interested in growing the use of mainframes and related technology in dynamic technical and educational sessions. It is open to students, developers, corporate leaders, users and contributors of projects from around the globe looking to learn, network and collaborate. It will feature content tracks that tackle both business and technical strategies for enterprise development and deployment.

Open Mainframe Summit – Las Vegas

IBM TechXchange Conference offers technical breakout sessions, hands-on experiences, product demonstrations, instructor-led labs, and certifications tailored to your interests and learning style. Open Mainframe Summit will be featured as part of the TechXchange Community Day on September 11. Community Day unites diverse IBM user groups and technical communities to foster collaboration, networking and learning. Learn more here.

Open Mainframe Summit – New York

Open Source in Finance Forum is dedicated to driving collaboration and innovation in financial services through open source software and standards. The event brings together experts across financial services, technology, and open source to engage the community in stimulating and thought-provoking conversations about how to best (and safely) leverage open source software to solve industry challenges. Open Mainframe Summit will be featured as part of a 6-session track and a 10-minute keynote presentation. Learn more about the event here.

Submit a Proposal

The Call for Proposals is now open and will be accepting submissions until Friday, June 30, 2023. Interested speakers for either event can submit proposals with options for 20-minute talks, 30-minute sessions, 60-minute panel discussions, or 60-minute workshops or labs. All topics that benefit the Open Mainframe ecosystem are welcome and can include (but are not limited to) AI, machine learning, building the next workforce, cloud native, COBOL, Java, hybrid cloud, diversity and inclusion, z/OS and Linux on Z, and security.

Submit a proposal: http://cfp.openmainframesummit.org/

Meet the Program Committee

A program committee, which includes active community members and project leaders, will review and rate the proposals. Open Mainframe Project welcomes Alan Clark, CTO Office and Director for Industry Initiatives, Emerging Standards and Open Source at SUSE, Donna Hudi, Chief Marketing Officer at Phoenix Software International, Elizabeth K. Joseph, Global Head of the OSPO for IBM zSystems at IBM, Rose Sakach, Offering Manager, Mainframe Division at Broadcom, Inc., and Len Santalucia, CTO at Vicom Infinity, A Converge Company.  

We encourage community leaders, creators, developers, implementers, and users to submit presentations. Whether you are a seasoned presenter or a first-time speaker, we welcome your submissions. While we expect a key focus on work within the Open Mainframe Project’s 21 hosted projects/working groups, user experiences and tips and tricks are often some of attendees’ favorite sessions.

For more details about Open Mainframe or to watch the videos from Open Mainframe Summit 2022, check out the Open Mainframe Project 2022 Annual Report.

For more about Open Mainframe Project, visit https://www.openmainframeproject.org/.  

About the Open Mainframe Project

The Open Mainframe Project is intended to serve as a focal point for deployment and use of Linux and Open Source in a mainframe computing environment. With a vision of Open Source on the Mainframe as the standard for enterprise class systems and applications, the project’s mission is to build community and adoption of Open Source on the mainframe by eliminating barriers to Open Source adoption on the mainframe, demonstrating value of the mainframe on technical and business levels, and strengthening collaboration points and resources for the community to thrive. Learn more about the project at https://www.openmainframeproject.org.

About The Linux Foundation

The Linux Foundation is the world’s leading home for collaboration on open source software, hardware, standards, and data. Linux Foundation projects are critical to the world’s infrastructure including Linux, Kubernetes, Node.js, ONAP, PyTorch, RISC-V, SPDX, OpenChain, and more. The Linux Foundation focuses on leveraging best practices and addressing the needs of contributors, users, and solution providers to create sustainable models for open collaboration. For more information, please visit us at linuxfoundation.org.

The Linux Foundation has registered trademarks and uses trademarks. For a list of trademarks of The Linux Foundation, please see its trademark usage page: www.linuxfoundation.org/trademark-usage. Linux is a registered trademark of Linus Torvalds.

###

Maemalynn Meanor

The Linux Foundation

maemalynn@linuxfoundation.org

AsyncAPI: A springboard for open source professionals


Image: “Open Source Springboard” by Jason Perlow, Bing Image Creator

We all start studying and training in what we like with enthusiasm and optimism. However, as time goes by, difficulties arise, making us rethink our position and values. Are we good at what we thought we were? Are we heading in the right direction? Are we investing our time correctly? Do our skills define us? Who are we, after all? Shall we go on?

Hopes and expectations always appear as two key concepts. They go hand in hand from the moment we think about what we want to train in, what we want to do, and how we imagine ourselves in the future.

And if we are persistent, or we just made the right choices when choosing our studies, we finally go on: achieving goals, passing exams, and showing ourselves and everyone else that we are improving our expertise and gaining knowledge. We keep on with our path, reaching what we thought was the top, at least at that point: we got a certificate, in the form of a BA, a Master’s degree, or even a Ph.D.

Reaching that point, we think we have completed something, but, on the contrary, doubts are more intense than ever. At least the stats tell us we are not alone: numerous studies show that it is common to suffer an existential crisis at the end of our studies. Finally being a grown-up: serious, predictable. It’s scary, sure.

The abyss

If there is one thing that the end of a training program leaves behind, it is emptiness. What do we know, after all? What can we do now? How can we apply what we have been trained for? Responsibilities are coming closer. It’s impossible to run away. Make a career, make money, and be happy… and the feeling of approaching the abyss arrives in our minds. The fun is over, and the unavoidable begins.

In most cases, the end of training is perceived as the end of a critical period in our lives. However, it is more about what is coming next than what we left behind. The next stage begins quickly. We need to get a job, be good people, and make a living. Be honest, humble, active, competitive, successful, friendly, fitter, happier, more productive… all at once… no stress.

So, the questions reach an even higher level: Are we good at what we decided to devote our lives to? Are we attracted to what we are doing? Why should we do this? Are we doing something valuable? Are we making a real contribution? To whom? Are we free?

Get off to a good start

Fingers crossed, a good start when choosing a first job is crucial to our careers. Ability or just luck, whatever it takes. Landing on a good platform in the professional realm can expand our horizons and increase our confidence in the long term. Fasten your seat belts!

Not every company or project places such confidence in newcomers’ talent. But in the open-source context, AsyncAPI is safe territory for landing. As a growing project focused on communication between asynchronous APIs, AsyncAPI is an appealing place to start: evolving, open-minded, communicative, and empathetic… the project is all ears when it comes to getting the best out of talented people.

With no rigid hierarchy, no worries, and no pain, AsyncAPI presents itself as an ideal platform for coders starting out and growing at their own pace. It fits like a glove. From the beginning and throughout the different stages the project has reached, its values and premises have been clear, supported, and respected. Transparency and horizontality are unquestionable. People come first, and goals follow.

A welcoming atmosphere

As an open-source, extensible, and protocol-agnostic specification, AsyncAPI aims to make working with event-driven architectures (EDA) as easy as working with RESTful APIs is today. Helping is the project’s main contribution: on the one hand, making messages more machine-readable and helping standardize communication; on the other, facilitating the work of developers in that field.

All of that rests on collaboration, co-creation, and engagement; they are undeniable. Nothing can be built without a little help from our new friends. That’s a whole new concept of what a work environment can be.

Feeling comfortable, welcome, and trusted is the only way to create a sense of belonging. So, AsyncAPI relies mainly on accepting difference as a virtue and valuing trust in people to construct a solid and coherent community. This combination is probably the secret behind the constant growth of the community and the project itself.

Under these premises, AsyncAPI is increasingly involved in programs such as Google Summer of Code, Google Season of Docs, OpenForce, and Outreachy, and has even started its own AsyncAPI Mentorship program.

The main motto is that everybody has something to contribute. The more eyes, the more perspectives. The first and foremost skill is the willingness and eagerness to learn. All people are welcome, and someone is always ready to help those who need it. So, the idea of not knowing something can finally be empowering. Let’s look at it this way: as a blank page to start from, a fresh view.

After an arduous journey, are we forced to forget our hopes and expectations when working? Can we still follow our dreams and make a living? Let’s not forget the ideals that pushed us at the beginning. Let’s not blur or erase the old memories of a young student daydreaming about the possibilities of a utopian Neverland. It’s worth being persistent and a lifelong learner if we know we are heading in the right direction together.

Barbaño González

Multiculturalism in technology and its limits: AsyncAPI and the long road to open source utopia


Image “Open Source Utopia” by Jason Perlow, Bing Image Creator

“Technology is not neutral. We’re inside of what we make, and it’s inside of us. We’re living in a world of connections – and it matters which ones get made and unmade.” – Donna J. Haraway

The body is the best and the only tool humans have for life; it is the physical representation of who we are, the container in which we move and represent ourselves. It reflects our identity, the matter that represents us socially.

Humans have differentiated themselves from other animals by creating tools, using elements that increase their physical and mental capacities, extending their limits, and mediating how they see and understand the world. The body is, thus, transfixed and intermediated by technology.

In the contemporary era, technological progress has led to global interconnection. Global access to the Internet has become the main propeller of globalization, a democratizing and liberating weapon.

It is a place where the absence of corporeality manages to resituate us all at the same level. It is a pioneering experience in which the medium can favor equality. It offers a space of representation in which anonymity and the absence of gender, ethnic, and cultural constraints facilitate equal opportunities.

A temporary autonomous zone

The absence of a prior historical reference turned the Internet into a “temporary autonomous zone.” Thus, a new space was constituted where identities could be expressed and constructed more freely. In this way, the Internet has provided oppressed collectives and communities with a means of alleviating cultural and gender biases, a space in which people express themselves free of socio-political pigeonholing.

This same idea can be extrapolated to the new workspaces within technology. The modern workshop is on the network and is interconnected with colleagues who live in any corner of the world. This situation leads us to remote teamwork, multiculturalism, and all the positive aspects of this concept, creating diverse and heterogeneous teams where nationalities, ethnicities, and backgrounds are mixed.

In this idyllic world of liberated identities and newly constructed spaces to inhabit, the shadows of the physical world, with its dense and unequal past, creep in. Open source projects have faced all of these opportunities and constraints in recent years, trying to achieve the goals expressed during the heroic era of the internet in the ’90s.

Opening doors: For whom? For all?

AsyncAPI is an open source initiative sustained and driven by its community. It is a free project that aims to be made up of everyone who wants to participate, following the basic idea of being created by everyone, for everyone.

Being part of the initiative is simple: join the Slack channel and contribute through GitHub. People join freely and form a team that has managed to take this project to a high level.

But all freedom is conditioned by the context and the system surrounding it. At this point, AsyncAPI as a project shows its limitations and feels constrained. Talking about an open, inclusive, and enthusiastic community is a start. 

Access to technology and technological literacy are not widespread across all geographical and social contexts. Potentially and hypothetically, the doors are open, as are the doors to libraries; that does not mean that everyone will enter them. A glass ceiling still runs through the technology field, specifically software development. This conflict emerges from the difficulty of building a multicultural community rich in gender and ethnic identities, and in equality, given the limitations of the field.

In 2019, the number of software developers worldwide grew to 23.9 million and was expected to reach 28.7 million by 2024. Behind these promising numbers lie huge inequalities. The majority of developers come from specific areas of the world, and women represent only 10% of the total.

Towards a utopian future: Let’s try it!

The data shows us that, beyond the democratizing possibilities of the Internet, most of the advances are only hypothetical, not real. We see approximately the same numbers reflected in the AsyncAPI community. The community knows what is happening and wants to reverse this situation by becoming more heterogeneous and multicultural. That is a challenge influenced by many factors.

AsyncAPI has grown in all directions, tackling this situation and creating an ecosystem that embraces variety. It comprises a community of almost 2,000 people of more than 20 nationalities from diverse cultures, ethnicities, and backgrounds.

AsyncAPI was born as an open source initiative, a liberating software model in every sense, a code made by all and for all. It is not a model closed exclusively to the technological field but a movement with a solid ethical base that crosses screens and shapes principles. That is why AsyncAPI is committed to this model. No matter how many external factors are against it, there is a clear direction. 

The decisions taken now will be vital to building a better future, a freer and more inclusive one. We do not want a unidirectional mirror in which only some can see themselves reflected. The key is to search for a diverse and multifaceted mirror.

Aspiring to form a community that is a melting pot of cultures and identities may seem somewhat utopian, but we believe it is a worthy goal to keep in mind and for which to strive. Proposals are welcome. Minds, eyes, and ears always remain open. Let us at least try it. 

Barbaño González

PyTorch 2.0: Our next generation release that is faster, more Pythonic and Dynamic as ever

We are excited to announce the release of PyTorch® 2.0 which we highlighted during the PyTorch Conference on 12/2/22! PyTorch 2.0 offers the same eager-mode development and user experience, while fundamentally changing and supercharging how PyTorch operates at compiler level under the hood with faster performance and support for Dynamic Shapes and Distributed.

This next-generation release includes a Stable version of Accelerated Transformers (formerly called Better Transformers); Beta includes torch.compile as the main API for PyTorch 2.0, the scaled_dot_product_attention function as part of torch.nn.functional, the MPS backend, and functorch APIs in the torch.func module; and other Beta/Prototype improvements across various inference, performance, and training optimization features on GPUs and CPUs. For a comprehensive introduction and technical overview of torch.compile, please visit the 2.0 Get Started page.

Along with 2.0, we are also releasing a series of beta updates to the PyTorch domain libraries, including those that are in-tree, as well as separate libraries such as TorchAudio, TorchVision, and TorchText. An update for TorchX is also being released as it moves to community supported mode. More details can be found in this library blog.

This release is composed of over 4,541 commits and 428 contributors since 1.13.1. We want to sincerely thank our dedicated community for your contributions. As always, we encourage you to try these out and report any issues as we improve 2.0 and the overall 2-series this year.

Summary:

  • torch.compile is the main API for PyTorch 2.0, which wraps your model and returns a compiled model. It is a fully additive (and optional) feature and hence 2.0 is 100% backward compatible by definition.
  • As an underpinning technology of torch.compile, TorchInductor with Nvidia and AMD GPUs will rely on the OpenAI Triton deep learning compiler to generate performant code and hide low-level hardware details. OpenAI Triton-generated kernels achieve performance that’s on par with hand-written kernels and specialized CUDA libraries such as cuBLAS.
  • Accelerated Transformers introduce high-performance support for training and inference using a custom kernel architecture for scaled dot product attention (SDPA). The API is integrated with torch.compile(), and model developers may also use the scaled dot product attention kernels directly by calling the new scaled_dot_product_attention() operator.
  • Metal Performance Shaders (MPS) backend provides GPU-accelerated PyTorch training on Mac platforms with added support for the top 60 most-used ops, bringing coverage to over 300 operators.
  • Amazon AWS optimizes the PyTorch CPU inference on AWS Graviton3 based C7g instances. PyTorch 2.0 improves inference performance on Graviton compared to the previous releases, including improvements for Resnet50 and Bert.
  • New prototype features and technologies across TensorParallel, DTensor, 2D parallel, TorchDynamo, AOTAutograd, PrimTorch and TorchInductor.
  • Stable: Accelerated PT 2 Transformers
  • Beta: torch.compile, PyTorch MPS Backend, Scaled dot product attention, functorch, Dispatchable Collectives, torch.set_default_device & torch.device, X86 quantization backend, GNN inference and training performance
  • Prototype: DTensor, TensorParallel, 2D Parallel, torch.compile (dynamic=True)
  • Performance Improvements: CUDA support for 11.7 & 11.8 (deprecating CUDA 11.6), Python 3.8 (deprecating Python 3.7), AWS Graviton3

*To see a full list of public 2.0, 1.13 and 1.12 feature submissions click here.

STABLE FEATURES

[Stable] Accelerated PyTorch 2 Transformers

The PyTorch 2.0 release includes a new high-performance implementation of the PyTorch Transformer API. In releasing Accelerated PT2 Transformers, our goal is to make training and deployment of state-of-the-art Transformer models affordable across the industry. This release introduces high-performance support for training and inference using a custom kernel architecture for scaled dot product attention (SDPA), extending the inference “fastpath” architecture, previously known as “Better Transformer.”

Similar to the “fastpath” architecture, custom kernels are fully integrated into the PyTorch Transformer API – thus, using the native Transformer and MultiHeadAttention API will enable users to:

  • transparently see significant speed improvements;
  • support many more use cases including models using Cross-Attention, Transformer Decoders, and for training models; and
  • continue to use fastpath inference for fixed and variable sequence length Transformer Encoder and Self Attention use cases.

To take full advantage of different hardware models and Transformer use cases, multiple SDPA custom kernels are supported (see below), with custom kernel selection logic that will pick the highest-performance kernel for a given model and hardware type. In addition to the existing Transformer API, model developers may also use the scaled dot product attention kernels directly by calling the new scaled_dot_product_attention() operator. Accelerated PyTorch 2 Transformers are integrated with torch.compile(). To use your model while benefiting from the additional acceleration of PT2-compilation (for inference or training), pre-process the model with model = torch.compile(model).

We have achieved major speedups for training transformer models and in particular large language models with Accelerated PyTorch 2 Transformers using a combination of custom kernels and torch.compile().

Figure: Using scaled dot product attention with custom kernels and torch.compile delivers significant speedups for training large language models, such as for nanoGPT shown here.

BETA FEATURES

[Beta] torch.compile

torch.compile is the main API for PyTorch 2.0, which wraps your model and returns a compiled model. It is a fully additive (and optional) feature and hence 2.0 is 100% backward compatible by definition.
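
As a quick illustration, here is a minimal sketch of that workflow; the toy two-layer model and tensor shapes are placeholders, not from the release notes:

import torch
import torch.nn as nn

# any nn.Module (or plain function) can be compiled
model = nn.Sequential(nn.Linear(8, 8), nn.ReLU())

# torch.compile wraps the model and returns a compiled model;
# the first call triggers compilation, later calls reuse the optimized code
compiled_model = torch.compile(model)
out = compiled_model(torch.randn(4, 8))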

Underpinning torch.compile are new technologies – TorchDynamo, AOTAutograd, PrimTorch and TorchInductor:

  • TorchDynamo captures PyTorch programs safely using Python Frame Evaluation Hooks and is a significant innovation that was a result of 5 years of our R&D into safe graph capture.
  • AOTAutograd overloads PyTorch’s autograd engine as a tracing autodiff for generating ahead-of-time backward traces.
  • PrimTorch canonicalizes ~2000+ PyTorch operators down to a closed set of ~250 primitive operators that developers can target to build a complete PyTorch backend. This substantially lowers the barrier of writing a PyTorch feature or backend.
  • TorchInductor is a deep learning compiler that generates fast code for multiple accelerators and backends. For NVIDIA and AMD GPUs, it uses OpenAI Triton as a key building block. For Intel CPUs, we generate C++ code using multithreading, vectorized instructions, and offloading appropriate operations to mkldnn when possible.

With all these new technologies, torch.compile works 93% of the time across 165 open-source models and runs 20% faster on average at float32 precision and 36% faster on average at AMP precision.

For more information, please refer to https://pytorch.org/get-started/pytorch-2.0/; for TorchInductor on CPU with Intel, see here.

[Beta] PyTorch MPS Backend

MPS backend provides GPU-accelerated PyTorch training on Mac platforms. This release brings improved correctness, stability, and operator coverage.

The MPS backend now includes support for the top 60 most-used ops, along with the operations most frequently requested by the community, bringing coverage to over 300 operators. The major focus of the release was to enable full OpInfo-based forward and gradient mode testing to address silent correctness issues. These changes have resulted in wider adoption of the MPS backend by third-party networks such as Stable Diffusion, YOLOv5, and WhisperAI, along with increased coverage for Torchbench networks and basic tutorials. We encourage developers to update to the latest macOS release to see the best performance and stability on the MPS backend.
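
As a minimal, hedged sketch of opting into the backend (the tensor values are placeholders):

import torch

# use the MPS backend on Apple-silicon Macs, falling back to CPU elsewhere
device = "mps" if torch.backends.mps.is_available() else "cpu"
x = torch.ones(4, device=device)
y = (x * 2).to("cpu")  # move results back to CPU as needed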

Links

  1. MPS Backend
  2. Developer information
  3. Accelerated PyTorch training on Mac
  4. Metal Performance Shaders & Metal Performance Shaders Graph

[Beta] Scaled dot product attention 2.0

We are thrilled to announce the release of PyTorch 2.0, which introduces a powerful scaled dot product attention function as part of torch.nn.functional. This function includes multiple implementations that can be seamlessly applied depending on the input and hardware in use.

In previous versions of PyTorch, you had to rely on third-party implementations and install separate packages to take advantage of memory-optimized algorithms like FlashAttention. With PyTorch 2.0, all these implementations are readily available by default.

These implementations include FlashAttention from HazyResearch, Memory-Efficient Attention from the xFormers project, and a native C++ implementation that is ideal for non-CUDA devices or when high precision is required.

PyTorch 2.0 will automatically select the optimal implementation for your use case, but you can also toggle them individually for finer-grained control. Additionally, the scaled dot product attention function can be used to build common transformer architecture components.
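
As a minimal sketch of calling the function directly (the tensor shapes here are illustrative only):

import torch
import torch.nn.functional as F

# query/key/value laid out as (batch, heads, sequence, head_dim)
q = torch.randn(2, 8, 128, 64)
k = torch.randn(2, 8, 128, 64)
v = torch.randn(2, 8, 128, 64)

# PyTorch selects the best available kernel (e.g., FlashAttention) automatically
out = F.scaled_dot_product_attention(q, k, v, is_causal=True)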

Learn more with the documentation and this tutorial.

[Beta] functorch -> torch.func

Inspired by Google JAX, functorch is a library that offers composable vmap (vectorization) and autodiff transforms. It enables advanced autodiff use cases that would otherwise be tricky to express in PyTorch, such as model ensembling, efficiently computing Jacobians and Hessians, and computing per-sample gradients.

We’re excited to announce that, as the final step of upstreaming and integrating functorch into PyTorch, the functorch APIs are now available in the torch.func module. Our function transform APIs are identical to before, but we have changed how the interaction with NN modules works. Please see the docs and the migration guide for more details.

Furthermore, we have added support for torch.autograd.Function: one is now able to apply function transformations (e.g. vmap, grad, jvp) over torch.autograd.Function.
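
As a hedged sketch of the torch.func APIs, here is a per-sample-gradient computation combining grad and vmap; the loss function and shapes are placeholders:

import torch
from torch.func import grad, vmap

def loss(weights, sample):
    # placeholder scalar loss for a single sample
    return (sample @ weights).pow(2).sum()

weights = torch.randn(3)
samples = torch.randn(5, 3)

# vmap over the batch dimension of samples while sharing weights (in_dims=None)
per_sample_grads = vmap(grad(loss), in_dims=(None, 0))(weights, samples)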

[Beta] Dispatchable Collectives

Dispatchable collectives is an improvement to the existing init_process_group() API that makes backend an optional argument. For users, the main advantage of this feature is that it allows them to write code that can run on both GPU and CPU machines without having to change the backend specification. The dispatchability feature also makes it easier for users to support both GPU and CPU collectives, as they no longer need to specify the backend manually (e.g., “NCCL” or “GLOO”). Existing backend specifications by users will be honored and will not require change.

Usage example:

import torch.distributed as dist
…
# old
dist.init_process_group(backend="nccl", ...)
dist.all_reduce(...) # with CUDA tensors works
dist.all_reduce(...) # with CPU tensors does not work

# new
dist.init_process_group(...) # backend is optional
dist.all_reduce(...) # with CUDA tensors works
dist.all_reduce(...) # with CPU tensors works

Learn more here.

[Beta] torch.set_default_device and torch.device as context manager

torch.set_default_device allows users to change the default device that factory functions in PyTorch allocate on. For example, if you call torch.set_default_device('cuda'), a call to torch.empty(2) will allocate on CUDA (rather than on CPU). You can also use torch.device as a context manager to change the default device on a local basis. This resolves a long-standing feature request, dating from PyTorch’s initial release, for a way to do this.
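
A minimal sketch of both mechanisms (using "cpu" so it runs anywhere; substitute "cuda" on GPU machines):

import torch

# change the default device for factory functions globally
torch.set_default_device("cpu")
a = torch.empty(2)  # allocated on the default device

# or change it locally with torch.device as a context manager
with torch.device("cpu"):
    b = torch.ones(3)  # allocated on the context device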

Learn more here.

Slice and Save Costs with Open Packet Broker

Enterprise data centers continuously monitor network traffic to improve performance, provide a better customer experience, and identify threats. All of these appliances and tools require only a portion of the network payload to meet their monitoring requirements. Modern network packet brokers use a “packet truncation” technique to optimize the processing of network traffic, which involves removing the portions of network packets that are not needed for analysis.

Need for Packet Truncation

Reduce Storage: Network packet payloads can be very large (64 to 9,216 bytes), and storing everything is expensive. Packet truncation helps reduce the amount of data that needs to be stored by removing irrelevant or redundant information from packets.

Reduce CPU Cycles: Truncated packets require less processing to analyze, which can improve the overall speed and performance of the tools.

Simplify Analysis: Network administrators can identify network performance issues more quickly and efficiently, since truncated packets contain only the relevant portions of the packet.

Improve Security: By removing sensitive information from the payload, security can be improved by limiting the exposure of confidential data.

Open Packet Broker for Truncation

Aviz Open Packet Broker is the industry’s first packet broker solution built on the open networking NOS SONiC, supporting wire-speed packet truncation on commodity ASICs. Open Packet Broker truncation has the following capabilities:

  • Packet Truncation based on custom offsets (48 bytes to 4094 bytes).
  • VLAN tag insertion for truncated packets for different tooling purposes.
  • Load balancing across tools for optimal processing.

Packet truncation, or slicing, keeps only a user-defined number of bytes from each incoming packet; the remaining bytes are discarded. This helps reduce the quantity of data processed on the tool port.
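
Conceptually the operation is a simple byte slice. Here is a minimal Python sketch of the idea; OPB performs this in ASIC hardware at wire speed, so this is illustration only, and the 128-byte offset is a hypothetical value:

# slice an incoming packet down to a user-defined byte count
SLICE_BYTES = 128  # hypothetical offset within OPB's 48-4094 byte range

def truncate(packet: bytes, slice_bytes: int = SLICE_BYTES) -> bytes:
    # keep the first slice_bytes of the packet; discard the rest
    return packet[:slice_bytes]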


Figure 1: Truncation configured on a network port of flow1
Figure 2: Flow configured through APIs

Conclusion

Packet truncation helps reduce storage requirements, improve analysis, speed up processing, and enhance network security. Open Packet Broker (OPB) from Aviz improves cost savings by giving customers a choice of open networking hardware SKUs that support line-rate packet processing.

Authors: Chid Perumal, CTO, and Rajasekaran S, Member of Technical Staff, Aviz Networks

Blues Wireless, IRNAS, and Sternum Join the Zephyr Project as Widespread Industry Adoption of the Open Source RTOS Accelerates

See Zephyr RTOS in Action at Embedded World on March 14-16 in Nuremberg, Germany

SAN FRANCISCO, February 23, 2023 – Today, the Zephyr® Project announced that Blues Wireless, IRNAS, and Sternum have joined as Silver members just as the real-time operating system (RTOS) has hit widespread adoption in products. Members such as Google, Meta, Oticon and T-Mobile have products powered by Zephyr RTOS.

“Adoption of Zephyr has increased dramatically in the last few years,” said Kate Stewart, Vice President of Dependable Embedded Systems at the Linux Foundation. “In addition to Zephyr being used in a variety of industrial applications, we’re finding it in all sorts of emerging markets like wearables, trackers, intelligent IoT devices, animal monitoring systems, and more. We hope being product ready will help these new members and the community with development, delivery, and maintenance across a wide variety of additional devices and solutions.”

Products that are powered by Zephyr include: 

  • Google Chromebooks: The embedded controller is an ultra-low-power microcontroller that is always on. It is critical to all-day battery life, as it handles everything a Chromebook has to do when the application processor is off or sleeping. Google recently decided to move the EC application to Zephyr so that vendors can write their drivers once and capture design wins in product areas beyond Chromebooks. Zephyr’s device model is based on the industry standards of devicetree and Kconfig. These technologies simplify the customization steps needed for each Chromebook model, lessening the engineering effort for Chromebook manufacturers. Learn more here.
  • Oticon More™ Hearing Aids: The revolutionary Oticon More is the world’s first hearing aid that allows users to hear all relevant sounds thanks to an on-board Deep Neural Network. It is powered by the Polaris chipset, integrating Zephyr RTOS for Bluetooth LE connectivity. This novel hearing instrument is an advanced medical product that will help millions of hearing-impaired people achieve a better quality of life. Learn more here.
  • T-Mobile’s DevEdge: DevEdge is a self-serve developer platform that offers access to the T-Mobile network to create connected wireless solutions. The IoT Developer Kit, which runs on Zephyr RTOS, gives developers immediate access to T-Mobile’s network – no testing hardware, no lengthy build time required. Learn more here.

Even as a new member, IRNAS has been using Zephyr RTOS for the last four years as part of its strategy to work with the best technologies to build industrial solutions for global clients, particularly focusing on Zephyr RTOS running on Nordic Semiconductor’s nRF52 and nRF91 series products. Its advanced applied solutions range from critical infrastructure monitoring devices, such as the RAM-1 developed for Izoelektro, all the way to livestock management and tracking products engineered for Telespor. As part of IRNAS’s environmental responsibility strategy, it also formed a partnership with Smart Parks to design Open Collar animal trackers for nature conservation; these are mounted on wildlife collars for monitoring and animal safety.

“Zephyr has been at our core for a number of years, and now we are happy to take the next step and support the project that enabled us to build better connected products and be part of the Zephyr community,” said Luka Mustafa, CEO and Founder of IRNAS. “Zephyr RTOS has already achieved significant adoption in industry, enabling us to design applications and value add logic to products running on multiple architectures, designing products that can continue evolving over the decades to come without massive rewrites in the process.”

Blues Wireless and Sternum also joined the project as Silver members. 

“At Blues, we are proud to support the Zephyr Project and officially join the community,” said Brandon Satrom, Vice President of Developer Experience & Engineering at Blues. “Zephyr’s open source, multi-architecture approach is perfect for our customers as they scale, and are looking for a robust RTOS to pair with the device agnostic, secure, and simple cellular connectivity that Blues provides. We look forward to introducing more of our customers to Zephyr, and leveraging our expertise to help Zephyr developers add low-power cellular to their tool belt.”

“Zephyr is already the platform of choice for some of our largest customers, allowing us a clear view of how it’s being used to power medical devices, payment devices, gateways, and industrial infrastructure,” said Natali Tshuva, CEO and Co-Founder of Sternum. “We see growing demand from device manufacturers for advanced security controls, post-market surveillance capabilities, and threat mitigation that go beyond perpetual security patching. Our built-in support for Zephyr RTOS and toolchains allows us to address these needs and offer an easy way to bring our patented technology to all Zephyr-based devices.”

Zephyr, an open source project at the Linux Foundation that builds a safe, secure, and flexible RTOS for resource-constrained devices, is easy to deploy, connect and manage. It supports more than 450 boards running embedded microcontrollers from Arm and RISC-V to Tensilica, NIOS, ARC and x86 as single and multicore systems. It has a growing set of software libraries that can be used across various applications and industry sectors such as Industrial IoT, wearables, machine learning, and more. Zephyr is built with an emphasis on broad chipset support, security, dependability, long-term support releases, and a growing open source ecosystem.

Zephyr Project Platinum members include Antmicro, Baumer, Google, Intel, Meta, Nordic Semiconductor, NXP, Oticon, Qualcomm Innovation Center, and T-Mobile. Silver members include AVSystem, BayLibre, Golioth, Infineon, Laird Connectivity, Linaro, Memfault, Parasoft, Percepio, SiFive, Silicon Labs, Synopsys, Texas Instruments, and Wind River. 

Where to see Zephyr 

Zephyr will be on-site at Embedded World on March 14-16 in Nuremberg, Germany. The booth, located in Hall 4 – Stand 170, will host live demonstrations from members such as Antmicro, AVSystem, Blues Wireless, Golioth, IRNAS, Memfault, Nordic Semiconductor, NXP, Parasoft and Sternum. Stop by to talk to project members and ambassadors about these demos and check out products running on Zephyr. Click here for a full list of demos and products featured at Embedded World.

Additionally, the Zephyr community will be at the Zephyr Developer Summit, which takes place on June 27-30 in Prague, Czech Republic, and virtually as part of the first-ever Embedded Open Source Summit (EOSS). The annual Zephyr Developer Summit, which launched in 2021, is for developers using or considering Zephyr RTOS in embedded products. This year will focus on topics of interest to users, developers contributing upstream, and maintainers who want to learn more about technical details, products that leverage the RTOS, and the ecosystem.

To submit a speaking proposal, click here by February 27. Learn more about sponsoring the event here or register to attend the event here.  

About the Zephyr Project

The Zephyr® Project is an open source, scalable real-time operating system (RTOS) supporting multiple hardware architectures. To learn more, please visit www.zephyrproject.org.

About the Linux Foundation

Founded in 2000, the Linux Foundation and its projects are supported by more than 2,950 members. The Linux Foundation is the world’s leading home for collaboration on open source software, hardware, standards, and data. Linux Foundation projects are critical to the world’s infrastructure including Linux, Kubernetes, Node.js, ONAP, Hyperledger, RISC-V, and more. The Linux Foundation’s methodology focuses on leveraging best practices and addressing the needs of contributors, users, and solution providers to create sustainable models for open collaboration. For more information, please visit us at linuxfoundation.org.

###

Media Contact:

Maemalynn Meanor

Director of Public Relations & Communications

maemalynn@linuxfoundation.org

Increasing 5G Quality of Experience (QoE) Using SONiC and Open Packet Broker

5G has revolutionized the use of data services for mobile users worldwide, providing high data rates, high capacity, low latency, and massive connectivity. These characteristics of 5G have forced mobile carriers to increase their focus on ways to improve network service and their customers’ Quality of Experience (QoE). This requires sophisticated network monitoring to detect and resolve issues that impact QoE immediately. Network monitoring tools need to receive control and user plane data traffic to help mobile operators meet customer expectations.

GTP (GPRS Tunneling Protocol) is a group of IP-based communications protocols used to carry GPRS traffic within mobile GSM networks. It works as a carrier for mobile packets over an underlay IP network using tunneling. GTP is used between the base station and the gateway, which are part of the mobile elements in the 5G transport architecture. The packet is encapsulated over IP and delivered across the IP network.

Why do we need GTP Parsing and Filtering?

Network monitoring tools require inner header information from the mobile network for threat monitoring, analysis, and inspection. So, network packet brokers (NPBs) residing in the GPRS core network need to filter, forward, and load balance packets toward the tools for inspection. This requires NPBs to be capable of filtering on both outer and inner headers to identify the GTP sessions in the data stream and control data flow within the infrastructure. This deep packet inspection results in decisions to allow or deny traffic based on the packet policies from the mobile operator station.

A major challenge in today’s mobile networks is that data traffic from user equipment and its applications is growing rapidly. To effectively monitor performance and obtain a better quality of service, service providers should be able to correlate traffic flows based on each subscriber’s data and service gateway tunnel endpoint identifiers (TEIDs). Therefore, GTP user and control packets need to be parsed by NPBs in the core GPRS network before packets are forwarded toward the underlay IP network.
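
To make the parsing requirement concrete, here is a minimal sketch of extracting the TEID from a GTP-U header (per 3GPP TS 29.281); it assumes the outer Ethernet/IP/UDP headers have already been stripped:

import struct

def gtpu_teid(gtp_header: bytes) -> int:
    # GTP-U header: flags (1 byte), message type (1 byte),
    # length (2 bytes), TEID (4 bytes), all in network byte order
    flags, msg_type, length, teid = struct.unpack("!BBHI", gtp_header[:8])
    return teid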

Open Networking Approach 

The evolution of modern ASICs – their programmability, flexible parsers for filtering, and TCAM scale – has created an opportunity to use them in network packet brokers for the 5G mobile network to perform deep packet inspection of GTP sessions. The open-source NOS SONiC, regarded as the “Linux of Networking,” supports these modern ASICs. Its flexible microservices-based software architecture, exposing ASIC capabilities through the standardized SAI (Switch Abstraction Interface), has created a clear opportunity to build network packet brokers for 5G deployments.

Aviz’s Open Packet Broker (OPB) is the industry’s first software-based microservice built on SONiC that uses ASIC (NVIDIA Spectrum) programmability capabilities to provide deep insights into 5G mobile traffic.

Open Packet Broker
flow flow1
network-ports Ethernet13/1
tool-ports Ethernet16/1
tool-ports port-channel1
rule 1 permit src-ip 1.1.1.1/32 dest-ip 2.2.2.2/32 protocol tcp gtp "teid 0x13467254 inner-sip 3.3.3.3/32 inner-dip 4.4.4.4/32 inner-protocol udp inner_l4srcport 567 inner_l4destport 789" counters enable
rule 2 permit src-ip 2401::1 src-netmask f::f dest-ip 2401::2 dest-netmask f::f protocol udp l4portsrc 789 l4portdst 456 gtp "teid 0x11112222 inner-sip 1203::1 inner-smask f::f inner-dip 1203::2 inner-dmask f::f inner-protocol tcp inner_l4srcport 909 inner_l4destport 657" counters enable

Figure 1: Simple (IPv4/IPv6) rule configuration for GTP session monitoring with load balancing

Figure 2: GTP configuration using APIs

Conclusion

While providing 5G’s high capacity, low latency, and massive connectivity to customers, mobile carriers must ensure uninterrupted network service with a higher quality of experience. Therefore, mobile operators need a cost-effective solution that can keep up with increasing speeds and provide deep inspection. Aviz leverages the strengths of the open networking ecosystem, for both hardware and software, to provide mobile network operators with the solution that’s key to greater QoE at a lower cost: Open Packet Broker (OPB).

Authors: Chid Perumal, CTO, and Rajasekaran S, Member of Technical Staff, Aviz Networks

Mental Wellness Month at Open Mainframe Project


“January is Mental Wellness Month, and I think it’s the perfect time to talk about Neurodiversity. Simply said, neurodiversity is the difference among all of our brains, like fingerprints…no two are alike. Neurodivergency includes specific differences, such as autism, ADHD, anxiety or tic disorders (like Tourette’s).

While neurodivergency is not a mental health or mental wellness issue, being neurodivergent is related to significantly higher incidences of depression and suicide, particularly in the female population.

We don’t know exactly why this is the case, though of course, there are often concurrent conditions (e.g., depression, bipolar, etc.) that can be present for someone who is neurodivergent. However…the reason for this is not necessarily because a neurodivergent brain is pre-wired to be depressed or suicidal. Research and experience tell a different story…basically, people who are neurodivergent (particularly autistic) mask their symptoms, meaning they try and hide them by acting how society expects a neurotypical person to act…and this hiding can cause anxiety and depression. Additionally, people who are late identified as neurodivergent often lack proper support, which can lead to feelings of isolation and depression.”

Read more at the Open Mainframe Project blog.

Maintainer confidential: Opportunities and challenges of the ubiquitous but under-resourced Yocto Project

By Richard Purdie

Maintainers are an important topic of discussion. I’ve read a few perspectives, but I’d like to share mine as one of the lesser-known maintainers in the open source world.

Who am I, and what do I do? I have many job titles and, in many ways, wear many hats. I’m the “architect” for the Yocto Project and the maintainer and lead developer for both OpenEmbedded-Core and BitBake. I’m the chair of the Yocto Project Technical Steering Committee (TSC) and a member of the OpenEmbedded TSC. I am also a Linux Foundation Fellow, representing a rare “non-kernel” perspective. The fellowship was partly a response to an industry-wide desire for me to work in a position of independence for the good of the projects and communities I work with rather than any one company.

The different roles I’ve described hint at the complexities that are part of the everyday tasks of maintaining a complex open source project. Still, to many, it could look like a complex labyrinth of relationships, directions, and decisions to balance.

What the Yocto Project is

I still need to tell you more about what I do, so I should explain what the Yocto Project does. Most people realize Linux is all around us but have yet to think much about how it gets there or how to maintain or develop such systems. There is much more to a Linux system than just a kernel, and there are many use cases where a traditional desktop Linux distribution isn’t appropriate. In simple terms, the Yocto Project allows people to develop custom Linux (and non-Linux) systems in a maintainable way.

For a sense of scale, around 65% of the world’s internet traffic runs through devices from a specific manufacturer, which has hundreds of millions of devices in the field, and those devices have software derived from the Yocto Project. The copy of Linux in Windows, “Windows Subsystem for Linux,” originally derived from the Yocto Project. Alongside the main operating system, most servers have a baseboard management controller, which looks after the server’s health; the openBMC project provides that software and builds on the Yocto Project. A similar situation exists for cars using Automotive Grade Linux, which derives from the Yocto Project as well. The Comcast RDK is an open source UI software stack built using the project and is widely used on media devices such as set-top boxes, and the Yocto Project has also built LG’s webOS TV operating system. We’ve even had a Yocto Project-built system orbiting Mars!

Those examples are tips of the iceberg, as we only know some of the places it is in use; being open source, they don’t have to tell us. The Yocto Project feeds into things all around us. The fact that people don’t know about it is a sign we’ve done a good job—but a low profile can also mean it misses out on recognition and resourcing.

The premise of the Yocto Project is to allow companies to share this work and have one good shared toolset to build these custom systems in a maintainable, reproducible, and scalable way.

How we got here

Now, we come to my role in this. I’m the crazy person who thought this project was possible and said so to several companies just over a decade ago. Then, with the support of some of them, many very talented developers, and a community, I took some existing open source projects and grew and evolved them to solve the problem, or at least go a significant way to doing so! 

The project holds the principle of shared contributions and collaboration, resulting in a better toolset than any individual company or developer could build. Today, I keep this all working.

It may sound like a solved problem, but as anyone working with a Linux distribution knows, open source is continually changing, hardware is continually changing, and the “distro” is where all this comes together. We must work to stay current and synchronized with the components we integrate. 

The biggest challenge for us now is being a victim of our success. The original company sponsorship of developers to work on Yocto understandably scaled back, and many of those developers moved on to other companies. In those companies, they’re often now focused on internal projects/support, and the core community project feels starved of attention. It takes time to acquire the skillsets we need to maintain the core, as the project is complex. Everyone is hoping someone else helps the project core.

I’m often asked what features the project will have in its next release. My honest answer is that I don’t know, as nobody will commit to contributions in advance. Most people focus on their own products or projects, and they can’t get commitment from their management to spend time on features or bug fixing for the core, let alone agree to any timescale for delivering them. This means I can’t know when, or even whether, we will do things.

A day in my life as the Yocto Project architect 

I worked for a project member company until 2018, which generously gave me time to work on the project. Times change, and rather than moving on to other things, I took what was a rather risky decision at the time: to seek funding directly from the project, as I feared for its future. Thankfully, it worked out, and I’ve continued working on it.

Richard Purdie, Linux Foundation Fellow and Yocto Project architect

There are other things the project now funds. These include our “autobuilder” infrastructure, a huge automated test matrix for finding regressions, along with the admin support to keep it alive. The project also funds a long-term support (LTS) release maintainer (we release an LTS every two years), documentation work, and some help with incoming patch testing on the autobuilder and with integrating new patches and features.

There are obvious things in my day-to-day role, such as reviewing patches, merging the ones that make sense, and giving feedback on those with issues. Less obvious things include needing to debug and fix problems with the autobuilder. 

Sadly, no one else can keep the codebase that supports our test matrix alive. The scale of our tests is extensive, with 30+ high-power worker machines running three builds at a time, targeting the common 32- and 64-bit architectures with different combinations of core libraries, init systems, and so on. We test under QEMU and see a lot of “intermittent” failures in the runtime testing, where something breaks, often under high load or sometimes only once every few months. Few people are willing to work on these kinds of problems, but, left unchecked, their number makes our testing useless, as you can’t tell a real failure from the random, often timing-related ones. I’m more of a full-time QA engineer than anything else!
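
To make “intermittent” concrete, the kind of triage involved can be sketched in a few lines of Python. This is purely illustrative and not the project’s actual autobuilder code; the test names and thresholds are invented.

    # Hypothetical sketch: separating rare, timing-related "flaky" failures
    # from consistent failures that likely indicate a real regression.
    from collections import defaultdict

    def classify_tests(history):
        """history: list of (test_name, passed) tuples, oldest first."""
        runs = defaultdict(list)
        for name, passed in history:
            runs[name].append(passed)
        report = {}
        for name, results in runs.items():
            failure_rate = results.count(False) / len(results)
            if failure_rate == 0:
                report[name] = "healthy"
            elif all(not r for r in results[-3:]):
                report[name] = "consistent failure (likely real regression)"
            elif failure_rate < 0.1:
                report[name] = "intermittent (suspect timing/load)"
            else:
                report[name] = "frequently failing"
        return report

    history = [("ptest-perl", True)] * 40 + [("ptest-perl", False)] + \
              [("systemd-boot", False)] * 5
    print(classify_tests(history))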

Bug fixing is also an interesting challenge. The project encourages reporting bugs and has an active team to triage them. However, we struggle to find people interested in looking into and fixing the identified issues; it is hard to find people with both the right skills and the time available. Where we have trained people, they generally move on to other things or end up focused on internal company work. The only developer time I can commit is my own.

Security is a hot topic. We do manage to keep versions of software up to date, but we don’t have a dedicated security team; we rely on the teams that some project users have internally. We know what one should do; it is just unfortunate that nobody wants to commit the time to do it. We do the best we can. People love tracking metrics, but few are willing to do the work to create them or to keep them going once established.

Many challenges arise from once having had a decent-sized team of developers working on the project, with specific maintainers for different areas, and then scaling back to the point where the only resource I can control is my own time. Many of the tools we developed now sit abandoned, or are patched up on an emergency basis, because we lack the developer resources for even basic maintenance.

Beyond the purely technical, there are also collaboration and communication activities. I work with two TSCs, the project member organizations, people handling other aspects of the project (advocacy, training, finance, website, infrastructure, etc.), and developers. These meetings add up quickly to fill my calendar. If we need backup coverage in any area, we don’t have many options besides my time to fall back on.

The challenges of project growth and success

Our scale also means patch requirements are more demanding now. Once, when the number of people using the project was small, the impact of breaking things was also more limited, allowing a little more freedom in development. Now, if we accept a change and something breaks, it becomes an instant emergency, and I’m generally expected to resolve it. When patches come from trusted sources, help is often available to address regressions, as part of an unwritten bond between developers and maintainers. This can intimidate new contributors, who can also find our testing requirements too difficult.

We did have tooling to help new contributors—and also the maintainers—by spotting simple, easily detected errors in incoming patches. This service would test and then reply to patches on the mailing list with pointers on how to fix the patches, freeing maintainer time and helping newcomers. Sadly, such tools require maintenance, and we lost the people who knew how to look after this component, so it stopped working. We formed plans to bring it back and make the maintenance easier, but we’ve struggled to find anyone with the time to do it. I’ve wondered if I should personally try to do it; however, I just can’t spend the chunk of time needed on one thing like that, as I would neglect too many other things for too long.
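
As a rough illustration of the kind of simple, easily detected errors such a service can catch, here is a minimal Python sketch. It is not the project’s actual tooling, and the checks shown are only examples.

    # Illustrative sketch of automated pre-checks on an incoming
    # mailing-list patch; real tooling did more, and differently.
    import re
    import sys

    def check_patch(text):
        problems = []
        if "Signed-off-by:" not in text:
            problems.append("missing Signed-off-by line")
        subject = next((l for l in text.splitlines()
                        if l.startswith("Subject:")), "")
        if len(subject) > 80:
            problems.append("subject line longer than 80 characters")
        # Flag added lines ("+...") that end in spaces or tabs.
        if re.search(r"^\+.*[ \t]+$", text, flags=re.MULTILINE):
            problems.append("added line has trailing whitespace")
        return problems

    if __name__ == "__main__":
        patch = open(sys.argv[1]).read()
        for p in check_patch(patch) or ["no obvious issues found"]:
            print(p)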

I wish this were an isolated issue, but there are other components many people and companies rely upon that are also in a perilous state. We have a “layer index,” which allows people to search the ecosystem to find and share metadata and avoid duplicating work. Nobody is willing and able to spend time to keep it running. It limps along; we do our best to patch up issues, but we all know that, sooner or later, something will go badly wrong, and we will lose it. People rely on our CROPs container images, but they have no maintainer.

I struggle a lot with knowing what to do about these issues. They aren’t a secret; the project members know, the developers know, and I raise them in status reports, in meetings, and wherever else I can. Everyone prefers to work elsewhere as long as these components “kind of work” or aren’t impacting someone badly. Should I feel guilty and try to fix these things, risking burnout and giving up a social life, so I have enough time to do so? I shouldn’t, and I can’t ask others to do that, either. Should I just let these things crash and burn, even if the work of rebuilding them would be much worse? At some point that will no longer be a choice, and we are already slowly losing components.

Over the holiday period, I also realized that project contributions have changed. Originally, many people contributed in their spare time, but many are now employed to work on the project and use it daily as part of their job. There are now more contributions during working hours than on weekends or holidays. During the holiday period, some key developments were proposed by developers having “fun” in their spare time. Had I not responded to these, helping with wider testing, patch review, and feedback, they likely would have stalled and failed, as people no longer have that time once the holidays end. The contributions were important enough that I strongly felt I should support them, so I did; the cost was that I didn’t get much of a break myself.

As you read this blog and get a glimpse of my day, I want you to leave with an understanding that all projects, large and small, have their own challenges, and Yocto isn’t alone. 

I love the project; I’m proud of what we’ve built with it, together with companies and a community. Growth and success have their downsides, though, and we face some issues I never expected. I am confident that the project can and will survive one way or another, come what may, as I’ve infused survival traits into its DNA.

Where the Yocto Project is going

There is also the future-looking element. What are the current trends? What do we need to adapt to? How can we improve our usability, particularly for new users? There is much to think about.

Recently, after I raised concerns about feature development, the project asked for a “five-year plan” showing what we could do in that timeframe. It took a surprising amount of work to pull together the ideas and put cost/time estimates against them, and I put a lot of time into that. Sadly, the result doesn’t yet have funding. I keep being asked when we’ll get features, but there needs to be more willingness to fund the development work needed before we even get to the question of which developers would actually do it!

One question that comes up a lot is the project’s development model. We use an “old school” patch-on-a-mailing-list model, similar to the kernel. New developers complain that we should have GitHub workflows so they can make point-and-click patch submissions. I have made submissions to other projects that way, and I can see the attraction of it. Equally, it depends a lot on your review requirements. We want many people to see our patches, not just one person, and we benefit greatly from that comprehensive peer review. There are real benefits in what we do, and being told we simply need to change, without regard to those reasons and benefits, is unhelpful and gets a bit wearing over time! Our developer/maintainer base is used to mailing-list review, and changing that would likely result in one person looking at patches, to the detriment of the project. Maintainers like myself also have favored processes and tools, and changing them would likely cause productivity issues, at least for a while.

Final thoughts: The future?

Governments are asking some good questions about software and security, but there are also very valid concerns about the lifecycle of hardware and sustainability issues. What happens to hardware after the original manufacturer stops supporting it? Landfill? Can you tell if a device contains risky code?

The project has some amazing software license and SBoM capabilities, and we collaborate closely with SPDX. We’re also one of the few build environments that can generate fully reproducible binaries and images, down to the timestamps, for all the core software components, straight out of the box.

Combining these technologies, you can have open and reproducible software for devices. That means you can know the origin of the code on the device, you can rebuild it to confirm that what it runs is really what you have instructions/a manifest for, and if—or, in reality, when—there is a security issue, you have a path to fixing it. There is the opportunity for others to handle software for the device if the original provider stops for whatever reason, and devices can avoid landfill.
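
As a sketch of what that verification path could look like in practice, the following Python checks files against a manifest of expected SHA-256 checksums. The manifest format and paths here are hypothetical stand-ins for the richer per-file checksum data a real SPDX SBOM carries.

    # Sketch: confirm that files in a rebuilt image (or on a device)
    # match a manifest of expected SHA-256 digests.
    import hashlib
    import os

    def sha256_of(path):
        h = hashlib.sha256()
        with open(path, "rb") as f:
            for chunk in iter(lambda: f.read(65536), b""):
                h.update(chunk)
        return h.hexdigest()

    def verify(manifest_path, root):
        """manifest lines: '<hex digest>  <relative path>'"""
        ok = True
        with open(manifest_path) as m:
            for line in m:
                if not line.strip():
                    continue
                digest, rel = line.split(None, 1)
                rel = rel.strip()
                if sha256_of(os.path.join(root, rel)) != digest:
                    print(f"MISMATCH: {rel}")
                    ok = False
        return ok

    # verify("image.manifest", "/mnt/device-rootfs")  # hypothetical paths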

I dream of a world where most products allow for this level of traceability, security, and sustainability, and I believe it would drive innovation to a new level. I know a build system that could help it become a reality!

Get involved to help the Yocto Project community grow

Basic survival isn’t my objective or my idea of success. I’d love to see more energy, engagement, and collaboration around new features, to establish that security team, and to see the project play a more prominent role in the broader FOSS ecosystem.

Help can take different forms. If you already use the Yocto Project, say so publicly, or let us list you as a user! We’re open to developer help and new contributors too, whether for features, bug fixing, or maintainership.

The project is also actively looking to increase its number of member companies. That helps us keep doing what we’re doing today, but it might also let us fund development in the critical areas where we need it and keep things running as the ecosystem has grown to expect. Please contact us if you’re interested in project membership to help this effort.

About the author: Richard Purdie is the Yocto Project architect and a Linux Foundation Fellow.

The post Maintainer confidential: Opportunities and challenges of the ubiquitous but under-resourced Yocto Project appeared first on Linux.com.

]]>
Why you should use SPDX for security https://www.linux.com/news/why-you-should-use-spdx-for-security/ Tue, 10 Jan 2023 19:21:51 +0000 https://www.linux.com/?p=585058 By Phil Odence Software Package Data Exchange® (SPDX®) is a standard format for describing a software bill of materials that supports a range of use cases, not least SBOMs to manage security vulnerabilities.  SPDX has been an open project under the auspices of the Linux Foundation for over a decade, all the time with the […]

The post Why you should use SPDX for security appeared first on Linux.com.

]]>
By Phil Odence

Software Package Data Exchange® (SPDX®) is a standard format for describing a software bill of materials that supports a range of use cases, not least SBOMs to manage security vulnerabilities. 

SPDX has been an open project under the auspices of the Linux Foundation for over a decade, all that time with the purpose of describing software content. More recently, SPDX became an ISO standard (https://www.iso.org/standard/81870.html). Well ahead of its time, SPDX was initially designed with an eye toward license compliance but has also become a standard for supporting security use cases, as recognized by the Cybersecurity and Infrastructure Security Agency (CISA).

Arguably, the most complex use case is including license and copyright information about software packages. This is because legal ownership can apply to the package, to files within it, and even to a small collection of lines of code. To fully support such usage, the standard needs to be able to handle this granularity, which is also needed for some security use cases. Additionally, legal analysis requires a standard, immutable list of licenses, their text, and ways to express complex licensing situations. So, SPDX includes a collection of licenses (also adopted by other projects) and a license expression language.
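
To make the expression language concrete, here is a toy Python parser for the AND/OR/parentheses core of SPDX license expressions. It is a simplified sketch for illustration, not the official grammar, which also covers operators such as WITH and “+”.

    # Toy parser: SPDX license identifiers composed with AND, OR, and
    # parentheses, with AND binding tighter than OR.
    import re

    def parse(expr):
        tokens = re.findall(r"\(|\)|[A-Za-z0-9.+-]+", expr)
        pos = 0
        def parse_or():
            nonlocal pos
            node = parse_and()
            while pos < len(tokens) and tokens[pos] == "OR":
                pos += 1
                node = ("OR", node, parse_and())
            return node
        def parse_and():
            nonlocal pos
            node = parse_atom()
            while pos < len(tokens) and tokens[pos] == "AND":
                pos += 1
                node = ("AND", node, parse_atom())
            return node
        def parse_atom():
            nonlocal pos
            if tokens[pos] == "(":
                pos += 1
                node = parse_or()
                pos += 1  # skip ")"
                return node
            tok = tokens[pos]
            pos += 1
            return tok
        return parse_or()

    print(parse("MIT OR (Apache-2.0 AND BSD-3-Clause)"))
    # -> ('OR', 'MIT', ('AND', 'Apache-2.0', 'BSD-3-Clause'))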

From early in the project’s history, there was interest from some corners in a lighter version that was more straightforward, allowing companies to get started providing package license information in a standard format. The standard evolved to specify a core set of required fields and a broader set of optional fields. This concept laid the foundation for the “profiles” currently being defined: different sets of required and optional fields for a given use case. Another key dimension of SPDX’s evolution was adding relationships and references to external documents. This was originally developed with the idea of linking different SPDX docs, allowing, for example, the structure of an SPDX description to mirror the structure of the software it describes. The core capability, though, allows for linking to other types of documents and is critical for linking an SBOM to associated security information.
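
A minimal sketch of what those relationships and external document references look like in SPDX 2.x tag-value form, generated here from Python; the identifiers, URI, and checksum are hypothetical placeholders.

    # Emit an illustrative SPDX 2.x tag-value fragment showing a
    # reference to another SPDX document plus relationships.
    def relationship_fragment():
        lines = [
            "SPDXVersion: SPDX-2.3",
            "DataLicense: CC0-1.0",
            "SPDXID: SPDXRef-DOCUMENT",
            "DocumentName: example-app-sbom",
            # Reference a second SPDX document describing a library.
            "ExternalDocumentRef: DocumentRef-libfoo "
            "https://example.com/spdx/libfoo-1.0.spdx "
            "SHA1: 0000000000000000000000000000000000000000",
            # Relationships tie elements together, even across documents.
            "Relationship: SPDXRef-DOCUMENT DESCRIBES "
            "SPDXRef-Package-example-app",
            "Relationship: SPDXRef-Package-example-app CONTAINS "
            "DocumentRef-libfoo:SPDXRef-Package-libfoo",
        ]
        return "\n".join(lines)

    print(relationship_fragment())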

The recent ramp-up in interest in SBOMs has focused on the security use case, driven by the general climate of cybersecurity risk and, more specifically, by the US government’s intent to require SBOMs of vendors. SPDX has been moving in this direction for some time, and the specification includes the functionality to support it. In 2016, SPDX released version 2.1, which included specific external reference fields for security use cases. The current 2.3 version was specifically aimed at increasing security support, and a key profile planned for the 3.0 release targets this use case.

Version 2.3 utilizes SPDX’s external reference capability and adds new reference categories to support linking to security documents. The spec’s Annex K, “How To Use SPDX in Different Scenarios” (https://spdx.github.io/spdx-spec/v2.3/how-to-use/), goes into detail and provides examples of how to link to various vulnerability-related resources, including CPEs, CVEs, VEX information (such as CSAF-formatted security data), code fixes, and other information. There is also a section mapping the NTIA minimum elements to the corresponding fields in SPDX.
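
For example, a package entry can carry security references of the kind Annex K describes. The following Python sketch emits illustrative tag-value lines; the CPE value and the URLs are placeholders, not real advisory locations.

    # Emit illustrative SECURITY external references for a package.
    def security_refs():
        return "\n".join([
            "PackageName: log4j-core",
            "SPDXID: SPDXRef-Package-log4j-core",
            # Identify the package for vulnerability matching.
            "ExternalRef: SECURITY cpe23Type "
            "cpe:2.3:a:apache:log4j:2.14.0:*:*:*:*:*:*:*",
            # Point at advisory and fix data living outside the SBOM.
            "ExternalRef: SECURITY advisory "
            "https://example.com/advisories/CVE-2021-44228.json",
            "ExternalRef: SECURITY fix "
            "https://example.com/patches/CVE-2021-44228.patch",
        ])

    print(security_refs())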

SPDX 3.0, currently under active development, extends the concept of profiles introduced in 2.2 and has one specifically designed for security. With 3.0, SPDX documents will embed more context for the linked security data, allowing tools to work more efficiently.

Profiles are sets of required and optional fields for a specific use case, so whether a document conforms to the spec will be relative to a use case. For example, a security-oriented SBOM may not have the fields required to comply with the legal profile, while another document could include all the required fields to comply with both. A core profile will include a minimal set of elements to describe a software package, corresponding roughly to what has previously been referred to as SPDX Lite. Beyond that will be several profiles, including Security, Legal, and others, such as one for AI apps or one that includes build information.
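
The idea that conformance is relative to a profile can be modeled in a few lines of Python. The field sets below are invented for illustration and are not the real 3.0 profile definitions.

    # Toy model: each profile names the fields it requires; a document
    # conforms (or not) relative to a chosen profile.
    PROFILES = {
        "core": {"SPDXVersion", "SPDXID", "DocumentName", "PackageName"},
        "security": {"SPDXVersion", "SPDXID", "PackageName", "ExternalRef"},
        "legal": {"SPDXVersion", "SPDXID", "PackageName",
                  "PackageLicenseConcluded", "PackageCopyrightText"},
    }

    def conforms(document_fields, profile):
        missing = PROFILES[profile] - document_fields
        return (not missing), missing

    doc = {"SPDXVersion", "SPDXID", "DocumentName", "PackageName",
           "ExternalRef"}
    for profile in PROFILES:
        ok, missing = conforms(doc, profile)
        print(profile, "OK" if ok else f"missing: {sorted(missing)}")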

One important external reference will be to CISA Vulnerability Exploitability eXchange (VEX) docs, an envisioned machine-readable, vendor-supplied assessment of vulnerabilities in their software. VEX is still a bit of a moving target, and multiple “flavors” seem to be arising without a standard having been nailed down. In any case, SPDX 3.0 will support it. Additionally, based on input from various open source projects, the group is considering incorporating a simple set of minimal security elements as optional SPDX fields in the Security profile. This would not be a new alternative to CSAF or VEX, but rather a lightweight way for projects not set up to go deep to provide the basic security info common to all vulnerability description formats.

A challenge remains for anyone exchanging SBOMs (in any format): unambiguously referencing software elements. One SBOM’s “Log4j” is another’s “Apache Log4j.” It’s a similar issue to the one SPDX solved for license references. A loose analogy: if airlines were to share flight schedules without agreeing on how to refer to London Heathrow, those schedules would be useless. This can’t be solved locally in SPDX, as it’s needed for other formats and applications. The group believes there may be a solution in combining Package URL (PURL) for components associated with package managers with CPE and SWID identifiers for commercial software components. Support for OmniBOR (Universal Bill Of Receipts) was also added in SPDX 2.3, another possible approach to uniquely identifying software components.
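
As a sketch of how a PURL pins down “which Log4j,” here is a small Python example, assuming the third-party packageurl-python library (pip install packageurl-python) and its documented API.

    # Parse a Package URL into its unambiguous coordinates.
    from packageurl import PackageURL

    purl = PackageURL.from_string(
        "pkg:maven/org.apache.logging.log4j/log4j-core@2.17.1"
    )
    # One identifier instead of "Log4j" vs. "Apache Log4j":
    print(purl.type, purl.namespace, purl.name, purl.version)
    # -> maven org.apache.logging.log4j log4j-core 2.17.1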

SPDX has demonstrated a solid foundation and the ability to evolve to meet users’ changing needs. The introduction of profiles allows for considerable flexibility. Recently, a constituency has started work on a functional safety (FuSa) profile. The subject of hardware has come up for discussion as well, and the spec may one day be referred to as System Package Data Exchange… SPDX.

SPDX Myths

SPDX can’t support security.

False. SPDX currently supports linking to security information, and that capability will be extended for a broader range of use cases in the future.

SPDX is old and complicated.

Partially True. The team would say “well-established.” “Complex” might be a better word than “complicated,” and the set of problems it addresses is complex too. With optional fields, SPDX Lite, and profiles, it can be as simple as it needs to be and still address the problem. The architects of SPDX have taken the long view, building in the flexibility to handle an uncertain future.

SPDX is not human-readable.

Partially True. SPDX supports various formats, including a very human-readable spreadsheet for simple examples. It gets more challenging with XML and JSON, though how readable those are depends on the human. The reality is that to describe software of any size and do anything useful with the information, machines need to be involved, and human eyes would only slow the process.

SPDX doesn’t support VEX.

Mostly false. Today, SPDX documents can make external references to VEX and VDR documents. We are in the camp of people who believe that is the best way to support VEX. Because SBOMs and knowledge about the vulnerabilities in contained components move at very different paces, we don’t think it makes sense to expect that information to always be included in the SBOM document.

SPDX is only for license compliance.

False. OK, it depends on when the statement was made. Ten years ago… True.

SPDX is not a format for describing SBOMs.

False. It is.

SPDX cannot describe hardware BOMs.

True… today. The format is flexible enough to evolve in this direction in the future, and it is currently being explored.

The post Why you should use SPDX for security appeared first on Linux.com.

]]>