LF Energy: Solving the Problems of the Modern Electric Grid Through Shared Investment
Linux.com – Thu, 17 Mar 2022 – https://www.linux.com/news/lf-energy-solving-the-problems-of-the-modern-electric-grid-through-shared-investment/

The post LF Energy: Solving the Problems of the Modern Electric Grid Through Shared Investment appeared first on Linux.com.

Arresting climate change is no longer an option but a must to save the planet for future generations. The key to doing so is to transition off fossil fuels to renewable energy sources and to do so without tanking economies and our very way of life. 

The energy industry sits at the epicenter of change because energy makes everything else run. And inside the energy industry is the need for a rapid transition to electrification and our vast power grids. Like it or not, utilities face existential decisions on transforming themselves while delivering ever more power to more people without making energy unaffordable or unavailable.

The challenges are daunting:

  • How to move away from fossil fuels without crashing the global economy that is fueled by energy?
  • Is it possible to speed up the modernization of the electric grid without spending trillions of dollars?
  • Can this be done while ensuring that power is safe, reliable, and affordable for all?

These are all significant problems to solve and represent 75% of the problem in combating climate change through decarbonization. In the Linux Foundation’s latest case study, Paving the Way to Battle Climate Change: How Two Utilities Embraced Open Source to Speed Modernization of the Electric Grid, LF Energy explores the opportunities for digital transformation within electric utility providers and the role of open source technologies in accelerating the transition.

Open Source meets climate change challenges with LF Energy

The growth of renewable energy sources is making the challenge of modernizing the grid more complicated. In the past, energy flowed from coal and gas generating plants onto the big Transmission System Operator (TSO) lines and then to the smaller Distribution System Operator (DSO) lines to be transformed into a lower voltage suitable for homes and businesses. 

But now, with solar panels and wind turbines increasingly feeding electricity back into the grid, the flow of power is two-way.

This seismic shift requires a new way of thinking about generating, distributing, and consuming energy. And it’s one that open source can help us navigate.

Today, energy travels in all directions, from homes and businesses, and from wind and solar farms, through the DSOs to the TSOs, and back again. This fundamental change in how power is generated and consumed has resulted in a much more complicated system that utilities must administer. They’ll require new tools to guarantee grid stability and manage the greater interaction between TSOs and DSOs as renewables grow.

Open source software allows utilities to keep up with the times while lowering expenses. It also gives utilities a chance to collaborate on common difficulties rather than operating in isolation. 

The communities developing LF Energy’s various software projects provide those tools, helping utilities speed up the modernization of the grid while reducing costs and giving them the ability to collaborate on shared challenges rather than operate in silos.

Two European utility providers, the Netherlands’ Alliander and France’s RTE, are leading the change by upgrading their systems – markets, controls, infrastructure, and analytics – with open source technology.

RTE (a TSO) and Alliander (a DSO) joined forces initially (as members of the Linux Foundation’s LF Energy projects) because they faced the same problem: accommodating more renewable energy sources in infrastructures not originally designed for them and doing it at the speed and scale required. And while their grids are not physically connected, the problems they are tackling apply to all TSOs and DSOs worldwide.

Two electric utility providers collaborate on shared technology investments

The way that Alliander and RTE collaborated via LF Energy on a short-term forecasting project known as OpenSTEF illustrates the benefits of open source collaboration in tackling common problems. 

“Short-term forecasting, for us, is the core of our existence,” says Alliander’s Director of System Operations, Arjan Stam. “We need to know what will be happening on the grid. That’s the only way to manage the power flows” and to configure the grid to meet customer needs. The same is true for RTE and “every grid operator across the world,” says Lucian Balea, RTE’s Director of Open Source. 

Alliander has five people devoted to OpenSTEF, and RTE has two.

Balea says that without joining forces, OpenSTEF would have developed far less quickly, and RTE might not have been able to work on such a solution in the near term.
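To make the idea of short-term grid forecasting concrete, here is a minimal baseline sketch. It is purely illustrative and is not OpenSTEF's actual pipeline (which uses machine-learning models); the data values and window size are invented for the example.

```python
from statistics import mean

def naive_short_term_forecast(load_history, window=4):
    """Forecast the next interval's grid load as the mean of the last
    `window` observations -- a persistence-style baseline, not OpenSTEF's
    actual ML-based approach."""
    if len(load_history) < window:
        raise ValueError("not enough history for the chosen window")
    return mean(load_history[-window:])

# Quarter-hourly load measurements in MW (illustrative numbers)
history = [410.0, 415.0, 420.0, 418.0, 422.0, 425.0]
print(naive_short_term_forecast(history))  # mean of the last 4 readings: 421.25
```

In practice a grid operator would replace this baseline with models that account for weather, season, and customer behavior, but the interface is the same: history in, next-interval load out.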

Since their original collaboration on OpenSTEF, they have collaborated on additional LF Energy projects: CoMPAS and SEAPATH. 

CoMPAS (Configuration Modules for Power industry Automation Systems) addresses a core need: developing open source software components for profile management and configuration of power industry protection, automation, and control systems. CoMPAS is critical for the digital transformation of the power industry and its ability to move quickly to new technologies. It will enable a wide variety of utilities and technology providers to work together on developing innovative new solutions.

SEAPATH (Software Enabled Automation Platform and Artifacts THerein) aims to develop a reference design for an open source platform built on a virtualized architecture to automate the management and protection of electricity substations. The project is led by Alliander, with RTE and other consortium members contributing.

As we move to a decarbonized future, open source will play an increasingly important role in helping utilities meet their goals. It’s already helping them speed up the grid’s modernization, reduce costs, and collaborate on shared challenges. And it’s only going to become essential as we move toward a cleaner, more sustainable energy system.

Read Paving the Way to Battle Climate Change: How Two Utilities Embraced Open Source to Speed Modernization of the Electric Grid to see how it works and how you and your organization can leverage open source. Together, we can develop solutions. 

Looking to Hire or be Hired? Participate in the 10th Annual Open Source Jobs Report and Tell Us What Matters Most
Linux.com – Thu, 17 Mar 2022 – https://www.linux.com/news/looking-to-hire-or-be-hired-participate-in-the-10th-annual-open-source-jobs-report-and-tell-us-what-matters-most/

The post Looking to Hire or be Hired? Participate in the 10th Annual Open Source Jobs Report and Tell Us What Matters Most  appeared first on Linux.com.

Last year’s Jobs Report generated interesting insights into the nature of the open source jobs market – and informed priorities for developers and hiring managers alike. The big takeaway was that hiring open source talent is a priority, and that cloud computing skills are among the top requested by hiring managers, beating out Linux for the first time in the report’s 9-year history at the Linux Foundation.

Now in its 10th year, the jobs survey and report will uncover current market data in a post-COVID (or what could soon feel like it) world. 

This year, in addition to determining which skills job seekers should develop to improve their overall employability prospects, we also seek to understand the nature and impact of the “Great Resignation.” Did such a staffing exodus occur in the IT industry in 2021, and do we expect to feel additional effects of it in 2022? And what can employers do to retain their employees under such conditions? Can we hire to meet our staffing needs, or do we have to increase the skill sets of our existing team members?

The jobs market has changed, and in open source it feels hotter than ever! We’re seeing the formation of new open source program offices (OSPOs) and the acceleration of open source projects and standards across the globe. In this environment, we’re especially excited to uncover what the data will tell us this year, to confirm or dispel our hypothesis that open source talent is much in demand, and that certain skills are more sought after than others. But which ones? And what is it going to take to keep skilled people on the job? 

Only YOU can help us to answer these questions. By taking the survey (and sharing it so that others can take it, too!) you’ll contribute to a valuable dataset to better understand the current state of the open source jobs market in 2022. The survey will only take a few minutes to complete, with your privacy and confidentiality protected. 

Thank you for participating!

Take the 10th Annual Survey

Who We Are Hoping Will Participate

  • Employers
  • Hiring Managers
  • Human Resources Staff
  • Job Seekers
  • IT Directors and IT Management
  • IT Training Developers and Training Providers

Project Leadership

The project will be led by Clyde Seepersad, SVP & General Manager of Linux Foundation Training & Certification, and Hilary Carter, VP Research at the Linux Foundation.

DENT 2.0, Secure and Scalable Open Source Network Operating System Aimed at Small and Mid-Size Enterprises, Released
Linux.com – Wed, 09 Mar 2022 – https://www.linux.com/news/dent-2-0-secure-and-scalable-open-source-network-operating-system-aimed-at-small-and-mid-size-enterprises-released/

The post DENT 2.0, Secure and Scalable Open Source Network Operating System Aimed at Small and Mid-Size Enterprises, Released appeared first on Linux.com.


The DENT project is an open source network operating system utilizing the Linux kernel, Switchdev, and other Linux-based projects, hosted under the Linux Foundation. The project has announced DENT 2.0 is available for immediate download.

The “Beeblebrox” release adds key features utilized by distributed enterprises in retail and remote facilities, providing a secure and scalable Linux-based Network Operating System (NOS) for disaggregated switches adaptable to edge deployment. This means DENT provides a smaller, more lightweight NOS for use at the small, remote edges of enterprise networks.

DENT 2.0 adds secure scaling with Internet Protocol version 6 (IPv6) and Network Address Translation (NAT) to support a broader community of enterprise customers. It also adds Power over Ethernet (PoE) control to allow remote switching, monitoring, and shutting down. Connectivity of IoT, Point of Sale (POS), and other devices is highly valuable to retail storefronts, early adopters of DENT. DENT 2.0 also adds traffic policing, helping mitigate attack situations that overload the CPU. 

“DENT has made great strides this past year with its edge and native Linux approach and a rich feature set for distributed enterprises like retail or remote facilities. DENT continues to expand into new use cases and welcomes community input with an open technical community, under the Linux Foundation,” said Arpit Joshipura, GM of Networking & Edge at The Linux Foundation.

DENT 2.0 Main Features to enable secure and scalable development

  • Secure scaling with IPv6 and NAT to appeal to a broader community of SME customers
  • PoE control to allow remote switching, monitoring, and shutting down
  • Rate limiting to protect against broadcast storms, creating a stronger OS under erroneous BUM (Broadcast, Unknown-unicast, Multicast) traffic
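Because DENT is built on the Linux kernel and Switchdev, this kind of rate limiting can be pictured with standard Linux traffic-control commands. The sketch below is illustrative only, not DENT's actual configuration interface; the interface name `swp1` and the rate values are hypothetical, and the commands require root privileges on a switch or test machine.

```shell
# Illustrative sketch: police broadcast frames arriving on port swp1
# (hypothetical interface name; DENT's own tooling may differ).
tc qdisc add dev swp1 clsact
tc filter add dev swp1 ingress protocol all flower dst_mac ff:ff:ff:ff:ff:ff \
   action police rate 1mbit burst 64k conform-exceed drop
```

Policing ingress broadcast traffic like this caps how much of a broadcast storm can reach the CPU, which is the attack-mitigation scenario the release notes describe.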

DENT enables enterprises to transition to disaggregated network switches and the use cases enabled by distributed enterprise and edge networking. The open source NOS provides key technology leverage in retail, a sector that is leading innovation in digital transformation. The Amazon public showcase of DENT hardware at re:Invent in November 2021 reached 20,000+ attendees.

“This new release of DENT 2.0 adds critical updates focused on smaller enterprise needs. This was the goal of DENT all along, and I would like to thank our members and the wider community for this broad, concerted effort to move DENT significantly forward,” said Steven Noble, DENT Technical Steering Committee Chair. “It’s not easy building a flexible, accessible network OS, and this is why I’m proud of all the effort and coordination by so many talented individuals. If you are looking for an open source disaggregated network OS, now is great timing for looking at DENT.”

Retail stores, warehousing, remote locations, enterprise, and Small and Mid-Size Enterprises are all ideal environments for DENT deployment. Wiring closets in many facilities are small. Staff expertise may be limited, and branch-office switches from leading suppliers can require costly contracts. DENT is easily deployed on white-box hardware in small spaces. It can be set up to support dozens of wireless access points and IoT sensors, creating a manageable network to track inventory, monitor shelf real estate, scan customer activity, and perform automated checkouts.

DENT premier members include Amazon, Delta Electronics Inc, Edgecore Networks, and Marvell. Important contributions to the DENT project have come from NVIDIA, Keysight Technologies, and Sartura.

“Delta has built complete white box networking platforms based on DENT technology, helping drive a disaggregation model in edge that offers cost and flexibility benefits to customers looking for OEM solutions,” said Charlie Wu, Vice President, Solution Center at Delta Networks. “The deployment of our 1G and 10G Ethernet switch boxes with Marvell’s Prestera® devices and the DENT OS in real world applications demonstrates the power of open source to accelerate technology innovation in networking.” 

“Edgecore Networks, as a premier member of DENT, is pleased to see the groundbreaking second release, DENT 2.0, enabling DENT community members to use DENT’s simplified abstractions, APIs, and drivers to lessen development and deployment overhead,” said Taskin Ucpinar, Senior Director of SW Development. “This innovative product development approach enables the community to build robust solutions with minimal effort and immediately help System Integrators deploy a networking solution to remote campuses and retail stores.”

“As the chairing company for DENT Test Working Group, Keysight has partnered with the open-source community to host the system integration test bed in Keysight labs,” said Dean Lee, Senior Director Cloud Solution Team. “Being a neutral test vendor, we have worked with the community to harden the DENT NOS in multi-vendor interoperability, performance, and resiliency. We are delighted to contribute to the success and wide adoption of DENT.”

“Marvell is accelerating the build-out of Ethernet switching infrastructure in emerging edge and borderless enterprise applications, and DENT is a key component to our offerings,” said Guy Azrad, Senior Vice President and General Manager, Switch Business Unit at Marvell. “With DENT incorporated on our Prestera® switch platforms, we are currently enabling retailers to transform physical stores to smart retail connected environments that benefit consumers through easy and efficient in-store experiences.”

Download and test DENT 2.0: https://github.com/dentproject/dentOS

Additional DENT Resources

Main repo: https://github.com/dentproject/dentOS 
Supported Hardware (DNI, Edge-core, WNC platforms): https://dent.dev/dentos/  
Getting Started Guide: https://github.com/dentproject/dentOS/wiki 
Video demo: https://youtu.be/ZGstgS9d4p0 
DENT Market Leadership Brief: https://dent.dev (email registration required)

 

A Summary of Census II: Open Source Software Application Libraries the World Depends On
Linux.com – Mon, 07 Mar 2022 – https://www.linux.com/news/a-summary-of-census-ii-open-source-software-application-libraries-the-world-depends-on/

The post A Summary of Census II: Open Source Software Application Libraries the World Depends On appeared first on Linux.com.

Introduction

It has been estimated that Free and Open Source Software (FOSS) constitutes 70-90% of any given modern software solution. FOSS is an increasingly vital resource in nearly all industries, in the public and private sectors, and among tech and non-tech companies alike. Therefore, ensuring the health and security of FOSS is critical to the future of nearly all industries in the modern economy. 

In March of 2022, The Linux Foundation, in partnership with the Laboratory for Innovation Science at Harvard (LISH), released the final results of an ongoing study, “Census II of Free and Open Source Software – Application Libraries.” This follows the preliminary release, “‘Vulnerabilities in the Core,’ a Preliminary Report and Census II of Open Source Software,” in February 2020, and identifies more than one thousand of the most widely deployed open source application libraries found in scans of commercial and enterprise applications. This study identifies which open source projects, commonly used in applications, warrant proactive operations and security support. 

The completed report from the Census II study identifies the most commonly used free and open source software (FOSS) components in production applications. It begins to examine the components’ open source communities, which can inform actions to sustain FOSS’s long-term security and health. The stated objectives were:

  • Identify the most commonly used free and open source software components in production applications.
  • Examine these projects for potential vulnerabilities due to:
      • Widespread use of outdated versions
      • Understaffed projects
  • Use this information to prioritize investments and other resources needed to support the security and health of FOSS.

What did the Linux Foundation and Harvard learn from the Census II study?

The study was the first to analyze the security risks of open source software used in production applications. This contrasts with the earlier Census I study, which relied primarily on Debian’s public repository package data and on factors profiling each package as a potential security risk.

To better understand the commonality, distribution, and usage of open source software within organizations, the study used software composition analysis (SCA) data supplied by Snyk, Synopsys, and FOSSA. SCA is the process of automating visibility into the makeup of any software, and these tools are often used for risk management, security, and license compliance. SCA solution providers routinely scan codebases used by private and public sector organizations. The scans and audits provide deep insight into what open source is being used in production applications.

With this data, the study created a baseline and unique identifiers for common packages and software components used by large organizations, which were then tied to a specific project. This baselining effort allowed the study to identify which packages and components were the most widely deployed. 

Census II includes eight rankings of the 500 most used FOSS packages among those reported in the private usage data contributed by SCA partners. The analysis performed is based on 500,000 observations of FOSS usage in 2020.

These include different slices of the data based on versions, structure, and packaging system. For example, the research identifies the top 10 version-agnostic packages available on the npm package manager that were called directly in applications.

Other slices of the data examined in the study include versioned versus version-agnostic, npm versus non-npm, and direct versus indirect packages. All eight top-500 lists are included in an open data repository on Data.World. 
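The aggregation behind "versioned" versus "version-agnostic" rankings can be sketched in a few lines. This is a toy reconstruction, not the study's methodology; the observation format and package names are invented for illustration.

```python
from collections import Counter

# Each observation is a (package_name, version) pair, as an SCA scan
# might report it. Invented sample data.
observations = [
    ("lodash", "4.17.20"), ("lodash", "4.17.21"), ("react", "17.0.2"),
    ("lodash", "4.17.21"), ("axios", "0.21.1"), ("react", "16.8.0"),
]

# Version-agnostic ranking: collapse versions and count by package name.
version_agnostic = Counter(name for name, _ in observations)

# Versioned ranking: count each (name, version) pair separately.
versioned = Counter(observations)

print(version_agnostic.most_common(2))  # [('lodash', 3), ('react', 2)]
print(versioned[("lodash", "4.17.21")])  # 2
```

The same raw observations yield very different top lists depending on the slice, which is why the report publishes eight separate rankings.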

Observations and analysis of these specific metrics led the study to come to certain conclusions. These were:

Software components need to be named in a standardized schema for security strategies to be effective. The study found that the naming of packages and components across repositories was highly inconsistent. Thus, any ongoing effort to create software security and transparency strategies without industry participation would have limited effect and would slow such efforts. 

The complexities associated with package versioning. In addition to the need for standardized naming schema mentioned above, Software Bill of Materials (SBOM) guidance will need to reflect versioning information consistent with the public “main” repository for that package, rather than private repositories. Many of the versions that our data partners reported did not exist in the public repositories for those packages because developers maintained internal forks of the code.
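One widely used answer to the naming problem is the package URL (purl) convention, which encodes the packaging ecosystem, name, and version in a single identifier that SBOM tooling can compare across repositories. The sketch below is simplified: the full purl specification also covers namespaces, qualifiers, and percent-encoding, which are omitted here.

```python
def make_purl(ecosystem, name, version=None):
    """Build a simplified package URL (purl) identifier, e.g.
    pkg:npm/lodash@4.17.21. Real purls also support namespaces,
    qualifiers, and percent-encoding, which this sketch omits."""
    purl = f"pkg:{ecosystem}/{name}"
    if version:
        purl += f"@{version}"
    return purl

print(make_purl("npm", "lodash", "4.17.21"))  # pkg:npm/lodash@4.17.21
print(make_purl("pypi", "requests"))          # pkg:pypi/requests
```

Stable identifiers like these let a scan of one repository be matched against advisories or SBOM entries produced elsewhere, which is exactly the consistency the report found lacking.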

Developer accounts must be secured. The analysis of the software packages with the highest levels of usage found that many were hosted on individual (personal) developer accounts. Lax developer security practices have considerable implications for large organizations that use these software packages because they have fewer protections and less granularity of associated permissions. The OpenSSF encourages MFA tokens or organizational accounts to achieve greater account security.

Legacy open source is pervasive in commercial solutions. Many production applications are being deployed that incorporate legacy open source packages. This prevalence of legacy packages is an issue, as they are often no longer supported or maintained by their developers or have known security vulnerabilities. They often lack updates for known security issues, both in their own codebase and in the codebases of the dependencies they require to operate. Apache log4j version 1.x, for example, was ten times more prevalent than log4j 2.x (the version requiring recent remediation), and 1.x still has known, unpatched, disclosed vulnerabilities because the software was declared end-of-life (EOL) in 2015.

Legacy packages present a vulnerability to the companies deploying them in their environments: those companies need to know what open source packages they have deployed and where, so they can maintain and update those codebases over time.
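Checking an inventory against a list of known-EOL lines is the simplest form of this housekeeping. A minimal sketch; the EOL table here is a hand-maintained example (the 2015 log4j 1.x end-of-life date comes from the report), not a real data feed, and a production tool would consult maintained EOL and vulnerability databases instead.

```python
# Hypothetical, hand-maintained EOL table: package -> EOL version prefix.
EOL_PACKAGES = {
    "log4j": "1.",  # the 1.x line was declared end-of-life in 2015
}

def flag_eol(inventory):
    """Return deployed (package, version) pairs matching a known-EOL line."""
    return [
        (pkg, ver) for pkg, ver in inventory
        if pkg in EOL_PACKAGES and ver.startswith(EOL_PACKAGES[pkg])
    ]

deployed = [("log4j", "1.2.17"), ("log4j", "2.17.1"), ("slf4j", "1.7.32")]
print(flag_eol(deployed))  # [('log4j', '1.2.17')]
```

Even this crude check requires the one thing the report emphasizes: an accurate inventory of what is actually deployed and where.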

The prevalence of “supercoders” in the FOSS community. Much of the most widely used FOSS is developed by only a handful of contributors – results in one dataset show that 136 developers were responsible for more than 80% of the lines of code added to the top 50 packages. Additionally, as stated in the Census II preliminary results in 2020, project atrophy and contributor abandonment is a known issue with legacy open source software. The number of developer contributors who work on projects to ensure updates for feature improvements, security, and stability decreases over time as they prioritize other software development work in their professional lives or decide to leave the project for any number of reasons. Therefore, it is much more likely that these communities may face challenges without sufficient developers to act as maintainers as time goes by.

What resources exist to better understand and mitigate potential problem areas in Open Source Software development? 

The Linux Foundation’s communities and other open source initiatives offer important standards, tooling, and guidance that help organizations and the overall open source community gain better insight into, and directly address, potential issues in their software supply chain.

Software Bill of Materials: Adopt the ISO/IEC 5962:2021 SPDX SBOM Standard

An actionable recommendation from Census II is to adopt Software Bill of Materials (SBOM) within your organization. SBOMs serve as a record that delineates the composition of software systems. Software Package Data Exchange (SPDX) is an open international standard for communicating SBOM information that supports accurate identification of software components, explicit mapping of relationships between components, and the association of security and licensing information with each component. 
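A minimal SPDX tag-value fragment illustrates the idea: each package gets a stable identifier, version, download location, and license fields that tooling can parse, plus an external reference tying it to a package-manager identifier. This fragment is illustrative only, not a complete or validated SBOM; the document name, namespace, and package are invented.

```
SPDXVersion: SPDX-2.2
DataLicense: CC0-1.0
SPDXID: SPDXRef-DOCUMENT
DocumentName: example-app-sbom
DocumentNamespace: https://example.com/spdx/example-app-1.0
Creator: Tool: example-sbom-generator
Created: 2022-03-07T00:00:00Z

PackageName: lodash
SPDXID: SPDXRef-Package-lodash
PackageVersion: 4.17.21
PackageDownloadLocation: https://registry.npmjs.org/lodash/-/lodash-4.17.21.tgz
PackageLicenseConcluded: MIT
ExternalRef: PACKAGE-MANAGER purl pkg:npm/lodash@4.17.21
```

Because the format is an ISO standard, an SBOM like this can be produced by one organization's tooling and consumed by another's without re-interpretation.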

Many enterprises concerned about software security are making SBOMs a cornerstone of their cybersecurity strategy. The Linux Foundation recently published a separate study on SBOM readiness within organizations, The State of Software Bill of Materials (SBOM) and Cybersecurity Readiness. The report offers fresh insight into the state of SBOM readiness by enterprises across the globe, identifying patterns from innovators, early adopters, and procrastinators. 

Differentiated by region and revenue, these organizations identified current SBOM production and consumption levels and the motivations and challenges regarding their present and future adoption. This report is for organizations looking to better understand SBOMs as an important tool in securing software supply chains and why it is now time to adopt them.

Take the free training on secure software development 

The Open Source Security Foundation (OpenSSF) has developed a trio of free courses on how to develop secure software. These courses are part of the Secure Software Development Fundamentals Professional Certificate program.  There’s a fee if you want to try to earn a certificate (to prove that you learned the material). However, if you just want to learn the material without earning a certificate, that’s free; simply audit the course. You can also start for free and upgrade later if you pay within the upgrade deadline. All three courses are available on the edX platform.

The courses included in the program are:

  • Secure Software Development: Requirements, Design, and Reuse (LFD104x)
  • Secure Software Development: Implementation (LFD105x)
  • Secure Software Development: Verification and More Specialized Topics (LFD106x)

Focus on building security best practices into your open source projects

The OpenSSF develops and hosts its Best Practices badging program for open source software developers. This initiative was one of the first outputs produced as a result of Census I, completed in 2015. Since then, over 4,000 open source software projects have started or completed the process of obtaining a Best Practices Badge.

Projects that conform to OpenSSF best practices can display a badge on their GitHub page or on their own web pages and other material. In turn, consumers of the badge can quickly assess which FLOSS projects are following best practices and, as a result, are more likely to produce higher-quality, secure software. Additionally, a Badge API allows developers and organizations to query the practice level of a specific project, such as Passing, Silver, or Gold. This means any organization can add an API check to its workflow to see whether the open source packages it uses come from projects that have obtained a badge.
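Such a workflow check can be scripted. The endpoint shape and the `tiered_percentage` JSON field below are assumptions based on the BadgeApp's public API, and the threshold mapping (100, 200, 300 for Passing, Silver, Gold) is likewise an assumption; verify both against the current API documentation before relying on them.

```python
import json
from urllib.request import urlopen

# Assumed endpoint shape for the OpenSSF Best Practices BadgeApp;
# check the project's API documentation before using this in production.
BADGE_API = "https://www.bestpractices.dev/projects/{project_id}.json"

def badge_level(tiered_percentage):
    """Map the BadgeApp's tiered_percentage score to a badge name
    (assumed thresholds: 300 Gold, 200 Silver, 100 Passing)."""
    if tiered_percentage >= 300:
        return "gold"
    if tiered_percentage >= 200:
        return "silver"
    if tiered_percentage >= 100:
        return "passing"
    return "in progress"

def fetch_badge_level(project_id):
    """Fetch a project's record and report its badge level (needs network)."""
    with urlopen(BADGE_API.format(project_id=project_id)) as resp:
        data = json.load(resp)
    return badge_level(data["tiered_percentage"])

print(badge_level(300))  # gold
```

A CI pipeline could run a check like this against each upstream project in its dependency list and warn when a dependency's community has not reached at least the Passing level.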

More information on the OpenSSF Best Practices Badging program, including background and criteria, is available on GitHub. The projects page shows participating projects and supports queries (such as a list of projects that have a passing badge). Project statistics and criteria statistics are available. 

Understand the vulnerability vectors of your software supply chain

In addition to reviewing the Census II findings, we encourage you to read the Linux Foundation’s Open Source Supply Chain Security Whitepaper. This publication explores vulnerabilities in the open source software ecosystem through historical examples of weaknesses in known infrastructure components (such as lax developer security practices and end-user behavior, poorly secured dependency package repositories, package managers, and incomplete vulnerability databases). It provides a set of recommendations for organizations to navigate potential problem areas. 

Conclusion

The Census II study shows that even the most widely deployed open source software packages can have issues with security practices, developer engagement, contributor exodus, and code abandonment. Therefore, open source projects require supporting toolsets, infrastructure, staffing, and proper governance to act as a stable and healthy upstream project for your organization. 

The Linux Foundation and Harvard’s Lab for Innovation Science Release Census of Most Widely Used Open Source Application Libraries
Linux.com – Wed, 02 Mar 2022 – https://www.linux.com/news/the-linux-foundation-and-harvards-lab-for-innovation-science-release-census-of-most-widely-used-open-source-application-libraries/


Census II identifies more than one thousand of the most widely deployed application libraries that are most critical to operations and security 

SAN FRANCISCO – March 2, 2022 — The Linux Foundation, the nonprofit organization enabling mass innovation through open source, today announced the final release of “Census II of Free and Open Source Software – Application Libraries.” This follows the preliminary release of Census II, “‘Vulnerabilities in the Core,’ a Preliminary Report and Census II of Open Source Software,” and identifies more than one thousand of the most widely deployed open source application libraries found from scans of commercial and enterprise applications. This study informs which open source packages, components, and projects warrant proactive operations and security support.  

The original Census Project (“Census I”) was conducted in 2015 to identify which software packages in the Debian Linux distribution were the most critical to a Linux server’s operation and security. The goal of the current study (Census II) is to pick up where Census I left off and to identify and measure which open source software is most widely deployed within applications developed by private and public organizations. This Census II allows for a more complete picture of free and open source software (FOSS) adoption by analyzing anonymized usage data provided by partner Software Composition Analysis (SCA) companies Snyk, the Synopsys Cybersecurity Research Center (CyRC), and FOSSA and is based on their scans of codebases at thousands of companies.

“Understanding what FOSS packages are the most widely used in society allows us to proactively engage the critical projects that warrant operations and security support,” said Brian Behlendorf, executive director at Linux Foundation’s Open Source Security Foundation (OpenSSF). “Open source software is the foundation upon which our day-to-day lives run, from our banking institutions to our schools and workplaces. Census II provides the foundational detail we need to support the world’s most critical and valuable infrastructure.” 

Census II includes eight rankings of the 500 most used FOSS packages among those reported in the private usage data contributed by SCA partners. These include different slices of the data based on versions, structure, and packaging system.  For example, this research enables identification of the top 10 version-agnostic packages available on the npm package manager that were called directly in applications:

  • lodash
  • react
  • axios
  • debug
  • @babel/core
  • express
  • semver
  • uuid
  • react-dom
  • jquery

To review all of the Top 500 lists in their entirety, please visit Data.World.

The study also surfaces these five overall findings that are detailed in the report: 

1) The need for a standardized naming schema for software components so that application libraries can be uniquely identified

2) The complexities associated with package versioning – SBOM guidance will need to reflect versioning information that is consistent with the public “main” repository for that package, rather than private repositories

3) Much of the most widely used FOSS is developed by only a handful of contributors – results in one dataset show that 136 developers were responsible for more than 80% of the lines of code added to the top 50 packages

4) The increasing importance of individual developer account security – the OpenSSF encourages the use of MFA tokens or organizational accounts to achieve greater account security

5) The persistence of legacy software in the open source space

Census II is authored by Frank Nagle, Harvard Business School; James Dana, Harvard Business School; Jennifer Hoffman, Laboratory for Innovation Science at Harvard; Steven Randazzo, Laboratory for Innovation Science at Harvard; and Yanuo Zhou, Harvard Business School. 

“Our goal is to not only identify the most widely used FOSS but also provide an example of how the distributed nature of FOSS requires a multi-party effort to fully understand the value and security of the FOSS ecosystem. Only through data-sharing, coordination, and investment will the value of this critical component of the digital economy be preserved for generations to come,” said Frank Nagle, Assistant Professor, Harvard Business School. 

Supporting Quotes

FOSSA

“Open source software plays a foundational role in enabling global economic growth. Of course, the ubiquitous nature of OSS means that severe vulnerabilities — such as Log4Shell — can have a devastating and widespread impact. Mounting a comprehensive defense against supply chain threats starts with establishing strong visibility into software — and we at FOSSA are thrilled to be able to contribute our market-leading SBOM capabilities and experience helping thousands of organizations successfully manage their open source dependencies to improve transparency and trust in the software supply chain.” – Kevin Wang, Founder & CEO, FOSSA

Snyk

“The Linux Foundation’s latest multi-party Census effort is further evidence that OSS is at the very heart of not only today’s modern application development process, but also plays an increasingly vital behind the scenes role throughout all of society,” said Guy Podjarny, Founder, Snyk. “We’re honored to have made significant contributions to this latest comprehensive assessment and welcome all future efforts that help to empower the developers building our future with the right information to also effectively secure it.”

Synopsys

“With businesses increasingly dependent upon open source technologies, if those same businesses aren’t contributing back to the open source projects they depend upon, then they are increasing their business risk. That risk ranges from projects becoming orphaned and containing potentially vulnerable code, through to implementation changes that break existing applications. The only meaningful way to mitigate that risk comes from assigning resources to contribute back to the open source powering the business. After all, while there are millions of developers contributing to open source, there might just be only one developer working on something critical to your success.” – Tim Mackey, Principal Security Strategist, Synopsys Cybersecurity Research Center

 

Additional Resources

Download the Report
Join the Webinar TODAY to learn more directly from the authors of this report. 

 

About the Linux Foundation

Founded in 2000, the Linux Foundation and its projects are supported by more than 1,800 members. The Linux Foundation is the world’s leading home for collaboration on open source software, open standards, open data, and open hardware. Linux Foundation projects are critical to the world’s infrastructure including Linux, Kubernetes, Node.js, Hyperledger, RISC-V, and more. The Linux Foundation’s methodology focuses on leveraging best practices and addressing the needs of contributors, users, and solution providers to create sustainable models for open collaboration. For more information, please visit us at linuxfoundation.org.

 

###

 

The Linux Foundation has registered trademarks and uses trademarks. For a list of trademarks of The Linux Foundation, please see its trademark usage page: www.linuxfoundation.org/trademark-usage. Linux is a registered trademark of Linus Torvalds.

Media Contacts

Jennifer Cloer

503-867-2304

jennifer@storychangesculture.com


The post The Linux Foundation and Harvard’s Lab for Innovation Science Release Census of Most Widely Used Open Source Application Libraries appeared first on Linux.com.

The Linux Foundation Releases The State of Software Bill of Materials (SBOM) and Cybersecurity Readiness Research
Tue, 01 Feb 2022 | https://www.linux.com/news/the-linux-foundation-releases-the-state-of-software-bill-of-materials-sbom-and-cybersecurity-readiness-research/

New data from Linux Foundation measures SBOM progress and adoption to address cybersecurity concerns 

SAN FRANCISCO, Calif., – February 1, 2022 — The Linux Foundation, the nonprofit organization enabling mass innovation through open source, in partnership with OpenSSF, SPDX, and OpenChain, today announced the availability of the first in a series of research projects to understand the challenges and opportunities for securing software supply chains. “The State of Software Bill of Materials and Cybersecurity Readiness” reports on the extent of organizational SBOM readiness and adoption tied to cybersecurity efforts. The study comes on the heels of both the U.S. Administration’s Executive Order on Improving the Nation’s Cybersecurity and the recent White House Open Source Security Summit. Its timing coincides with increasing recognition across the globe of the importance of identifying software components and helping accelerate response to newly discovered software vulnerabilities. 

“SBOMs are no longer optional. Our Linux Foundation Research team revealed 78% of organizations expect to produce or consume SBOMs in 2022,” said Jim Zemlin, executive director at the Linux Foundation. “Businesses accelerating SBOM adoption following the publication of the new ISO standard (5962) or the White House Executive Order are not only improving the quality of their software, they are better preparing themselves to thwart adversarial attacks following new open source vulnerability disclosures like those tied to log4j.”

An SBOM is formal and machine-readable metadata that uniquely identifies a software component and its contents; it may also include copyright and license data. SBOMs are designed to be shared across organizations and are particularly helpful at providing transparency of components delivered by participants in a software supply chain. Many organizations concerned about application security are making SBOMs a cornerstone of their cybersecurity strategy.

Key findings from survey participants analyzed for the report include:

  • 82% are familiar with the term Software Bill of Materials (SBOM)
  • 76% are actively engaged in addressing SBOM needs
  • 47% are producing or consuming SBOMs
  • 78% of organizations expect to produce or consume SBOMs in 2022, up 66% from the prior year

Survey participants also revealed their top three benefits for producing SBOMs:

  • 51% say it’s easier for developers to understand dependencies across components in an application
  • 49% state it’s easier to monitor components for vulnerabilities
  • 44% noted it’s easier to manage license compliance

Linux Foundation researchers also revealed that additional industry consensus and government policy will help drive SBOM adoption and implementation. The researchers noted:

  • 62% are looking for better industry consensus on how to integrate the production/consumption of SBOMs into their DevOps practices
  • 58% want consensus on integration of SBOMs into their risk and compliance processes
  • 53% desire better industry consensus on how SBOMs will evolve and improve
  • 80% of organizations worldwide are aware of the White House Executive Order on improving cybersecurity
  • 76% are considering changes as a direct consequence of the Executive Order

Finally, research participants revealed their top attributes used to prioritize which open source software components would be used by developers: security ranked highest, followed by license compliance.

Linux Foundation Research conducted this worldwide empirical research into organizational SBOM readiness and adoption in the third quarter of 2021. A total of 412 organizations from around the world participated in the 65-question survey. The Report is authored by Stephen Hendrick, vice president of Research at the Linux Foundation.  The Linux Foundation has also prioritized research to aid collective understanding of the scope of cybersecurity challenges with the first in a series of core research projects to explore important issues related to implementing cybersecurity best practices and standards adoption, beginning with this study of SBOM readiness. 

The Linux Foundation supports numerous open source SBOM and security-related programs, including Open Source Security Foundation (OpenSSF), SPDX (ISO/IEC 5962), sigstore, Let’s Encrypt, in-toto, The Update Framework (TUF), Uptane, and OpenChain (ISO 5230).

Additional Resources

Download The State of Software Bill of Materials and Cybersecurity Readiness report

Watch the playback of our February 1 webinar, Understanding the Role of Software Bill of Materials in Cybersecurity Readiness

Join one of six OpenSSF working groups to help improve open source security

Read about SPDX as the ISO standard for SBOMs

Access free training on generating a free software bill of materials

Get certified as a secure software development professional



The post The Linux Foundation Releases The State of Software Bill of Materials (SBOM) and Cybersecurity Readiness Research appeared first on Linux.com.

On DEI Research: Why the Linux Foundation? Why now?
Fri, 21 Jan 2022 | https://www.linux.com/news/on-dei-research-why-the-linux-foundation-why-now/

The open source community is working on many simultaneous challenges, not the least of which is addressing vulnerabilities in the core of our projects, securing the software supply chain, and protecting it from threat actors. At the same time, community health is equally as important as the security and vitality of software code. 

We need to retain talented people to work on complex problems. While we work urgently on implementing security best practices such as increasing SBOM adoption to avoid another Log4J scenario, we can’t put the health of our communities on the open source back burner, either. 

Our communities are ultimately made up of people who contribute, have wants and needs, and have feelings and aspirations. So while having actionable data and metrics on the technical aspects of open source projects is key to understanding how they evolve and mature, the human experience within project communities also requires close examination. 

How participants in open source projects interact with each other and whether they feel included make up a large component of a community’s overall long-term health. It can determine whether or not they can continue productively and positively, attract new participants, create representative technologies, and spawn new projects and communities.

Motivations for a DEI study at the Linux Foundation 

DEI was always something that we wanted to include in the early days of the Linux Foundation Research agenda. The topic fell into the category of “ecosystem” research, where uncovering insights about the community at large was as critical as digging into the state of open source in a given technology horizontal or industry vertical.

As community health and DEI are core values of the Linux Foundation, conducting new research in this area was a complementary and necessary activity to support related inclusion and belonging initiatives already underway.

Research, in general, is essential to dispel myths and misperceptions about open source, regardless of the subject matter. DEI insight, generated through new research, is a vital tool to evaluate success criteria beyond looking solely at the growth of open source in terms of the supply and demand of code. With data, we can determine gaps, trends, and opportunities broadly.

This is why in the spring of 2021, we were thrilled to work with GitHub, the CHAOSS project, and Jessica Groopman from Kaleido Insights on a dedicated study on DEI in open source expanding on GitHub’s Open Source Survey in 2017. Together, we formed a dedicated working group to design and deliver the study, manifesting the notion that research really is a team sport. 

The importance of understanding DEI in open source

We have so many team members working on DEI initiatives, so this topic was a natural area of interest across the organization and within our project communities. Fortunately, we also had a dozen organizations provide sponsorship for this research, which enabled the translation of the survey into ten different languages. The goal of translation was to make the survey as accessible as possible for non-native English speakers.

The research was structured to determine how well we were doing as a community in terms of diversity, but importantly, how underrepresented groups feel within open source – do they feel welcome or unwelcome? Over time, we’ll want to see how this dynamic will change for the better.  

People of varying backgrounds and nationalities participate in open source, so how we measure their sentiment when they show up to work is important. There was no shortage of questions needing answers. For example, how do people view the efficacy of codes of conduct, or do people believe that they are given fair and equal opportunities? And for underrepresented groups, in particular, do they face barriers that others do not? How do we treat each other? 

We designed this research to uncover gaps in belonging within open source so that we can begin not just to think about how we can “do better,” but to inspire the implementation of inclusion strategies. Why? Because study after study shows that diverse teams are smarter and financially outperform their less diverse peers.

Barriers and challenges to achieving DEI in open source

From the data, we know that barriers in open source communities exist depending on the demographics or different segmentations of participants. Whether specific to race, gender, sexual orientation, language, geographic region, or religion – which we didn’t specifically study in this report – there are clear obstacles we need to remove. For example, communities can be more conscious about not scheduling conferences or meetings during religious holidays, such as Rosh Hashanah or Yom Kippur.

Download this infographic for key takeaways from the Linux Foundation DEI study

We also need to be mindful that off-color jokes, sexual imagery, hostility, unwelcome sexual advances, rudeness, and name-calling don’t go over very well in open source, nor in any community for that matter. We need greater awareness that these types of behaviors exist and methods to improve how we deal with them when they occur.

And although English is the lingua franca of open source projects, native language and English fluency are barriers for some open source participants, as are geopolitical factors.

The uncomfortable truth revealed in the survey data is that people from the LGBTQ+ community are more likely to experience threats, inappropriate language, sexual advances, and other forms of toxic behavior. 

So what do we do about it? We need a full-fledged commitment to abiding by and enforcing codes of conduct within our communities. It is incumbent upon us to not tolerate inappropriate and toxic behavior and appropriately support community members when abuse arises.

Above all, it’s perhaps too easy to forget the human being at the other end of a transaction or professional exchange, especially as COVID-19 exacerbated the remote nature of our interactions.

The remedy is a combination of many facets of our society – not just within open source – to dedicate resources, inspire leadership, demonstrate moral courage, pursue greater educational initiatives, and spread awareness of the opportunities that come from diverse communities. 

Let’s remember that diverse teams, where inclusion practices are upheld, are stronger, better teams that make more robust, more thoughtful, and higher performing technologies.

You can help the Linux Foundation spread awareness of DEI in your Open Source community by using these graphics and suggested verbiage for including in your social posts.

Linux Foundation DEI Report: By the numbers

The report was sponsored by AWS, CHAOSS, Red Hat, VMware, GitHub, GitLab, Intel, Comcast, Renesas, Panasonic, Fujitsu, Hitachi, Huawei, and NEC. It was written by Hilary Carter, Vice President of Linux Foundation Research, and Jessica Groopman of Kaleido Insights. Researcher/analyst Lawrence Hecht performed a quantitative analysis of the data with the support of Stephen Hendrick, VP of Linux Foundation Research, who conducted a peer review of the survey instrument.

2 Authors

2 Analysts

2 Designers

3 Editors

4 Deliverables

10 Survey Languages

14 Sponsors

24 Infographics

30 Research Contributors

2350 Survey Completes

7000 Survey Respondents


The post On DEI Research: Why the Linux Foundation? Why now? appeared first on Linux.com.

Classic SysAdmin: How to Check Disk Space on Linux from the Command Line
Sat, 08 Jan 2022 | https://www.linux.com/news/classic-sysadmin-how-to-check-disk-space-on-linux-from-the-command-line/

This is a classic article written by Jack Wallen from the Linux.com archives. For more great SysAdmin tips and techniques check out our free intro to Linux course.

Quick question: How much space do you have left on your drives? A little or a lot? Follow up question: Do you know how to find out? If you happen to use a GUI desktop (e.g., GNOME, KDE, Mate, Pantheon, etc.), the task is probably pretty simple. But what if you’re looking at a headless server, with no GUI? Do you need to install tools for the task? The answer is a resounding no. All the necessary bits are already in place to help you find out exactly how much space remains on your drives. In fact, you have two very easy-to-use options at the ready.

In this article, I’ll demonstrate these tools. I’ll be using Elementary OS, which also includes a GUI option, but we’re going to limit ourselves to the command line. The good news is these command-line tools are readily available for every Linux distribution. On my testing system, there are a number of attached drives (both internal and external). The commands used are agnostic to where a drive is plugged in; they only care that the drive is mounted and visible to the operating system.

With that said, let’s take a look at the tools.

df

The df command is the tool I first used to discover drive space on Linux, way back in the 1990s. It’s very simple in both usage and reporting. To this day, df is my go-to command for this task. This command has a few switches but, for basic reporting, you really only need one. That command is df -H. The -H switch is for human-readable format. The output of df -H will report how much space is used, available, percentage used, and the mount point of every disk attached to your system (Figure 1).

 

Figure 1: The output of df -H on my Elementary OS system.

What if your list of drives is exceedingly long and you just want to view the space used on a single drive? With df, that is possible. Let’s take a look at how much space has been used up on our primary drive, located at /dev/sda1. To do that, issue the command:

df -H /dev/sda1

The output will be limited to that one drive (Figure 2).

Figure 2: How much space is on one particular drive?
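When the same check is needed inside a script, df’s output parses cleanly. Here is a minimal sketch (the default target of / and the choice of POSIX df -P for stable, one-record-per-line output are mine, not from the original article):

```shell
#!/bin/sh
# Report how full the filesystem holding a given path is.
# df -P (POSIX mode) guarantees one record per line, which is safe to parse.
target="${1:-/}"

# tail -1 drops the header row; the 5th field is the capacity column (e.g. "42%").
pct=$(df -P "$target" | tail -1 | awk '{print $5}' | tr -d '%')

echo "Filesystem holding $target is ${pct}% full"
```

Saved under a name of your choosing (say, check-space.sh), `sh check-space.sh /home` reports on whichever filesystem holds /home.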

You can also limit the reported fields shown in the df output. Available fields are:

source — the file system source

size — total number of blocks

used — spaced used on a drive

avail — space available on a drive

pcent — percent of used space, divided by total size

target — mount point of a drive

Let’s display the output of all our drives, showing only the size, used, and avail (or availability) fields. The command for this would be:

df -H --output=size,used,avail

The output of this command is quite easy to read (Figure 3).

Figure 3: Specifying what output to display for our drives.

The only caveat here is that we don’t know the source of the output, so we’d want to include source like so:

df -H --output=source,size,used,avail

Now the output makes more sense (Figure 4).

Figure 4: We now know the source of our disk usage.
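Because --output lets you request exactly the fields you need, df also works well as the basis of a simple monitor. A sketch, assuming GNU df; the 90% default threshold and the warning format are arbitrary choices of mine:

```shell
#!/bin/sh
# Warn about any mounted filesystem at or above a usage threshold.
threshold="${1:-90}"

# Ask df for exactly the fields we need; tail -n +2 skips the header row.
df -H --output=source,pcent,target | tail -n +2 | while read -r src pct mnt; do
    use=${pct%\%}                            # strip the trailing % sign
    case $use in *[!0-9]*|'') continue ;;    # skip entries with no numeric usage
    esac
    if [ "$use" -ge "$threshold" ]; then
        echo "WARNING: $src ($mnt) is at $pct"
    fi
done
```

Run from cron, a script like this turns the interactive df habit into an automatic early warning.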

du

Our next command is du. As you might expect, that stands for disk usage. The du command is quite different from the df command in that it reports on directories rather than drives. Because of this, you’ll want to know the names of the directories to be checked. Let’s say I have a directory containing virtual machine files on my machine. That directory is /media/jack/HALEY/VIRTUALBOX. If I want to find out how much space is used by that particular directory, I’d issue the command:

du -h /media/jack/HALEY/VIRTUALBOX

The output of the above command will display the size of every file in the directory (Figure 5).

Figure 5: The output of the du command on a specific directory.

So far, this command isn’t all that helpful. What if we want to know the total usage of a particular directory? Fortunately, du can handle that task. On the same directory, the command would be:

du -sh /media/jack/HALEY/VIRTUALBOX/

Now we know how much total space the files are using up in that directory (Figure 6).

Figure 6: My virtual machine files are using 559GB of space.

You can also use this command to see how much space is being used on all child directories of a parent, like so:

du -h /media/jack/HALEY

The output of this command (Figure 7) is a good way to find out what subdirectories are hogging up space on a drive.

Figure 7: How much space are my subdirectories using?

The du command is also a great tool to use in order to see a list of directories that are using the most disk space on your system. The way to do this is by piping the output of du to two other commands: sort and head. The command to find out the top 10 directories eating space on a drive would look something like this:

du -a /media/jack | sort -n -r | head -n 10

The output would list out those directories, from largest to least offender (Figure 8).

Figure 8: Our top ten directories using up space on a drive.
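That pipeline can be wrapped into a tiny reusable helper; this sketch simply parameterizes the directory and the count (both defaults are my choice, not from the article):

```shell
#!/bin/sh
# Print the N largest files/directories under a directory, biggest first.
dir="${1:-.}"
n="${2:-10}"

# du -a sizes every entry (in blocks), sort -n -r orders them descending,
# and head -n keeps only the top N. Errors on unreadable files are discarded.
du -a "$dir" 2>/dev/null | sort -n -r | head -n "$n"
```

For example, `sh top-usage.sh /media/jack 20` (a hypothetical script name) would list the twenty biggest space hogs under /media/jack.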

Not as hard as you thought

Finding out how much space is being used on your Linux-attached drives is quite simple. As long as your drives are mounted to the Linux system, both df and du will do an outstanding job of reporting the necessary information. With df you can quickly see an overview of how much space is used on a disk and with du you can discover how much space is being used by specific directories. These two tools in combination should be considered must-know for every Linux administrator.

And, in case you missed it, I recently showed how to determine your memory usage on Linux. Together, these tips will go a long way toward helping you successfully manage your Linux servers.


The post Classic SysAdmin: How to Check Disk Space on Linux from the Command Line appeared first on Linux.com.

Classic SysAdmin: Understanding Linux File Permissions
Fri, 07 Jan 2022 | https://www.linux.com/news/classic-sysadmin-understanding-linux-file-permissions/

This is a classic article written by Jack Wallen from the Linux.com archives. For more great SysAdmin tips and techniques check out our free intro to Linux course.

Although there are already a lot of good security features built into Linux-based systems, one very important potential vulnerability can exist when local access is granted: file permission-based issues resulting from a user not assigning the correct permissions to files and directories. Given the need for proper permissions, I will go over the ways to assign permissions and show you some examples where modification may be necessary.

Permission Groups

Each file and directory has three user based permission groups:

owner – The Owner permissions apply only to the owner of the file or directory; they will not impact the actions of other users.
group – The Group permissions apply only to the group that has been assigned to the file or directory; they will not affect the actions of other users.
all users – The All Users permissions apply to all other users on the system; this is the permission group that you want to watch the most.

Permission Types

Each file or directory has three basic permission types:

read – The Read permission refers to a user’s capability to read the contents of the file.
write – The Write permission refers to a user’s capability to write to or modify a file or directory.
execute – The Execute permission affects a user’s capability to execute a file or view the contents of a directory.

Viewing the Permissions

You can view the permissions by checking the file or directory permissions in your favorite GUI file manager (which I will not cover here) or by reviewing the output of the “ls -l” command in the terminal while working in the directory that contains the file or folder.

The permission in the command line is displayed as: _rwxrwxrwx 1 owner:group

User rights/Permissions:

The first character that I marked with an underscore is the special permission flag that can vary.
The following set of three characters (rwx) is for the Owner permissions.
The second set of three characters (rwx) is for the Group permissions.
The third set of three characters (rwx) is for the All Users permissions.
Following that grouping, the integer/number displays the number of hard links to the file.
The last piece is the Owner and Group assignment, formatted as Owner:Group.
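If you want to see those fields for yourself, here is a quick check in a throwaway directory (GNU coreutils assumed; the owner, group, size, and date in the ls -l output will of course differ on your system):

```shell
cd "$(mktemp -d)"      # work in a throwaway directory
touch file1
chmod 640 file1        # owner rw-, group r--, all users ---
ls -l file1            # e.g. -rw-r----- 1 jack family 0 Jan  7 03:00 file1
stat -c '%A' file1     # prints just the permission string: -rw-r-----
```

The stat -c '%A' form is handy when you only care about the permission string and not the rest of the ls -l columns.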

Modifying the Permissions

When in the command line, the permissions are edited by using the command chmod. You can assign the permissions explicitly or by using a binary reference as described below.

Explicitly Defining Permissions

To explicitly define permissions you will need to reference the Permission Group and Permission Types.

The Permission Groups used are:

u – Owner
g – Group
o – Others
a – All users

The potential Assignment Operators are + (plus) and – (minus); these are used to tell the system whether to add or remove the specific permissions.

The Permission Types that are used are:

r – Read
w – Write
x – Execute

So for example, let’s say I have a file named file1 that currently has the permissions set to _rw_rw_rw_, which means that the owner, group, and all users have read and write permission. Now we want to remove the read and write permissions from the all users group.

To make this modification you would invoke the command: chmod o-rw file1 (note that a-rw would strip the permissions from the owner and group as well, not just the all users group)
To add those permissions back you would invoke the command: chmod o+rw file1

As you can see, if you want to grant those permissions you would change the minus character to a plus to add those permissions.
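Here is that add/remove cycle run end to end in a scratch directory, so you can watch the permission string change (GNU stat assumed):

```shell
cd "$(mktemp -d)"        # throwaway directory
touch file1
chmod 666 file1          # _rw_rw_rw_ : everyone can read and write
chmod o-rw file1         # strip read/write from the all users (others) group
stat -c '%A' file1       # -rw-rw----
chmod o+rw file1         # grant them back
stat -c '%A' file1       # -rw-rw-rw-
```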

Using Binary References to Set permissions

Now that you understand the permissions groups and types this one should feel natural. To set the permission using binary references you must first understand that the input is done by entering three integers/numbers.

A sample permission string would be chmod 640 file1, which means that the owner has read and write permissions, the group has read permissions, and all other users have no rights to the file.

The first number represents the Owner permission; the second represents the Group permissions; and the last number represents the permissions for all other users. The numbers are a binary representation of the rwx string.

r = 4
w = 2
x = 1

You add the numbers to get the integer/number representing the permissions you wish to set. You will need to include the binary permissions for each of the three permission groups.

So to set the permissions on file1 to _rwxr_____, you would enter chmod 740 file1.
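You can confirm the digit arithmetic in a sandbox; stat -c '%a' prints the numeric mode and '%A' the letter form (GNU coreutils assumed):

```shell
cd "$(mktemp -d)"
touch file1
chmod 740 file1          # 7 = 4+2+1 (rwx), 4 = r--, 0 = ---
stat -c '%A' file1       # -rwxr-----
chmod 640 file1          # 6 = 4+2 (rw-), 4 = r--, 0 = ---
stat -c '%a %A' file1    # 640 -rw-r-----
```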

Owners and Groups

I have made several references to Owners and Groups above, but have not yet told you how to assign or change the Owner and Group assigned to a file or directory.

You use the chown command to change owner and group assignments; the syntax is simple:

chown owner:group filename

So, to change the owner of file1 to user1 and the group to family, you would enter chown user1:family file1.
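Changing a file to another user normally requires root, so for a safe self-contained demonstration you can chown a file to yourself; it is a no-op, but it exercises the same owner:group syntax (GNU stat assumed, and id -un / id -gn stand in for the user1/family names above):

```shell
cd "$(mktemp -d)"
touch file1
chown "$(id -un):$(id -gn)" file1   # same shape as: chown user1:family file1
stat -c '%U:%G' file1               # prints your user and primary group
```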

Advanced Permissions

The special permissions flag can be marked with any of the following:

_ – no special permissions
d – directory
l – The file or directory is a symbolic link
s – This indicates the setuid/setgid permissions. These are not displayed in the special permission part of the permissions display, but are represented as an s in the execute portion of the owner or group permissions.
t – This indicates the sticky bit permissions. This is not displayed in the special permission part of the permissions display, but is represented as a t in the execute portion of the all users permissions.

Setuid/Setgid Special Permissions

The setuid/setgid permissions are used to tell the system to run an executable as the file’s owner (setuid) or group (setgid), with their permissions.

Be careful using setuid/setgid bits in permissions. If you incorrectly assign permissions to a file owned by root with the setuid/setgid bit set, then you can open your system to intrusion.

The simplest way to assign the setuid/setgid bit is by explicitly defining permissions; the character for both is s.

So to set the setgid bit on file2.sh you would issue the command chmod g+s file2.sh (for setuid, it would be chmod u+s file2.sh).
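You can watch where the s lands in the permission string with a quick sandbox test (GNU stat assumed; the file here is an empty placeholder, not a real script):

```shell
cd "$(mktemp -d)"
touch file2.sh
chmod 755 file2.sh        # -rwxr-xr-x to start
chmod g+s file2.sh        # set the setgid bit
stat -c '%A' file2.sh     # -rwxr-sr-x : the s sits in the group execute slot
```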

Sticky Bit Special Permissions

The sticky bit can be very useful in a shared environment because, when it has been assigned to the permissions on a directory, only a file’s owner can rename or delete files within that directory.

The simplest way to assign the sticky bit is by explicitly defining permissions; the character for the sticky bit is t.

To set the sticky bit on a directory named dir1 you would issue the command chmod +t dir1.
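The t shows up in the last execute slot of the permission string, which you can confirm like so (GNU stat assumed):

```shell
cd "$(mktemp -d)"
mkdir dir1
chmod 777 dir1            # world-writable, as shared directories often are
chmod +t dir1             # add the sticky bit
stat -c '%A' dir1         # drwxrwxrwt : t in the all users execute slot
```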

When Permissions Are Important

To some users coming from Mac- or Windows-based computers, permissions may be an afterthought, because those environments don’t focus so aggressively on user-based rights on files unless you are in a corporate environment. But now you are running a Linux-based system, where permission-based security is simplified and can be easily used to restrict access as you please.

So I will point out some documents and folders that you want to focus on and show you how the optimal permissions should be set.

home directories – The users’ home directories are important because you do not want other users to be able to view and modify the files in another user’s documents or desktop. To remedy this, you will want the directory to have the drwx______ (700) permissions. So, let’s say we want to enforce the correct permissions on the user user1’s home directory; that can be done by issuing the command chmod 700 /home/user1.
bootloader configuration files – If you decide to implement a password to boot specific operating systems, then you will want to remove read and write permissions from the configuration file for all users but root. To do so, you can change the permissions of the file to 700.
system and daemon configuration files – It is very important to restrict rights to system and daemon configuration files to keep users from editing the contents. It may not be advisable to restrict read permissions, but restricting write permissions is a must. In these cases it may be best to modify the rights to 644.
firewall scripts – It may not always be necessary to block all users from reading the firewall file, but it is advisable to restrict the users from writing to it. In this case the firewall script is run by the root user automatically on boot, so all other users need no rights, and you can assign the 700 permissions.
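The rules above can be rehearsed against dummy files in a scratch directory before you touch real system paths; the names here (home_user1, grub.cfg, daemon.conf, firewall.sh) are stand-ins, not the actual paths on your system:

```shell
cd "$(mktemp -d)"
mkdir home_user1                         # stands in for /home/user1
touch grub.cfg daemon.conf firewall.sh   # stand-ins for real config files
chmod 700 home_user1          # drwx------ : private home directory
chmod 700 grub.cfg            # bootloader config: root only
chmod 644 daemon.conf         # daemon config: readable, root-writable
chmod 700 firewall.sh         # firewall script: root only
stat -c '%a %n' home_user1 grub.cfg daemon.conf firewall.sh
```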

Other examples can be given, but this article is already very lengthy, so if you want to share other examples of needed restrictions please do so in the comments.


Classic SysAdmin: How to Move Files Using Linux Commands or File Managers https://www.linux.com/news/classic-sysadmin-how-to-move-files-using-linux-commands-or-file-managers/ Thu, 06 Jan 2022 03:00:00 +0000 https://www.linux.com/news/classic-sysadmin-how-to-move-files-using-linux-commands-or-file-managers/ This is a classic article written by Jack Wallen from the Linux.com archives. For more great SysAdmin tips and techniques check out our free intro to Linux course. There are certain tasks that are done so often, users take for granted just how simple they are. But then, you migrate to a new platform and those same […]

The post Classic SysAdmin: How to Move Files Using Linux Commands or File Managers appeared first on Linux.com.

This is a classic article written by Jack Wallen from the Linux.com archives. For more great SysAdmin tips and techniques check out our free intro to Linux course.

There are certain tasks that are done so often, users take for granted just how simple they are. But then, you migrate to a new platform and those same simple tasks begin to require a small portion of your brain’s power to complete. One such task is moving files from one location to another. Sure, it’s most often considered one of the more rudimentary actions to be done on a computer. When you move to the Linux platform, however, you may find yourself asking “Now, how do I move files?”

If you’re familiar with Linux, you know there are always many routes to the same success. Moving files is no exception. You can opt for the power of the command line or the simplicity of the GUI – either way, you will get those files moved.

Let’s examine just how you can move those files about. First, we’ll examine the command line.

Command line moving

One of the issues so many users new to Linux face is the idea of having to use the command line. It can be somewhat daunting at first. Although modern Linux interfaces can help to ensure you rarely have to use this “old school” tool, there is a great deal of power you would be missing if you ignored it altogether. The command for moving files is a perfect illustration of this.

The command to move files is mv. It’s very simple and one of the first commands you will learn on the platform. Instead of just listing out the syntax and the usual switches for the command – and then allowing you to do the rest – let’s walk through how you can make use of this tool.

The mv command does one thing – it moves a file from one location to another. This can be somewhat misleading because mv is also used to rename files. How? Simple. Here’s an example. Say you have the file testfile in /home/jack/ and you want to rename it to testfile2 (while keeping it in the same location). To do this, you would use the mv command like so:

mv /home/jack/testfile /home/jack/testfile2

or, if you’re already within /home/jack:

mv testfile testfile2

The above commands would move /home/jack/testfile to /home/jack/testfile2 – effectively renaming the file. But what if you simply wanted to move the file? Say you want to keep your home directory (in this case /home/jack) free from stray files. You could move that testfile into /home/jack/Documents with the command:

mv /home/jack/testfile /home/jack/Documents/

With the above command, you have relocated the file into a new location, while retaining the original file name.

What if you have a number of files you want to move? Luckily, you don’t have to issue the mv command for every file. You can use wildcards to help you out. Here’s an example:

You have a number of .mp3 files in your ~/Downloads directory (~/ is an easy way to represent your home directory; in our earlier example, that would be /home/jack/) and you want them in ~/Music. You could quickly move them with a single command, like so:

mv ~/Downloads/*.mp3 ~/Music/

That command would take every file ending in .mp3 in the Downloads directory and move it into the Music directory.
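You can try the rename, relocate, and wildcard moves safely in a scratch tree before running them on real files (the file names here are just placeholders):

```shell
cd "$(mktemp -d)"                 # throwaway stand-in for your home directory
mkdir Documents Downloads Music
touch testfile Downloads/a.mp3 Downloads/b.mp3
mv testfile testfile2             # rename in place
mv testfile2 Documents/           # relocate, keeping the name
mv Downloads/*.mp3 Music/         # wildcard move
ls Documents Music
```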

Should you want to move a file into the parent directory of the current working directory, there’s an easy way to do that. Say you have the file testfile located in ~/Downloads and you want it in your home directory. If you are currently in the ~/Downloads directory, you can move it up one folder (to ~/) like so:

mv testfile ../ 

The “../” refers to the directory one level up, so the file is moved into the parent directory. If you’re buried deeper, say in ~/Downloads/today/, you can still easily move that file with:

mv testfile ../../

Just remember, each “../” represents one level up.
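Here is the two-levels-up move from the example, rehearsed in a scratch directory:

```shell
cd "$(mktemp -d)"                 # scratch root standing in for ~/
mkdir -p Downloads/today
touch Downloads/today/testfile
cd Downloads/today
mv testfile ../../                # each ../ climbs one level: up to the root
cd ../..
ls testfile                       # the file now sits two levels up
```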

As you can see, moving files from the command line isn’t difficult at all.

GUI

There are a lot of GUIs available for the Linux platform. On top of that, there are a lot of file managers you can use. The most popular file managers are Nautilus (GNOME) and Dolphin (KDE). Both are very powerful and flexible. I want to illustrate how files are moved using the Nautilus file manager.

Nautilus has probably the most efficient means of moving files about. Here’s how it’s done:

Open up the Nautilus file manager.
Locate the file you want to move and right-click said file.
From the pop-up menu (Figure 1) select the “Move To” option.
When the Select Destination window opens, navigate to the new location for the file.
Once you’ve located the destination folder, click Select.

This context menu also allows you to copy the file to a new location, move the file to the Trash, and more.

If you’re more of a drag and drop kind of person, fear not – Nautilus is ready to serve. Let’s say you have a file in your home directory and you want to drag it to Documents. By default, Nautilus will have a few bookmarks in the left pane of the window. You can drag the file into the Documents bookmark without having to open a second Nautilus window. Simply click, hold, and drag the file from the main viewing pane to the Documents bookmark.

If, however, the destination for that file is not listed in your bookmarks (or doesn’t appear in the current main viewing pane), you’ll need to open up a second Nautilus window. Side by side, you can then drag the file from the source folder in the original window to the destination folder in the second window.

If you need to move multiple files, you’re still in luck. Similar to nearly every modern user interface, you can do a multi-select of files by holding down the Ctrl button as you click each file. After you have selected each file (Figure 2), you can either right-click one of the selected files and then choose the Move To option, or just drag and drop them into a new location.

The selected files (in this case, folders) will each be highlighted.

Moving files on the Linux desktop is incredibly easy. Either with the command line or your desktop of choice, you have numerous routes to success – all of which are user-friendly and quick to master.

