curl https://some-url/ | sh

I see this all over the place nowadays, even in communities that, I would think, should be security conscious. How is that safe? What’s stopping the downloaded script from wiping my home directory? If you use this, how can you feel comfortable?

I understand that we have the same problems with the installed application, even if it was downloaded and installed manually. But I feel the bar for making a mistake in a shell script is much lower than in whatever language the main application is written in. Don’t we have something better than “sh” for this? Something with less power to do harm?

  • rah@feddit.uk · 4 months ago

    How is that safe?

    It’s not, it’s a sign that the authors don’t take security seriously.

    If you use this

    I never do.

  • esa@discuss.tchncs.de · 4 months ago

    This is simpler than the download, ./configure, make, make install steps we had some decades ago, but not all that different in that you wind up with arbitrary, unmanaged stuff.

    Preferably use the distro’s native packages, or else its build system if one is easily available (e.g. the AUR on Arch).

  • zygo_histo_morpheus@programming.dev · 4 months ago

    You have the option of piping it into a file instead, inspecting that file for yourself and then running it, or running it in some sandboxed environment. Ultimately though, if you are downloading software over the internet you have to place a certain amount of trust in the person you’re downloading the software from. Even if you’re absolutely sure that the download script doesn’t wipe your home directory, you’re going to have to run the program at some point and it could just as easily wipe your home directory at that point instead.
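
    A minimal sketch of both options (podman is an assumption here, docker works the same way; the URL is the OP’s placeholder):

      # inspect first
      curl -fsSL https://some-url/ -o install.sh
      less install.sh
      # or run it in a throwaway container whose filesystem is discarded on exit
      podman run --rm -it -v "$PWD:/work:ro" -w /work debian:stable sh install.sh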

    • cschreib@programming.dev (OP) · 4 months ago

      Indeed, looking at the content of the script before running it is what I do if there is no alternative. But some of these scripts are awfully complex, and manually parsing the odd bash stuff is a pain, when all I want to know is: 1) what URL are you downloading stuff from? 2) where are you going to install the stuff?
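
      A rough first pass for exactly those two questions might look like this (install.sh saved locally beforehand; a heuristic, not a substitute for reading the script):

        # anything that looks like a download
        grep -nE 'https?://|curl |wget ' install.sh
        # anything that looks like an install destination
        grep -nE '/usr/local|\$HOME|PREFIX|install -|cp |mv ' install.sh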

      As for running the program, I would trust it more than a random deployment script. People usually place more emphasis on testing the former, not so much the latter.

    • rah@feddit.uk · 4 months ago (edited)

      You have the option of piping it into a file instead, inspecting that file for yourself and then running it, or running it in some sandboxed environment.

      That’s not what projects recommend though. Many recommend piping the output of an HTTP transfer over the public Internet directly into a shell interpreter. Even just

      curl https://... > install.sh && sh install.sh
      

      would be one step up. The absolute minimum recommendation IMHO should be

      curl https://... > install.sh && less install.sh && sh install.sh
      

      but this is still problematic.

      Ultimately, installing software is a laborious process which requires care, attention and the informed use of GPG. It shouldn’t be simplified for convenience.
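
      For what the “informed use of GPG” step can look like, a minimal sketch (the URLs and filenames are placeholders; the signing key must have been obtained and verified out of band beforehand):

        # fetch the artifact and its detached signature
        curl -fsSLO https://example.org/tool.tar.gz
        curl -fsSLO https://example.org/tool.tar.gz.asc
        # verify the signature against the already-imported, already-trusted key
        gpg --verify tool.tar.gz.asc tool.tar.gz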

      Also, FYI, the word “option” implies that I’m somehow restricted to a limited set of options in how I can use my GNU/Linux computer, which is not the case.

      • gaylord_fartmaster@lemmy.world · 4 months ago

        Showing people that are running curl piped to bash the script they are about to run doesn’t really accomplish anything. If they can read bash and want to review the script then they can by just opening the URL, and the people that aren’t doing that don’t care what’s in the script, so why waste their time with it?

        Do you think most users installing software from the AUR are actually reading the PKGBUILDs? I’d guess it’s a pretty small percentage that do.

        • rah@feddit.uk · 4 months ago (edited)

          Showing people that are running curl piped to bash the script they are about to run doesn’t really accomplish anything. If they can read bash and want to review the script then they can by just opening the URL

          What it accomplishes is providing the instructions (i.e. an easily copy-and-pastable terminal command) for people to do exactly that.

          • gaylord_fartmaster@lemmy.world · 4 months ago

            If you can’t review a bash script before running it without having an unnecessarily complex one-liner provided to you to do so, then it doesn’t matter because you aren’t going to be able to adequately review a bash script anyway.

            • rah@feddit.uk · 4 months ago

              If you can’t review a bash script before running it without having an unnecessarily complex one-liner provided to you

              Providing an easily copy-and-pastable one-liner does not imply that the reader could not themselves write such a one-liner.

              Having the capacity to write one’s own commands doesn’t imply that there is no value in having a command provided.

              unnecessarily complex

              LOL

              • gaylord_fartmaster@lemmy.world · 4 months ago

                I don’t think you realize that if your goal is to have a simple install method anyone can use, even redirecting the output to install.sh like in your examples is enough added complexity to make it not work in some cases. Again, those are not made for people that know bash.

                • rah@feddit.uk · 4 months ago (edited)

                  even redirecting the output to install.sh like in your examples is enough added complexity to make it not work in some cases

                  You can’t have an install method that works in all cases.

                  if your goal is to have a simple install method anyone can use

                  Similarly, you can’t have an install method anyone can use.

      • zygo_histo_morpheus@programming.dev · 4 months ago

        I mean if you think that it’s bad for Linux culture because you’re teaching newbies the wrong lessons, fair enough.

        My point is that most people can parse that they’re essentially asking you to run some commands at a URL, and if you have even a fairly basic grasp of Linux it’s easy to do that in whatever way you want. I don’t know if I personally would be any happier if people took the time to lecture me on safety habits, because I can interpret the command for myself. curl https://some-url/ | sh is terse and to the point, and I know not to take it completely literally.

        • rah@feddit.uk · 4 months ago

          Linux culture

          snigger

          you’re teaching newbies the wrong lessons

          The problem is not that it’s teaching bad lessons, it’s that it’s actually doing bad things.

          most people can parse that they’re essentially asking you to run some commands at a URL

          I know not to take it completely literally

          Then it needn’t be written literally.

          I think you’re giving the authors of such installation instructions too much credit. I think they intend people to take it literally. I think this because I’ve argued with many of them.

  • mesa@piefed.social · 4 months ago

    I usually just take a look at the code with a GET request. Then, if it looks good, I run it manually. Most of the time it’s fine. Sometimes there’s something that would break something on the system.

    I haven’t seen anything explicitly nefarious, but it’s better to be safe than sorry.

  • communism@lemmy.ml · 4 months ago

    Just direct it into a file, read the script, and run it if you’re happy. The one-liner is just a shorthand that spares you from saving a script that will only be used once.

  • Artyom@lemm.ee · 4 months ago

    What’s stopping the downloaded script from wiping my home directory?

    What’s stopping any Makefile, build script, or executable from running rm -rf ~? The correct answer is “nothing”. PPAs are similarly open. Things are a little safer if you only use your distro’s default package sources, but a program may legitimately need to delete something in your home directory, so it always has permission to do so.

    Containerized apps are the only way around this, where they get their own home directory.
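
    A minimal sketch of that idea (podman is an assumption; “some-app” is a placeholder image name):

      # give the app a named volume as its home; the real $HOME is never mounted
      podman volume create some-app-home
      podman run --rm -it -v some-app-home:/home/user -e HOME=/home/user some-app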

      • brian@programming.dev · 4 months ago

        Plenty of package managers have.

        Flatpak doesn’t require any admin rights to install a new app.

        NixOS doesn’t run any code at all on your machine just for adding a package, assuming it’s already been cached; if it hasn’t been cached, the build runs in a sandbox. The cases other package managers handle with post-install configuration scripts use a different mechanism, which may or may not have root access depending on what it is.
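
        For example, a per-user Flatpak install needs no root at all (assuming the flathub remote is configured; org.gimp.GIMP is just an illustrative app ID, and files land under ~/.local/share/flatpak):

          flatpak install --user flathub org.gimp.GIMP
          flatpak run org.gimp.GIMP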

  • lemmeBe@sh.itjust.works · 4 months ago

    I think a safer approach is to:

    1. Download the script first, review its contents, and then execute it.
    2. Ensure the URL uses HTTPS to reduce the risk of man-in-the-middle attacks (see the sketch below).
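
    Both points together, as a minimal sketch (the URL is a placeholder; --proto '=https' makes curl refuse anything but HTTPS, including redirects):

      curl --proto '=https' --tlsv1.2 -fsSL https://example.org/install.sh -o install.sh
      less install.sh
      sh install.sh
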
  • serenissi@lemmy.world · 4 months ago

    Unpopular opinion: these are handy for quickly installing in a new VM or container (usually throwaway), where one doesn’t have to think much unless the script breaks. People don’t install things on a host or production machine multiple times, so anything installed there is usually vetted, and most of the time from trusted sources like distro repos.

    For a normal threat model, it is not much different from downloading a compiled binary from somewhere other than well-trusted repos. The Windows software ecosystem is infamous for exactly this, yet it sticks around.

  • FizzyOrange@programming.dev · 4 months ago

    I understand that we have the same problems with the installed application, even if it was downloaded and installed manually. But I feel the bar for making a mistake in a shell script is much lower than in whatever language the main application is written in.

    So you are concerned with security, but you understand that there aren’t actually any security concerns… and actually you’re worried about coding mistakes in shitty Bash?

  • Scoopta@programming.dev · 4 months ago

    I also feel incredibly uncomfortable with this. Ultimately it comes down to whether you trust the application or not. If you do, then this isn’t really a problem, as they’re getting code execution on your machine regardless. If you don’t, well then don’t install the application. In general I don’t like installing applications that aren’t from my distro’s official repositories, mostly because I like knowing that at least they trust it and think it’s safe, as opposed to any software that isn’t, which is more of an unknown.

    Also, it’s unlikely for the script to be malicious if the application is not. Further, I’m not sure a manual install really protects anyone from anything. Inexperienced users will go to great lengths and jump through some impressive hoops to try and make something work, to their own detriment sometimes. My favorite example of this is the LTT Linux challenge. apt did EVERYTHING it could to warn that the Steam package was broken and that he probably didn’t want to install it, and instead of reading the error he just blindly typed out the confirmation statement. Nothing will save a user from ruining their system if they’re bound and determined to do something.

    • Scary le Poo@beehaw.org · 4 months ago

      In this case apt should have failed gracefully. There is no reason for it to continue if a package is broken. If you want to force a broken package, that can be its own argument.

      • Scoopta@programming.dev · 4 months ago

        I’m not sure that would’ve made a difference. It already makes you go out of your way to force a broken package. This has been discussed in places before, but the simple fact of the matter is that a user who doesn’t understand what they’re doing will persevere. Putting up barriers to protect users is a good thing; spending all your time and effort covering every edge case is a waste, because users will find ways to shoot themselves in the foot.

  • Lucy :3@feddit.org · 4 months ago

    Well yeah … the native package manager. It has the bonus that the installed files are tracked.
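
    That tracking is queryable; for instance (some-package is a placeholder name):

      pacman -Ql some-package   # Arch: list every file the package installed
      dpkg -L some-package      # Debian/Ubuntu
      rpm -ql some-package      # Fedora/openSUSE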

    • John Richard@lemmy.world · 4 months ago

      And official package maintainers are often a lot more security-conscious about how packages are built as well.

    • I agree.

      On the other hand, as a software author, your options are: spend a lot of time maintaining packages for Arch, Alpine, Void, Nix, Gentoo, Gobo, RPM, Debian, and however many other distro package managers; or wait for someone else to do it, which will often be “never”.

      The non-rolling distros can take a year to update a package, even if they decide to include it.

      Honestly, it’s a mess, and I think we’re in that awkward state Linux was in when everyone seemed to collectively realize SysV init sucks, and you saw dinit, runit, OpenRC, s6, systemd, upstart, and initng popping up - although many of these were started after systemd; the list is just for illustration. Most distributions settled on systemd, for better or worse. Now we see something similar: the profusion of package managers really is a Problem, and people are trying to address it with solutions like Snap, AppImages, and Flatpak.

      As a software developer, I’d like to see distros standardize on a package manager, but on the other hand, I really dislike systemd and feel as if everyone settling on the wrong package manager (cough Snap) would be worse than the current chaos. I don’t know if they’re mutually exclusive objectives.

      For my money, I’d go with pacman. It’s easy to write PKGBUILDs and to get packages into AUR, but requires users to intentionally use AUR. I wish it had a better migration process (AUR packages promoted to community, for instance). It’s fairly trivial for a distribution to “pin” releases so that users aren’t using a rolling upgrade.
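
      To illustrate how low the bar is, a hypothetical PKGBUILD for a made-up autotools-style tool can be this short (every name here is a placeholder):

        pkgname=some-tool
        pkgver=1.0.0
        pkgrel=1
        pkgdesc="A made-up example tool"
        arch=('x86_64')
        url="https://example.org/some-tool"
        license=('MIT')
        source=("$url/releases/$pkgname-$pkgver.tar.gz")
        sha256sums=('SKIP')   # a real package pins the checksum here

        build() {
          cd "$pkgname-$pkgver"
          make
        }

        package() {
          cd "$pkgname-$pkgver"
          make DESTDIR="$pkgdir" PREFIX=/usr install
        }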

      Alpine’s is also nice, and they have a really decent, clearly defined migration path from testing to community; but the barrier to entry for getting packages in is higher, it clearly requires much more work by a community of volunteers, and it can occasionally be frustrating for everyone: for us contributors who only interact with the process a couple of times a year, it’s easy to forget how they require things to be run, causing more work for reviewers; and sometimes an MR will just languish until someone has time to review it. There are some real heroes over there doing some heavy lifting.

      I’m about to go on a contribution journey with Void, which I expect to be similar to Alpine.

      Red Hat and Debian? All I can do is build packages for them and host them myself, and hope users can figure out how to find and install stuff without it being in The Official Repos.

      Oh, Nix. I tried, but the package definitions are a nightmare, and just getting enough of Nix onto your computer to test and submit builds takes gigabytes of disk space. I actively dislike working with Nix. GUIX is nearly as bad. I used to like Lisp - it’s certainly an interesting and educational tool - but I’ve started to object to it more and more as I encounter it in projects like Nyxt and GUIX, where you’re forced to use it if you want to do any customization.

      But this is the world of OSS: you either labor in obscurity; or you self-promote your software - which I hate: if I wanted to do marketing, I’d be in marketing. Or you hope enough users in enough distributions volunteer to manage packages for their distros that people can get to it. And you still have to address the issue of making it easy for people to use your software. curl <URL> | sh is, frankly, a really elegant, easy solution for software developers… if only it weren’t for the fact that the world is full of shitty, unethical people forcing us to distrust each other.

      It’s all sub-optimal and needs a solution. I’m not convinced the various containerizations are the right direction; does “rg” really need to be run in a container? Maybe it makes sense for big suites with a lot of dependencies, like Gimp, but even so, what’s the solution for the vast majority of OSS software, which is just little CLI or TUI tools?

      Distributions aren’t going to standardize on Arch’s APKBUILD, or Alpine’s almost identical but just slightly different enough to not be compatible PKGBUILD; and Snap, AppImage, and Flatpak don’t seem to be gaining broad traction. I’m starting to think something like a yay that installs into $HOME. Most systems are single-user, anyway; something that leverages Arch’s huge package repository(s), but can be used by any user regardless of distribution. I know Nix can be used like this, but then, it’s Nix, so I’d rather not.
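
      The underlying mechanism already exists for anything with a standard build system; a sketch of a rootless, per-user install (assuming an autotools-style project):

        ./configure --prefix="$HOME/.local"
        make && make install
        # make sure the result is on PATH
        export PATH="$HOME/.local/bin:$PATH"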

      • moonpiedumplings@programming.dev · 4 months ago

        The non-rolling distros can take a year to update a package, even if they decide to include it.

        There is a reason why they do this. Stable-release distros, particularly Debian, refuse to update packages beyond fixing vulnerabilities, as a way to ensure that the system changes minimally. This means that, for example, if a piece of software depends on a library, it will stay working for the lifecycle of a stable release. Sometimes latest isn’t the greatest.

        Distributions aren’t going to standardize on Arch’s APKBUILD, or Alpine’s almost identical but just slightly different enough to not be compatible PKGBUILD

        You swapped PKGBUILD and APKBUILD 🙃

        I’m starting to think something like a yay that installs into $HOME.

        Homebrew, in theory, could do this. But they insist on creating a separate user and installing to that user’s home directory.

        • There is a reason why they do this.

          Of course. It also prevents people from getting any improvements that aren’t security fixes. It’s especially bad for software engineers developing applications that need a non-security bug fix or a new feature. It’s fine if all you need is a box that’s going to run the same version of some software, sitting forgotten in a closet that gets walled in some day. IMO, it’s a crappy system for anything else.

          You swapped PKBUILD and APKBUILD 🙃

          I did! I’ve been trying to update packages in both, recently. The similarities are utterly frustrating, as they’re almost identical; the biggest difference between Alpine and Arch is the package process. If they were the same format - and they’re honestly so close it’s absurd - it’d make packagers’ lives easier.

          I may have mentioned I haven’t yet started Void, but I expect it to be similarly frustrating: so very, very similar.

          I’m starting to think something like a yay that installs into $HOME.

          Homebrew, in theory, could do this. But they insist on creating a separate user and installing to that user’s home directory.

          Yeah, I got to thinking about this more after I posted, and it’s a horrible idea. It’d guarantee that system updates break user installs, and the only way they wouldn’t is if system installs knew about user installs and also updated those, which would defeat the whole purpose.

          So you end up back with containers, or AppImages, Snap, or Flatpak. Although, of all of these, AppImages and podman are the most sane, since Snap and Flatpak are designed to manage system-level software, which isn’t much of an issue.

          It all drives me back to the realization that the best solution is statically compiled binaries, as produced by Go, Rust, Zig, Nim, V. I’d include C, but the temptation to dynamically link is so ingrained in C - I rarely see really statically linked C projects.
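
          For instance, with Go a fully static build is one environment variable away (mytool is a placeholder name):

            # with cgo disabled, the Go toolchain links statically by default
            CGO_ENABLED=0 go build -o mytool .
            file mytool   # should report “statically linked”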

      • Lucy :3@feddit.org · 4 months ago

        As an Arch user, yeah, PKGBUILDs are a very good solution, at least for Arch Linux specifically (or other distros with the same directory-tree best practices). I have implemented a dozen or so projects as PKGBUILDs, and use 150 or so from the AUR. It gives users a very easy way to install things essentially by hand while still keeping control of them. And you can just put a PKGBUILD in the AUR, so other users can either simply use it, or first read through, understand, maybe adapt, and then use it. It shows that packages needn’t be solely the author’s responsibility, nor solely the distro maintainers’.