May 21, 2014 at 11:08 am #170819 | nixnerd (Participant)
I’m just joking. I installed cmatrix and just ran it in my terminal to see how awesome it was… Guess I should follow the white rabbit now.
May 21, 2014 at 2:26 pm #170825 | nixnerd (Participant)

Hey @traq, I've got a question about file permissions on this setup.

/srv/http, which is the localhost, is owned by the root user. Now, it's obviously not located in the / directory, but it's got very strict permissions set and root owns it. This means I can create read-only files with my user but I CANNOT mkdir anywhere inside of /srv/http.

Now, this really isn't a big deal. But it's preventing me from git cloning in that location. So... I have two options. I can either keep root as the owner, and every time I want to clone/pull with git I'll have to sudo. Or I can chmod/chown the directory to either lessen the permissions (probably a horrible idea), or take control with my user. That's probably also a bad idea.

But, consider this: according to GitHub, and I quote:
You should not be using the sudo command with Git.
Read all about it here:
https://help.github.com/articles/error-permission-denied-publickey
So, what do you think is the smartest thing to do in this situation?
May 21, 2014 at 2:52 pm #170827 | __ (Participant)

Who is your user? Is it the same user as your webserver user?
Two ideas (and, sorry, I'm not sure exactly how they work; it's been a while since I did this). Also note that, even though I've done this, I'm a casual, not a sysadmin. You'll have to mess around with it a bit.
- If your user is not the webserver user, then each user can have different permissions for the group. I think you'd probably make your user the owner, with full permissions, and use lesser (group) permissions for the webserver user.
- Keep in mind that if your webserver can't write, you can't have your scripts do any filesystem stuff. (e.g., uploading, or even PHP sessions, might stop working if you lock things down too tight.) As long as the webserver user cannot write outside of the webserver directory, and you're careful about its bash profile and what files it is allowed to execute, you should be okay. If your code is modular enough, you can keep it all in a r+x directory (so it can be used but not messed with), and set aside r+w+x directories for the webserver to have free rein in.
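A minimal sketch of that ownership split, assuming your login is "joe" and the webserver runs as user/group "http" (both names are placeholders; check your distro's defaults):

```shell
# On the real box (requires root; the "joe" and "http" names are assumptions):
#   chown -R joe:http /srv/http        # joe owns it, webserver group can read
#   chmod -R u=rwX,g=rX,o= /srv/http   # joe: read/write, webserver: read-only, others: nothing
# Toy demo of what u=rwx,g=rx,o= looks like, on a throwaway directory:
tmp=$(mktemp -d)
chmod u=rwx,g=rx,o= "$tmp"
ls -ld "$tmp" | cut -c1-10   # drwxr-x---
```

With that layout you can git clone into /srv/http as your own user, with no sudo, which is exactly what GitHub's advice is after.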
May 21, 2014 at 3:15 pm #170828 | nixnerd (Participant)

Alright, that all sounds reasonable. However... I think I'm asking the wrong question. Let's start here:
Is it even safe to clone a repo from github onto a live server? I’m going to say probably not, ESPECIALLY when I work with collaborators. How do I know their workstations are secure? Who has access to it? How do I know they didn’t push a shell script into the repo that I missed?
May 21, 2014 at 3:25 pm #170830 | nixnerd (Participant)

Is it even safe to clone a repo from github onto a live server?
I’ll readily admit that I’ve been configuring servers for about 3 days and I’m a little paranoid at this point. But… I wonder if the whole local machine >> github >> live server is the best and most secure workflow. I mean… I guess it’s all run off of SSH keys but still. Seems a little suspect and I can’t put my finger on why.
May 21, 2014 at 4:13 pm #170831 | __ (Participant)

I wonder if the whole local machine >> github >> live server is the best and most secure workflow.
No, I absolutely wouldn't do that. (And it "feels wrong" because you're making github an unchecked gateway to your website.) Really, though, it's less of a security concern, and more of a breaking-things concern.
What I would do (and have done, in fact) is more like this:
local machine » [github] » server (dev/testing) « server (live site)
So,
- local machine (contributor) pushes to github. (I leave github out sometimes, which is why it's in brackets above.)
- github auto-pushes to a dev repo on the server. This repo might also be a testing site. Additionally, the push may not be automatic, depending on how much you trust the contributor.
- the server (admin: you) pulls the dev repo to the live repo (the site itself) when ready. (This step is definitely not automated.)
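The "auto-pushes to a dev repo" step is typically a post-receive hook on the receiving repo. Here's a toy sketch of the mechanism, using throwaway directories instead of real /srv paths (every path and name below is illustrative):

```shell
set -e
tmp=$(mktemp -d)
git init -q --bare -b master "$tmp/site.git"   # stand-in for the server-side repo
mkdir "$tmp/dev"                               # the dev checkout the hook deploys into
cat > "$tmp/site.git/hooks/post-receive" <<EOF
#!/bin/sh
GIT_WORK_TREE=$tmp/dev git checkout -f master
EOF
chmod +x "$tmp/site.git/hooks/post-receive"
# a contributor commits and pushes; the hook deploys to the dev checkout:
git init -q -b master "$tmp/work"
cd "$tmp/work"
git config user.email dev@example.com
git config user.name dev
echo "hello" > index.html
git add index.html
git commit -qm "add page"
git push -q "$tmp/site.git" master
ls "$tmp/dev"   # index.html
```

The final "pull to the live repo when ready" step stays a manual git pull run by the admin, which is the point: nothing reaches the live checkout without a human.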
May 21, 2014 at 4:30 pm #170832 | __ (Participant)

The "dev repo" is also important because not all changes you push are going to be intended for the "live" site: future features, etc.
On github, I would make sure each dev (and/or each feature) has its own branch, and the admin (you) pulls them into master at your own pace; master is then auto-pushed to the server dev repo.
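That per-feature-branch flow, sketched on a throwaway repo (branch and file names are invented):

```shell
set -e
tmp=$(mktemp -d)
cd "$tmp"
git init -q -b master
git config user.email admin@example.com
git config user.name admin
git commit -q --allow-empty -m "init"
git checkout -qb feature/login          # each dev/feature works on its own branch
echo "login page" > login.php
git add login.php
git commit -qm "add login page"
git checkout -q master                  # the admin merges at their own pace
git merge -q --no-ff -m "merge feature/login" feature/login
```

The --no-ff flag forces a merge commit even when a fast-forward is possible, so the history shows exactly when each feature landed in master.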
May 21, 2014 at 8:27 pm #170838 | nixnerd (Participant)

The "dev repo" is also important because not all changes you push are going to be intended for the "live" site: future features, etc.
Yeah, I branch the shit out of my repos. Maybe I merge… maybe I don’t.
local machine » [github] » server (dev/testing) « server (live site)
Now, do the staging server and the production server have to be separate servers? Or do you just cordon off a part of the same VPS? Do you partition it? Ideally, I’d like to have it be as separate as possible.
Think about it… the whole point of Github is that you can upload your code and work with other people. However… what if there’s some malicious PHP that gets merged into the master? You can put it on your stage server/area but that’s not guaranteed to be enough of a quarantine.
But… what’s the alternative? Pull it to your home machine? All things being equal, I’d rather have something go wrong on my server. That’s WAY easier to fix.
Sorry I’m still hung up on security. I know I need a staging area and I’ll set that up… but I’m more concerned about security to be honest.
May 21, 2014 at 8:29 pm #170839 | nixnerd (Participant)

what if there's some malicious PHP that gets merged into the master?
I get that there should ideally be some code review but when you’re dealing with 20,000 SLOC… it’s sometimes hard to spot everything that could be an issue.
May 22, 2014 at 12:22 am #170844 | __ (Participant)

Now, do the staging server and the production server have to be separate servers?
They don't have to be. They could be.
what if there’s some malicious PHP that gets merged into the master?
How many people do you plan on being allowed to push-at-will? Requests, sure. But actually move code to master, and the server? If your team is big enough, I could see having a few people that can push through to the server (dev) on their own authority, but not so many that you wouldn’t have selected them and know they are dependable. And I’d still have one specific person responsible for the live site.
And you forget: it’s git. If something goes bad, revert. Being distributed, you’re sure to have clean copies in a few places even if you don’t make off-site backups intentionally (but you should, of course).
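The "if something goes bad, revert" escape hatch, on a throwaway repo (file contents are invented):

```shell
set -e
tmp=$(mktemp -d)
cd "$tmp"
git init -q -b master
git config user.email admin@example.com
git config user.name admin
echo "good code" > app.php
git add app.php
git commit -qm "good release"
echo "malicious code" > app.php
git add app.php
git commit -qm "bad change slipped through"
git revert --no-edit HEAD    # a new commit that undoes the bad one
cat app.php                  # good code
```

Note that revert adds an "undo" commit rather than rewriting history, so every clone still agrees on what happened, which matters when you have collaborators.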
I get that there should ideally be some code review but when you’re dealing with 20,000 SLOC…
Code review should cover every line, at some point. You might not look at a particular line personally, but break it into components and try to arrange it so at least two people do (one of whom was not involved in writing it).
How much of this is a practical problem, vs. theoretical? Are you actually setting up a million-line web app with thousands of contributors?
If so, then I’d say it’s time for a different process. IMO, completely auto-pushing works best for one-man teams with the occasional outside contribution. Large teams with big projects need more bureaucracy.
May 22, 2014 at 8:33 am #170858 | nixnerd (Participant)

This is admittedly more of a theoretical exercise.
However... I think the only thing I'm really missing is a staging area / dev server. I have achieved total parity between my dev environment and production server... which is rare, unless you run Debian on both. Literally, my local dev environment is exactly like my server.
However, it would be nice to have a test environment on a server for device testing.
What do you think is the best way to set this up? ...I was thinking about a subdomain that isn't indexed and requires login. What are your thoughts?
Btw, I'm just really interested in sysadmin and devops at the moment. That's why I'm spending so much time on this.
May 22, 2014 at 9:33 am #170862 | nixnerd (Participant)

(And it "feels wrong" because you're making github an unchecked gateway to your website.)
Is it really an unchecked gateway though? Let's say someone obtains my password to log in to Github... WHICH I DOUBT! But even still... let's assume they do. They could attempt to add a new SSH key to the account... but I'd be alerted of that. Let's say they got my email too, though... so I don't know. They still can't push to/pull from the server without getting control of that... which would be much harder. As long as I monitor the SSH keys associated with the account, I should be fine. Right??
May 22, 2014 at 11:21 am #170868 | James Burton (Participant)

Hello,

However, it would be nice to have a test environment on a server for device testing.
I would make a copy of your release server, or set up a VM on my home network and get some IP addresses from my ISP to link with my test domain name.
What do you think is the best way to set this up? ...I was thinking about a sub domain that isn't indexed and requires login. What are your thoughts?
Yes, a login is a good idea, and if you have a test domain name (for example, example.co.uk) you can create a test environment where you know someone will not end up on one of your dev sites by typing in the wrong URL.
Thank you

From
James Burton

May 22, 2014 at 11:24 am #170869 | nixnerd (Participant)

I would make a copy of your release server, or set up a VM on my home network and get some IP addresses from my ISP to link with my test domain name.
This is a good idea… but might require some hardware in my case.
What do you think is the best way to set this up?
I think I found the answer to this question: https://www.docker.io/
May 22, 2014 at 1:24 pm #170874 | nixnerd (Participant)

For anyone reading this in the future (Hi, future people!)... Linux Containers, or LXC, seem to solve every single problem that I've laid out here. They address both staging and security and MUCH MORE!
My new plan is to spin up a new VPS with three Linux Containers. One to run the real, production LEMP stack, one to test and stage code on, and one to check for package breaks upon updating. All three will have complete and total parity. They will have the EXACT same packages installed and all of them will be modeled after my Dev environment. This will give me MULTIPLE opportunities to catch mistakes, bugs, security flaws and package breaks.
Keep in mind… all three of these come after the local testing/github phase in the workflow.
Also, these are essentially chroot jails on steroids... as one person put it. Even if someone gains access to your server through your site... they will be stuck in a container that is completely separate from the rest of the system.
Also, some of you might be thinking "Wow, three virtual environments on one server... that's going to mean a performance hit." Well, not really. LXC is different from Xen or KVM, in that it doesn't emulate the hardware at all. So, there's not the same resource overhead that hypervisors have.
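The three-container plan might look something like this with the LXC 1.x command-line tools (container names are invented, and this is a sketch only: it needs root and a host with the lxc tools and templates installed, so it won't run anywhere else):

```shell
# All three containers built from the same template, for parity
# (the web-* names are placeholders):
sudo lxc-create -t debian -n web-live     # production LEMP stack
sudo lxc-create -t debian -n web-stage    # staging / testing
sudo lxc-create -t debian -n web-canary   # try package updates here first
sudo lxc-start -d -n web-live             # start the live container in the background
sudo lxc-ls --fancy                       # list containers, state, and IPs
```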
The forum 'Other' is closed to new topics and replies.