Tech Question about my Linux Server

My favourite of those was at job N+1, where they had the heavily customised sendmail.cf but not the source that had been used to create it. (Not m4, oh no, it had been written in their own more specialised macro language.)

2 Likes

The saga continues…
For two days now, around 4pm, my webserver has suddenly stopped working, or more correctly “stopped answering” (it was working just fine, as a local wget told me).

Yesterday, it recovered miraculously after an hour of panicked “debugging” (aka searching for error logs, upgrading php, and restarting apache/nginx and then the whole server). Today… it would eventually have recovered on its own. 503 is a suspicious error code though, isn’t it? And mail was still working, and so was my plesk panel…

It turns out I had fail2ban-ned our own IP on ports 80/443. In Germany you still get disconnected once a day and get a new IP address, so it’s not as easy as just putting down your personal IP address as “trustworthy” (although as a temporary fix I did just that for now).

I am 90% sure the reason the server thinks we are attempting to hack it is this: after I had to reinstall nextcloud, I moved it to another subdomain, and my partner forgot to change his caldav/carddav sync credentials, because apparently he cannot do that without my help and we were too busy during the weekend. 4pm is precisely the time caldav and carddav try to sync.
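For the record, the temporary “trustworthy” fix above can be sketched as a fail2ban override (the IP is a placeholder, and the path assumes a standard Debian-ish layout):

```ini
# /etc/fail2ban/jail.local -- sketch, not a literal config
[DEFAULT]
# IPs fail2ban should never ban. With a daily-changing home IP this entry
# has to be refreshed whenever the provider hands out a new address.
ignoreip = 127.0.0.1/8 ::1 203.0.113.42
```

Unbanning an already-banned address goes through `fail2ban-client set <jail> unbanip <ip>`.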

Oh the neverending adventures that start once one touches a “running” system.

PS: this is definitely not the first time I’ve firewalled myself out of my server. Yay for rescue systems although in this case it isn’t even necessary.

3 Likes

There was a router (commercial, not the little home boxes) which I won’t name though the maker rhymed with “pretend”. To pass packets faster, it would split them in two: most of the content went into a FIFO-ish ring buffer, source and destination IP address plus buffer index went off as their own data structure to go through routeing rules.

Sometimes under heavy load this ring buffer got out of sync.

So we (the ISP I was working for) got customers saying “why are you trying to access my DNS server”, because someone’s DNS query had got tagged with another customer’s address…

One of the admins got “pretend” to commit to replacing all the boxes with real routers bought from a competitor if they couldn’t fix it within six months. They failed to fix it and paid up.

3 Likes

After a downgrade to ubuntu 18 (*), I successfully installed Foreman :slight_smile: Yay!

Next up certbot.

As I’ve been running everything via plesk lately and that comes with both apache and nginx… I am a bit out of the loop. Does anyone still run a barebones apache? I mean I do not expect any high traffic on anything I do and I will not have multiple nodes that need to be balanced and I kind of feel proxy-ing everything through an additional layer is overkill and error-prone. The only reason I’d be willing to make the effort is if there is an added security bonus.

I know apache configs way better than nginx so I am hesitant to have to configure two servers for every subdomain.

(*) telling hetzner: yes please just re-install the whole thing. there was nothing on it but my user account anyway

1 Like

I do, when I can. In fact, I run it on my Windows machine for my own personal development work, as well as for work (where it’s not quite barebones).

I’ve always made every effort to avoid plesk, which seems to give linux users all the benefits of Windows server administration.

2 Likes

My aforementioned 10+ year old server started out with a plesk-like software package to manage it. I honestly don’t remember what it was called, and the only portion of that software I still use is its cron scripts/templates.

I had to drop out of the management UI early in the server’s lifespan due to how it was (or, rather, wasn’t) managing my local DNS zones; to this day, BIND still doesn’t behave correctly on that server and I have no idea why (probably some opinionated config put in place by the software; in my experience very few people actually know BIND… they just know zonefile syntax. I’m no different and cannot suss out why this installation acts strangely).
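Not that it will crack this particular mystery, but the sanity checks I would reach for on a misbehaving BIND (assuming standard file locations, which the management software may well have moved) are:

```shell
named-checkconf /etc/named.conf                        # syntax-check the main config
named-checkzone example.org /var/named/example.org.db  # validate one zone file
dig @127.0.0.1 example.org SOA +short                  # does the local server answer?
```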

1 Like

Barebones Apache except where it’s barebones lighttpd (or for this place nginx but that’s not really my setup, just a standard config).

Admin is done by ssh root@server and editing config files. I’m not managing a great big farm full of identical servers, I’m managing a small number of specific ones.

I’ve never wanted a GUI thingy to get between me and the config files. :slight_smile:

2 Likes

I always advise not to do this. SSH as a user and then sudo.

1 Like

First thing I do is disable password auth, and second, root login.
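In `sshd_config` terms that’s just two directives (a sketch; reload sshd afterwards):

```
# /etc/ssh/sshd_config -- the relevant lines
PasswordAuthentication no   # keys only
PermitRootLogin no          # or "prohibit-password" to still allow root with a key
```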

1 Like

I won’t say you’re wrong in the general case, but in mitigation:

  • I’m the only person who gets root
  • I have a normal user account too, which I use for anything that doesn’t need root
  • root password login is completely disabled; that ssh is with my privkey
2 Likes

The horror of a datacenter burning down oO…

Strassbourg isn’t very far from here. And apparently Hetzner ran an add “check out our new firewalls” right after. I really hope this had been scheduled long before the fire.

As I am still fighting with my new server setup, I did a panicky manual offsite backup today. Offsite being our local NAS.

The whole thing has supposedly automated backups in place but there is too much plesk magic involved for me to trust any of this. Which is incidentally the main reason for my new server.

1 Like

OVH being bottom-feeders on cost, this incident doesn’t necessarily suggest anyone else is at risk. (I have a box with them, for reasons, and they put out a series of work notices at all their other sites to “replace a large number of power supply cables that could have an insulation defect”.)

That said, I back up everything onto my home NAS (which is bigger, at 44 TiB, than anything I could afford elsewhere) every day, including this place. And the home NAS onto the backup home NAS.

I gather the FPS survival game “Rust” has lost over a week of player data, upgrades, and so on, which has hacked off its users quite a bit. And the Centre Pompidou is still offline.

Cloud just means “somebody else’s computer”.

1 Like

In my old job (as part of a security organization for a large US telecom), most of my peers had memes or fake motivational posters in their cubes with this mantra.

It’s scary how to laypersons (including most executives), “cloud” = “magic”… but it’s really just “cloud” = “implicit trust in a shared platform owned by someone else”

3 Likes

I am resurrecting the techie thread so I am not posting this stuff in the general “how are you” thread, but I just need to write my frustration out.

Apparently a 2-year break isn’t enough to forget how to program, but software development is a lot more than that, and in addition I have to use an IDE I have never used before while the key-bindings from the old one are hard-wired into my hands. I have had to switch to light mode because in dark mode I cannot find the error messages. And while IntelliJ looks quite good and seems to be very powerful, I miss Eclipse. I miss remembering all the key-bindings and refactoring everything on the fly. I took the whole tutorial, made notes and a cheat-sheet, but it’s horrible. The only thing I remember is that Shift-Shift triggers a search.

And while I love working from home, getting to know a legacy project of this size (it got started around the year 2000) without ever meeting colleagues in person is hard.

Also, infrastructure problems with proxy servers, the WSL subsystem and docker from the start… that have nothing to do with me.

And to top it off they need an OAuth implementation, which is really not that difficult to code but notoriously difficult to organize. And the organization is huge enough that it is hard to get all participants to a table. Or several tables. And now I am helping out the colleague who got stuck with that task and who has to struggle with the OAuth spec. I think I managed to convince him that the only valid flow is the auth-code flow, and that there is a solution that does not involve browser popups in a native app… obviously this also involves ADFS and Keycloak and… meeeh. I must say though, the MS documentation for developers looks much better than for users. I mean, it actually exists.
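For what it’s worth, the front half of that auth-code flow is just building one redirect URL; a minimal sketch in Python (endpoint, client id and redirect URI are made-up placeholders, and the PKCE values would normally be generated per request):

```python
from urllib.parse import urlencode

# Hypothetical values -- substitute your ADFS/Keycloak realm and client.
AUTHORIZE_URL = "https://idp.example.com/oauth2/authorize"
CLIENT_ID = "my-native-app"
REDIRECT_URI = "http://127.0.0.1:8400/callback"  # loopback redirect: no embedded browser popup

def build_auth_request(state: str, code_challenge: str) -> str:
    """Build the authorization-code request URL (PKCE variant, as used for native apps)."""
    params = {
        "response_type": "code",          # this is what makes it the auth-code flow
        "client_id": CLIENT_ID,
        "redirect_uri": REDIRECT_URI,
        "scope": "openid profile",
        "state": state,                   # CSRF protection, verified on the callback
        "code_challenge": code_challenge, # PKCE: derived from a per-request secret
        "code_challenge_method": "S256",
    }
    return f"{AUTHORIZE_URL}?{urlencode(params)}"

print(build_auth_request("xyz123", "placeholder_challenge_value"))
```

The app then listens once on the loopback port, receives the `code`, and swaps it for tokens at the token endpoint; that exchange is where the ADFS/Keycloak specifics come in.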

PS: remote desktop = bad.

1 Like

I know IDEs have many advantages but my preferred style is still save, mouse to terminal window, cursor up, return (runs compile and test).

In my world, where we have more files in the code base than most code bases have lines of code, an IDE is pretty much a requirement, just to figure out how to use things and to cope with the style rules (there are actually command-line tools for that (which is what the IDE is using, more or less), which work well, but are a little bit of a pain). I do use vi key bindings though, because, well, I’m not a savage.

1 Like

I shouldn’t be doing what I am doing as I evidently do not know what I am doing. And yet here I am.

So maybe one of you knows more than the rest of the internet.

My plan is to have the following setup working

  • windows laptop: :white_check_mark:
  • WSL2: :white_check_mark:
  • ansible controller (or whatsitcalled) inside WSL2 :white_check_mark:
  • vagrant creating VMs on windows with virtualbox as provider :white_check_mark:
  • vagrant installed and nominally working in WSL2 :white_check_mark:
  • vagrant actually creating VMs from WSL2 … NOPE

later…:

  • testing my ansible playbooks via vms created with vagrant
  • deploying my tested playbooks to my server
  • checking my configurations into git

(and that is without getting into WHAT I want to do on the server. Setting it all up more or less manually, I would probably be done already but that is not the point)

Vagrant does create VMs when I start it inside WSL2, but it hangs on testing the ssh connection, because it doesn’t have access to the network the VM lives in, and I have now spent a whole day trying to figure out a way around this. Running the same Vagrantfile in powershell gives me a VM I can log into.
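One thing worth ruling out first: Vagrant’s own WSL support has to be switched on explicitly. This is from the (WSL1-era) Vagrant docs, so it may or may not survive WSL2’s NAT’d network, but without it vagrant inside WSL can’t properly drive the Windows-side VirtualBox at all:

```
# ~/.bashrc inside WSL -- per Vagrant's "Vagrant and Windows Subsystem for Linux" docs
export VAGRANT_WSL_ENABLE_WINDOWS_ACCESS="1"
export PATH="$PATH:/mnt/c/Program Files/Oracle/VirtualBox"
```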

So my best guess is that I need to improve my networking configuration on WSL2.

(The other way round works fine. E.g. for work I am running docker inside WSL2 and I have a powershell script that does port-forwarding from WSL2 to windows. But I cannot get the opposite direction to work.)
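That working Windows→WSL2 direction usually boils down to a `netsh` portproxy; a sketch of such a script (the port number is an example), for contrast with the missing reverse direction:

```powershell
# PowerShell (elevated) -- forward Windows port 8080 into WSL2
$wslIp = (wsl hostname -I).Trim().Split(" ")[0]   # WSL2's current (dynamic) address
netsh interface portproxy add v4tov4 `
    listenaddress=0.0.0.0 listenport=8080 `
    connectaddress=$wslIp connectport=8080
```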

Does anyone have any expertise in this? Am I even asking the right questions? I have already exhausted my local network-able friends and stackoverflow and superuser have a lot of posts on this but most people cannot even get Virtualbox and WSL2 running at the same time which is not my problem.

I will also take any other setup that allows me to test ansible playbooks before I use them on my live server. It does need to be something though that works from commandline. If necessary even powershell (I hate powershell)… but ansible generally doesn’t like windows. It’s a surprise it deigns to work inside WSL.

2 Likes

I haven’t done vagrant via wsl2, but docker via wsl2 is easier than you could possibly imagine.

Docker I’ve got already–thanks to needing it for work :slight_smile:
The problem is mostly that networking vs me is a fight I am losing…
Especially with all the host-only vs bridged vs NAT shenanigans that Virtualbox is doing, and with WSL2 being a VM itself…

I have almost resigned myself to compromising on the setup. I imagine I could set up the vagrant machine from powershell, which works, then manually make that machine visible to WSL somehow, and instead of having vagrant provision with ansible directly, trigger ansible against the VM by hand.

This leaves me with enough pieces to figure out the full process later and still gives me a way to test before deploying.

I even made diagrams of what I want it to look like on the server…

  • docker container for my 2 wordpress blogs
  • docker container for my mail server setup with postfix/dovecot/roundcube
  • docker container for nextcloud
  • docker container for my own homebrew tools and static webcontent

all nicely hidden behind a firewall and with an nginx in front to distribute everything; a bit of basic port forwarding isn’t too complicated for me. The most complicated part of the main server is going to be figuring out letsencrypt. I’ve used plesk for that up to now, and even that created some problems.
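The nginx side of that fan-out would be one small server block per subdomain, roughly like this sketch (hostname, port and cert paths are placeholders; the container is assumed to publish on localhost):

```nginx
# sketch: one vhost proxying to the wordpress container
server {
    listen 443 ssl;
    server_name blog.example.org;

    # certbot (letsencrypt) would manage these two paths
    ssl_certificate     /etc/letsencrypt/live/blog.example.org/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/blog.example.org/privkey.pem;

    location / {
        proxy_pass http://127.0.0.1:8081;   # container's published localhost port
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto https;
    }
}
```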

If I get this configured into an ansible playbook, I have configurations that I can commit to my git instead of keeping a running tally of what I did in some kind of text document.
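A playbook along those lines could start as small as this sketch (hypothetical names; assumes the `community.docker` collection is installed):

```yaml
# playbooks/webstack.yml -- sketch of one such task
- hosts: webservers
  become: true
  tasks:
    - name: wordpress container, published only on localhost for nginx to proxy
      community.docker.docker_container:
        name: blog
        image: wordpress:6
        published_ports:
          - "127.0.0.1:8081:80"
        restart_policy: unless-stopped
```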

The only thing I haven’t quite figured out is how to do a clean separation and backup of my data so it doesn’t get mixed up with the configurations.

The thing is that part is fun because I have done most of these things before successfully. And by the time I am there I am working with linux and not windows and there will be some progress… stuff I can look at in a browser and see my work getting done. Getting the tools ready is not visible and sooo frustrating. The worst part of my current job was setting up my workspace.

2 Likes

Oh foobar, this has been ages. But with job and renovations somehow this never got done.

So I am somewhat back at it, because my nextcloud is acting up. Mail delivery is also deteriorating, although I am unsure whether it is really me or just some big a…hole providers blocking off small mailservers as “not trustworthy” (looking at you, gmail).

So my biggest hurdle… seems to be domains. I want to be able to test the new server while leaving the old one up and running. So what I need is some kind of temporary domain (I have enough domains to set something up), which I then want to switch back to the one that is in use now. Is there a way to do this somewhat safely (meaning I don’t want hours of downtime because of stupid editing mistakes during the switchover)?

This is not a question about DNS, my DNS provider has a nice web interface that allows me to change around where my Domains and or subdomains or services are pointing at any time.

This is purely how to minimize impact while I am switching over from test to production mode.

In my mind I am doing most domain configurations for anything web in nginx and then forwarding to the docker container in question.

But what about the mailserver? This is also the most sensitive to lengthy downtimes… everything else is really quite optional.

I have given up on doing anything fancy with ansible et al. I would be happy if I just got a basic server up and running without plesk.