Introduction\n\nAs a resource, the internet is a highly useful tool that connects a vast number of computers together across a shared global network. This brings some very real benefits, including the ability to quickly and easily share information among large groups of people spread out across different regions, as well as to collaborate effectively on ideas across these various territories. It has also revolutionised how we manage important services such as utility bills, shopping and our finances on the go. Judging by the ways our lives have been enriched by such a far-reaching, intertwined network, it is clear just how useful the internet is and how it could shape our lives over the coming years.\n\nWith these overwhelming benefits, it can be tempting to put all of our services online so that we can access them wherever and whenever we want. With a little research, we could easily host our computer files online ourselves using any of a myriad of available applications. We could also host our media, access office-based software and even control various internal processes across the web just by making them accessible on the internet. Just think how easy we could make our lives! Think of the possibilities, where the only restrictions would be anything we haven't yet put on the World Wide Web!\n\nExcept, it isn't that easy WHATSOEVER, because putting things online can be very dangerous if the risks haven't been properly considered. Remember the golden rule: "if I can access this information, then potentially unauthorised parties can also access the same information". When you make something available on the internet, its classification must be ENFORCED BY YOU. If something is private (for designated eyes only) but you've made it publicly accessible, then anyone could potentially find and make use of it. 
It doesn't matter if you haven't shared the link, someone WILL FIND IT using a whole slew of tools available out there (including, potentially, a quick Google search). As information owners, we need to act responsibly to ensure that data is ONLY read by authorised parties.\n\nTaking the time to consider the security elements of our data\n\nConsideration of Data Classification\n\nSo, after carefully weighing up all of the pros and cons, you've decided that it would be very useful to host your data on the public internet. Great 👍!!! This will likely make it very easy for you and whoever else you are collaborating with to find the information you need from your respective locations without physical limitations. However, I advise that before you stick a server online with an entry point to your goodies, you properly consider who the data is for and how it will be used. This process will determine your approach to hosting, as well as dictate any protective controls that you may need. I always find that when getting started, it helps to apply a few simple data access levels to your information and see which one fits best. These are by NO MEANS exhaustive, but they do allow the conversation to start, and you can tune them to your specific use case(s) afterwards.\n\nPublic Access: You don't mind who consumes the data, and likely want as many people as possible to have access. This is popular for news sites, organisational homepages and even political campaigns.\n\nPrivate Access: You want to ensure that only selected individuals can access the data, and subsequently reject anyone not permitted. This is normally used for intranet sites.\n\nPrivate Access with Finer Permissions: You want to ensure that selected individuals can access only certain bits of data, where each individual sees different information based on the access that they have been given. 
This is used for collaborative spaces and sharing applications.\n\nAdministrative Access: You want individuals with high permissions to be able to make important (potentially disruptive) changes to services. This includes tweaking running configurations, managing users, managing permission schemes and potentially restarting the service(s) in question. Such administrators need added protections to ensure that malicious individuals cannot hijack their credentials and wreak havoc on your organisation.\n\nYou can use these to help produce your own classifications pertinent to your own organisation(s). The important element here is that you properly consider how your data will be consumed, and then use that to plan measures which ensure your data is used in this manner. If you trip up, make a mistake and the information is accessed by the wrong individual(s), YOU CANNOT UNDO THIS. Once data is out there, it's out of your control forever, so take this step seriously. I'm not telling you not to put it on the internet, I'm telling you to treat this process with the respect that it deserves.\n\nI'd like to make another point about information security. This topic most certainly isn't "somebody else's problem"; the privacy and security of the data we manage is OUR responsibility. Don't be naive and think STUPID thoughts like "it won't happen to me", "an attacker would never do that" or "why would someone attack me?". Thoughts like this lead to silly approaches, ridiculous data policies and ultimately a data breach, after which you have to come out and say something along the lines of "we take your privacy and security very seriously", which I've always found interesting to say AFTER you've been breached. The moment you put something online is the moment you will start getting "interesting" requests from across the internet (check your logs if you don't believe me). 
You may not have an actual person testing your defences all the time, but you WILL have automated scripts that scan the web for vulnerabilities sniffing around your resources, SO BE PREPARED.\n\nWorld's Biggest Data Breaches & Hacks\n\nHaveIBeenPwned: Pwned Websites\n\nTechCrunch: Stop Saying, ‘We Take Your Privacy and Security Seriously’\n\nTaking the time to effectively consider our data so that we lock it up correctly\n\nIf you are uploading public information, knock your socks off! Don't upload any more than you need to, and ensure that you make your sites easy to crawl by search engine spiders so that they can be indexed by the likes of Google and Bing. This makes it easy for people to find your content and boosts your overall traffic. As this article focuses on putting resources on the internet securely, I won't be elaborating further on public data. Instead I will focus solely on the methods you can use to keep your more private data safe, and I have included an example server configuration block for the NGinX webserver to help illustrate the points described. This can be used as a template to get started using a fictional my-domain.com.\n\nEncrypting All Connections to Your Data\n\nIrrespective of the classification of your information, ALL access should be encrypted, no ifs, no buts. Anything else means you will potentially be revealing information about your users to others who can listen in on connections. These listeners could be malicious attackers looking to hoover up information about users, as well as network operators looking to inject adverts/messages into pages where this information doesn't belong. 
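To make this concrete, here is a minimal sketch of the kind of encrypted setup I mean for the fictional my-domain.com. This is purely an illustration (it assumes LetsEncrypt certificates at their standard paths and a plain static site), not the full annotated configuration file:

```nginx
# Sketch only: redirect all unencrypted traffic, then serve over TLS.
# Certificate paths assume LetsEncrypt defaults for my-domain.com.

# Catch plain-HTTP requests and send them to the HTTPS server.
server {
    listen 80;
    server_name my-domain.com;
    return 301 https://$host$request_uri;
}

# The real server, reachable only over an encrypted connection.
server {
    listen 443 ssl;
    server_name my-domain.com;

    ssl_certificate     /etc/letsencrypt/live/my-domain.com/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/my-domain.com/privkey.pem;
    ssl_protocols       TLSv1.2 TLSv1.3;  # refuse older, broken protocols

    location / {
        root /var/www/my-domain;
    }
}
```

The redirect on port 80 matters: without it, users who type the bare domain would silently get the unencrypted version.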
By encrypting your connections, you make this pretty much impossible.\n\nNational Cyber Security Centre: Serve Websites Over HTTPS (Always)\n\nUS Government: The HTTPS-Only Standard\n\nPC World: Comcast's Open Wi-Fi Hotspots Inject Ads into Your Browser\n\nGizmodo: Comcast to Customer Who Noticed It Secretly Injecting Code: Maybe It’s Your Fault?\n\nArs Technica: Researcher Catches AT&T Injecting Ads on Free Airport Wi-Fi Hotspot\n\nEven if your data is intended to be public, you should encrypt your connections. Every request made by your users carries supplementary information (including the user's operating system, browser, session tokens, cookies, referrers, any custom headers you are using etc.) which should remain between you and your users. No one else needs to know it. Nowadays, encrypting connections is relatively easy to set up and, most importantly, FREE, so there really isn't any excuse not to use it. I have previously written an article on this which I recommend you have a look at.\n\nSi's HeyJournal: LetsEncrypt Our Sites to Keep Our Users Safe, NO Excuses!!!\n\nTo achieve this using the NGinX webserver, see Sections 1a and 1b of the NGinX configuration file.\n\nProtecting our data by putting in measures to keep the bad people out\n\nUsing Authentication to Ensure Your Data Reaches the Intended Audience\n\nAs a bare minimum, any private information should be protected with a login so that only identified users that you control can access it. Do not put private data on the internet and rely on the absence of a link to stop unintended parties from accessing it. This may sound like a silly thing to say, but it happens! If the service that you are running doesn't have authentication built in, then you can install a webserver like NGinX (example config included), Apache, Microsoft's Internet Information Services (IIS) etc. 
that sits in the middle between your service and your users, and use its built-in authentication capabilities. For a relatively low effort investment, you can add a lot of protection with a simple setting.\n\nWhen users are setting their passwords/passphrases, you must ensure they set ones which are suitably complex. There is no point setting up authentication when someone is using "cat", "dog", "hi" etc. as their password, because it can be brute forced by password cracking software very quickly. Also, AVOID stupid password rules like the plague; they don't actually help, and they incentivise predictable patterns which hurt the strength of passwords. Instead, use a measure of complexity based on length and entropy, and block passwords which have already appeared in previous breaches. I've talked about this at length in a previous article!\n\nSi's HeyJournal: Stupid Password Rules Are Stupid\n\nNowadays, single-factor authentication using only passwords/passphrases isn't enough against a backdrop of ever-evolving tools that attackers have at their disposal, and the new normal is to use multifactor authentication. Here, during a single session a user authenticates their identity using two or more of the following: 'something they know' (password/passphrase), 'something they have' (software/physical token) and 'something they are' (fingerprint, iris scan). This limits the ability of criminals to hack accounts remotely, as they would need the other elements in order to gain access. This is a subject that I've covered in far more detail in the following articles. 
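Coming back to the idea of putting a webserver in front of an unauthenticated service, here is a hedged sketch of NGinX's built-in HTTP basic authentication doing exactly that. The domain, upstream port and password file path are all illustrative assumptions:

```nginx
# Sketch only: NGinX fronting an internal service and demanding a
# login before anything is proxied through. The user database is a
# file created with e.g. `htpasswd -B -c /etc/nginx/.htpasswd alice`.
server {
    listen 443 ssl;
    server_name my-domain.com;

    ssl_certificate     /etc/letsencrypt/live/my-domain.com/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/my-domain.com/privkey.pem;

    location / {
        auth_basic           "Private Area";        # prompt shown to the user
        auth_basic_user_file /etc/nginx/.htpasswd;  # hashed credentials
        proxy_pass           http://127.0.0.1:8080; # the service being protected
    }
}
```

Note that basic authentication only makes sense over an encrypted connection, as the credentials themselves are merely base64-encoded in each request.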
Where possible, ensure that you use at least two factors to protect your systems; if you need another example of a possible 'something they have', we come to this later with 'Mutual TLS Authentication', where installed certificates are used.\n\nSi's HeyJournal: Tell Me More: Multi-Factor Authentication\n\nSi's HeyJournal: Windows Now Supporting Hardware Keys for Login\n\nTo achieve this using the NGinX webserver, see Section 2 of the NGinX configuration file.\n\nEnsuring that only the right people have the key to our data\n\nLimit Access to Specific IP Addresses\n\nSo you're putting a private resource up on the internet for specific people to access. The next question to ask is: where does it need to be accessed from? If your information is only going to be managed from known locations with static IP addresses, then we can limit access to these specific IP addresses and block access from everywhere else. This limits the damage that can be caused by parties from around the web, because they wouldn't have the right IP address to access your information. Such attackers would instead have to make far more effort, i.e. somehow gain control of a machine on your network(s), to be able to access your resource(s). There are many ways to limit IP addresses if the software you are running doesn't provide such functionality, including:\n\nManaging in hardware using your outward-facing firewall by limiting the inbound connections\n\nUsing the software firewall that comes bundled with your operating system, like iptables for Linux or Defender on Windows\n\nInstalling a webserver like (the previously mentioned) NGinX (example configuration given), Apache, IIS etc. and using it to permit only specific addresses while denying all others\n\nIn this case, attackers can't attack something they cannot reach across the internet! 
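If you take the webserver route, the idea can be sketched in a few lines of NGinX configuration. The addresses below are placeholders from the IP documentation ranges; you would swap in the real static addresses of your known locations:

```nginx
# Sketch only: permit known static addresses, refuse everyone else.
server {
    listen 443 ssl;
    server_name my-domain.com;

    # Head office and a trusted subnet; all other sources are rejected.
    allow 203.0.113.10;     # placeholder single static address
    allow 198.51.100.0/24;  # placeholder office subnet
    deny  all;              # everything not explicitly allowed

    location / {
        root /var/www/my-domain;
    }
}
```

The `allow`/`deny` rules are evaluated in order, so the final `deny all` only applies to addresses that matched nothing above it.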
If pursuing this route, the IP addresses with access must be properly managed by the system administrators to ensure access is granted where needed and revoked when not. To achieve this using the NGinX webserver, see Section 3 of the NGinX configuration file.\n\nValidate the User with Mutual TLS Authentication\n\nWhen encrypted connections are negotiated with your online resources (normally resulting in that lock in browser address bars), the server satisfying the request(s) returns a public certificate that the connecting agent (web browser, app, terminal program etc.) validates prior to agreeing a secure connection. Such checks include:\n\nThe certificate has been signed by a trusted Certificate Authority (CA), a list of which comes pre-loaded with the connecting agent (at the time of writing, the trusted CA for heyjournal.com is LetsEncrypt).\n\nThe Common Name (CN) on the certificate matches the domain requested, so the CN on the certificate signed by LetsEncrypt should be heyjournal.com.\n\nThe certificate returned by the server hasn't expired, as all certificates signed by CAs are only valid for a predetermined amount of time. Typically certificates last for a year, but LetsEncrypt certificates last 3 months.\n\nWhen all checks pass, the user application and the server agree an encrypted connection which is used to transfer data securely between them, and which should only be readable by them and NO ONE ELSE. In this (typical) configuration, it is the user application checking whether or not it should trust the server presenting the resources. With Mutual TLS Authentication, we can set the server to check the credentials of the user application in a similar way, where the user application must present a certificate to the server signed by a CA that the server trusts. 
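In NGinX, this two-way check can be sketched roughly as follows. The CA bundle path is an illustrative assumption for a certificate authority you operate yourself:

```nginx
# Sketch only: demand a client certificate signed by our own
# internal CA before serving anything at all.
server {
    listen 443 ssl;
    server_name my-domain.com;

    ssl_certificate        /etc/letsencrypt/live/my-domain.com/fullchain.pem;
    ssl_certificate_key    /etc/letsencrypt/live/my-domain.com/privkey.pem;

    # Our private CA's certificate; clients must present one it signed.
    ssl_client_certificate /etc/nginx/ca/internal-ca.pem;
    ssl_verify_client      on;  # no valid client certificate, no connection

    location / {
        root /var/www/my-domain;
    }
}
```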
If the user application cannot do this, then the server rejects the connection.\n\nGuarding the connections of our online resources using Mutual TLS Authentication\n\nThe beauty of this is that we can act as our own CA, issue certificates to the applications under our control and use them to authenticate with the server. As external parties will not have access to our internal CA, and will not possess valid signed certificates issued by us, they won't even be able to make a connection with our online resources in the first place! To act as a CA using the excellent open source OpenSSL, I recommend the following link, which takes you through all of the steps you need in a comprehensive manner.\n\nJamie Nguyen: OpenSSL Certificate Authority\n\nTo achieve this using the NGinX webserver, see Section 4 of the NGinX configuration file.\n\nPutting Things on the Internet Securely\n\nThe internet is a fantastic resource that enables us to share data easily and effectively, and thus it can be very tempting to serve more and more of our content over it. When putting things on the web for ourselves and others to access, we must consider the very real risks that exist and determine who needs the information and how they are going to access it. This helps us determine the measures we need to put in place, including encryption, authentication, access control and mutual TLS, to ensure that our data is only seen by the intended parties. By taking these elements into account, we can wield the powers of the internet to greatly improve how we interact with our data without becoming the victims of a malicious attack.\n\nTake care and all the best, Si.