A database administrator has several responsibilities.

The most important is maintaining the recovery chain: if something goes wrong, it is their responsibility to get the data back.

  • Security
  • Disaster recovery/backups
  • Availability
  • Backup integrity checks
  • Performance
    • Indexing
    • Query tuning
    • Hardware
    • Configuration
  • Configuration

Web Application Vulnerabilities

Even though most of the exploits used at the low setting of the Damn Vulnerable Web Application (DVWA) should be known to web developers, they are still commonly found across the web. DVWA is developed for students and web developers to get a better understanding of application security by hacking away at its arsenal of vulnerabilities. So, here is my try at SQL injection and reflected cross-site scripting (XSS) attacks!
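
To see why the low-setting SQL injection works, here is a minimal sketch in JavaScript of the underlying mistake: user input concatenated straight into a query string changes the meaning of the query. The table and column names are hypothetical, and the snippet only builds the strings; it never talks to a database.

```javascript
// Hypothetical vulnerable query construction, in the spirit of DVWA's
// low security level: the user's input is pasted straight into the SQL.
function buildQuery(userId) {
  return "SELECT first_name, last_name FROM users WHERE user_id = '" + userId + "'";
}

// A benign input behaves as intended:
console.log(buildQuery("1"));
// → SELECT first_name, last_name FROM users WHERE user_id = '1'

// A classic payload closes the quote and rewrites the WHERE clause
// so it matches every row in the table:
console.log(buildQuery("1' OR '1'='1"));
// → SELECT first_name, last_name FROM users WHERE user_id = '1' OR '1'='1'
```

The fix is to pass the value separately through a parameterised query, so the payload stays data instead of becoming part of the SQL.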

After Lab Review

The application I am experimenting with in this lab is made to be tampered with, and several exploits are built in by design. Playing around at the low setting in particular would be considered trivial in the security industry, and any business that exposes risks like these does not take its security seriously. I learned how easy it is to take advantage of such exploits, simply by using a website's own search field.


What's wrong with HTTP?

At the dawn of the internet, there was a need for a standard protocol to exchange "documents", and it was called the HyperText Transfer Protocol (HTTP). Security was not a priority during the development of the protocol, and no one had any idea of how widely used it would one day be. That day eventually came, and because of the lack of any security measures, it was trivial to intercept information flowing over the protocol. HTTPS saw the light of day not long after and introduced SSL/TLS certificates as a way to encrypt and protect data on the wire.

In this lab, I demonstrate just how insecure pure HTTP is. 

After Lab Review

An interesting lab that shows why security should be part of your day-to-day work. Setting up a website is something most IT professionals will do, and while a plain HTTP site might be quicker and easier to set up, it really impacts your security!


Active Directory Rights Management Services (AD RMS) is an SDK containing both a client and a server module. From the server side, it allows you to restrict the way your users handle data in your system, and in lab 12 we will use it to prevent data leakage. I think this method is rather naive and creates a false sense of security, as there are various other ways of obtaining data than, e.g., printing, copying or duplicating it.

End of Lab Review

An easy and quick lab, this one. As mentioned earlier, I think it's a bit naive to implement these features to feel secure against data leakage. I'm happy to have more knowledge about the RMS service.


DHCP and DNS are both core networking protocols. 

Dynamic Host Configuration Protocol (DHCP) is a client/server protocol that automatically provides an Internet Protocol (IP) host with its IP address and other related configuration information such as the subnet mask and default gateway.
— https://technet.microsoft.com/en-us/library/dd145320(v=ws.10).aspx
An often-used analogy to explain the Domain Name System is that it serves as the phone book for the Internet by translating human-friendly computer hostnames into IP addresses. For example, the domain name www.example.com translates to an IPv4 address and to 2606:2800:220:6d:26bf:1447:1097:aa7 (IPv6). Unlike a phone book, DNS can be quickly updated, allowing a service's location on the network to change without affecting the end users, who continue to use the same host name.
— https://en.wikipedia.org/wiki/Domain_Name_System
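
The phone-book analogy can be sketched as a plain lookup table. The hostnames and addresses below are purely illustrative (addresses from the RFC 5737 documentation range); a real resolver walks a hierarchy of name servers rather than one flat map.

```javascript
// A toy "phone book": hostnames map to IP addresses.
const records = new Map([
  ['www.example.com', '192.0.2.10'],
  ['mail.example.com', '192.0.2.25'],
]);

function resolve(hostname) {
  // Return the address for a known name, or null when there is no record.
  return records.get(hostname) || null;
}

console.log(resolve('www.example.com')); // → 192.0.2.10

// Unlike a paper phone book, the mapping can be updated quickly: the
// service moves, the record changes, and clients keep the same hostname.
records.set('www.example.com', '192.0.2.99');
console.log(resolve('www.example.com')); // → 192.0.2.99
```
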

In lab ten, I explore security exploits against the DHCP and DNS protocols. The idea is to save a copy of the target website, modify it, and use DualServer to spin up a server instance that looks like the server of origin. In this lab, I did not manage to reach a result I was happy with, and I can only assume the lab wanted me to see the amended version of the site.

End of Lab Review

I am confused. The lab hints that I should be able to see the amended version, but does not explicitly say what the expected outcome is supposed to be. Apart from chatting with other students, I've gone ahead and posted on the forums to see what experiences other people have had with this lab: http://ecampus.nmit.ac.nz/moodle/mod/forum/view.php?f=29198


Telnet and FTP (File Transfer Protocol) are both protocols used to access remote or local machines and their files. What they have in common is that both, FTP in particular, have been carried into a new era of computing, one in which security plays a much larger role than when they were developed. Even the creators of the FTP protocol wrote a paper explaining that it was not developed with security in mind, and listed the biggest risks: brute-force attacks, FTP bounce attacks, packet captures, port stealing, spoofing and username enumeration attacks.

In this lab, I will specifically look at how both protocols expose data on the wire. 



As the theme for this lab is key-logging, I recount my unsuccessful attempts at retrieving my friend's World of Warcraft account credentials. Aside from that experimentation as an act of desperation back in the day, I have no personal experience using or being exposed to keyloggers. Keyloggers are best known for stealing other people's passwords without their knowledge, but I've also been involved in companies where they are used as a tool for monitoring and alerting, compliance regulation and after-the-fact investigations.

Good keyloggers record intricate information such as commands autocompleted in the cmd window with the Tab key, copy-pasting, spell checkers, and spin and drop-down controls that change values. Advanced keyloggers will also save screenshots and files and watch the clipboard.


How can we conceal files and potential threats inside other files in Windows? In this lab, we step into the realm of steganography, the practice of concealing a file, message, image or video within another file.
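
In its simplest digital form, the idea can be sketched as hiding each bit of a message in the least significant bit of the bytes of a carrier file, where the change is too small to notice. This is a generic illustration of the principle, not the specific technique the lab tooling uses.

```javascript
// Hide each message bit in the least significant bit of a carrier byte.
function embed(carrier, messageBits) {
  return carrier.map((byte, i) =>
    i < messageBits.length ? (byte & ~1) | messageBits[i] : byte);
}

// Read the low bit back out of the first `count` bytes.
function extract(carrier, count) {
  return carrier.slice(0, count).map(byte => byte & 1);
}

const carrier = [200, 113, 54, 87, 90, 33, 10, 180]; // e.g. pixel values
const secret = [1, 0, 1, 1];                         // bits to hide
const stego = embed(carrier, secret);

console.log(extract(stego, 4)); // → [ 1, 0, 1, 1 ]
```

Each byte changes by at most one, which in an image is an invisible difference in colour.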

End of Lab Review

I found this lab less interesting than previous ones, but it nonetheless showed me that hidden executables were a greater risk in the past; nowadays this is mainly done through symbolic links that need to be run from the command line.

I stumbled significantly at the beginning of subtask three, when I was asked to set up the symbolic link between the rtd and the Odysseus setup, as the required files did not exist. I figured it had to be something left over from an earlier exercise, so I went ahead and reinstalled the VM. It worked.


Microsoft builds and maintains some of the largest data and information systems in the world. To manage the security of the Windows OS in these environments, they have made the Microsoft Baseline Security Analyzer (MBSA). It checks the members of a domain against a pre-configured template, the template being a set of services and configuration options in the member OS.

In this lab, we simply installed the analyzer and ran a cross-domain computer scan to check for risks. Here is the result:


During this session, I'll investigate themes such as footprinting, spoofing and denial-of-service attacks.


Lab Two Review

As sniffing and man-in-the-middle attacks become more common, I think this was a good primer on the subject. It provides a basic understanding of how one plans, initialises and executes an attack, but it also shows how easily these tools can end up in the wrong hands, be it a teenager shutting down a school website or a terrorist wanting to hurt someone.

Overall I really learned a lot from this lab, and am definitely looking forward to the next one!


Imagine working on a grand project of hundreds, if not thousands, of interlinked files. If you're the sole author of the project, you would, within a reasonable amount of time, be able to remember what you have done and where. More realistically, however, you work in a team of multiple people, and the code base reflects your team's diversity. Trying to change one file, or even one function, can turn into a several-hour odyssey into the depths of your repository.

What are components?

React is a technology that lets you create a view for every state in your app. Each component is responsible for its own state, making them incredibly easy to change or modify. You are the director of the React ensemble, filling in and pulling out components from your composition where you deem it necessary.

A component is a reusable block on your website that follows the single responsibility principle. The principle states that one component should only be responsible for one thing. HTML as a language has many different tags you can use to identify different parts of your page, including <h1> for headers, <script> for inline scripts, <body> to encapsulate the main content of a webpage and <html> to encapsulate the whole document. React takes this to the next level by letting you make such components yourself and use them anywhere. Through the props object, you can feed information into these tags for the components to pick up and push into the view.

A component can be stateless or stateful, a stateful one maintaining internal state that is totally encapsulated. React figures out when data changes and re-renders the view returned from your component.

A simple stateless component

import React, { Component } from 'react';

export default class HelloWorld extends Component {
  render() {
    return <div>{this.props.msg}</div>;
  }
}

Exporting with the use of ES6 modules, we are able to import the component into the main render function, where we define a mounting point in a normal HTML file. In this case it is a div with a class of 'outerLayer'.

import React from 'react';
import ReactDOM from 'react-dom';
import HelloWorld from './relevant/file/path';

ReactDOM.render(
  <HelloWorld msg='First msg!' />,
  document.querySelector('.outerLayer')
);
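
The component above is stateless. A stateful counterpart keeps its data in this.state and updates it through setState. The sketch below uses a minimal stand-in for React's Component base class so it runs standalone, and render returns a string instead of JSX so no compiler is needed; in a real app you would import Component from 'react' and let React handle re-rendering.

```javascript
// Minimal stand-in for React.Component so this sketch runs standalone;
// in real code you would `import { Component } from 'react'` instead.
class Component {
  constructor(props) {
    this.props = props;
    this.state = {};
  }
  setState(partial) {
    // React merges the partial state and schedules a re-render;
    // here we only merge.
    this.state = Object.assign({}, this.state, partial);
  }
}

// A stateful counter: the click count lives in encapsulated state.
class Counter extends Component {
  constructor(props) {
    super(props);
    this.state = { clicks: 0 };
  }
  handleClick() {
    this.setState({ clicks: this.state.clicks + 1 });
  }
  render() {
    return `<div>Clicked ${this.state.clicks} times</div>`;
  }
}

const counter = new Counter({});
counter.handleClick();
counter.handleClick();
console.log(counter.render()); // → <div>Clicked 2 times</div>
```
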





Once upon a time, there was a grand ole wizard called Model; he was the source of all knowledge in the small town of Hobshire. He often kept to himself and didn't like the inconsistencies of the common folk, called the views, them being an unreliable source of information. Their information stayed unprotected and mutable where they lived, out on the streets or in their puny homes. Model only talked to the mayor of the humble town, Ms. Controller. As faithful as he was, Model answered all of Controller's requests, and Controller decided what information should be handed out to the views, the common people, of the village. This way, Model and Controller had the sole right to information and the control of this information, and together they ruled for a long, long time.

This tale is, of course, based on the common MVC design, a super popular architectural pattern for software, particularly web applications. Although most languages do not come with MVC capabilities, there are popular frameworks that include this functionality out of the box. MVC works by separating the different parts of an application, making it easier to reuse and refactor code.
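
The separation in the tale can be sketched in a few lines of JavaScript. The names below are illustrative only; real MVC frameworks add routing, templating and data binding on top of the same split.

```javascript
// Model: owns the data, knows nothing about presentation.
const model = {
  greeting: 'Hello, Hobshire!',
};

// View: renders whatever data it is handed, never touches the model directly.
const view = {
  render(data) {
    return `<p>${data}</p>`;
  },
};

// Controller: the only party that talks to both, deciding what flows where.
const controller = {
  showGreeting() {
    return view.render(model.greeting);
  },
};

console.log(controller.showGreeting()); // → <p>Hello, Hobshire!</p>
```
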

Comics retrieved from https://pragtob.wordpress.com/2013/02/23/710/


As I've touched on earlier, there is no single way of designing an application. It's standard for the big players in the industry to push their own design patterns, and for open-source communities to embrace different architectures. This is both because no two problems are the same, so there is always a different solution to fit them, and because some design patterns make code easier to understand and maintain.

Coming from a C# background, I had no problem getting started with Unity programming. I did, however, have some hiccups with how the architecture is organised. Unity does not make many assumptions about the structure of your code; as long as it compiles, the engine is happy. This brings with it the typical dilemma of too much choice vs. too little: having this power often leaves me a bit confused. In my quest for enlightenment, I stumbled upon Eduardo Dias Da Costa's MVC with Unity guide, which gives a good insight into how to use the MVC pattern in Unity.

By default, Unity uses the Entity-Component pattern. In this familiar pattern, you start by outlining all the elements (entities) you want in your application. Once this is mapped out, you go on to define the logic and data that each entity should use (components). In C#, this would be the equivalent of an object containing an array or ArrayList of components. Eduardo argues that developing games is no different from other software development: you detect input, act upon the data collected, and push this raw or processed information back into the view. You might also store this data, or even the entire sequence of user inputs, in a database. It is no different from any conventional MVC application.

Data, interface and decisions. Models, views and controllers. This is the correlation I struggled to grasp in Unity, or rather, how to put MVC on top of the existing EC pattern Unity is built on. Luckily, the flexibility of MVC allows us to do just this. The caveat with the general MVC pattern laid on top of EC in Unity, as Eduardo explains, is the references you need to scatter all over your component and script files. The typical situation in Unity is that you must drag files or objects around to attach them to different entry and exit points in your components. If Unity crashes, you forget to save some changes, or you modify the file structure or code hierarchy, reference hell will reign upon your poor soul. This makes it obligatory to have a single root reference object where you can programmatically assign references. Because MVC encourages modularity, with each object having its own view, controller and model scripts, we also encounter a problem once we want to reuse code. We want highly reusable, encapsulated scripts that we can attach to any entity. Eduardo came up with a pattern called AMVCC.


  • Application - A single entry point to your application and container of all critical instances and application related data. 
  • MVC - ...
  • Component - Small, well-contained script that can be reused.
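
Though Eduardo's examples are C# in Unity, the single-root idea can be sketched the same way in any language: one Application object owns references to model, view and controller, and reusable components reach everything through that root instead of being wired up by drag-and-drop. All names here are illustrative.

```javascript
// Application: the single entry point that owns every critical reference,
// so nothing has to be re-wired by hand if the hierarchy changes.
class Application {
  constructor() {
    this.model = { score: 0 };                 // core data and state
    this.view = {                              // presentation only
      render: (score) => `Score: ${score}`,
    };
    this.controller = {                        // decisions and updates
      addPoint: () => { this.model.score += 1; },
    };
    this.components = [];                      // small reusable scripts
  }
  addComponent(component) {
    this.components.push(component);
    component.app = this; // each component reaches everything via the root
  }
}

const app = new Application();
app.controller.addPoint();
console.log(app.view.render(app.model.score)); // → Score: 1
```
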

Usually, then, follows a discussion or uncertainty about where something belongs in the MVC structure, often referred to as MVC sorting. Eduardo goes ahead to provide a very helpful guide to just this problem, see below. Keep in mind that analysing a problem beforehand is something totally different from tackling the problem head-on.


Models:
  • Hold the application’s core data and state, such as player health or gun ammo.
  • Serialize, deserialize, and/or convert between types.
  • Load/save data (locally or on the web).
  • Notify Controllers of the progress of operations.
  • Store the Game State for the Game’s Finite State Machine.
  • Never access Views.


Views:
  • Can get data from Models in order to represent up-to-date game state to the user. For example, a View method player.Run() can internally use model.speed to manifest the player abilities.
  • Should never mutate Models.
  • Strictly implements the functionalities of its class. For example:
    • A PlayerView should not implement input detection or modify the Game State.
    • A View should act as a black box that has an interface, and notifies of important events.
    • Does not store core data (like speed, health, lives,…).


Controllers:
  • Do not store core data.
  • Can sometimes filter notifications from undesired Views.
  • Update and use the Model’s data.
  • Manages Unity’s scene workflow.



A mission statement and explanation of my project.

Mission or purpose

It is my goal to make the site communicate with children aged 13-18. The intention is to make them aware of the importance of physical movement, as well as of what they eat and what the nutrients in their food do in their bodies.

The mission is to give children a choice in a world where they are increasingly being pushed down a path that leads to unhealthy lifestyles. Both public and private educational institutions are decreasing the hours spent teaching these subjects, and an increasing number of parents don't have the knowledge required to teach children that what you eat is a choice.

Short and long term goals

The short-term goal is to provide an easy-to-understand interface for learning about nutrition and movement.

The long-term goal is to get the site's content verified by professionals and to extend the site's scope and reach to include interactive tools that can add to its learning effect. Such tools might be an interactive bot you can chat with to get answers, a forum-like service for discussion with peers, and small games underlining the principles established by the site.

Intended audiences

The site will target children who have yet to finish their education and who are enrolled in a school system where sufficient information about these subjects might be limited. It is intended for young people of all genders to use as a complementary service to the education they're already receiving, and also as a self-study service.

Why will people come to the web site? 

It's a free, informative website openly available on the internet. People will not come to my site because they feel blame or shame for being unhealthy, but because they are curious and interested in knowing how we are meant to live.

Why would people first visit the site?

Historically, nutritional information has not been easily accessible to children. Even though the audience of this site will be in their early to late teens, it is not a given that these students know enough science to understand what's going on and why our bodies need certain nutrients. They will come to this site because it will provide accurate information in a form that is easily understandable for their age group.

Why would people return to the site?

They will come back because, no matter what, this stuff is hard to remember, and there are many facts to keep track of! The videos and activities will also provide some degree of entertainment, and one might find it fun enough to return to the site.



Last week, I got a delightful reminder of the difference between information and information architecture. Maya Design cleverly says that information is the message and has no form, meaning information has to reside in a form. This form can be a diagram, raw text as in a book or letter, visual signs, sound, or brain tissue. Wikipedia, in a more technical way, explains that information is that which informs. Maybe we can talk about information being knowledge? What about data? If information is that which informs, data is the raw bits and pieces of this information; it is unstructured and might not make any sense at first sight. Another difference is that information in the form of knowledge requires a cognitive observer, whereas data exists beyond the event horizon - it exists everywhere, always.

We are form-givers to information
— Maya Design

As information has no form, we humans, as organisms capable of processing information, are so-called form-givers to information. We have the ability to solidify information into something that our peers can understand. There is a vital problem with the solidification of information - human nature and ecology. We speak hundreds of different languages and live in cultures so widely different that even if we spoke the same language, there would be a big chance of not understanding each other. We also have a huge societal gap between the rich and the poor of the world. The problem is called communication. All humans are form-givers; we communicate information through oral and written messages, but some are also what I'd call professional form-givers, actually making a living off their form-giving and communication skills: web designers, calligraphers, typesetters, journalists, podcasters and editors. In a job like this, it is their responsibility to solidify and give form to information. Not only should these professionals give it form, but they should do so in a way that communicates with what is often a carefully selected target audience or demographic.

What makes this communication possible is the science of information architecture. IA, as Wikipedia puts it, is the art and science of organising and labelling information to support usability and findability. It means we can describe the characteristics of good websites and construct an architectural understanding around the subject, and yet have endless combinations and varieties present. That's not to say it's easy, because it's not. As with any successful outcome of science*, it is rooted in solid knowledge about the science itself and an understanding of the problem domain at hand.

* There are exceptions, of course. Penicillin, for example, was discovered by "pure luck".



Maya design on information
Retrieved 25/27/17 https://vimeo.com/3248432

Maya design on information architecture
Retrieved 25/27/17 https://vimeo.com/3248803