Tests deserve the same love and care as production code.
Tests should avoid making decisions: logic such as for-loops and if-statements belongs in production code, not in tests, where it can itself be wrong and obscures which case failed.
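As a sketch (the `to_roman` function and its cases are hypothetical, included only to make the example self-contained), compare a test that loops and branches over its cases with tests that state each expectation directly:

```python
def to_roman(n):
    # Minimal stand-in implementation for the example.
    numerals = [(10, "X"), (9, "IX"), (5, "V"), (4, "IV"), (1, "I")]
    result = ""
    for value, symbol in numerals:
        while n >= value:
            result += symbol
            n -= value
    return result

# Avoid: the loop and branch are decisions the test itself makes,
# and a failure doesn't tell you which case broke.
def test_to_roman_with_logic():
    for n, expected in [(4, "IV"), (9, "IX"), (14, "XIV")]:
        if n > 0:
            assert to_roman(n) == expected

# Prefer: each expectation is stated directly, with no decisions.
def test_four():
    assert to_roman(4) == "IV"

def test_nine():
    assert to_roman(9) == "IX"

def test_fourteen():
    assert to_roman(14) == "XIV"
```

Test frameworks that support parameterised tests give you the best of both: one declaration per case, with each case reported separately.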
Tests are no guarantee that a system works.
Tests should explicitly contain the details you care about. Prefer explicitly setting up data in each test case rather than sharing data fixtures.
When you use the same data for a dozen tests, you can't tell which details are relevant to each test case. Set up the minimal data in each test case, using helper functions to hide irrelevant details and keeping the setup focused on what matters. This also makes the test less brittle when those irrelevant details change.
(This is an example of the general principle that code that changes together belongs together. In this case, relevant setup code and assertions belong close to each other, while setup that's required but unimportant can be put elsewhere.)
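A minimal sketch of this idea, with a hypothetical `Order` type and shipping rule: the `make_order` helper supplies required-but-irrelevant defaults, so each test states only the detail it cares about.

```python
from dataclasses import dataclass

@dataclass
class Order:
    customer: str
    country: str
    total: float

# Helper hides details that are required but irrelevant to most tests.
def make_order(customer="anyone", country="GB", total=10.0):
    return Order(customer=customer, country=country, total=total)

def shipping_cost(order):
    # Hypothetical rule: orders outside GB pay a flat surcharge.
    return 0.0 if order.country == "GB" else 5.0

def test_domestic_orders_ship_free():
    order = make_order(country="GB")  # only the relevant detail is stated
    assert shipping_cost(order) == 0.0

def test_international_orders_pay_surcharge():
    order = make_order(country="FR")
    assert shipping_cost(order) == 5.0
```

If `Order` later grows a new required field, only `make_order` changes; the tests that don't care about that field stay untouched.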
Use tests to document intended behaviour. Instead of a comment describing what a function does, a test demonstrates that behaviour and fails when it no longer holds, so it can't silently go stale.
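For instance (the `slugify` function here is hypothetical, defined inline so the example runs), each test name states one fact about the intended behaviour, so the suite reads as documentation:

```python
import re

def slugify(title):
    # Minimal stand-in implementation for the example.
    return re.sub(r"[^a-z0-9]+", "-", title.lower()).strip("-")

# Each test name documents one intended behaviour.
def test_spaces_become_hyphens():
    assert slugify("hello world") == "hello-world"

def test_punctuation_is_dropped():
    assert slugify("hello, world!") == "hello-world"

def test_result_is_lowercase():
    assert slugify("Hello World") == "hello-world"
```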
Tests should be as reliable as possible. People should trust that a test failure means that the system is broken rather than the tests, even if that means throwing out much of your test suite.
Tests should be fast.
Every test has a cost, in maintenance time and time to run the test suite.
Every test should give you some information that another test doesn't. Ask: "Is there a scenario where this test would fail while the others pass?". If not, the test is redundant and you can probably get rid of it.
When should I use unit, integration and system tests?
The conventional view goes something like this: unit tests are faster and more reliable than integration tests. Therefore, you should write unit tests covering all relevant cases, including edge cases. In those unit tests, mock out any external dependencies, such as databases and file systems. Then, you can add a few integration tests to make sure everything is plumbed together correctly. There's no need to repeat the edge cases in the integration tests, since the unit tests already cover them, and we only want to add tests that give us new information.
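A sketch of the conventional approach, using Python's standard `unittest.mock` (the `display_name` function and the client's `get_user` method are hypothetical):

```python
from unittest.mock import Mock

# Hypothetical code under test: look up a user's display name
# via some remote API client.
def display_name(client, user_id):
    user = client.get_user(user_id)
    if user is None:
        return "<unknown>"
    return user["name"]

# Unit tests cover the cases, with the external dependency mocked out.
def test_known_user():
    client = Mock()
    client.get_user.return_value = {"name": "Ada"}
    assert display_name(client, 1) == "Ada"

def test_unknown_user():
    client = Mock()
    client.get_user.return_value = None
    assert display_name(client, 99) == "<unknown>"
```

Under this view, a single integration test against the real API would then confirm the plumbing, without re-walking every case.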
While I usually agree, there are some cases where just writing a highly focused integration test is much simpler than the equivalent unit test. For instance, when writing code that deals with files, I often just write the test to use a temporary directory. Accessing the file system still tends to be pretty fast, is closer to the actual use case of the code, is often less brittle than using mocks, and tends to simplify both the production and test code.
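A sketch of such a focused integration test, using the standard library's `tempfile` (the `count_lines` function under test is hypothetical): no mocks, just a real directory that is cleaned up automatically.

```python
import os
import tempfile

# Hypothetical code under test: count lines across all .txt files
# in a directory.
def count_lines(directory):
    total = 0
    for name in os.listdir(directory):
        if name.endswith(".txt"):
            with open(os.path.join(directory, name)) as f:
                total += sum(1 for _ in f)
    return total

def test_counts_lines_across_text_files():
    with tempfile.TemporaryDirectory() as tmp:
        with open(os.path.join(tmp, "a.txt"), "w") as f:
            f.write("one\ntwo\n")
        with open(os.path.join(tmp, "b.txt"), "w") as f:
            f.write("three\n")
        with open(os.path.join(tmp, "notes.md"), "w") as f:
            f.write("not counted\n")
        assert count_lines(tmp) == 3
```

The equivalent unit test would need a fake file-system abstraction threaded through the production code; here the real thing is simpler on both sides.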
There is an argument that although the file system is fast, it's still much slower than pure in-memory tests. Although this may be true, the effects of this are only noticeable in large test suites. I'm a strong advocate of small modules that can be tested in isolation. Once a module has been tested in isolation with its comparatively small but comprehensive test suite, it can be used in a larger application without needing to re-run its tests as part of the application's tests.
Imagine you have a project that is mainly concerned with interacting with external dependencies, and has little internal logic. An example might be a library that provides an abstraction on top of multiple source control systems, such as git and Mercurial. In this case, it might be quite reasonable to have many integration tests, and few, if any, unit tests. Most of the complexity of the system arises from interaction with those source control systems, and so most of the bugs are likely to come from those interactions. Unit tests are unlikely to reveal those bugs, so integration tests are probably more appropriate.