This is exactly what I did on the second node. If this is not the correct / best procedure to adopt in these cases, please advise:
1. Removed all the data, including the system tables (rm -rf data/ commitlog/ saved_caches).
2. Configured the node to replace itself by adding the following line to cassandra-env.sh: JVM_OPTS="$JVM_OPTS -Dcassandra.replace_address=<node's own IP address>"
3. Started the node.
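For reference, the steps above can be sketched as a shell session. This is just a sketch of what I ran, assuming the default package layout under /var/lib/cassandra and a placeholder IP of 10.0.0.2; adjust paths and the address for your install:

```shell
# 1. Stop Cassandra and wipe all local state, including the system tables.
sudo service cassandra stop
sudo rm -rf /var/lib/cassandra/data \
            /var/lib/cassandra/commitlog \
            /var/lib/cassandra/saved_caches

# 2. Tell the node to replace itself on startup (line appended to
#    cassandra-env.sh; 10.0.0.2 stands in for the node's own IP):
#    JVM_OPTS="$JVM_OPTS -Dcassandra.replace_address=10.0.0.2"

# 3. Start the node and watch it bootstrap/re-stream its data.
sudo service cassandra start
tail -f /var/log/cassandra/system.log
```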
Notably, I did not run nodetool decommission or nodetool removenode. Is that the recommended approach?
Given what I did, I am mystified as to what the problem is. If I query system.schema_columnfamilies on the affected node, all the CF IDs are there. The same goes for the only other node that is currently up, and that node also has data for all of those CF IDs in its data directory.
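Concretely, the check I ran on each node looks roughly like this (cqlsh against the node; the schema tables live in the system keyspace on Cassandra 2.x, and the exact column names may vary by version). The node IP and keyspace name below are placeholders:

```shell
# Compare the CF IDs recorded in the schema tables...
cqlsh 10.0.0.2 -e "SELECT keyspace_name, columnfamily_name, cf_id
                   FROM system.schema_columnfamilies;"

# ...against the CF ID suffixes on the data directories on disk
# (directory names embed the CF ID in recent 2.x releases).
ls /var/lib/cassandra/data/my_keyspace/
```

On both live nodes the IDs from the query match the directories on disk, which is why the error surprises me.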