DBpedia is at the core of the Linked Open Data Cloud and widely used in research and applications. However, it is
far from perfect. Its content suffers from many flaws, resulting from factual errors inherited from Wikipedia or from incomplete
mappings from Wikipedia infoboxes to the DBpedia ontology. In this work we focus on one class of such problems: untyped entities.
We propose a hierarchical tree-based approach to categorize DBpedia entities according to the DBpedia ontology using human
computation and paid microtasks. We analyse the main dimensions of the crowdsourcing exercise in depth in order to derive
suggestions for workflow design, and we study three different workflows with automatic and hybrid prediction mechanisms that
select candidates for the most specific category from the DBpedia ontology. To test our approach, we run experiments
on CrowdFlower using a gold-standard dataset of 120 previously unclassified entities. In our studies, human-computation-driven
approaches generally achieved higher precision at lower cost than workflows with automatic predictors. However,
each of the tested workflows has its merits, and none of them performs exceptionally well on the entities that the DBpedia
Extraction Framework fails to classify. We discuss these findings and their potential implications for the design of effective
crowdsourced entity classification in DBpedia and beyond.