We study the model of metric voting initially proposed by Feldman et al. In this model, experts and candidates are located in a metric space, and each candidate has a quality that is independent of her location. An expert evaluates each candidate as the candidate's quality less the distance between the candidate and the expert in the metric space, and votes for her favorite candidate. Naturally, the expert prefers candidates that are ``similar'' to herself, i.e., close to her location in the metric space, which biases the vote. The goal is to select a voting rule and a committee of experts so as to mitigate this bias. More specifically, given $m$ candidates, what is the minimum number of experts needed to ensure that the voting rule selects a candidate whose quality is at most $\epsilon$ worse than that of the best candidate? Our first main result is a new way to select the committee that uses exponentially fewer experts than the method proposed by Feldman et al. Our second main result is a novel construction that substantially improves the lower bound on the committee size. Indeed, our upper and lower bounds match in terms of $m$, the number of candidates, and $\epsilon$, the desired accuracy, for general convex normed spaces, and differ only by a multiplicative factor that depends solely on the dimension of the underlying normed space and is independent of the other parameters of the problem. We further extend the nearly matching upper and lower bounds to the setting in which each expert returns a ranking of her top $k$ candidates and we wish to choose $\ell$ candidates whose cumulative quality is at most $\epsilon$ worse than that of the best set of $\ell$ candidates, settling an open problem of Feldman et al. Finally, we consider the setting with multiple rounds of voting. We show that, by introducing another round of voting, the number of experts needed to guarantee the selection of an $\epsilon$-optimal candidate becomes independent of the number of candidates.
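For concreteness, the biased evaluation described above can be formalized as follows; the symbols $u$, $q$, $d$, and $x$ are notation we introduce here and are not taken from the paper.

\[
  % Sketch of the evaluation rule: expert $e$ scores candidate $c$ by the
  % candidate's quality $q_c$ minus their distance in the metric $d$,
  % where $x_e$ and $x_c$ denote the locations of $e$ and $c$.
  u_e(c) \;=\; q_c - d(x_e, x_c),
  \qquad
  % Expert $e$ then votes for her favorite candidate.
  \operatorname{vote}(e) \;=\; \arg\max_{c} \, u_e(c).
\]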