#### Question :

So far I’ve only found questions that show the result in seconds. I’d like to measure it in milliseconds.

Here is the code I tried:

```
import time
start = time.time()

def firstDuplicate(a):
    dic = {}
    for x in a:
        if x in dic:
            return x
        dic[x] = 1
    return -1

firstDuplicate([1, 2, 2])
print("--- %s seconds ---" % (time.time() - start))
```

#### Answer :

You can measure the run time of a particular operation more accurately by sampling.

You repeat the same operation N times and measure the total time spent, so you can compute the average time the operation under test takes to complete.

```
from datetime import datetime

# Number of samples
n = 1000000

# Function under test
def firstDuplicate(a):
    dic = {}
    for x in a:
        if x in dic:
            return x
        dic[x] = 1
    return -1

# Record the moment before the test
t0 = datetime.now()

# Repeat the operation under test N times...
for i in range(n):
    # Operations under test
    firstDuplicate([1, 2, 2])
    firstDuplicate([1, 2, 2, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17])

# Record the moment after the test
t1 = datetime.now()

# Compute the total execution time of the N operations
diff = t1 - t0

# Compute the average execution time of each operation, in milliseconds
med = (diff.total_seconds() * 1000) / n

# Show the test result
print("Time per operation: " + str(med) + " ms")
```

Output:

```
Time per operation: 0.002130244 ms
```
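Rather than hand-rolling the sampling loop, the standard library's `timeit` module implements the same idea; here is a minimal sketch using the same function (the iteration count is an arbitrary choice):

```python
import timeit

def firstDuplicate(a):
    dic = {}
    for x in a:
        if x in dic:
            return x
        dic[x] = 1
    return -1

# timeit calls the function `number` times and returns the total seconds
n = 100000
total = timeit.timeit(lambda: firstDuplicate([1, 2, 2]), number=n)

# Average time per call, converted to milliseconds
print("Time per operation: %f ms" % (total * 1000 / n))
```

`timeit` also disables the garbage collector during the measurement by default, which removes one source of noise from the result.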

After a lot of research, reading the documentation for the relevant modules, and searching other Stack Overflow questions, I arrived at my answer.

In Data Structures courses we perform Asymptotic Analysis of Algorithms, checking the best case (`O(1)`) as well as the other cases.

In fact, this is the best way to study the behavior of an algorithm: you see how it scales based on the number of comparisons it makes.

This is the human way of comparing algorithms.

Now, there are data structures whose operations are O(1), or close to it: hash-based structures in particular, because they look up a unique key instead of scanning an entire list. Either the key exists or it does not; there is no doubt.

In Python, the hash structure comes in two built-in types: dictionaries (`dict`) and sets (`set`).

Any membership query on these structures is O(1) on average, and this makes the algorithm very fast.
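A rough, machine-dependent illustration of that difference (the exact numbers will vary, but the ordering should not): membership tests on a `set` stay fast as the collection grows, while a `list` must be scanned element by element.

```python
import timeit

data = list(range(100000))
as_list = data
as_set = set(data)

# Probe for an element near the end: the worst case for the list scan
list_time = timeit.timeit(lambda: 99999 in as_list, number=100)
set_time = timeit.timeit(lambda: 99999 in as_set, number=100)

print("list membership: %f s" % list_time)
print("set membership:  %f s" % set_time)
```

On any reasonable machine the `set` lookups finish orders of magnitude faster, which is exactly why `firstDuplicate` uses a `dict` rather than a list for its bookkeeping.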

In fact, the algorithm becomes so fast that a single execution barely registers even at microsecond resolution (I tested this in code and it was confirmed).

Now the trouble begins: wall-clock timestamps have limited resolution, and there is even a risk of absurd values (a time measured *after* coming out smaller than a time measured *before*, for example when the clock is adjusted). The machine cannot reliably express nanoseconds, which makes timing very fast algorithms somewhat impossible: the measured time depends on whatever else the machine happens to be doing at that moment.
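That said, modern Python (3.7+) does expose `time.perf_counter_ns()`, a monotonic high-resolution counter that at least can never run backwards; a minimal sketch with the same function:

```python
import time

def firstDuplicate(a):
    dic = {}
    for x in a:
        if x in dic:
            return x
        dic[x] = 1
    return -1

# perf_counter_ns is monotonic: a later reading is never smaller,
# so the difference is always non-negative
t0 = time.perf_counter_ns()
firstDuplicate([1, 2, 2])
t1 = time.perf_counter_ns()

print("Elapsed: %d ns" % (t1 - t0))
```

A single reading is still dominated by noise from the rest of the system, so this does not replace sampling or asymptotic analysis; it only removes the backwards-jump problem of wall-clock time.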

So, sadly, the best way to evaluate an algorithm's execution time is through asymptotic analysis.

Here is the code I tried, which measures in microseconds:

```
from datetime import datetime

def firstDuplicate(a):
    dic = {}
    for x in a:
        if x in dic:
            return x
        dic[x] = 1
    return -1

antes = datetime.now()
firstDuplicate([1, 2, 2])
print(firstDuplicate([1, 2, 2, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17]))
depois = datetime.now()

print("Microseconds:")
# Use the full timedelta: subtracting only the .microsecond fields
# can give negative values when the measurement crosses a second boundary
print(int((depois - antes).total_seconds() * 1000000))
```