What is a Unix timestamp?
A Unix timestamp is the number of seconds elapsed since January 1, 1970, 00:00:00 UTC (the Unix epoch), not counting leap seconds.
Unix timestamps provide a universal, timezone-independent way to represent time. They're widely used in programming, databases, and APIs because they eliminate timezone ambiguity and make time calculations straightforward. For example, the Unix timestamp 1609459200 represents January 1, 2021, 00:00:00 UTC.
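As a quick check, here is a minimal Python sketch that converts that example timestamp back to a date:

    from datetime import datetime, timezone

    # 1609459200 seconds after the epoch should be 2021-01-01 00:00:00 UTC
    print(datetime.fromtimestamp(1609459200, tz=timezone.utc))
    # 2021-01-01 00:00:00+00:00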
What is epoch time?
Epoch time refers to the Unix epoch: January 1, 1970, 00:00:00 UTC. It's the reference point for Unix timestamps.
The term "epoch time" is often used interchangeably with "Unix timestamp," though technically the epoch is the starting point (January 1, 1970) while the timestamp is the number of seconds since that point. This date was chosen because it predates the creation of Unix and provides a convenient reference for modern computing.
How do I convert a timestamp to a date?
Multiply the Unix timestamp by 1000 and pass it to new Date() in JavaScript: new Date(timestamp * 1000). Other languages have similar functions.
In JavaScript, you multiply by 1000 because Date expects milliseconds, while Unix timestamps are in seconds. In Python, use datetime.fromtimestamp(timestamp). In Java, use new Date(timestamp * 1000L) or Instant.ofEpochSecond(timestamp). Most programming languages provide built-in functions to convert timestamps to human-readable dates.
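For example, a small Python sketch of the conversion (passing an explicit UTC timezone avoids the default interpretation in the machine's local time):

    from datetime import datetime, timezone

    timestamp = 1609459200

    # Interpreted in the machine's local timezone
    print(datetime.fromtimestamp(timestamp))

    # Interpreted as UTC, usually what you want for storage and logging
    print(datetime.fromtimestamp(timestamp, tz=timezone.utc))
    # 2021-01-01 00:00:00+00:00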
How do I get the current timestamp?
In JavaScript, use Math.floor(Date.now() / 1000). In Python, use int(time.time()). Most languages have similar built-in functions.
Each programming language provides utilities to get the current Unix timestamp. JavaScript's Date.now() returns milliseconds, so divide by 1000 and floor it. Python's time.time() returns seconds directly. In Java, use Instant.now().getEpochSecond(). These functions always return the current UTC time as a Unix timestamp.
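A quick Python sketch of the same idea:

    import time
    from datetime import datetime, timezone

    # Whole seconds since the epoch
    now_seconds = int(time.time())

    # The same value derived from a datetime object
    now_from_datetime = int(datetime.now(timezone.utc).timestamp())

    print(now_seconds, now_from_datetime)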
Are timestamps timezone-aware?
No, Unix timestamps are always in UTC and timezone-agnostic. They represent an absolute point in time, not a local time.
A Unix timestamp like 1609459200 represents the same moment everywhere in the world—it's always UTC. When you display this timestamp to users, you convert it to their local timezone for presentation. This is why timestamps are ideal for storing time in databases: they're unambiguous and don't require timezone metadata.
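To illustrate, here is the same timestamp rendered in three zones (a sketch using Python's standard zoneinfo module, available since Python 3.9):

    from datetime import datetime, timezone
    from zoneinfo import ZoneInfo

    ts = 1609459200  # one absolute moment

    # The wall-clock reading differs by zone, but the instant is identical
    print(datetime.fromtimestamp(ts, tz=timezone.utc))                   # 2021-01-01 00:00:00+00:00
    print(datetime.fromtimestamp(ts, tz=ZoneInfo("America/New_York")))   # 2020-12-31 19:00:00-05:00
    print(datetime.fromtimestamp(ts, tz=ZoneInfo("Asia/Tokyo")))         # 2021-01-01 09:00:00+09:00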
What is the difference between timestamp and datetime?
A timestamp is a numeric value (seconds since 1970). A datetime is a human-readable representation like "2024-01-15 10:30:00".
Timestamps are integers optimized for storage and calculation. Datetimes are formatted strings or objects optimized for human readability. Timestamps are timezone-independent, while datetimes often include timezone information. In databases, timestamps are typically stored as integers or TIMESTAMP types, while datetimes may be stored as strings or DATETIME types.
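In Python terms, the difference looks roughly like this:

    from datetime import datetime, timezone

    ts = 1705314600                                    # timestamp: a plain integer, cheap to store and compare
    dt = datetime.fromtimestamp(ts, tz=timezone.utc)   # datetime: a structured object meant for display

    print(ts)                                # 1705314600
    print(dt.strftime("%Y-%m-%d %H:%M:%S"))  # 2024-01-15 10:30:00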
Why does my timestamp have 13 digits?
A 13-digit timestamp is in milliseconds, not seconds. Divide by 1000 to convert it to a standard Unix timestamp.
JavaScript's Date.now() and some APIs return timestamps in milliseconds (13 digits) rather than seconds (10 digits). For example, 1609459200000 milliseconds equals 1609459200 seconds. When working across different systems, always check whether timestamps are in seconds or milliseconds to avoid off-by-1000 errors.
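A small Python sketch of the usual normalization; the to_seconds helper and its 13-digit heuristic are illustrative assumptions, not a standard API:

    from datetime import datetime, timezone

    def to_seconds(ts):
        # Heuristic assumption: values with 13 or more digits are milliseconds
        return ts // 1000 if abs(ts) >= 10**12 else ts

    print(to_seconds(1609459200000))  # 1609459200 (was milliseconds)
    print(to_seconds(1609459200))     # 1609459200 (already seconds)
    print(datetime.fromtimestamp(to_seconds(1609459200000), tz=timezone.utc))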
How do I convert between timezones?
Convert the timestamp to a datetime object, then use your language's timezone library to display it in the desired timezone.
In JavaScript, use libraries like Luxon or date-fns-tz. In Python, use pytz or pendulum. The pattern is: parse timestamp to UTC datetime, then convert to target timezone. Never add or subtract hours from timestamps directly—use proper timezone libraries that handle DST transitions and regional rules.
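As an alternative to third-party libraries, Python 3.9+ ships zoneinfo in the standard library; here is a sketch of the parse-to-UTC-then-convert pattern:

    from datetime import datetime, timezone
    from zoneinfo import ZoneInfo  # standard library since Python 3.9

    ts = 1609459200

    # Step 1: interpret the timestamp as an absolute UTC moment
    utc_dt = datetime.fromtimestamp(ts, tz=timezone.utc)

    # Step 2: convert to the target timezone for display only
    local_dt = utc_dt.astimezone(ZoneInfo("Europe/Paris"))

    print(utc_dt.isoformat())    # 2021-01-01T00:00:00+00:00
    print(local_dt.isoformat())  # 2021-01-01T01:00:00+01:00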
What is ISO 8601 format?
ISO 8601 is an international standard for date-time representation, like "2024-01-15T10:30:00Z" or "2024-01-15T10:30:00-05:00".
ISO 8601 provides unambiguous, sortable date-time strings. The format is YYYY-MM-DDTHH:mm:ss with an optional timezone offset. "Z" means UTC (zero offset). "+05:00" means 5 hours ahead of UTC. ISO 8601 is widely used in APIs and JSON because it's human-readable, machine-parseable, and internationally standardized.
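For example, converting between a timestamp and an ISO 8601 string in Python:

    from datetime import datetime, timezone

    ts = 1705314600

    # Timestamp -> ISO 8601; isoformat() emits "+00:00", which can be swapped for "Z"
    iso = datetime.fromtimestamp(ts, tz=timezone.utc).isoformat().replace("+00:00", "Z")
    print(iso)  # 2024-01-15T10:30:00Z

    # ISO 8601 -> timestamp (fromisoformat accepts numeric offsets; "Z" needs Python 3.11+)
    parsed = datetime.fromisoformat("2024-01-15T10:30:00-05:00")
    print(int(parsed.timestamp()))  # 1705332600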
Can Unix timestamps be negative?
Yes, negative timestamps represent dates before January 1, 1970. For example, -86400 represents December 31, 1969, 00:00:00 UTC, exactly one day before the epoch.
While most modern applications deal with dates after 1970, negative timestamps are valid and useful for historical data. However, some systems (particularly older 32-bit systems) may not handle negative timestamps correctly. When working with historical dates, test your code with negative timestamps to ensure proper handling.
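A quick Python check; the explicit epoch-plus-offset form is used because fromtimestamp can raise OSError for negative values on some platforms (notably Windows):

    from datetime import datetime, timedelta, timezone

    ts = -86400  # one day before the epoch

    # Portable conversion: add the offset to the epoch explicitly
    epoch = datetime(1970, 1, 1, tzinfo=timezone.utc)
    print(epoch + timedelta(seconds=ts))  # 1969-12-31 00:00:00+00:00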
What is the Year 2038 problem?
On systems that store time as a signed 32-bit integer, Unix timestamps will overflow on January 19, 2038, at 03:14:07 UTC, wrapping around to negative values.
This occurs because 32-bit signed integers max out at 2,147,483,647 seconds after the epoch (January 19, 2038). After this point, the timestamp wraps to negative values. Modern 64-bit systems are unaffected and can represent dates billions of years into the future. However, legacy systems and embedded devices may still face this issue.
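A small Python sketch of the wraparound; the as_int32 helper is an illustrative stand-in for signed 32-bit arithmetic:

    from datetime import datetime, timedelta, timezone

    EPOCH = datetime(1970, 1, 1, tzinfo=timezone.utc)
    INT32_MAX = 2**31 - 1  # 2,147,483,647

    def as_int32(value):
        # Reinterpret an integer as a signed 32-bit value (emulates the overflow)
        value &= 0xFFFFFFFF
        return value - 2**32 if value >= 2**31 else value

    print(EPOCH + timedelta(seconds=INT32_MAX))                # 2038-01-19 03:14:07+00:00
    print(EPOCH + timedelta(seconds=as_int32(INT32_MAX + 1)))  # 1901-12-13 20:45:52+00:00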
Should I store timestamps or dates in my database?
Store timestamps (Unix time) for timezone-independent absolute moments. Store dates for timezone-specific or local time information.
Use timestamps when you need to know exactly when something happened globally (user signups, API requests, logs). Use dates/datetimes when you need local time context (appointment scheduling, business hours, recurring events). Many databases offer both TIMESTAMP and DATETIME types—choose based on whether you need absolute time or local time semantics.
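A rough illustration of the two patterns, with hypothetical field names:

    import time

    # Absolute event: store a Unix timestamp (unambiguous, UTC-based)
    log_entry = {"event": "user_signup", "created_at": int(time.time())}

    # Local-time semantics: store the wall-clock time together with its timezone
    appointment = {"title": "Dentist", "local_time": "2024-06-03 09:00", "timezone": "Europe/Berlin"}

    print(log_entry, appointment)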
How accurate are Unix timestamps?
Unix timestamps are accurate to the second. For higher precision, use millisecond timestamps (13 digits) or microsecond timestamps.
Standard Unix timestamps count seconds, sufficient for most applications. JavaScript uses milliseconds (Date.now()). Some systems use microseconds or nanoseconds for high-precision timing. Note that Unix timestamps don't count leap seconds—they assume every day has exactly 86,400 seconds for simplicity.
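A Python sketch of the different resolutions:

    import time

    print(int(time.time()))          # seconds (10 digits for current dates)
    print(int(time.time() * 1000))   # milliseconds (13 digits)
    print(time.time_ns() // 1_000)   # microseconds
    print(time.time_ns())            # nanoseconds (time.time_ns requires Python 3.7+)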
What is the maximum Unix timestamp value?
On 64-bit systems, timestamps can represent dates up to year 292,277,026,596. On 32-bit systems, the limit is January 19, 2038.
A 64-bit signed integer can store values up to 9,223,372,036,854,775,807, which corresponds to dates far beyond any practical use. However, 32-bit systems are limited to 2,147,483,647 seconds after the epoch (2038). Always use 64-bit integers for timestamp storage to avoid future compatibility issues.
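The arithmetic behind those limits, as a quick check in Python:

    INT32_MAX = 2**31 - 1
    INT64_MAX = 2**63 - 1
    SECONDS_PER_YEAR = 31_556_952  # average Gregorian year

    print(INT32_MAX / SECONDS_PER_YEAR)   # ~68 years of headroom after 1970 (runs out in 2038)
    print(INT64_MAX // SECONDS_PER_YEAR)  # roughly 292 billion years after 1970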
How do I handle Daylight Saving Time with timestamps?
Unix timestamps automatically handle DST because they're always in UTC. DST only affects the display of local times.
Since timestamps represent absolute UTC time, they're unaffected by DST transitions. When converting timestamps to local time for display, timezone libraries automatically apply DST rules. This is why you should always store timestamps in UTC and convert to local time only for presentation—it eliminates DST complexity from your data layer.
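To see this in practice, here is a sketch around the US spring-forward transition on March 10, 2024, using Python's standard zoneinfo module:

    from datetime import datetime, timezone
    from zoneinfo import ZoneInfo

    eastern = ZoneInfo("America/New_York")

    # Two instants two minutes apart, straddling 2:00 AM local time on March 10, 2024
    before = int(datetime(2024, 3, 10, 6, 59, tzinfo=timezone.utc).timestamp())
    after = int(datetime(2024, 3, 10, 7, 1, tzinfo=timezone.utc).timestamp())

    # The timestamps differ by exactly 120 seconds; only the local rendering changes offset
    print(datetime.fromtimestamp(before, tz=eastern).isoformat())  # 2024-03-10T01:59:00-05:00
    print(datetime.fromtimestamp(after, tz=eastern).isoformat())   # 2024-03-10T03:01:00-04:00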