The error log is as follows:
IMP-00020: long column too large for column buffer size (7)
The error message reference in the Oracle manual reads roughly like this:
IMP-00020: long column too large for column buffer size (num)
Cause: The column buffer is too small. This usually occurs when importing LONG data.
Action: Increase the insert buffer size 10,000 bytes at a time (for example)
up to 66,000 or greater. Use this step-by-step approach because a buffer size
that is too large may cause a similar problem.
I tried adjusting the buffer size several times but kept getting the same error, so I suspected the export dmp file was corrupted. I re-exported, but imp still failed. Out of options, I had to go search Metalink (access to Metalink is painfully slow!). After a long wait the results finally came up, and it turns out I had hit Bug 2417643!
Note:286597.1
Oracle9i: IMP-20 on Import of Table with TIMESTAMP Column that was
Created via Database Link
Boss, how did you solve this in the end?
"Adjust the precision of the timestamp columns on the tables that were created via CTAS across a database link" --
how do you actually set the precision of a timestamp column? The Oracle documentation gives no explicit example.
TIMESTAMP [(fractional_seconds_precision)]
where fractional_seconds_precision optionally specifies the number of digits Oracle stores in the fractional part of the SECOND datetime field. When you create a column of this datatype, the value can be a number in the range 0 to 9. The default is 6. When you specify TIMESTAMP as a literal, the fractional_seconds_precision value can be any number of digits up to 9, as follows:
TIMESTAMP '1997-01-31 09:26:50.124'
What fractional_seconds_precision did you specify here, boss?
thanx!
Me again, heh.
I just ran an experiment and now understand what fractional_seconds_precision does:
it is the precision of the fractional part of the seconds.
SQL> create table test(d1 timestamp(3),d2 timestamp(9)) tablespace users;
Table created.
SQL> insert into test values (sysdate,sysdate);
1 row created.
SQL> commit;
Commit complete.
SQL> select * from test;
D1
---------------------------------------------------------------------------
D2
---------------------------------------------------------------------------
12-APR-06 11.53.51.000 PM
12-APR-06 11.53.51.000000000 PM
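Incidentally, the fractional seconds all come out as zeros because SYSDATE is a DATE and carries none; inserting SYSTIMESTAMP instead should show nonzero digits, rounded to 3 and 9 places respectively:

SQL> insert into test values (systimestamp, systimestamp);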
But then a question arises. The note says: "As a result, import will fail because import expects a field of 7 bytes (used to store the value of a timestamp without any precision), but actually encounters values of 11 bytes (used to store timestamps with a precision)".
From my experiment above, a timestamp column without any precision should still take at least 18 bytes ("12-APR-06 11.53.51"), so where does "expects a field of 7 bytes" come from?
thanx!
Pre-create the table first, changing the original timestamp columns to date; the import might then go through.
They've already started using timestamp, though.
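If you go that route, the usual trick is to pre-create the table with DATE columns and run imp with IGNORE=Y, so it loads rows into the existing table instead of failing on the CREATE (names below are placeholders):

SQL> create table test (d1 date, d2 date);

imp scott/tiger file=exp.dmp tables=test ignore=y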
Metalink really is great. Pity the registration method is about to change; who knows whether we'll still have access afterwards.
Changing the original timestamp columns to date would lose precision, though.
If the remote table's column is TIMESTAMP(6) (precision 6), then after the CTAS you have to manually specify the precision on the local table, i.e. modify the column type.
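A minimal sketch with made-up names (increasing a timestamp column's fractional-second precision in place is allowed):

SQL> alter table local_tab modify (ts_col timestamp(6));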
"expects a field of 7 bytes" refers to a problem in the dmp file's metadata at export time.
It is not about changing the column to date.
Since a DATE is 7 bytes long, my guess is that patching the header of the dmp file would work too.
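For what it's worth, the 7 and 11 bytes are internal storage sizes, not display widths. DUMP() shows them; on the test table above I'd expect (going by my understanding of the storage format, not a verified run) a TIMESTAMP to dump as Typ=180 with Len=7 when its fractional seconds are zero and Len=11 otherwise, while a stored DATE column always dumps as Typ=12 Len=7:

SQL> select dump(d1) ts_dump from test;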